Yes, AI models are commonly trained on images from the internet, but this practice is a source of significant legal, ethical, and privacy controversy. While tech companies argue that using publicly available images is transformative and falls under "fair use," many artists, photographers, and other creators disagree and have filed lawsuits. The process of using online images typically involves:

- Data scraping: AI companies use automated tools, or "scrapers," to collect vast quantities of images and associated text from the web. This includes social media platforms, blogs, stock photo websites, and image-hosting sites.
- Dataset creation: The collected images and their metadata (tags, captions) are used to build massive training datasets, such as the LAION-5B set, which contains billions of images.
- Model training: AI systems analyze these datasets to learn patterns and relationships between images and their descriptions. This process teaches the AI to generate new images based on text prompts.

The calico is so fierce, but YouYou is easygoing; he wouldn't even care if I turned him into a half-human, half-cat :D
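The scraping step described above can be sketched with Python's standard library. This is a minimal illustration, not a production pipeline: the class name and sample HTML are assumptions, and real efforts at the scale behind LAION-5B process web-crawl dumps rather than single pages. It shows the core idea of pairing each image URL with its caption text (here, the `alt` attribute) and discarding uncaptioned images.

```python
from html.parser import HTMLParser

class ImageCaptionScraper(HTMLParser):
    """Collect (image URL, caption) pairs from <img> tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.pairs = []  # accumulated (url, caption) records

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            # Keep only images that carry caption text; uncaptioned
            # images are useless for text-to-image training.
            if a.get("src") and a.get("alt"):
                self.pairs.append((a["src"], a["alt"]))

# Illustrative input: one captioned image, one without a caption.
html = '<img src="/cat.jpg" alt="a calico cat"><img src="/x.png">'
scraper = ImageCaptionScraper()
scraper.feed(html)
print(scraper.pairs)  # only the captioned image survives
```

The resulting (URL, caption) records are what the dataset-creation step then aggregates into training sets of image-text pairs.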
All replies:
• "Who cares! As long as it's not our Old Wei :D" -有個用戶名- ♀ 10/08/2025 09:08:48