
Article

19 Jun 2023

Author:
Drew Harwell, The Washington Post

Generative AI complicates accountability for online child sexual exploitation

"AI-generated child sex images spawn new nightmare for the web", 19 June 2023

...Generative-AI tools have set off what one analyst called a “predatory arms race” on pedophile forums because they can create within seconds realistic images of children performing sex acts, commonly known as child pornography.

Thousands of AI-generated child-sex images have been found on forums across the dark web, a layer of the internet visible only with special browsers, with some participants sharing detailed guides for how other pedophiles can make their own creations...

...The flood of images could confound the central tracking system built to block such material from the web because it is designed only to catch known images of abuse, not detect newly generated ones. It also threatens to overwhelm law enforcement officials who work to identify victimized children and will be forced to spend time determining whether the images are real or fake...
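The tracking system described above works by comparing images against a database of hashes of previously identified abuse material. The sketch below is a minimal illustration of that general idea, not the actual system (which uses proprietary perceptual hashes such as PhotoDNA); it assumes the open-source `imagehash` library and a hypothetical hash database, and shows why a newly generated image produces no match.

```python
# Minimal sketch of hash-based matching against a database of *known* images,
# the general approach the article describes. Real deployments use proprietary
# perceptual hashes (e.g. PhotoDNA); this uses the open-source `imagehash`
# library and an invented database purely to illustrate the concept.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously catalogued images.
KNOWN_HASHES = {
    imagehash.hex_to_hash("ffd8e0c0a0908880"),
}

MAX_DISTANCE = 5  # Hamming-distance threshold for treating two images as near-duplicates.

def matches_known_image(path: str) -> bool:
    """Return True if the image at `path` is a near-duplicate of a known image."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

# A newly AI-generated image has no counterpart in KNOWN_HASHES, so this check
# returns False -- which is why novel synthetic images can slip past systems
# built only to recognise previously identified material.
```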

...The models, such as DALL-E, Midjourney and Stable Diffusion, were fed billions of images taken from the internet, many of which showed real children and came from photo sites and personal blogs. They then mimic those visual patterns to create their own images...

...It’s not always clear from the pedophile forums how the AI-generated images were made. But child-safety experts said many appeared to have relied on open-source tools, such as Stable Diffusion, which can be run in an unrestricted and unpoliced way.

Stability AI, which runs Stable Diffusion, said in a statement that it bans the creation of child sex-abuse images, assists law enforcement investigations into “illegal or malicious” uses and has removed explicit material from its training data, reducing the “ability for bad actors to generate obscene content.”..

...Stable Diffusion’s main competitors, DALL-E and Midjourney, ban sexual content and are not open source, meaning that their use is limited to company-run channels and all images are recorded and tracked. OpenAI, the San Francisco research lab behind DALL-E and ChatGPT, employs human monitors to enforce its rules, including a ban against child sexual abuse material, and has removed explicit content from its image generator’s training data so as to minimize its “exposure to these concepts,” a spokesperson said...