AI Startup Deletes Entire Website After Researcher Finds Something Disgusting There

A South Korean platform called GenNomis recently found itself at the center of a scandal that drew widespread attention for all the wrong reasons. The site, which offered a controversial "nudify" application, was abruptly taken offline after a cybersecurity researcher uncovered a staggering number of AI-generated pornographic images stored in an unsecured database. Alarmingly, these explicit images included depictions of celebrities, politicians, everyday women, and even minors.

The researcher, Jeremiah Fowler, who stumbled upon the troubling collection, quickly alerted GenNomis and its parent company, AI-Nomis, to the security flaw. Following his notification, the database was secured against public access. In a strange twist, however, both GenNomis and AI-Nomis vanished from the internet entirely shortly after Wired reached out for comment on the situation.

Sadly, GenNomis is just one of numerous AI startups building tools for creating fake pornographic material. This disturbing trend is driven by the spread of generative AI technology; the resulting images and videos are often called "deepfakes" for their uncanny realism. Their proliferation online poses serious dangers, particularly for women, who are disproportionately targeted by such content.

The ramifications of deepfake pornography are extensive, encompassing privacy violations, extortion, and even the creation of child sexual abuse material. In South Korea, GenNomis' home country, women make up the majority of deepfake porn victims, with a reported 53 percent affected by this malicious activity.

As generative AI continues to advance, the exploitation of women through deepfake pornography has only intensified. This concerning trend coincides with a rise in sexist attitudes and gender-based violence in South Korea, fueled by misguided notions that blame feminism for various societal challenges.

There is a growing chorus of calls for stricter regulation of generative AI, yet the industry largely polices itself. China has moved proactively to require labeling of AI-generated content, and lawmakers in Western countries are beginning to criminalize deepfake pornography. In the United States, however, the relevant laws and penalties still differ significantly from state to state.

For the many women who have been affected by companies like GenNomis, these regulatory initiatives may come too late. The pressing need to address the ethical challenges posed by AI technology has never been clearer.