OpenAI's Dangerous Text-Generating AI Goes Live For Potential New Breed Of Fake News

Deepfake photos and videos have fooled many people, but can machine-generated articles be just as convincing? OpenAI recently published the full version of “GPT-2: 1.5B”, a text-generating artificial intelligence system, and the research lab warns that the tool could be dangerous if it is misused.

OpenAI released a pared-down version of its GPT-2 model this past winter. That model included 124 million parameters, while the latest release features more than 1.5 billion. OpenAI originally believed that its GPT-2 model was too dangerous to release in full, but there has since been “no strong evidence of misuse.” The full version was released earlier this week.

Researchers used more than 8 million text documents to train the AI system. The system can now generate articles and other documents from short prompts: give it a headline, for example, and it will write a matching news article.
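To make that prompt-to-article workflow concrete, here is a minimal sketch in Python. It assumes the Hugging Face transformers library and the public 1.5B-parameter checkpoint hosted under the model id "gpt2-xl"; those hosting details are illustrative assumptions, not part of OpenAI's announcement.

```python
# Minimal sketch: turn a headline prompt into article-style text.
# Assumes the Hugging Face `transformers` library and the public
# 1.5B-parameter GPT-2 checkpoint, hosted under the id "gpt2-xl".
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-xl")

headline = "Scientists Discover Unicorns Living in a Remote Valley"
outputs = generator(
    headline,
    max_length=200,          # total length in tokens, prompt included
    do_sample=True,          # sample instead of greedy decoding
    top_k=50,                # sample only from the 50 likeliest tokens
    temperature=0.8,         # below 1.0 sharpens the distribution slightly
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```

Sampling settings such as top_k and temperature control how adventurous the continuation is; greedy decoding tends to loop, which is one reason long machine-generated passages can drift.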

Sample produced by GPT-2

The articles produced by GPT-2 are rather convincing. Cornell University conducted a test in which participants read and rated articles generated by GPT-2, and the documents received a credibility score of 6.9 out of 10. Users have also noted that GPT-2 often struggles to write long, coherent articles; one user observed that the system was unable to keep track of the names of the people who were the subjects of a generated article.

Unfortunately, automated systems are not always able to detect and eliminate fake articles. OpenAI has created its own detection model, which it says identifies GPT-2 output with roughly 95% accuracy. The lab argues that this figure is too low for standalone detection and that “metadata-based approaches, human judgment, and public education” are also required.
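OpenAI's detector is a classifier fine-tuned to label text as human- or machine-written. Below is a minimal sketch of scoring a passage with it, assuming the RoBERTa-based checkpoint mirrored on Hugging Face under the id "roberta-base-openai-detector"; the model id and the label strings are assumptions, so consult the model card before relying on them.

```python
# Minimal sketch: score a passage as human- or machine-written.
# Assumes OpenAI's RoBERTa-based GPT-2 output detector as mirrored on
# Hugging Face under the id "roberta-base-openai-detector".
# The id and the label strings below are assumptions; see the model card.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

passage = (
    "In a shocking finding, scientists discovered a herd of unicorns "
    "living in a remote, previously unexplored valley."
)
result = detector(passage)
print(result)  # e.g. [{"label": "Fake", "score": 0.98}] (labels vary by checkpoint)
```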

OpenAI warns that the tool could be abused with some “fine-tuning”. The Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) demonstrated that the system can be fine-tuned to create convincing propaganda for extremist groups. GPT-2 could also be used to produce fake news, spam, and impersonations of other writers. OpenAI further noted that language models like GPT-2 carry their own biases, and the lab is therefore looking for ways to more easily detect fake articles and to better understand those biases.

OpenAI’s publication coincides with Microsoft President Brad Smith’s discussion of the dangers of AI. Speaking at the 2019 Web Summit, Smith warned that AI could become a “formidable weapon” if it is not handled with caution, and he called on governments and tech industry leaders to implement protective measures.