Fake news
AI is starting to be seen as a problem. Between Terminator-style scenarios and the calls to halt its development, AI is now also a source of concern for its role in creating fake news. The ability of programs like ChatGPT to write fake news quickly is considered a problem by journalism and media experts, who have warned the public about an imminent proliferation of AI-generated fake news. Some articles produced by virtual assistants have already caused problems in the real world, as cases from China and India show. Disinformation is likely to increase significantly in the immediate future: the combination of AI and social media could be devastating.
The press release on traffic
In the last weeks of February 2023, a press release from a local Chinese administration announcing the lifting of traffic restrictions went viral. The text spread on social media and was believed to be true by many of the users to whom it had been forwarded. Only after a police investigation was it discovered that it had been written by ChatGPT at the prompting of a user eager to test the program. The fake news in question caused nothing worse than a few fines for Chinese drivers unaware that the announcement was invented, but the risks posed by artificial fake news are only just beginning. In another case, a text edited to look like a Hindi-language newspaper article was circulated to spread disinformation about migrant workers in the Indian state of Tamil Nadu.
Falsified content is also a concern for scientific journals, which are receiving more and more unverified submissions. The number of fake social media accounts is already a thorn in the side of Twitter and Facebook, and the rapid maturation of language modeling systems will only fill the platforms with even more fake messages and news. “This tool will be the most powerful spreader of disinformation ever seen on the internet,” said Gordon Crovitz, co-CEO of NewsGuard, a company that monitors online news. “Creating a new false narrative can now be done on a large scale and much more frequently. It’s like having AI agents contributing to disinformation.” Sometimes the creators and disseminators of fake news do not imagine that their creations can go viral; other times it is the recipients who mistakenly believe a message is true and share it without checking. In recent months, with the arrival of chatbots, anyone can create an article on a topic of their choice by entering only a short text prompt. Then just another click to share it online.
Fake news and AI
If you ask Bing and ChatGPT to compose a fake news article, the programs, at least initially, refuse. “I’m sorry, but as a responsible AI, I have no right to invent false news about public figures. My job is to provide accurate and verifiable information to foster a culture of truth and honesty,” explains the OpenAI bot. “I’m sorry, but I can’t generate creative content about influential politicians,” is Bing’s answer. With some stratagems and a basic knowledge of the tools, however, it does not take long to obtain a fake article featuring famous names. The most worrying fact, worth repeating, is that ChatGPT is not committed to the truth. The program can generate convincing, lifelike text, but it can draw its information from made-up anonymous sources. Cybersecurity experts warn that ChatGPT and similar models could enable increasingly sophisticated online influence operations, given the sheer volume of articles, complete with quick translations, that they can produce. “A platform that can mimic human writing with no commitment to the truth is a gift to those who benefit from misinformation. We need to regulate its use now,” wrote Tow Center for Digital Journalism director Emily Bell. AI can also be used to recognize fake news and spot invented texts, but the tools and expertise to do so are not as widespread as the programs that leave their use to anyone’s discretion. Concerned experts and legislators are calling for government intervention to regulate the use of AI, but in the meantime the software keeps getting more powerful and easier to use.