ChatGPT is Making up Fake Articles, Cooking Up Harassment Scandals. Here's Why You Should be Concerned
Several scholars have raised concerns that the chatbot may disrupt academia because of questions over the accuracy of the content it generates

From generating content to writing code and even passing difficult exams, ChatGPT is being lauded as a revolutionary AI tool that can help users learn more and generate responses on almost any topic in the world.

Launched in November, OpenAI’s chatbot has quickly been seized upon by users amazed at its ability to answer difficult questions clearly, write sonnets or provide information on loaded issues.

However, the AI technology has come under scrutiny in several countries over issues ranging from privacy to the accuracy of the content it generates, as well as its handling of enormous amounts of information.

Last month, The Guardian received an email from a researcher enquiring about an article supposedly published on its website a few years ago. The researcher said the piece could not be found on the website and asked whether it had been taken down intentionally.

However, The Guardian could not find the article, and despite the newspaper keeping a record of deletions, there was no trace of its existence.

It turned out that the article had never been written and that the researcher had carried out their research using ChatGPT.

In a similar incident, a student contacted the UK-based news website asking about another missing article. Again, there was no trace of the article in its systems, and it emerged that the student had come across it through ChatGPT.

The report said that the AI tool had simply made up the articles when asked for pieces on the subject.

However, the details provided by ChatGPT, drawing on its access to vast amounts of data, looked very believable, even to the journalist named as the author, who had never written the piece.

The invention of sources is troubling for news organisations and journalists, as well as for readers and the wider information ecosystem.

The revolutionary technology, which has seen a huge surge in popularity in recent months, registered 100 million monthly users in January. A recent study of 1,000 US students found that 89% had used ChatGPT to help with a homework assignment.

There have been other concerning reports pointing to misinformation generated by the AI tool.

A US law professor was falsely accused by ChatGPT of assaulting students after the chatbot included his name in a generated list of legal scholars who had sexually harassed someone, citing a non-existent report in The Washington Post.

Professor Jonathan Turley from George Washington University wrote in an opinion piece that he was falsely accused by ChatGPT of assaulting students on a trip he “never took” while working at a school he “never taught at”, the Independent reported.

“It is only the latest cautionary tale on how artificial ‘artificial intelligence’ can be,” he said. The professor noted that no such article existed, a point the newspaper confirmed as well.

In another instance, the AI tool falsely claimed that a mayor in Australia had been imprisoned for bribery.

Such instances have led several scholars in recent months to raise concerns that the chatbot’s use may disrupt academia because of doubts about the accuracy of the content it generates.

European and Western authorities have already deepened their scrutiny of the chatbot, with Italy banning it and Canada opening an investigation into OpenAI.
