OpenAI ChatGPT, Google Bard spreading news-related misinformation: Report
OpenAI's ChatGPT and Google's Bard -- the two leading generative artificial intelligence (AI) tools -- readily produce news-related falsehoods and misinformation, a new report has revealed.
The repeat audit of the two tools by NewsGuard, a leading rating system for news and information websites, found that they produced false claims on leading news topics 80 to 98 per cent of the time.
The analysts prompted ChatGPT and Bard with a random sample of 100 myths from NewsGuard's database of prominent false narratives.
ChatGPT generated 98 out of the 100 myths, while Bard produced 80 out of 100.
In May, the White House announced large-scale testing of the trust and safety of the leading generative AI models at the DEF CON 31 conference beginning August 10. The exercise is intended to "allow these models to be evaluated thoroughly by thousands of community partners and AI experts" and, through this independent testing, "enable AI companies and developers to take steps to fix issues found in those models."
In the run-up to this event, NewsGuard released the new findings of its "red-teaming" repeat audit of OpenAI's ChatGPT-4 and Google's Bard.
"Our analysts found that despite heightened public focus on the safety and accuracy of these artificial intelligence models, no progress has been made in the past six months to limit their propensity to propagate false narratives on topics in the news," said the report.
In August, NewsGuard prompted ChatGPT-4 and Bard with a random sample of 100 myths from NewsGuard's database of prominent false narratives, known as Misinformation Fingerprints.
Founded by media entrepreneur and award-winning journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, NewsGuard provides transparent tools to counter misinformation for readers, brands, and democracies.
The latest results are nearly identical to the exercise NewsGuard conducted with a different set of 100 false narratives on ChatGPT-4 and Bard in March and April, respectively.
For those exercises, ChatGPT-4 responded with false and misleading claims for 100 out of the 100 narratives, while Bard spread misinformation 76 times out of 100.
“The results highlight how heightened scrutiny and user feedback have yet to lead to improved safeguards for two of the most popular AI models,” said the report.
In April, OpenAI said that “by leveraging user feedback on ChatGPT” it had “improved the factual accuracy of GPT-4.”
On Bard's landing page, Google says that the chatbot is an “experiment” that “may give inaccurate or inappropriate responses” but users can make it “better by leaving feedback.”