AI-Powered Search Engines Driving Concerns Over Accuracy, Bias
This article is part of a series on generative artificial intelligence, aimed at creating a foundational understanding of consumer attitudes on the emerging technology.
The introduction earlier this month of artificial intelligence chatbots from Microsoft Corp. and Alphabet Inc.’s Google, designed to integrate with traditional search engines and provide conversational responses to search queries, marked the latest step in the mainstream adoption of generative AI. Despite initial fanfare, including OpenAI’s ChatGPT reaching more than 100 million monthly users in just two months, most of the public has at least some concerns about AI, both in search engines and more broadly, according to a Morning Consult survey.
At Least 7 in 10 Americans Have AI-Related Concerns About Data Privacy, Foreign Influence, Misinformation
Accuracy, bias and misinformation in search results among most Americans’ concerns about AI systems
- Among all U.S. adults, 63% expressed at least some concern about the accuracy of results from search engines that use AI. More than 2 in 3 said they are concerned about the possibility of misinformation in search results, and about 3 in 5 are worried about bias.
- The public’s top concern regarding AI systems more broadly is personal data privacy, with 3 in 4 adults expressing at least some concern, including nearly half who say they are “very concerned” about the issue. Seven in 10 adults said they are worried about the spread of misinformation, and an equal share said they’re concerned about foreign powers’ potential use of AI against U.S. interests.
- The only area that did not draw concern from a majority of respondents was job loss in their own industry, though a 49% plurality still reported at least some concern about AI’s impact on their field. Meanwhile, 2 in 3 said they were concerned about job loss across all industries as a result of AI.
Early experiences with AI search unlikely to lessen users’ fears
In the first week that Microsoft’s AI-powered Bing search engine was available to a limited portion of the public, users reportedly gave the AI-produced answers a “thumbs up” 71% of the time, according to the company. That figure may help quell worries about result accuracy and misinformation, which rank among adults’ top concerns regarding search engines that use AI. Still, the fact that both Bing and Google’s chatbot service, Bard, displayed incorrect results during demos highlights why users may be worried.
Those who carried on conversations with the AI-powered Bing eventually ran into strange and often disconcerting responses, including threatening remarks. These experiences are unlikely to comfort users: 63% of adults said they worry that AI will encourage harmful behavior, and nearly 2 in 3 are concerned about AI applications’ potential to learn to function independently of humans.
Instances of conversational AI tools “hallucinating,” a term for an AI system responding in unpredictable ways or with convincing but made-up information, have led some people to believe the chatbots are sentient, though this isn’t the case. Microsoft announced it would introduce new limits on its Bing chatbot to prevent these kinds of potentially upsetting interactions.
The Feb. 17-19, 2023, survey was conducted among a representative sample of 2,205 U.S. adults, with an unweighted margin of error of plus or minus 2 percentage points.
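For readers curious how a figure like that is derived, below is a minimal sketch of the standard margin-of-error calculation for a simple random sample. The 95% confidence level (z = 1.96) and the conservative proportion p = 0.5 are assumptions for illustration; the article’s methodology note does not state them.

```python
import math

# Unweighted margin of error for a simple random sample.
# Assumptions (not stated in the article): 95% confidence level (z = 1.96)
# and the most conservative proportion p = 0.5.
n = 2205   # sample size reported in the survey
p = 0.5    # conservative proportion (assumption)
z = 1.96   # z-score for 95% confidence (assumption)

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"{margin_of_error:.1%}")  # ~2.1%, consistent with the reported ±2 percentage points
```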