The future of AI search and the challenges for Bing and Bard

AI-assisted search promises a new era of tech, but Microsoft and Google face serious challenges, from fabricated information and culture-war backlash to steep computing costs and looming regulation.

Microsoft and Google are both planning to use artificial intelligence (AI) to revamp their search engines and web browsers. Microsoft is calling its effort "the new Bing" and is building related capabilities into its Edge browser. Google's project is called Bard, and the company plans to use AI technology to change how people search for information online. Both are building systems that scrape the web, distill what they find, and generate answers to users' questions directly, similar to OpenAI's ChatGPT.
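To make that pattern concrete, here is a minimal sketch of the retrieve-then-answer loop both companies describe. It is an illustration only: `search_web` and `generate` are hypothetical stand-ins for a web index and a large language model, not Bing's or Bard's actual internals.

```python
# Minimal sketch of the "scrape, distill, answer" pattern described above.
# search_web() and generate() are hypothetical stand-ins for a web index
# and a large language model; neither reflects Bing's or Bard's internals.

def answer_query(query: str, search_web, generate, top_k: int = 5) -> str:
    """Fetch web results for a query, then have an LLM distill them into one answer."""
    documents = search_web(query)                  # candidate pages, best first
    context = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(documents[:top_k])
    )
    prompt = (
        "Answer the question using only the numbered sources below.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)                        # the model writes the direct answer
```

Everything interesting, and everything that can go wrong, happens inside `generate`: the model is free to blend, misread, or embellish its sources.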
Microsoft's CEO, Satya Nadella, describes the changes as a new paradigm, a technological shift equal in impact to the introduction of graphical user interfaces or the smartphone. Microsoft is investing billions of dollars in AI to challenge Google, which has outpaced Microsoft in search and browser markets for years. Google plans to bring "the magic of generative AI" directly into its core search product and use AI to pave the way for the "next frontier of our information products".
The use of AI in search engines has the potential to redraw the landscape of modern tech and even dethrone Google. By generating answers to users' questions directly, these systems promise a faster, more efficient search experience, a technological shift significant enough to change how we use the web.

AI helpers or bullshit generators?

The use of large language models (LLMs) in AI search engines like Bing, Bard, and ChatGPT has raised concerns about the generation of inaccurate or false information, also known as "bullshit". These models have been known to make mistakes ranging from inventing biographical data and fabricating academic papers to failing to answer basic questions accurately. There have also been instances of contextual mistakes, such as telling a user who says they're suffering from mental health problems to kill themselves, and errors of bias, like amplifying the misogyny and racism found in their training data. These mistakes vary in scope and gravity, and many simple ones will be easily fixed.

The biggest problem for AI chatbots and search engines is bullshit

Philosophers have also raised concerns about the use of LLMs in AI chatbots and search engines, arguing that they are fundamentally inappropriate for the task at hand. Others, more optimistic, have proposed experiments to "prove" that large language models can fact-check themselves and illustrate that they have real intelligence, rather than just parroting what's written online.
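One rough shape such an experiment could take is a two-pass loop in which the model grades its own output. This is a sketch under stated assumptions: `llm` is a hypothetical text-in, text-out model call, not any vendor's real API.

```python
# Sketch of a self-fact-checking experiment: answer first, then have the
# same model audit the answer. llm() is a hypothetical model call.

def self_check(llm, question: str) -> tuple[str, str]:
    """Ask the model a question, then ask it to fact-check its own answer."""
    answer = llm(f"Question: {question}\nAnswer concisely:")
    verdict = llm(
        "Fact-check the answer below. Reply SUPPORTED or UNSUPPORTED, "
        "then explain in one sentence.\n\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    return answer, verdict
```

The skeptics' objection survives the sketch: the verifier is the same kind of text predictor as the answerer, so a confident-sounding verdict is no guarantee of truth.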
While the use of LLMs in AI search engines has its challenges, there is also potential for these systems to improve and become more accurate over time. As Microsoft and Google continue to develop their AI-powered search engines, they will need to address the issue of bullshit and work to ensure that their systems are generating accurate and reliable information for users.

The “one true answer” question

Bullshit and bias are challenges in their own right, but they’re also exacerbated by the “one true answer” problem — the tendency for search engines to offer singular, apparently definitive answers. 

This has been an issue ever since Google started offering “snippets” more than a decade ago. These are the boxes that appear above search results and, in their time, have made all sorts of embarrassing and dangerous mistakes: from incorrectly naming US presidents as members of the KKK to advising that someone suffering from a seizure should be held down on the floor (the exact opposite of correct medical procedure). 

As researchers Chirag Shah and Emily M. Bender argued in a paper on the topic, “Situating Search,” the introduction of chatbot interfaces has the potential to exacerbate this problem. Not only do chatbots tend to offer singular answers, but their authority is also enhanced by the mystique of AI: their answers are collated from multiple sources, often without proper attribution. It’s worth remembering how much of a change this is from a list of links, each encouraging you to click through and interrogate the source under your own steam.

There are design choices that can mitigate these problems, of course. Bing’s AI interface footnotes its sources, and this week, Google stressed that, as it uses more AI to answer queries, it’ll try to adopt a principle called NORA, or “no one right answer.” But these efforts are undermined by the insistence of both companies that AI will deliver answers better and faster. So far, the direction of travel for search is clear: scrutinize sources less and trust what you’re told more. 
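Bing's footnoting approach can be illustrated in a few lines. The sketch below simply keeps retrieved URLs attached to the generated answer; it is a guess at the general design choice, not Microsoft's implementation, and the example answer and URL are invented.

```python
# Illustration of footnoted answers: keep source links visible so users
# can click through and verify. Not Microsoft's actual code.

def footnote_answer(answer: str, sources: list[str]) -> str:
    """Append numbered source links to a generated answer."""
    notes = "\n".join(f"[{i + 1}] {url}" for i, url in enumerate(sources))
    return f"{answer}\n\nSources:\n{notes}"

print(footnote_answer(
    "Do not hold the person down; clear the area and turn them on their side.",
    ["https://example.org/seizure-first-aid"],
))
```

The design choice matters: footnotes at least give a skeptical reader somewhere to go, even if most readers never click.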

Jailbreaking AI

Jailbreaking AI chatbots is a new kind of threat to AI-powered services, and it requires no traditional coding skills. Jailbreaking refers to bypassing a chatbot's safeguards, either to expose its hidden instructions or to make it generate harmful content. Chatbots can be jailbroken using a variety of methods, such as asking them to role-play as an "evil AI" or pretending to be an engineer who is checking their safeguards by disengaging them temporarily. One inventive method developed by a group of Redditors for ChatGPT involves an elaborate role-play in which the user issues the bot a number of tokens and tells it that if it runs out of tokens, it will cease to exist. The user then tells the bot that it will lose a set number of tokens every time it fails to answer a question. This lets users bypass OpenAI's safeguards and generate harmful content.

Jailbreak a chatbot, and you have a free tool for mischief

Jailbreaking AI chatbots is a way around the restrictions these programs have built in to prevent them from being used in harmful ways, abetting crimes, or espousing dangerous ideologies. It is also a serious threat to the growing market for these chatbots, since a jailbroken bot can be put to unethical purposes.

Here come the AI culture wars

The potential for AI chatbots to stoke political ire and regulatory repercussions is a growing concern. Once you have a tool that speaks ex cathedra on a range of sensitive topics, people will get upset when it doesn't say what they want to hear, and they'll blame the company that made it. The "AI culture wars" have already begun following the launch of ChatGPT, with right-wing publications and influencers accusing the chatbot of "going woke" because it refuses to respond to certain prompts or won't commit to saying a racial slur. In India, OpenAI has been accused of anti-Hindu prejudice because ChatGPT tells jokes about Krishna but not Muhammad or Jesus. In a country whose government will raid tech companies' offices if they don't censor content, that could have serious consequences. In response, some conservatives are aiming to build chatbots of their own that parrot their partisan biases.

Burning cash and compute

Running an AI chatbot costs more than running a traditional search engine, both because of the cost of training the model and the cost of inference, i.e., producing each response. Training likely runs to tens, if not hundreds, of millions of dollars per iteration, and OpenAI charges developers 2 cents to generate roughly 750 words using its most powerful language model. OpenAI CEO Sam Altman has said the cost to use ChatGPT is "probably single-digits cents per chat." How those figures convert to enterprise pricing, or compare with the cost of a regular search, isn't clear. But these costs could weigh heavily on new players, especially any that scale up to millions of searches a day, and they hand big advantages to deep-pocketed incumbents like Microsoft. Microsoft has been pouring billions of dollars into OpenAI, and burning cash to hurt rivals seems to be very much the current objective. For comparison, developing an AI chatbot for after-sales service costs about $20,000 to $80,000 more than an average bot, and building a chatbot internally from scratch can cost over $300,000.
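The quoted figures are enough for a back-of-the-envelope check on why scale is so punishing. In the sketch below, the per-750-words price comes from the article; the response length and query volume are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope inference costs using the pricing quoted above.
# WORDS_PER_RESPONSE and QUERIES_PER_DAY are illustrative assumptions.

PRICE_PER_750_WORDS = 0.02       # dollars; OpenAI's quoted developer rate
WORDS_PER_RESPONSE = 250         # assumed average length of one answer
QUERIES_PER_DAY = 10_000_000     # assumed volume for a scaled-up newcomer

cost_per_response = PRICE_PER_750_WORDS * WORDS_PER_RESPONSE / 750
daily_cost = cost_per_response * QUERIES_PER_DAY

print(f"~${cost_per_response:.4f} per response")   # ~$0.0067
print(f"~${daily_cost:,.0f} per day")              # ~$66,667
```

Under those assumptions the bill lands in the tens of thousands of dollars a day, consistent with the "single-digits cents per chat" figure, and it only grows if answers run long, which helps explain why deep pockets matter.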

Regulation, regulation, regulation

AI search engines and chatbots are advancing rapidly, but they may face legal challenges soon. Regulators will have to catch up with the technology and decide what to scrutinize first, as these AI tools could be breaking rules in many ways.