What the launch of Google’s ‘Apprentice Bard’ reminds us and how to fix it.

It’s been more than a few weeks since Google’s Bard returned an incorrect output during a demonstration of its capabilities. That doesn’t mean tools like this have no place being integrated into society – it means there is still work to be done to realize their potential.

These systems are classified as weak (or narrow) AI: they can only perform as well as their training data and learning model allow. What we are seeing with some of these tools is that when the data is outdated or simply doesn’t exist, they return incorrect answers.

When you look at the FAQs for Microsoft’s AI-driven search engine Bing, Microsoft clearly acknowledges this, stating: “Use your own judgment and double check the facts before making decisions or taking action based on Bing’s responses.” Recently, Microsoft’s AI-powered search had its own issues when the AI took on a personality that alarmed some users. Microsoft responded by limiting the number of queries allowed per day.

As Big Tech embraces and integrates AI-powered search, we think of the famous quote ‘with great power comes great responsibility.’ Google and other search providers are where they are today because of their ease of use, level of accuracy, and relevance of results. So we know accuracy and relevance matter.

That’s not to say that the big search engines are pillars of accuracy, but it’s one thing to search and find the wrong results – it’s another for an AI to generate and provide incorrect information in order to always have an answer.

We won’t go into a detailed technical discussion of how ChatGPT and Bard are built, but we can say that ChatGPT and the large language model it is based on, GPT-3.5, differ from Bard and its large language model roots, LaMDA. GPT-3.5 was pre-trained on broad text and then fine-tuned, while LaMDA was trained specifically on dialogue, which is why Bard’s responses feel more human-like. ChatGPT also has a knowledge cutoff: as OpenAI states on its website, its knowledge base only extends through 2021.

Some of the challenges with these models are:

  • Data issues: preprocessing errors, missing values, or data that simply isn’t available
  • Testing issues: not selecting a representative sample of data for testing
  • Model issues: underfitting or overfitting the model
  • Lack of governance

These challenges are not insurmountable, but it’s up to the team developing these models to have a structure that addresses some of these potential challenges and issues.
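To make the underfitting/overfitting challenge above concrete, here is a minimal sketch using toy polynomial regression. The dataset, polynomial degrees, and train/test split are all illustrative choices of ours – they have nothing to do with how Bard or ChatGPT are actually trained – but the pattern is the same: an underfit model scores poorly everywhere, while an overfit model scores far better on the data it has seen than on data it hasn’t.

```python
import numpy as np

# Toy dataset: noisy samples of a sine curve (a hypothetical stand-in for real data).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 30))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, 30)

# Interleaved split: even-indexed points train the model, odd-indexed ones test it.
x_train, y_train = x[0::2], y[0::2]
x_test, y_test = x[1::2], y[1::2]

def mse(coeffs, xs, ys):
    """Mean squared error of a fitted polynomial on (xs, ys)."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

# Degree 1 underfits, degree 3 is about right, degree 9 overfits this data.
errors = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    errors[degree] = (mse(coeffs, x_train, y_train), mse(coeffs, x_test, y_test))
    print(f"degree {degree}: train={errors[degree][0]:.4f}, test={errors[degree][1]:.4f}")
```

The telltale sign of overfitting is the gap: the degree-9 model’s training error is tiny, but its error on held-out points is much larger. A team with a structure for catching this would always evaluate on data the model never saw.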

This technology is ever-evolving, and Google and Microsoft will continue to innovate, update and improve on current offerings – it’s in their business interests. These companies are investing heavily in conversational AI, and as a result, users can expect more conversational responses from search engines – instead of just straightforward search results. Google’s CEO, Sundar Pichai, stated that these models will be available in the coming weeks or months and serve as a “companion to search.”
Conversational AIs aren’t exactly new; we’ve been using them for years. Don’t believe us? When was the last time you asked Siri, Alexa, or Google Assistant to change a song or answer a question?

In conclusion, these conversational AIs are here to stay, and they have the potential to make a huge impact on existing technology, processes, and user activities – improving results and providing a more ‘human’ experience. We are excited to see what the future holds.

Taryn Talley

March 31, 2023
