Here’s what Google Search boss has to say on ChatGPT, AI chatbots – Times of India
AI can “hallucinate”
In a recent interview with Germany’s Welt am Sonntag newspaper (via news agency Reuters), Google Search head Prabhakar Raghavan pointed out the potential dangers of artificial intelligence (AI) in chatbots. He warned of “hallucination” in AI, wherein a system can give a convincing but completely made-up answer.
“This kind of artificial intelligence we’re talking about right now can sometimes lead to something we call hallucination. This then expresses itself in such a way that a machine provides a convincing but completely made-up answer,” he said.
Raghavan added that one of the fundamental tasks when working with AI is keeping such hallucinations to a minimum. The executive reportedly also said that while Google felt the urgency of launching Bard to the public, it also carries a great responsibility not to mislead people.
Google says AI comes with responsibility
This is not the first time Google has highlighted the potential dangers of AI. In a meeting in December, Google CEO Sundar Pichai and Jeff Dean, head of Google’s AI division, told employees that the cost of releasing a flawed AI chatbot is greater for a company like Google than for a startup like OpenAI, because people trust the answers they get from Google.
The executives also said that AI chatbots have problems such as bias and poor factuality, which make them unsuited to replace web search.
Google’s ‘reputational risk’
Google is a much larger company than OpenAI, and as per Dean, it carries far more “reputational risk” in providing wrong information. He said this is the reason the company is moving “more conservatively than a small startup.”
The AI “can make stuff up. If they’re not really sure about something, they’ll just tell you, you know elephants are the animals that lay the largest eggs or whatever,” Dean was quoted as saying by CNBC.
“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date. But, it’s super important we get this right,” Dean added.
Google’s LaMDA
Google may have fallen behind in releasing an AI-powered chatbot, but the comments made by its top executives lend some credibility to the company’s cautious approach.
The company has been working on AI for some time now. Google’s recently launched conversational AI chatbot, Bard, uses the Language Model for Dialogue Applications (LaMDA). It is the same model that powered the company’s earlier chatbot, which a Google engineer claimed had become sentient – that is, capable of feeling things.
The engineer, who was associated with the development of the chatbot, claimed it had become so sophisticated that if someone didn’t know they were talking to an AI chatbot, they would think “it was a seven-year-old, eight-year-old kid that happens to know physics.”
Soon after these claims became public, Google placed the engineer on paid leave, saying that he had made a number of “aggressive” moves.