Tech leaders including Google search chief Prabhakar Raghavan and Apple cofounder Steve Wozniak warn that ChatGPT-like programs keep making errors, and no one may notice


ChatGPT has stunned users with its potential in the months since its November launch. It has proved competent, if not flawless, at tasks from taking business school tests to writing a State of the Union speech as Elvis Presley, and within two months of launch it had over 100 million monthly active users—a milestone that took TikTok nine months and Instagram two and a half years to reach.

ChatGPT’s big splash has been followed by heightened A.I. interest among tech giants. Microsoft invested $10 billion in ChatGPT’s parent, OpenAI, in January, and shortly after announced an upgraded search engine and web browser incorporating ChatGPT’s technology. Google launched its own chatbot, Bard, and China’s Baidu announced that it would unveil its A.I.-powered “Ernie Bot” by March.

Now, however, major players in the tech industry are warning that these seemingly all-knowing bots can go wrong, too—and mistakes are beginning to pile up.

“This type of artificial intelligence we’re talking about can sometimes lead to something we call hallucination,” Google’s senior vice president and search engine head Prabhakar Raghavan told Welt am Sonntag, a German newspaper, on Saturday. He added that this “hallucination” could result in the technology yielding a “convincing but completely fictitious answer.”

Raghavan would know, after Google’s Bard stumbled last week. The question it was asked was simple enough—which telescope took the first pictures of a planet outside Earth’s solar system. Bard, which will open up to the public in a few weeks, got the answer wrong in Google’s promotional video, as Reuters first pointed out. Once the error came to light, the company’s shares tanked 9% during the trading day, erasing nearly $100 billion of its market value.

Apple cofounder Steve Wozniak also weighed in on the fallibility of A.I. bots on CNBC’s Squawk Box. Wozniak said that while he found ChatGPT “useful to humans as all computer technology,” he also warned of the shortcomings of tools like it.

“The trouble is it does good things for us, but it can make horrible mistakes by not knowing what humanness is,” Wozniak said last Friday. He admitted to being skeptical about technology that closely resembled human abilities, but still thought ChatGPT was impressive.

For his part, billionaire entrepreneur Mark Cuban has described the generative A.I. technology behind ChatGPT as “the real deal,” even though its development has only just begun. Even so, there’s a lot we don’t know about how these technologies could shape our future, according to Cuban. In December, he said that, over time, the decision-making of chatbot-like technologies could become hard to curb or make sense of.

“Once these things start taking on a life of their own…the machine itself will have an influence, and it will be difficult for us to define why and how the machine makes the decisions it makes, and who controls the machine,” Cuban said in an episode of Jon Stewart’s podcast, The Problem with Jon Stewart. He added that misinformation will only get worse as A.I. capabilities improve.

Representatives at Google, Microsoft and OpenAI did not immediately respond to Fortune’s request for comment sent outside their regular operating hours.


