Google's Bard AI Slammed As A Pathological Liar And Worse Than Useless In Scathing Report

Several Google employees who tested the company's Bard AI tool prior to its release raised serious concerns with the search giant, with one person calling it a "pathological liar" and another deeming it "worse than useless," according to a concerning report. The employees even warned that Bard's advice could cause serious harm, including death.

The folks at Bloomberg said they reviewed screenshots of internal discussions at Google regarding Bard. Some of the reported remarks are less than flattering, such as calling the AI tool "cringe-worthy" and saying that Bard's advice on how to land a plane could lead to a crash.

"Bard is worse than useless: please do not launch," an employee reportedly wrote in an internal message group last February. Apparently, many of the 7,000 people who viewed the note agreed with the assessment that Bard's answers during testing were often very wrong.

Google ended up releasing Bard to the public anyway, as it likely felt pressure from OpenAI's ChatGPT tool, which Microsoft is testing in its Bing search engine. Bard is not based on ChatGPT and instead is built around LaMDA (Language Model for Dialogue Applications), a conversational AI model that learns by 'reading' trillions of words.

What we're seeing unfold is an AI arms race of sorts, and there's a lot at stake. Shares of Google's parent company, Alphabet, dropped this week after reports surfaced that Samsung may switch its Galaxy phones over to Microsoft's Bing search. That would be a costly loss; some estimates peg the current search deal between Samsung and Google at $3 billion.

The chatbots that have surfaced in recent weeks and months offer only a glimpse of the larger AI picture, but they can and do help shape public opinion. Their rollout also brings ethical and safety concerns to the broader AI discussion.

"AI ethics has taken a back seat," Meredith Whittaker, president of the Signal Foundation and a former Google manager, told Bloomberg. "If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work."

In a statement provided to the outlet, Google spokesperson Brian Gabriel indicated that responsible AI remains a priority, noting that Google continues to "invest in teams that work on applying our AI principles to our technology."

Even so, Google's AI governance boss Jen Gennai reportedly fielded a risk evaluation submitted by employees who stated that Bard was not yet ready for release because of the harm it could cause. Despite the issues raised, higher-ups at Google decided to launch Bard anyway under the "experimental" banner with "appropriate disclaimers."

Screenshot of a chat session with Google's Bard chatbot.

Out of curiosity, I pinged Bard about the scathing report and asked for its reaction. You can read the response above, in which Bard expresses disappointment while acknowledging that some of its responses are inaccurate or misleading. In a follow-up query, it acknowledged concerns that its advice could cause harm.

"I understand that some of my advice on topics such as how to land a plane and answers about scuba diving could lead to injury or death. I am still under development, and I am always learning," Bard stated.

Bard did a little better when I asked why the Boston Bruins lost to the Florida Panthers last night. It accurately highlighted too many turnovers and the absence of Bruins captain Patrice Bergeron, but incorrectly stated the Bruins went 0-for-4 on power play opportunities (the team actually went 1-for-4, while the Panthers went 0-for-3).