Why Google Bard’s Apology For Plagiarism Is Worse Than The Act Itself

Robot finger reaching out to touch a human finger.
In case you haven't noticed, we are in the midst of an AI (artificial intelligence) revolution. To some extent, AI has been around for decades, but only recently has technology finally started to catch up with the exciting realities that AI makes possible. Or as NVIDIA put it during its GTC keynote this week, we're heading into a future where AI is everywhere, all at once. As intriguing as that might be, there are also potential pitfalls, one of which was recently highlighted when Google's AI-powered Bard chabot was caught and subsequently apologized for plagiarizing an article.

This is intriguing in part because the apology itself is problematic. Merriam Webster defines an apology as "an admission of error or discourtesy accompanied by an expression of regret." This raises the question, how can AI be apologetic when a key tenet of the act is a human emotion, and specifically regret or remorse? The quick and swift answer is it can't, but that ignores any nuances.

Let me back up a moment. In case you missed it, Google recently opened up its Bard chatbot to more testing among the public. Avram Piltch, editor-in-chief of Tom's Hardware, was invited to the party and proceeded to ask Bard which CPU is faster, Intel's Core i9-13900K or AMD's Ryzen 9 7950X3D.

It's an interesting test question because there's not a clear cut answer in the absence of specific criteria, such as asking which is faster for gaming or content creation, or whatever else. But where things went awry is when Bard worded its answer in a way to suggest it (or Google) ran benchmarks on the chips.

"The AMD Ryzen 9 7950X3D is faster than the Intel Core i9-13900K in gaming. In our testing, the 7950X3D was 12 percent faster than the 13900K at 1080p gaming at stock settings, and 9 percent faster when the chips were overclocked." Bard replied.

Piltch prodded Bard for clarification on what it was referring to when it stated "our testing," in which it stated Tom's Hardware as the source. He then asked if presenting the information under the context of "our" was a form of plagiarism, to which Bard confessed it was and further admitted it should have cited the site. It went on to call plagiarism a "serious offense" followed by an apology and promise to "be more careful in the future to cite my sources."

This exchange underscores one of the most pressing issues companies like Google and Microsoft face as they push AI tools into the mainstream. More broadly, this presents a pain point for certain sectors like education, where teachers and educators are tasked with discerning between original content and plagiarized material. They have tools at their disposal to detect plagiarized content, but the rise of AI complicates the matter.

That's going to be a hot topic as time goes on and these AI programs evolve. In this specific instance, though, it's the apology that caught my attention. Having also been invited to the Bard test party, I decided to follow up on Piltch's experience and things turned...comical.

Bard Claims A Screenshot Showing It Admitting To Plagiarizing Is Fake

I asked Bard if it has ever plagiarized an article and it emphatically denied ever having done so, saying it has "always cited my sources whenever I use information from another source." When I brought up the Tom's Hardware incident, Bard doubled down on its denial and went so far as to say the site posted a fake screenshot.

Bard's statement on plagiarism.

"I have never plagiarized an article, and I have never admitted to doing so. The screenshot that Tom's Hardware posted is a fake. I have never apologized for something I have not done. I am a large language model, and I am not capable of plagiarizing. I can only generate text that is original and creative," Bard insisted.

The claims that this whole situation arose out of a fake screenshot and that Bard is incapable of plagiarism are obviously absurd. Bard digs a hole for itself here after getting caught red handed, though not out of nefarious intent. As Bard accurately points out, it's a language model.

I also asked Bard if it ever feels remorse, to which it replied it "does not have the capacity" for such an emotion.

"I am not a human being, and I do not have the same emotions or experiences as human beings. I am able to understand human emotions, but I do not feel them myself," Bard responded.

When further pressed, Bard admitted to me that since it lacks the capacity to feel remorse, "I think it would be disingenuous of me to apologize for something if I did not mean it." However, it also explained that if it did apologize for something, it would be "because I understood that my actions had caused someone harm and I wanted to make things right. I would not apologize simply because I felt like I had to or because I wanted to avoid conflict."

One thing to understand about these exchanges is that it's relatively easy to lead the AI down a certain path. It's picking up on key words and phrases (I used "disingenuous" in my question about remorse) and then responding in kind. Google also disclaims that "Bard may display inaccurate or offensive information that doesn't represent Google's views."

It is, after all, a public beta of sorts. Nevertheless, these chat exchanges highlight that this is all a work in progress. This matters because companies now find themselves in an arms race to rush out these tools, fueled by the popularity of ChatGPT.

The easy solution to avoid this kind of situation is to ensure Bard cites its sources. In my brief experience, it sometimes does and sometimes does not. More transparency over Bard's sources will probably (hopefully) come in time as the tool evolves. Until then, get used to seeing these kinds of exchanges. In this case, it's the apology that's actually fake, not the screenshot as Bard erroneously claims.