Forget crypto, Allison, the future is AI!
That's how I imagine some of you reacting to the bitcoin ETF yarn. You may be right. But, as my colleague Catherine Thorbecke writes, the shiny new thing in tech, artificial intelligence, has some pretty significant kinks to work out before it can take over the world.
One of those kinks: The bots are hallucinating.
OK, not hallucinating in the sense of people thinking they see Elvis in a bunny costume singing showtunes at the grocery store, or whatever. The kind of hallucinations tech researchers are referring to are known glitches in AI-powered tools like ChatGPT in which the bots just ... make stuff up.
"Confabulations" is how Meta's AI chief put it in a tweet. Other, more skeptical folks have called the chatbots "pathological liars."
But all of these descriptors stem from our human tendency to anthropomorphize machines, according to Suresh Venkatasubramanian, a professor at Brown University who helped write the White House's Blueprint for an AI Bill of Rights.
The reality is that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to "produce a plausible sounding answer" to user prompts, Venkatasubramanian said. "So, in that sense, any plausible-sounding answer, whether it's accurate or factual or made up or not, is a reasonable answer, and that's what it produces," he said. "There is no knowledge of truth there."
He had another human-like comparison: The way his 4-year-old would tell a story.
"You only have to say, 'And then what happened?' and he would just continue producing more stories," Venkatasubramanian said. "He would just go on and on."
Companies behind AI chatbots have put some guardrails in place that aim to prevent the worst of these hallucinations. But despite the global hype around generative AI, many in the field remain torn about whether chatbot hallucinations are even a solvable problem.
There have already been a number of high-profile hallucinations from AI tools.
- When Google first demoed Bard, its highly anticipated competitor to ChatGPT, the tool very publicly came up with a wrong answer to a question about the James Webb Space Telescope.
- A New York lawyer who used ChatGPT for legal research wound up submitting a brief that included six "bogus" cases that the chatbot appears to have made up.
- News outlet CNET was also forced to issue corrections after an article generated by an AI tool gave wildly inaccurate personal finance advice when the tool was asked to explain how compound interest works.
As Catherine writes, cracking down on AI hallucinations might make the bots less effective in other areas, like when people ask them to write poetry or song lyrics.
Bottom line: We can't trust the bots yet.
Even Sam Altman, CEO of ChatGPT maker OpenAI, recently quipped: "I probably trust the answers that come out of ChatGPT the least of anybody on Earth."
📰 RELATED: Disney, The New York Times and CNN are among a dozen major media companies that have inserted code into their websites that blocks OpenAI's web crawler, GPTBot, from scanning their platforms for content.
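For anyone wondering what that blocking "code" looks like: OpenAI says GPTBot follows the standard robots.txt rules, so a minimal sketch of the kind of rule a site could add (not the exact file any particular outlet uses) would be:

```
# robots.txt: asks OpenAI's GPTBot crawler to skip every page on this site
User-agent: GPTBot
Disallow: /
```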