Usefully Wrong – The Problem with Generative AI

For the past decade, the tech world has been on a desperate search for the “next big thing.” PCs, the web, smartphones, and the Cloud have all sailed past their hype curves and settled into commodities; new technology is needed to excite the consumer and liberate that sweet, sweet ARR.

For a while, we thought maybe it was Augmented Reality — but Google only succeeded in making “Glassholes,” and Microsoft’s HoloLens was too clunky to change the world. Then we had 2022’s simultaneous onslaught of “metaverse” and “crypto”, both co-opted terms leveraged to describe realities that proved entirely underwhelming: crypto crashed, and the metaverse was just Mark Zuckerberg’s latest attempt at relevance under a veneer of Virtual Reality. (Hey Mark, the 90s called and wanted you to know that VR headsets sucked then, and still suck now!)

But 2023 brings a new chance for a dystopian future ruled by technology ripe to abuse the average user. That’s right, Chat is back, and this time it’s with an algorithm prone to hallucinations!

The fact is, we couldn’t be better primed to accept convincing replies from a text-spouting robot that can’t tell fact from fiction: we’ve been consuming this kind of truthiness from our news media for the past 15 years! And this tech trend seems so great that two of the biggest companies are pivoting themselves around it…

Microsoft, while laying off thousands of employees from unrelated efforts, is spending billions with OpenAI to embed ChatGPT in all their major platforms. Bing always wanted to be an “answers engine” instead of a search engine; now it can give “usefully wrong” answers in full sentences! Developers can subscribe to OpenAI access right from their Cloud developer portal. Teams (that unholy union of Skype and SharePoint) can leverage AI to listen to your meetings and helpfully summarize them. And who wouldn’t want a robot to write your next TPS Report for you in Word, or spruce up your PowerPoints?

I have to prove I’m a human before I’m allowed to talk to an AI. Does that count as irony?

Google, which had been more cautious and thoughtful in its approach, is now going full steam ahead trying to catch up. Google’s Assistant — already bordering on invasive and creepy — has been reorganized around Bard, its less-convincing chat AI that still manages to be confidently incorrect with startling frequency.

The desperation is frankly palpable: the tech world needs another hit, so ready or not, Large Language Models (LLMs) are here!

That everyone on the inside is fully aware that this technology is not done baking is entirely lost on the breathless media and on a new generation of opportunistic start-ups looking to capitalize on a new wave of techno-salvation. GPT-4 really is impressive in its ability to form natural-sounding sentences, and most of the time it does a good job of drawing the correct answer out of its terabytes of training material. But there’s real risk here when we conflate token selection with intelligence. The AI is responding to itself as much as to the user, trying to pick the next best word to put into its reply — it’s not trying to pick a correct response, just one that sounds natural.
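
To make that concrete, here is a toy sketch in Python of what autoregressive generation boils down to. The three-word “model” and its hand-picked probabilities are entirely made up for illustration (this is not any real LLM or API): score the candidate next tokens given the text so far, pick a likely one, append it, and repeat. Nothing in the loop ever checks whether the output is true, only whether it is probable.

```python
import random

# A toy stand-in for a language model: given the context so far, return a
# score for each candidate next token. A real LLM computes these scores
# from billions of learned weights, but the generation loop is the same.
def next_token_scores(context):
    # Hypothetical, hard-coded probabilities -- purely for illustration.
    last = context[-1]
    if last == "The":
        return {"sky": 0.6, "moon": 0.3, "answer": 0.1}
    if last in ("sky", "moon", "answer"):
        return {"is": 0.9, "was": 0.1}
    if last in ("is", "was"):
        # "green." sounds nearly as natural to this model as "blue." --
        # nothing here knows or cares which one is true.
        return {"blue.": 0.7, "green.": 0.3}
    return {"<end>": 1.0}

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        scores = next_token_scores(tokens)
        choices, weights = zip(*scores.items())
        # Pick a token in proportion to its score: likely-sounding,
        # not verified-correct.
        token = random.choices(choices, weights=weights)[0]
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate(["The"]))  # e.g. "The sky is blue." -- or "The moon was green."
```

Scale that loop up to a vocabulary of tens of thousands of tokens and scores learned from the internet, and you get fluent, confident prose from a process that has no concept of accuracy.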

Like most technology, the problem is that the average user can’t tell when they’re being abused. YouTube users can’t tell when an algorithm is taking them down a dark path — they’re just playing the next recommended video. Facebook users can’t tell when they’re re-sharing a false narrative — they just respond to what appears in their feed. And the average ChatGPT user isn’t going to fact-check the convincing-sounding response from the all-intelligent robot. We’ve already been trained to accept the vomit that the benevolent mother-bird of technology force-feeds us, while we screech for more…

I’m not saying ChatGPT, Bard, and other generative AIs should go away — the genie is out of the bottle, and there’s nothing to be done about that. I’m saying that we shouldn’t approach this technological evolution with awe and wonder and ignorance, rushing to shove it into every user experience. We need to learn the lessons of the past few decades, carefully think through the unintended consequences of yet-another-algorithm in our lives, spend time iterating on its flaws, and above all, treat it not as some kind of magic but as a tool that, used intelligently, might help accelerate some of our work.

Neil Postman’s 1992 book “Technopoly” has the subtitle “The Surrender of Culture to Technology.” In it, he asserts that when we become subsumed by our tools, we are effectively ruled by them. LLMs are potentially useful tools (assuming they can be taught the importance of accuracy), but already we’re speaking of them as if they are a new form of intelligence — or even consciousness. A wise Jedi once said “the ability to speak does not make you intelligent.” The fact that not even the creators of ChatGPT can explain exactly how the model works doesn’t suggest an emergence of consciousness — it suggests we’re wielding a tool that we do not fully understand, and should thus exercise caution in its application.

When our kids were little, we enjoyed camping with them. They could play with and learn from all the camping tools and equipment except the contents of one red bag, which held a hatchet, a sharp knife, and a lighter; we called it the “Danger Bag” because it was understood that these tools needed extra care and consideration. LLMs are here. They’re interesting, they have the potential to help us and to impact the economy: already, new job titles like “Prompt Engineer” are being created to figure out how best to leverage the technology. Like any tool, we should harness it for good — but we should also build safeguards against its misuse. Since the best analogies we have to technology like this have proved harmful in ways we didn’t anticipate, perhaps ChatGPT should start in the “Danger Bag” and prove its way out from there…
