Machines That Think

Over 7,000 people read Part 1 of this little diatribe. I didn’t do much to pimp it on social media, but someone did on my behalf, and it kinda blew up. I’m gratified, but not surprised, that it struck a chord. I think we’re all aware that technology isn’t being optimized for the benefit of the consumer much anymore. Human decisions, influenced by the need to create shareholder value, have a general tendency to evolve toward inhumane interfaces.

So what if we took the human out of the equation? The apparent rapid evolution of “AI” in 2022 has the press, once again, predicting that machines will be able to replace us in fields ranging from software development to the creation of art. And indeed, there have been interesting outputs to point to recently…

ChatGPT explains why you should eat glass

ChatGPT took the world by storm with natural-sounding responses to human prompts, apparently indistinguishable from human-created prose. That the responses were often nonsense wasn’t immediately noted, because they sounded good. We’ve been creating machines that can simulate human intelligence for so long that there’s a common test for it, named after pioneering computer scientist Alan Turing. And computers have been fooling people almost since that test was created! I remember “chatting” with Eliza on my Atari 800XL — and at the time, finding it very convincing. ChatGPT is obviously miles ahead of Eliza, but it’s really the same trick: a computer, provided with a variety of human responses, can be trained to assemble those responses in very natural ways. Whether those responses are useful or not is a separate matter that doesn’t seem to enter the discussion about whether or not an AI is about to replace us.
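To make that concrete, here’s a toy sketch of the Eliza trick in Python. The patterns and canned replies are invented for this post, not taken from the original program, but the mechanism is faithful to the idea: match the user’s words against a hand-written pattern and reflect them back in a template. No understanding anywhere, just pattern matching.

```python
import random
import re

# Hand-written patterns and reply templates (made up for this example).
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
]

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            # Reassemble the user's own words into a canned template.
            return random.choice(replies).format(*match.groups())
    return "Please tell me more."

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

A few dozen rules like these were enough to convince people in the 1960s (and me, on my Atari) that the machine was listening.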

An example closer to home for me is the notion that AI will soon be doing software development. Last year, Microsoft’s GitHub released a feature called Copilot (“Your AI pair programmer”) which, combined with marketing buzz about “no code/low code” software platforms, has an ill-informed press convinced that soon computers will program themselves! Certainly it makes for interesting copy, but Copilot is really the same trick as ChatGPT: given enough human-created training data, a machine learning algorithm can select matches that are often a natural fit for a prompt. When the computer scores a good match, we’re amazed — this machine is thinking for me! When it produces nonsense, we attribute it to a mistake in reasoning, just like a human might make! Only it’s not a mistake in reasoning; it’s a complete inability to reason at all.
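Here’s an equally crude sketch of that idea, reduced to absurdity: score the prompt against a tiny, made-up collection of “training” snippets and return the closest match. The real Copilot uses a large language model rather than literal lookup, of course, but the essential point is the same: the output is assembled from patterns in human-written training data, not from reasoning.

```python
from difflib import SequenceMatcher

# A tiny, invented "training set" of human-written snippets.
TRAINING_SNIPPETS = {
    "def fibonacci(n):": "def fibonacci(n):\n    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)",
    "def is_prime(n):": "def is_prime(n):\n    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))",
}

def suggest(prompt: str) -> str:
    # Pick whichever stored snippet looks most like the prompt.
    best = max(TRAINING_SNIPPETS, key=lambda key: SequenceMatcher(None, prompt, key).ratio())
    return TRAINING_SNIPPETS[best]

print(suggest("def fib(n):"))  # returns the memorized fibonacci snippet
```

When the match is good, it feels like magic. When it isn’t, you get confident-looking code that has nothing to do with your problem.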

Where AI impresses me is in the abstract: art sometimes reflects reason, sometimes obscures its reason, and sometimes has no reason. Part of the beauty of beholding art is the mystery — or the story that reveals its mystery. It’s the reason Mona Lisa’s smile made her so famous. Art doesn’t need to explain itself. When we look at the output of AI this way, I can be more generous. AI can be a tool that helps generate art. With enough guidance, it can generate art. But AI does not replace artists, or the human desire to create art. AI-created works can be appreciated just as human-created art can, with no threat to the experience of the beholder. (And, it should be noted, no real threat to the human artist. We’ve yet to see the spontaneous creation of AI-generated art — it’s all in response to prompts by humans, drawing on training data provided by humans!)

Civil War Muppet images generated by AI
Civil War Muppets, as imagined by AI

AI-created user experiences, however, have proven to be fraught with danger. YouTube’s recommendation algorithm easily becomes a rabbit hole of misinformation. Tesla’s “Full Self Driving Beta” has a disturbing tendency to come to a complete stop in traffic for unexplainable (or at least unexplained) reasons. There are domains of AI where human input is not just desirable but required.

In the same way that the rush to add WiFi to a dishwasher will probably only shorten the useful life of an expensive household appliance, or that adding an online component to things that don’t need to be online will weaken the security posture and user experience those things offer, buying into the hype that AI is going to magically do things better than humans is foolhardy (and ill-informed).

Tron images generated by AI

Someone asked an AI to imagine what Jodorowsky’s interpretation of Tron might look like. The results are stunning! I’d pay to see that movie — even if it included bizarre AI-generated dialog, like this script. And if GitHub Copilot can make it easier to wrap my head around a gnarly recursive algorithm, I’ll probably try that out. But these things depend on, and re-use, human training input — and remain light-years removed from a General AI that can replace human thought. Anyone who reports otherwise has never had to train a machine learning algorithm.

Teaching an AI is not like teaching a baby. Both get exposed to massive amounts of training data, which their neural networks have to process and find the right occasion to play back, but only a human brain will eventually produce output that is more than just a synthesis or selection of previous inputs; it will produce novel output — feelings, and faith, and spontaneous creation. That those by-products of frail and unique humanity can be manipulated algorithmically should indicate to us that the danger of AI is not in the risk that it might replace us, supplant us, or conquer us. The danger is that humans may use these tools to hurt other humans — either accidentally, or with malicious intent — as we have done with every other tool we’ve invented.

We aren’t there yet, but maybe one day we will harness machine learning the way we harnessed the atom — or, as a more recent example, the Internet. Will we continue pretending AI is some kind of magic we can’t possibly understand, surrendering to the algorithm and accepting proclamations of our forthcoming replacement? Or will we peer into the black box, think carefully about how it’s used, and insist that it become a more humane — a more human — creation?
