Usefully Wrong – The Problem with Generative AI

For the past decade, the tech world has been in a desperate search for the “next big thing.” PCs, the web, smartphones, and the Cloud have all sailed past their hype curves and settled into commodities; new technology is needed to excite the consumer and liberate that sweet, sweet ARR.

For a while, we thought maybe it was Augmented Reality — but Google only succeeded in making “Glassholes” and Microsoft’s HoloLens was too clunky to change the world. Then we had 2022’s simultaneous onslaught of “metaverse” and “crypto”, both co-opted terms leveraged to describe realities that proved to be entirely underwhelming: crypto crashed, and the metaverse was just Mark Zuckerberg’s latest attempt at relevance under a veneer of Virtual Reality. (Hey Mark, the 90s called and wanted you to know that VR headsets sucked then, and still suck now!)

But 2023 brings a new chance for a dystopian future ruled by technology ripe to abuse the average user. That’s right, Chat is back, and this time it’s with an algorithm prone to hallucinations!

The fact is, we couldn’t be better primed to accept convincing replies from a text-spouting robot that can’t tell fact from fiction: we’ve been consuming this kind of truthiness from our news media for the past 15 years! And this tech trend seems so great that two of the biggest companies are pivoting themselves around it…

Microsoft, while laying off thousands of employees from unrelated efforts, is spending billions with OpenAI to embed ChatGPT in all their major platforms. Bing always wanted to be an “answers engine” instead of a search engine; now it can give “usefully wrong” answers in full sentences! Developers can subscribe to OpenAI access right from their Cloud developer portal. Teams (that unholy union of Skype and SharePoint) can leverage AI to listen to your meetings and helpfully summarize them. And who wouldn’t want a robot to write your next TPS Report for you in Word, or spruce up your PowerPoints?

I have to prove I’m a human before I’m allowed to talk to an AI. Does that count as irony?

Google, who had been more cautious and thoughtful in their approach, is now full steam ahead trying to catch up. Google’s Assistant — already bordering on invasive and creepy — has been reorganized around Bard, their less-convincing chat AI that still manages to be confidently incorrect with startling frequency.

The desperation is frankly palpable: the tech world needs another hit, so ready or not, Large Language Models (LLMs) are here!

That everyone on the inside is fully aware this technology is not done baking is entirely lost on the breathless media, and on a new generation of opportunistic start-ups looking to capitalize on a new wave of techno-salvation. GPT-4 really is impressive in its ability to form natural-sounding sentences, and most of the time it does a good job of drawing the correct answer out of its terabytes of training material. But there’s real risk here when we conflate token selection with intelligence. The AI is responding to itself as much as to the user, trying to pick the next best word to put into its reply — it’s not trying to pick a correct response, just one that sounds natural.
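To make that concrete, here’s a toy sketch of next-token selection — emphatically not OpenAI’s actual code, just the shape of the idea, with the candidate words and scores invented for illustration:

```python
# Toy next-token selection: the model scores every candidate token and
# the reply is built by repeatedly sampling a high-scoring one.
# "Plausible" is what gets optimized here -- not "true".
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=0.8):
    """Turn raw scores into probabilities and sample one token index."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Hypothetical scores a model might assign to continuations of the
# prompt "The capital of Australia is":
candidates = ["Sydney", "Canberra", "Melbourne", "beautiful"]
logits = [2.4, 2.1, 1.3, 0.2]

print(candidates[sample_next_token(logits)])
```

In this made-up example, “Sydney” outscores “Canberra” simply because it’s the more common continuation in the imaginary training text — correctness never enters the calculation.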

Like most technology, the problem is that the average user can’t tell when they’re being abused. YouTube users can’t tell when an algorithm is taking them down a dark path — they’re just playing the next recommended video. Facebook users can’t tell when they’re re-sharing a false narrative — they just respond to what appears in their feed. And the average ChatGPT user isn’t going to fact-check the convincing-sounding response from the all-intelligent robot. We’ve already been trained to accept the vomit that the benevolent mother-bird of technology force-feeds us, while we screech for more…

I’m not saying ChatGPT, Bard, and other generative AI should go away — the genie is out of the bottle, so there’s nothing that can be done about that. I’m saying that we shouldn’t approach this technology evolution with awe, wonder, and ignorance, rushing to shove it into every user experience. We need to learn the lessons of the past few decades, carefully think through the unintended consequences of yet another algorithm in our lives, spend time iterating on its flaws, and above all treat it not as some kind of magic, but as a tool that, used intelligently, might help accelerate some of our work.

Neil Postman’s 1992 book “Technopoly” has the subtitle “The Surrender of Culture to Technology.” In it, he asserts that when we become subsumed by our tools, we are effectively ruled by them. LLMs are potentially useful tools (assuming they can be taught the importance of accuracy), but already we’re speaking of them as if they are a new form of intelligence — or even consciousness. A wise Jedi once said “the ability to speak does not make you intelligent.” The fact that not even the creators of ChatGPT can explain exactly how the model works doesn’t suggest an emergence of consciousness — it suggests we’re wielding a tool that we do not fully understand, and should thus exercise caution in its application.

When our kids were little, we enjoyed camping with them. They could play with and learn from all the camping tools and equipment except the contents of one red bag, which contained a hatchet, a sharp knife, and a lighter; we called it the “Danger Bag” because it was understood that these tools needed extra care and consideration. LLMs are here. They’re interesting, they have the potential to help us, and to impact the economy: already new job titles like “Prompt Engineer” are being created to figure out how to best leverage the technology. Like any tool, we should harness it for good — but we should also build safeguards around its misuse. Since the best analogies we have to technology like this have proved harmful in ways we didn’t anticipate, perhaps ChatGPT should start in the “Danger Bag” and prove its way out from there…

Machines That Think

Over 7000 people read Part 1 of this little diatribe. I didn’t do much to pimp it on social media, but someone did on my behalf, and it kinda blew up. I’m gratified, but not surprised, that it struck a chord. I think we’re all aware that technology is not being optimized for the benefit of the consumer much anymore. Human decisions, influenced by a need to create shareholder value, have a general tendency to evolve toward inhumane interfaces.

So what if we took the human out of the equation? The apparent rapid evolution of “AI” in 2022 has the press, once again, predicting that machines will be able to replace us in fields ranging from software development to the creation of art. And indeed, there have been interesting outputs to point to recently…

ChatGPT explains why you should eat glass

ChatGPT took the world by storm with natural-sounding responses to human prompts, apparently indistinguishable from human-created prose. That the responses were often nonsense wasn’t immediately noted, because they sounded good. We’ve been creating machines that can simulate human intelligence for so long that there’s a common test for it, named after pioneering computer scientist Alan Turing. And computers have been fooling people almost since that test was created! I remember “chatting” with Eliza on my Atari 800XL — and at the time, finding it very convincing. ChatGPT is obviously miles ahead of Eliza, but it’s really the same trick: a computer, provided with a variety of human responses, can be trained to assemble those responses in very natural ways. Whether those responses are useful or not is a separate matter that doesn’t seem to enter the discussion about whether or not an AI is about to replace us.
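For the curious, here’s roughly the Eliza trick in a dozen lines — a handful of regular expressions and canned templates (these patterns are my own invention for illustration, not Weizenbaum’s original script):

```python
# A minimal Eliza-style responder: pattern matching and fill-in-the-blank
# templates, with no understanding whatsoever.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)\?", "What do you think?"),
]

def respond(text):
    text = text.lower().strip(".!")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by my computer?
```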

An example closer to home for me is the notion that AI will soon be doing software development. Last year, Microsoft released a feature to GitHub called Copilot (“Your AI pair programmer”) which, combined with marketing buzz about “no code/low code” software platforms, has an ill-informed press convinced that soon computers will program themselves! Certainly it makes for interesting copy, but Copilot is really the same trick as ChatGPT: given enough human-created training data, a machine learning algorithm can select matches that are often a natural fit for a prompt. When the computer scores a good match, we’re amazed — this machine is thinking for me! When it produces nonsense, we attribute it to a mistake in reasoning, just like a human might make! Only it’s not a mistake in reasoning; it’s a complete inability to reason at all.

Where AI impresses me is in the abstract: art sometimes reflects reason, sometimes obscures its reason, and sometimes has no reason. Part of the beauty of beholding art is the mystery — or the story that reveals its mystery. It’s the reason Mona Lisa’s smile made her so famous. Art doesn’t need to explain itself. When we look at the output of AI this way, I can be more generous. AI can be a tool that helps generate art. With enough guidance, it can generate art. But AI does not replace artists, or the human desire to create art. AI-created works can be appreciated as well as human-created art, with no threat to the experience of the beholder. (And, it should be noted, no real threat to the human artist. We’ve yet to see the spontaneous creation of AI-generated art — it’s all in response to prompts by humans, drawing on training data provided by humans!)

Civil War Muppets, as imagined by AI

AI-created user experiences, however, have proven to be fraught with danger. YouTube’s recommendation algorithm easily becomes a rabbit hole of misinformation. Tesla’s “Full Self Driving Beta” has a disturbing tendency to come to a complete stop in traffic for unexplainable (or at least unexplained) reasons. There are domains of AI where human input is not just desirable — it must be required.

In the same way that the rush to add WiFi to a dishwasher will probably only shorten the useful life of an expensive household appliance, or that adding an online component to things that don’t need to be online will weaken the security posture and user experience those things offer, buying into the hype that AI is going to magically do things better than humans is foolhardy (and ill-informed.)

Tron images generated by AI

Someone asked an AI to imagine what Jodorowsky’s interpretation of Tron might look like. The results are stunning! I’d pay to see that movie — even if it included bizarre AI-generated dialog, like this script. And if GitHub Copilot can make it easier to wrap my head around a gnarly recursive algorithm, I’ll probably try that out. But these things depend on, and re-use, human training input — and remain light-years removed from a General AI that can replace human thought. Anyone who reports otherwise has never had to train a machine learning algorithm.

Teaching an AI is not like teaching a baby. Both get exposed to massive amounts of training data, which their neural networks have to process and find the right occasion to play back, but only a human brain will eventually produce output that is more than just a synthesis or selection of previous inputs; it will produce novel output — feelings, and faith, and spontaneous creation. That those by-products of frail and unique humanity can be manipulated algorithmically should indicate to us that the danger of AI is not in the risk that it might replace us, supplant us, or conquer us. The danger is that humans may use these tools to hurt other humans — either accidentally, or with malicious intent — as we have done with every other tool we’ve invented.

We aren’t there yet, but maybe one day we will harness machine learning the way we harnessed the atom — or, as a more recent example, the Internet. Will we continue pretending AI is some kind of magic we can’t possibly understand, surrendering to the algorithm and accepting proclamations of our forthcoming replacement? Or will we peer into the black box, think carefully about how it’s used, and insist that it becomes a more humane — a more human — creation?

Old Man Yells at Cloud

Technology has gotten objectively worse in the last few years.

I know I’m dangerously close to becoming an old man yelling at the Cloud, and that every generation is uncomfortable with the next generation’s technology — everyone has a level of tech they’re used to, and things introduced later become increasingly foreign. But I’m pretty sure my perspective is still valid: I grew up with the Internet, I helped make little corners of it, and I still move fluidly and comfortably within most technology environments (VR, perhaps, being an exception). So I think it’s reasonable for me to declare that cyberspace is kinda crappy right now. A few examples:

Video Games

When I was young, we jammed a cartridge into the Nintendo, hit the power button, and the game started. On a bad day, when it would glitch, we would blow in the cartridge believing we were getting dust out of it (most likely, it was just re-seating the game in its slot that did the trick). Games were a diversion you could spend hours on, but also ones you could play for a few minutes between homework and bedtime. The amount of time they sucked from you was a function of your free time, and your parents’ opinion on the healthiness of staring at a glowing tube.

In contrast, the other day Ben and I had an hour together, and wanted to spend some time on a two-player game we’ve been working on. This is what happened when we put the disk in our modern gaming machine:

An hour of free time requires 45 minutes of downloading

Playing together now means an hour of downloading content from an online service before the game even starts — and this particular game is an entirely offline one! There isn’t even a good reason to be forced to do this download. This is objectively a worse experience than the one I grew up with (and it costs a whole lot more, too).

Social Media

It’s really hard to remember how wonderful Facebook was when it first took off. Its predecessor, MySpace, made a mess of both design and technology, but it created a place for people to connect. Facebook was a cleaner, more rational place where you could find long-lost friends and old classmates, and connect with distant family. I have old phones where Facebook was still positioned as your online phone book — one that was illustrated by people’s latest profile photos, and animated by what they had shared most recently. Facebook was a data source for experiences; through its open APIs, it brought personality to technology experiences.

Now Facebook is pure poison, and its descendants, like TikTok, are tools of nation-state-level manipulation. Your friends and family still connected this way consume and share misinformation in a self-affirming echo chamber of increasingly extreme bias and partisanship, while in the US, companies buy and sell their data to shill crap, and in China, the government uses it to oppress and control. Social media is now an objectively horrible part of the Internet that no one’s children should use (and most adults — including billionaires — should probably abstain as well).

Vehicles

My dad used to complain about power windows. I’m not sure if this was out of jealousy, because our 14-year-old Buick LeSabre had only manual windows, or if it was another case of an Old Man Yelling at a Cloud, but he would explain that in an emergency, he’d rather have the ability to crank down a window and get out of a car, than be trapped by a mechanism that wouldn’t work in an electrical failure. In truth, the evolution of vehicle technology has not been a good one overall. Even nice-to-have features have been plagued by poor implementations and dubious architectural decisions. But there was a point, slightly before that of diminishing returns, when it was close to “just right”: a driving experience made more comfortable by technology, but not damaged by it.

New cars have everything on a touchscreen — as if auto manufacturers noticed a trend from 2007 of phones moving to touchscreens, and decided a decade later that vehicle controls would benefit from the same evolution, never once considering that fumbling through a touchscreen UI in a rainstorm, trying to find the windshield wiper controls, is an objectively worse experience than flipping a lever next to the steering wheel.

And if my dad thought power windows were a bad idea in an emergency, wait’ll he discovers what happens to a Tesla’s door handles if its battery dies (or worse, bursts into flames.)

The common trait amongst all these examples is that it’s not actually the evolution of technology that has made things worse — it’s the application of that technology that has ruined everything. My modern Xbox is undoubtedly superior to a Super Nintendo in every technical aspect, but it’s not more fun. The number of humans connected to the Internet has increased, and their connection points have gotten faster, and that should be a good thing — but the tools given them to connect to each other have been optimized wrong, and the result is worse. And modern technology thoughtfully applied to cars is capable of amazing improvements to safety; but it can also be used with astounding stupidity.

I have more examples I’d like to talk about. Things like how Microsoft Office used to be a great product that you’d buy every couple years, and now it’s a horrible subscription offering that screws over customers and changes continuously, frustrating your attempts to find common UI actions. Or like how Netflix used to be a great place to find all sorts of video content on the Internet, for a reasonable monthly price that finally made it legal to stream. And now it’s one of a dozen different crappy streaming services, all regularly increasing their prices, demanding you subscribe to all of them, while making you guess which one will have the show you want to watch. I could rant at length about how “smart phones” have gotten boring, bigger, more expensive, and more intrusive, and only “innovate” by making the camera slightly better than last year’s model (but people buy them anyway!) Or how you can’t buy a major appliance that will last 5 years — but you can get them with WiFi for some reason! Or how “smart home assistants” failed to deliver on any of their promises — even the commercial ones — and only got dumber the more skills they added. I could rant about all that, and more, but I won’t, because this is all just a Preface for the real topic: AI, and how it’s not all it’s cracked up to be.

But I can’t do it here, because the algorithm that reviews my blog posts for readability says I’ve already gone on too long. So come back for Part 2.

Everything is Automatic

“When we design a computer that treats its user or owner as its adversary, we lay the groundwork for unimaginable acts of oppression and terror.” –Cory Doctorow

It wasn’t the pandemic that made me feel uncomfortable with how technology was evolving — that started before Covid-19 did — but it certainly did accelerate my thinking on the subject. I grew up with optimism around this inter-network of computer and human minds, and while I wasn’t ignorant of the dangers and the risks, I’d held out hope that the potential for good outweighed the potential for evil. I should have paid better attention in seminary: the human condition is broken, and anything humans build carries that curse. All it took was a global pandemic, a really crappy President, and a few hate crimes, to reveal just how broken, destructive, and foul the Internet had become.

While I’ll indulge in some nostalgia for the early Internet days, I do acknowledge that the dark side was present from the beginning. What I failed to predict was how mainstream that dark side would end up being. But I’m genuinely afraid that the average technology user doesn’t see this perversion — not just because of a few years of turmoil, but because while we were all dealing with that, the systems we depended on seized the opportunity to become even worse — even more poisonous.

When we suddenly couldn’t see each other in person, Zoom became a tool to help us connect. We all knew it was a faint shadow of actual connection, but it was something, and it was better than nothing. Zoom is a start-up that just happened to be in the right place at the right time, and while they made some mistakes, they were a helper in a tough time — so they were targeted. Microsoft refocused its vastly inferior product, Teams, specifically to take down Zoom (at least in corporate communication). Teams is an ecosystem play, designed to trap users in something called “Microsoft 365” — a misguided attempt to bundle Microsoft offerings, both good and bad, into a single subscription that owns your Internet presence and dictates how you use it. Signing into Teams also signs you into Office, OneDrive, SharePoint, Azure Active Directory, and a dozen other services you probably don’t care about, but that are now succeeding because they’re a part of an ecosystem you’re forced to accept so you can connect with your co-workers.

But I’m not picking on Microsoft, because they’re late to this particular predatory activity. Facebook has been tracking you across the Internet for years, thanks to something called the Meta Pixel — a tiny bit of code dropped into thousands of websites that allows them to associate your browsing activity with your Facebook account, so they can optimize the content and advertising algorithm to monetize you on their social apps.
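If you’ve never seen how these trackers work, here’s a deliberately generic sketch — not Meta’s actual code — of the server on the other end of an embedded “pixel”: the page you’re reading triggers a request to the tracker’s domain, and the tracker sees both where you were (the Referer) and its own cookie identifying you:

```python
# A generic tracking-pixel endpoint, for illustration only. Every page
# that embeds a request to this server leaks the visited URL plus the
# tracker's cookie -- enough to stitch together a browsing history.
from http.server import BaseHTTPRequestHandler, HTTPServer

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        page_visited = self.headers.get("Referer", "unknown page")
        tracker_cookie = self.headers.get("Cookie", "first visit")
        print(f"{tracker_cookie} was just reading {page_visited}")  # "logged"

        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # placeholder bytes standing in for a 1x1 GIF

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PixelHandler).serve_forever()
```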

And Apple pretends to care about consumer privacy, but all that really means is that they don’t want to share the precious data they collect about you with anyone else. Fill your iCloud with photos from your iPhone, so you can sign up for their iServices, that won’t work on your old iPhone (because, security!) so you have to buy new iDevices every couple years, using your iCreditCard!

And don’t even get me started on how every web page uses Google Analytics, and therefore Google knows (and influences) everything you ever look at. And while it’s true that there’s not likely a person at Google watching you individually, the reality that algorithms are making decisions about what you should be able to see and do online is probably worse. The dangers of AI are not like Skynet in the Terminator movie series — the danger is that AI is really, really stupid and we’re increasingly allowing machines to make human decisions on our behalf.

There’s a lot that was bad about the time in history we all just got through, but if there was an upside, it was how it forced us to change our pace and re-prioritize our lives. I’m not eager to go fully back to the “normal” we accepted before Covid-19; remote work, new ways to connect, new hobbies, and the level of self-sufficiency we had to adapt to were worthwhile side effects of a “great pause.” But a dependence on technology, and in particular an unquestioning trust in Big Tech to save us, highlighted for me just how genuinely broken the industry I work in is now.

If you don’t know how it works, or you don’t directly pay someone to make it work for you, then you are the product: you are being harvested for some value you might not even understand — it is the default behavior of tech companies to consume you.

My rule for 2023: if you can’t get out a screwdriver and fix it, or look at the source code and adjust it, then the only (relatively) safe alternative is to pay someone who can — there’s no such thing as a free Cloud service.

Making My Own Cloud

Nic and I got our first camera together a few months before our wedding — it was also our last film camera. I had a few digital cameras, but at the time, the quality was low and the prices still high, and we wanted really good pictures that would last. So we got a good scanner to go along with our film, and resolved to begin preserving memories.

A couple of kids at Dunn’s River Falls, 2001

The subsequent economies of scale for digital cameras, and later the rapidly improving quality of phone cameras resulted in a proliferation of photographs — and many hours spent on a strategy for organizing and storing them. This got even more complicated with 3 new digital archivists when our kids got phones and began photographing and recording everything.

The result is more than 140GB of photos over 20+ years, organized in folders by year, then by month. The brief metadata I can capture in a folder name is rarely enough to be able to quickly pinpoint a particular memory, but wading through the folders to find something is often a fun walk down memory lane anyway.
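For the record, the “strategy” amounts to little more than a script like this rough sketch — the paths are hypothetical, and real photos are better sorted by their EXIF date than by the file timestamp used here:

```python
# Sort an inbox of photos into an Archive/Year/Month tree by file date.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("~/Pictures/Unsorted").expanduser()   # hypothetical inbox
ARCHIVE = Path("~/Pictures/Archive").expanduser()   # the Year/Month tree

for photo in SOURCE.glob("*.jpg"):
    taken = datetime.fromtimestamp(photo.stat().st_mtime)
    dest = ARCHIVE / f"{taken.year}" / f"{taken.month:02d}"
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(photo), str(dest / photo.name))
```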

Apple's Digital Hub strategy of yesteryear

Since I first started archiving photos, a number of technology solutions have come along claiming to be able to do it better. Flickr, Google Photos, iPhoto, and Amazon Photos all made bold assertions that they could automatically organize photos for you, in exchange for a small fee for storing them. The automatic organization always sucked, and the fees were usually part of an ecosystem lock-in play. It seems nothing has been able to beat hierarchical directory trees yet.

Still, 140GB is a lot, and 20 years of memories can’t be saved on a single hard drive — there’s too much risk. Some kind of bulk back-up mechanism is important. For the past 8 years, we’ve used Microsoft’s OneDrive. Their pricing is the best, and their sync clients work well on most platforms. They don’t try to force any organization on you; it kinda just works.

Lately, though, they’ve begun playing into the “ecosystem lock” trap. The macOS client is rapidly abandoning older (and more functional) versions of the OS. OneDrive is priced most attractively if you also subscribe to Office, which is also moving to the consumer treadmill model of questionable new features requiring newer hardware. It seems that in order to justify the subscription software model, vendors need us to abandon our hardware every 3-4 years. This artificial obsolescence must be the industry’s answer to the fact that consumer computing innovation has plateaued, and there’s no good reason to replace a computer less than 10 years old any more — and many good reasons not to.

Hidden in an unknown corner of Inner Mongolia is a toxic, nightmarish lake created by our thirst for smartphones, consumer gadgets and green tech.

In short, while I’m not unhappy with OneDrive, and consider the current incarnation of Microsoft to be one of the more ethical tech giants, it was high time to begin exploring alternatives. If you’re sensing a theme here lately, it stems from a realization that if we nerds willingly surrender all our data to big companies, then what hope does the average person have of privacy in this connected age?

Fortunately, like with most other tech, there are open source alternatives. Some are only for those who can DIY, but a significant number are available for the less tech savvy willing to vote with their wallets. Both NextCloud and OwnCloud offer sync clients that are highly compatible with a range of operating systems and environments. Both are available as self-host and subscription systems — from a range of providers. At least for now, I’ve decided to self-host OwnCloud in Azure. This is largely because I get some free Azure as a work-perk. If that arrangement changes in the future, I’m very likely to subscribe to NextCloud provided by Hetzner, a privacy-conscious European service that costs less than $5.

My first digital camera, a Kodak DC20, circa 1997. Thanks mom and dad!

Right now, our total synced storage needs for the family are under 300GB. I have another terabyte of historical software, a selection of which will remain in a free OneDrive account. The complete 1.3TB backup is on dual hard drives, one always stored offsite. This relatively small size is due to the fact that we stopped downloading video as soon as streaming services became a viable paid alternative — although that appears to be changing.

I started making money on computers as a teen by going around and fixing people’s computers for them. Most made the same simple mistakes, were grateful for help, and were generally eager to learn. In 2022, I’m afraid we’ve all just surrendered to big tech — we’ve decided it’s too hard to learn to manage our digital lives, so we let someone else do it; in exchange, we’ve stopped being customers and instead we’ve become the products. With our digital presence being so important, maybe it’s time consumers decided we’re not for sale.

The Next Web

The tech bros have declared a new era of the Internet; they call it Web 3.0, or just web3. They claim this new era is about decentralization, but their claims are suspiciously linked to non-web-specific ideas like blockchain, crypto currency and NFTs. I object to all of it.

I consider myself a child of Web 1.0 — I cut my teeth on primitive web development just as it was entering the mainstream. But I consider myself a parent of Web 2.0. Not that I was ever in the running to be a dot-com millionaire, but my career has largely been based on the development and employment of technologies related to the read/write web, and many of my own personal projects, such as this website, are an exploration of that era of the Internet. Web 2.0 is the cyberspace I am at home in. So one might accuse me of becoming a stodgy old man, refusing to don my VR headset and embrace a new world of DeFi and Crypto — and time will tell, maybe I am…

But it’s my personal opinion that web3 is on the fast track to the Trough of Disillusionment, and that when the crypto bubble bursts, there will be much less actual change remaining in the rubble than the dot-com bubble burst left us with.

Before I dive in, let’s clear up some terms. “Crypto” has always been short for cryptography. Only recently has the moniker been re-purposed as a short form for cryptographic currency. Crypto being equated with currency is a bit of a slight to a technology space that has a wide variety of applications. But fine, language evolves, so let’s talk about crypto using its current meaning. Crypto includes Bitcoin, Dogecoin, Litecoin, Ethereum, and all the other memetic attempts at re-inventing currency. Most of these are garbage, backed by nothing except hype. People who are excited by them are probably the same people holding them, desperate to drive up their value. There are literally no “fundamentals” to look to in crypto — it’s all speculative. But that doesn’t mean they don’t have value; the market is famously irrational, and things are worth what people think they’re worth. Bitcoin, the original cryptocurrency, has fluctuating, but real, value, because virtual though it may be, it has increasing scarcity built in.

I stole this image from somewhere on the web. I assume someone else owns the NFT for it.

I’ve owned Bitcoin twice — and sold it twice. The first time was fairly early on, when I mined it myself on a Mac I rescued. What I mined was eventually worth about $800: enough to buy myself a newer and more powerful computer.

My next Bitcoin experiment was last year, where I bought a dip and sold it when it went high. I made about $300. I made more money buying and selling meme stocks with real companies behind them, so I have no plans to continue my Bitcoin experiments in the near future.

As a nerd who’s profited (lightly) off it, you’d think I’d be more of a proponent of digital currency, but the reality is that none of its purported benefits are actually real — and all of its unique dangers actually are. I won’t cite the whole article, but I need to do more than link to this excellent missive I found on the topic, because it’s much better written than I could manage, so here’s a relevant excerpt — go read the rest:

Bitcoin is touted as both a secure and non-inflationary asset, which brushes aside the fact that those things have never simultaneously been true. As we’ve seen, Bitcoin’s mining network, and therefore its security, is heavily subsidized by the issuance of new bitcoin, i.e. inflation. It’s true that bitcoin will eventually be non-inflationary (beginning in 2140), but whether it will remain secure in that state is an open question…

The security of bitcoin matters for two reasons. First, because bitcoin’s legitimacy as the crypto store of value stems from it. Out of a sea of coins, the two things that bitcoin indisputably ranks first in are its security and age. In combination, these make bitcoin a natural Schelling point for stored value. If halvings cause bitcoin’s security to fall behind another coin, its claim to the store of value throne becomes a more fragile one of age and inertia alone.

The second reason is the tail risk of an actual attack. I consider this a remote possibility over the timeframe of a decade, but with every halving it gets more imaginable. The more financialized bitcoin becomes, the more people who could benefit from a predictable and sharp price move, and an attack on the chain (and ensuing panic) would be one way for an unscrupulous investor to create one.

Paul Butler – Betting Against Bitcoin
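That 2140 figure isn’t mystical, by the way — it falls straight out of the halving schedule. Here’s a back-of-the-envelope sketch, with block times idealized at exactly ten minutes:

```python
# Bitcoin's issuance schedule: the block reward starts at 50 BTC and is
# halved (by integer division, in satoshis) every 210,000 blocks.
BLOCKS_PER_HALVING = 210_000
YEARS_PER_HALVING = BLOCKS_PER_HALVING * 10 / 60 / 24 / 365  # ~4 years

reward_satoshi = 50 * 100_000_000  # initial reward, in satoshis
year = 2009.0                      # genesis block
total_satoshi = 0

while reward_satoshi > 0:
    total_satoshi += reward_satoshi * BLOCKS_PER_HALVING
    reward_satoshi //= 2
    year += YEARS_PER_HALVING

print(f"issuance ends around {int(year)}; total ~{total_satoshi / 1e8:,.0f} BTC")
# -> issuance ends around 2140; total ~21,000,000 BTC
```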

I think a few cryptocurrencies will survive, and that blockchain as a technology may find a few persistent use-cases. But I’m also convinced that the current gold rush will eventually be seen as just that. Crypto isn’t that much better than our current banking system: it’s not faster, it doesn’t offer any real privacy, it isn’t truly decentralized, and blockchain itself is grossly inefficient. But that doesn’t mean people won’t try to find uses for it. Just look at NFTs!

NFTs, or Non-Fungible Tokens, employ blockchain technology to assert ownership of digital assets that are, by their very nature, fungible. You could, right now, take a screen shot of this website and share it with everyone. You could claim it was your website, and I couldn’t stop you. The website itself would continue to be mine, and DNS records for the domain will be under my ownership as long as I pay the annual renewal. But that screen shot you took can belong to anyone and everyone. This fact has caused no end of grief for media companies, who have been trying, almost since the dawn of the web, to control the distribution of digital media. No wonder NFTs are taking the world by storm right now: both the tech and investment community (who desperately need a reason to justify their blockchain investments) and the media conglomerates want it to be the answer. It’s not — but that reality won’t impact the hype cycle.

An NFT is an immutable record of ownership of a digital asset. The asset may be freely distributed, but the record of ownership remains, preserved in a blockchain that can be added to — but not removed from. It’s like a receipt, showing that you bought a JPG or GIF… except that this receipt imparts no rights. I can still freely copy that JPG, host it, post it, email it, and share it. The purchase record only serves as evidence that someone stupidly spent money (sometimes incredible amounts of money) for it. Everyone else just gets to benefit from the asset as before. If the stock market indicates the perceived value of holding a share of a company, the NFT market indicates that some people are willing to assign value to holding only the perception itself. It’s ludicrous.
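If it helps, here’s a stripped-down illustration of what that receipt actually is: a hash of the asset plus a current owner, appended to a ledger (a stand-in for the blockchain). Notice that copying the JPG doesn’t touch the ledger at all:

```python
# A toy NFT ledger: ownership records are appended, never removed.
import hashlib

ledger = []  # stand-in for a blockchain

def mint(asset_bytes, owner):
    token_id = hashlib.sha256(asset_bytes).hexdigest()
    ledger.append({"token": token_id, "owner": owner})
    return token_id

def transfer(token_id, new_owner):
    ledger.append({"token": token_id, "owner": new_owner})

jpg = b"\xff\xd8\xff imagine an actual image here"
token = mint(jpg, owner="alice")
transfer(token, new_owner="bob")

copy_of_jpg = jpg[:]        # anyone can still copy, host, post, and share it
print(ledger[-1]["owner"])  # "bob" owns the receipt; everyone has the picture
```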

You can start to understand why some are calling web3 a giant Ponzi scheme. People who bought into ephemeral perceptions of value need to increase the perceived value of their non-existent holdings, so they hype them up and try to get others to buy in, to raise that perceived value — until at some point the bubble bursts, and not only does all the perception disappear, but the basis of those perceptions mostly goes away too. Unlike the dot-com bubble, which left behind useful ideas to be gainfully employed in less foolish ways, the web3 bubble has created nothing of real value, and little useful technological innovation will remain.

All that said, I do despair for the state of Web 2.0 — it has been almost completely commercialized, and the individual freedom of expression the “eternal September” wrought clearly indicates that our Civilization of the Mind needs some better guardrails. I also believe in some of the concepts of decentralization being espoused for the next web. But I’d argue that hyped-up ideas of virtual ownership, meta realities, and the mistaken assumption of privacy that comes with cryptographic currency are not the changes we need. The barrier to entry for the freedom of exchange shouldn’t be based on how much money you’re willing to gamble on dubious claims about hyped-up technologies…

I’m not sure I know what would be a fair and equitable way to approach Web 3.0. Web 1.0’s barriers were around a user’s ability to understand technology — a bar that ensured that most of its participants were relatively intelligent. Web 2.0’s democratization lowered that bar, enabling more diverse conversation and viewpoints (both good and bad), but also enabled a level of exploitation that even the original denizens of the web were unable to stop. The next web can’t be allowed to be more exploitative — and its participants shouldn’t be more easily misled. We need to learn from our mistakes and make something better.

Maybe I’m too old, or too disconnected from the metaverse and tech bro culture, to influence much. But I observe that there are still wonderful communities on the web. They’re a little harder to find (sometimes deliberately so), but there are still (relatively small) groups of people who can converse civilly on a topic of shared interest. Since leaving Facebook, I’ve created and joined a few, and it gives me hope that technology can still be an enabler for real communication and community. For some people, wearing a funny headset might be a part of that, and that’s OK. For others, it may mean speculating on stocks or new forms of stored value, and if they’re doing that responsibly, maybe that’s OK too. But if Silicon Valley decides that web3 requires cryptocurrency, NFTs, and the metaverse, I think I’m going to skip that update… forever.

Alexa, shut up!

The next tech company we’re redefining our relationship with is Amazon. We’re not breaking up, but we are going to set some more healthy boundaries with the A to Z company.

Amazon has never been as creepy as Facebook, or more recently, Google. As a former employee, I can say with confidence that they are very careful with customer data — the data isn’t the fuel for that particular machine; actual purchases provide that. But data is a valuable lubricant that keeps its gears turning. When I was there, all trackable customer data was immediately anonymized. The encrypted customer ID was a carefully guarded secret that didn’t appear in any reports or analysis. But analysis was very much a part of the business. Data from different classes of customers was aggregated to create predictability and targetability — but never against an individual, only against kinds of shoppers. It’s entirely likely that Facebook and Google behave the same way internally — but the difference is the amount of personally identifiable information they store. Amazon mostly just wants your address so they can ship you things.

No, Amazon’s guilt lies more in its impact on its workforce. This isn’t entirely their fault, though. We all bought into the amazing convenience of clicking “Buy Now” and having something show up surprisingly fast — next day, or even the same day in some markets. It really is remarkable. At some point, though, to push that convenience to the next level, they have to start pushing against human limitations. Our laws and technology aren’t quite ready for drones to deliver things to our door, so to drive down costs and timelines, we need the people in the process to do more and do it faster. That means warehouse workers working harder, delivery drivers delivering more, and customer service serving more customers. The results are self-evident… the warehouse workers are ready to break, the drivers can’t get one, and customer service has declined.

At the rate Ben’s toy drones crash, I’m not sure we’re ready for this tech anyway.

Last year we tried ditching Amazon Prime. Initially we expected it to be very difficult — we like getting free 2-day shipping, who doesn’t?! But not having that convenience was a trigger to look elsewhere. We tried to buy local more, or at least spread the purchases to some different big box retailers. Sometimes we paid a little more, usually it took longer to arrive, but at least we weren’t the ones making some fulfillment center employee skip a bathroom break, or some Amazon driver pee in a bottle. It wasn’t more convenient, but it did feel more human. The experiment was a success, and despite Amazon’s attempts to sign us back up at every turn, we won’t be joining Prime again (except maybe for a month when the new Jack Ryan season comes out).

Also getting significantly cut back at our house is Amazon’s robotic offspring, Alexa. My voice was one of those that helped train her — we had early prototypes in our home, listening to our conversations and performing daily training. Initially they wanted only born-and-raised US English speakers, but I convinced the dev team that my background was so diverse that Alexa wouldn’t be confused by any strong Canadian accents or turns of phrase, so they let me bring her home. It was exciting being a part of making a voice interface an actual reality, and we soon had an Alexa device within earshot of almost every part of our home (although she’s never been allowed in a bedroom.)

However, like many other innovative products from that business unit, Alexa is in a tough spot in 2022. It’s great when invention starts out for the pure nerdy joy of creating something new, and I’ll always treasure the memories I have of working on so many secret projects. But all gadgets eventually need a way to support themselves — a business model that justifies their existence. Alexa was created with the distant possibility in mind that customers might one day shop with it, or that Alexa users might become more loyal Amazon customers, and thus influence revenue indirectly. In reality, most orders created by voice were probably mistakes, and Alexa herself is… well, she’s becoming rather annoying.

Shut up, Alexa!

As they’ve added more features, the machine learning algorithm has not improved. This means she gets confused more easily. Our programmed routine to dim the lights for a movie results in instructions on how to make popcorn about 50% of the time.

And if that’s not bad enough, in a desperate attempt at relevance, she’s started notifying us of things… daily. Batteries are on sale! That thing I bought a month ago needs a review! Did we know she could help us with our mental health through daily meditation? Not content to sit and listen quietly, nor to exist just to turn on our lights or spell a word for the kids, Alexa has begun insisting that she needs to be a bigger part of our life. Our reaction to this increasingly pushy robot is to just unplug her. We have three kids, a cat and a bunch of chickens. We haven’t got time for a needy smart speaker too.

So, two Echo devices are being replaced with Apple HomePod Minis. They’re significantly less capable, but also exponentially less demanding. The rest of the Echos, save one in the kitchen, will be decommissioned. Occasionally that’ll mean turning off a light the old-fashioned way. So be it.

Some tech companies are reaching the point of being irredeemable. I don’t think Amazon is there. But I do think it’s an awfully large commerce engine, barrelling down the information superhighway, and it might not be a bad idea for us as consumers to post some speed limits — or for those behind the wheel to tap the brakes every now and then.

Besides, Jeff Bezos is fast becoming a super villain. He doesn’t need any more of my money…

Managing Social Media: Google

The company that started with the motto “Don’t Be Evil” has spent the last decade or so flirting with ideas that are awfully close to evil. That doesn’t mean that the organization is bad — any more than a hangnail means a human being is dying — but Google sure could use a pair of nail clippers.

When GMail first came out, I was ecstatic to get an invite. They were transparent about the trade-off at the time, and we all accepted it as reasonable: Google has automated systems that read your mail so that they can personalize advertisements to your interests. If you send an email to someone about how you burnt your toast that morning, seeing ads for toasters in the afternoon seemed fairly innocuous — even a little amusing! At the time, though, Google’s coverage of your digital life was just search results. Adding e-mail felt natural and not really that intrusive.

Fast forward to today, and what Google knows about you is downright terrifying. They don’t just know where you go on the Internet, they know where you’ve been, how often, and when you’re likely to go there again in the real world. And their influence doesn’t stop at knowledge: because of the virtual monopoly of Chrome, and its underlying tech, Chromium, which powers most browser alternatives, Google has started making unilateral decisions about how the Internet should work — all in their favor, of course. They don’t like it when you’re not online, because they stop getting data about you. And it doesn’t matter if you’re not using one of their properties directly, because 70% of the 10,000 most popular Internet destinations use Google Analytics. It’s actually a great product; it helps web developers understand their audience and build better offerings — and all Google wants in exchange is to know everything about you.

And let’s talk about YouTube, a virtually unavoidable Google property, full of useful content, and a site that historians might one day determine was a leading cause for the end of our democracy. YouTube is awful — and it’s entirely by accident. Google deflects privacy concerns by pointing out that the analysis of all this data is done by algorithms, not people. There’s probably no person at Google who actually knows how to gather all the information you’ve given them into a profile of you personally. But there doesn’t need to be: their software is sufficiently empowered to manipulate you in ways you aren’t equipped to resist.

YouTube’s recommendation algorithm has been disowned by its own creator as reckless and dangerous, and while it’s been tweaked since it was launched on the world like Skynet, the evil AI from the Terminator movie franchise, and now has human overseers to guide its machinations toward less destructive content, it’s still a pernicious and outsized influencer of human thought. Look no further than 2020’s rampant embrace of conspiracy theories for proof positive that recommendation engines are not our friends.

Google set out to do none of these things. I’ve been to their campus, and interviewed for jobs with their teams. To a fault, everyone I’ve met is full of idealism and optimism for the power of the Internet to empower individuals and improve society. I actually still like Google as a whole. But if the Internet is Pandora’s box, Google is the one that pried it open, and can’t quite figure out how to deal with what was inside. Humanity is not inherently good, and accelerating our lesser qualities isn’t having the positive outcome Google’s founders might have hoped for.

So, how do you throw out the bath water but keep the baby? Can you use Google’s awesome tech without contributing to the problems it creates? I don’t know, but here are a few of the ideas we’re trying:

Diversify Your Information Holdings

I’ve said this before, and it bears repeating: don’t put all your eggs in the same basket. If you have a Google Mail account for work, have your personal account with another provider. If you use Google Classroom for school, use OneDrive for your private documents. If you have an Android phone, don’t put a Google Home in your bedroom. This isn’t just good security practice, preventing an attacker from gaining access to everything about you from a single hack; it’s good privacy practice. It limits the picture of you that any one service provider can make. Beware, though, of offerings that appear to be competitive but are actually the same thing under the hood. The privacy browser Brave may tell a good story about how they’re protecting you, but their browser is based on Google’s Chromium, so it’s effectively the same as just using Google’s own browser.

Castrate the Algorithm

The YouTube recommendation engine is getting better. They’ve taken seriously the impact they’ve had, and they have smart people who care about this problem working on it. Until they get it right, though, you can install browser extensions that just turn it off altogether. You can still search YouTube for information you want, but you can avoid the dark rabbit trail that leads to increasingly extreme viewpoints. Choose carefully, because browser extensions are information collectors too — but here again, at least you’re diversifying.

Tighten the Purse Strings

Use an ad-blocker, and contribute less to their bottom line. Ad-blockers are relatively easy to use (again, reputation matters here), and available in multiple forms, from user-friendly browser extensions that can be toggled on and off, to nerd-friendly solutions you can run on a Raspberry Pi. We’ve eliminated about 80% of the ads we see on our home Internet by using DNS-level filtering — and it’s remarkably easy to do.
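Conceptually, DNS-level filtering is as simple as this sketch: domains on a blocklist get answered with a dead address, and everything else resolves normally (the blocklist entries here are made up — a real filter like Pi-hole ships with curated lists of ad and tracker domains):

```python
# Toy DNS-level ad blocking: blocked hostnames resolve to a sinkhole.
import socket

BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # hypothetical entries

def resolve(hostname):
    if hostname in BLOCKLIST:
        return "0.0.0.0"                   # the ad request goes nowhere
    return socket.gethostbyname(hostname)  # normal lookup for everything else

print(resolve("ads.example.com"))  # 0.0.0.0
print(resolve("example.com"))      # the site's real address
```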

Do a Privacy Check Up

I’ve been involved in software development for 20 years — data really does make software better — but did you know Google will willingly relinquish older data they have on you? All you have to do is ask. Whether you’re an active Google user, in the form of an Android device or one of their enterprise offerings (like Google Classroom), or just an occasional searcher with an account, you should take them up on this offer and crank up your privacy settings.

Search Elsewhere

Google is still pretty much top of the heap as far as search results go, but they’re far from the only game in town — and the deltas shrink daily. Bing is remarkably close in the search race, although it’s backed by an equally gigantic corporation that is probably no more altruistic with its data acquisition, and DuckDuckGo does a decent job most of the time. Why not switch your default search engine to something other than Google, and switch back opportunistically if you can’t find what you need?

Check Who’s Watching

Just like Facebook has its fingers in most of the Internet, Google is everywhere. A service called Blacklight lets you plug in the address of your favorite website, then gives you a report on all the data collection services that website is cooperating with. The scariest ones are probably the ones you trust to give you news and information. Use RSS where possible, anonymizers, or different browsers for different purposes… which brings me to my final suggestion.

Stop Using Chrome

Oh man, I could go on for pages about how scary Google’s control over the Internet has gotten — all because of Chromium. If you’re old enough to remember all the fears about Microsoft in the 90s, this should all seem familiar. Just like the PC was Microsoft’s playground, and anyone who tried to compete was in danger of being crushed under the grinding wheel of their ambition, the world wide web has become Google’s operating system, and Chrome is the shiny Start Menu that graces every screen. Almost everything uses Chromium these days — even Microsoft’s new Edge — and Apple’s Safari runs on the closely related WebKit engine that Chromium descends from. It allows Google to basically dictate how the Internet should work, and while their intentions may be mostly good, the results will not be. I’m practically pleading with you: install Firefox and use it as your default browser; switch to a Chromium-based browser only when you have to.

If I sound like a paranoid old man by now, I’ve earned it. I’ve literally been working on the Internet my entire career — my first experiments in web development date back to 1996. I love this thing called the web, and generally Google has been good for it. But a democracy isn’t democratic if it’s ruled by a dictator, and the Internet isn’t open if it’s entirely controlled by Google. As citizens of cyberspace, you owe it to your community to help it stay healthy, and as individuals, you owe it to yourself to practice safe web surfing.