Plugging In the Car

In our family, we’re preparing for our first teenage driver later this summer. Honestly, I’m excited about it — a significant portion of our afternoons and weekends is spent driving kids to and from their various activities, so having another driver in the house will be a relief.

But going through the process of making sure we have the right number of vehicles, helping Ben learn the basics in preparation for driver’s ed, and thinking through with the kids what this new level of autonomy will mean has given me a fresh perspective on our vehicle-based society. Especially where we live, things are very spread out, so no car means being cut off — and for a teenager, having a car changes everything.

As we prepared for this, we made the decision to look for a low-cost electric vehicle — no easy feat in this day and age. What we found turned out to be a perfect fit, especially as gas prices started to climb. Our little i3 can do 3-4 round trips into town for kids’ activities, or one round-trip airport run, on a single charge. The American Midwest has very little charging infrastructure, but installing the necessary equipment at home was way cheaper and easier than you’d think. This arrangement changes how you think about travel. On one hand, you can’t easily extend a trip by “filling up” at a station; on the other, there’s remarkable freedom in not having to use gas stations at all. The other day I realized my combustion-engine-equipped Saab was almost out of gas, and was disappointed to remember that I’d have to drive it somewhere to fix that problem; guzzling gas makes you dependent on someone else. The fact that it costs $70 to fill that tank right now is shocking — especially for teenagers dreaming about freedom.
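The economics are a big part of the appeal. As a back-of-envelope sketch (every figure below is an assumption for illustration, except the roughly $70 tank fill mentioned above), the per-mile comparison looks something like this:

```python
# Back-of-envelope cost-per-mile comparison. All figures are
# illustrative assumptions, except the ~$70 tank fill from the text.
tank_fill_cost = 70.0       # USD to fill the Saab's tank (from the text)
tank_range_miles = 350.0    # assumed range on a full tank
gas_cost_per_mile = tank_fill_cost / tank_range_miles

ev_kwh_per_charge = 19.0    # assumed usable battery capacity, kWh
ev_range_miles = 80.0       # assumed real-world range per charge
electricity_rate = 0.13     # assumed USD per kWh for home charging
ev_cost_per_mile = ev_kwh_per_charge * electricity_rate / ev_range_miles

# Roughly $0.20/mile for gas vs. $0.03/mile for electric, under these assumptions.
print(f"gas: ${gas_cost_per_mile:.2f}/mile, electric: ${ev_cost_per_mile:.2f}/mile")
```

Even if my assumed numbers are off by a wide margin, the gap is large enough to explain why charging the car barely registers on the electricity bill.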

The BMW i3

Of course, there are various nuances to this…

– We’re dependent on someone else for our electricity as well — but the remedy is a fun future project to install some solar panels. I have a couple of friends embarking on multi-month efforts to power their entire homes with renewables; I’ll start with learning how to charge the car. Since it’s equipped with DC-direct ports, I won’t need an inverter, which should save some capital. But that project is strictly for the fun and challenge of it — the impact on our electricity bill from charging the car has been negligible; it barely even registers.

– Electric car batteries aren’t great for the environment. Building them requires mining scarce minerals, and taking them apart at the end of their life is difficult. But progress is being made on new types of batteries that are more environmentally friendly, and battery recycling is improving. One great use I’ve seen recently for old EV batteries is as backup storage for home solar systems.

– Our electric car isn’t fully electric. It has a built-in backup generator called a range extender (REx). The REx isn’t connected to the drivetrain, so it’s not technically considered a hybrid vehicle. In the US, it was intended to kick in when the battery charge is low (around 7%) and charge the battery directly. In Europe, the REx could hold the state of charge at any level below 75%, so of course I reprogrammed the car’s computer to think it’s European! Since the car is from 2015, some of the electronics need work, and we often need a phone handy to clear error codes to get the REx to kick in, but having it available creates confidence that we’re not going to face a dead battery in the middle of a country road somewhere. Filling the generator’s tank costs about $6 once every month or two.

For now, we’ll have three cars: two with combustion engines, and the electric. The electric car gets the most use, the SUV is for big hauls or long trips, and I’m trading down my Saab for one that’s more appropriate for the kids to learn on. Some day, I’d love to go all electric. It’ll take time to phase out gasoline vehicles: prices need to go down, ranges need to be longer, and infrastructure needs to be built out. But both our governments (US and Canada) have set ambitious goals for the transition, and after a few months with an electric car I’m convinced the future of transportation should be electric, for one main reason: energy diversification.

A traditional combustion engine has one energy source, and individuals and communities can’t make it themselves. Bio-fuel was a false lead; it’s worse for the environment than oil. Electricity can be made from dirty sources, like oil and coal — and we’ll probably need those sources for longer than we’d like — but it can also be made from wind, water, sun, or the heat of the earth, and some of those can be harnessed at home, or in smaller regional projects. Diversifying our energy sources gives us options, and allows us to respond faster when we learn new things, or face new challenges. There’s no better illustration of this than sailing past a gas station, smiling as those suckers pump their hard-earned cash into their tanks…

Smart Phones Strike Back

Smart phones are getting interesting again — and I’m not talking about next week’s Apple event, where they’re sure to announce yet another set of incremental changes to the iPhone, or Samsung getting caught throttling apps on their most recent set of samey Galaxy devices. I’m talking about choice re-emerging. Slowly, mind you, and without the participation of the big data-gobbling companies. Instead, it’s brought to you by the same type of nerd that railed against the desktop duopoly in the 90s: the Linux crowd.

While Desktop Linux never really had its day, Linux has succeeded in myriad other ways: on embedded devices, web servers, and tiny computers. It turns out there is a place for an alternative operating system — it just may look a little different from the computers you see at Walmart or Best Buy.

As consumers we’re fairly bereft of hardware choice — except for a few expensive experiments, smart phones all pretty much look the same these days. Software choice, too, is basically non-existent. Do you want to buy into Apple’s ecosystem, or Google’s? If you don’t want either, you’re basically cut off from the world. No banking, no messaging, and thanks to the 3G shutdown, soon no phone calls either.

Those tackling the issue of hardware choice are a relatively brave few. The Fairphone isn’t cheap, or terribly different-looking, but it’s a good effort at making a repairable handset. The PinePhone is a truly hacker-friendly device, with hardware switches to protect against snooping software, and an open bootloader capable of handing off to multiple operating systems. Both devices still look like rectangular slabs, but they have important features that the hegemony is unwilling to consider.

It’s also interesting to see big manufacturers explore different form factors. I’m glad to see folding phones making a comeback — although they remain too rich for my blood. There’s nothing approaching the 3G era of creative and exploratory hardware design, though. When my Veer finally gets cut off from T-Mobile’s doomed 3G network this summer, there is unlikely to be a replacement quite so svelte.

But while it’s true that hardware remains mostly yawn-worthy, smart phone operating systems are another story. While Windows Phone still has a few holdouts, and there’s a handful of us holding on to webOS, most of the world has accepted either iOS or Android. But Linux is not to be counted out — and this is where things get really interesting.

This summer, I ran Sailfish OS for a couple of weeks. Based on open-source Linux, but with some proprietary software, Sailfish is built by a Finnish company called Jolla, which offers it as a viable alternative platform and has had success certifying it for use in some European governments. The free base OS is quite stable (although I don’t love some of their UI choices), and the paid version adds Microsoft Exchange enterprise email and a Google-free Android compatibility layer. The former is too buggy for serious use, in my opinion, and the latter suffers from some stability issues. But for basic use, on modern hardware, Sailfish is gradually becoming a legitimate contender.

Jolla is building a real alternative to Android

Other Linux-based offerings are at varying levels of maturity. Ubuntu Touch doesn’t quite have the backing it did when initially launched, but it was a breeze to install on my Pixel 3a. Manjaro Linux is fast becoming the OS of choice for the PinePhone crowd — but I can’t afford the hardware, so I haven’t tried it out. And my own favorite, LuneOS — an attempt to restore webOS to its original mobile glory by porting Open webOS (and other) components, along with a familiar UI and app support — continues its slow evolution.

Back in the infancy of the smart phone, I considered myself an early adopter. I stood in line for the first iPhone, having sold two of my most precious gadgets (my Newton 2000, and my iPod) to pay for it. While at Microsoft, I had virtually every Windows Phone 7 device available. And I’ve had numerous Android-based devices, and worked for 3 years on an open Android (AOSP)-derived platform at Amazon. But the past 7-10 years have been depressingly boring. Watching people drool over a new iPhone that’s basically the same as the last iPhone is infuriating — it’s like society collectively agreed that we’re not interested in investing in innovation, only in rewarding repetition.


In early 2022, I can say unequivocally that most people don’t need, and won’t be satisfied with, a Linux-based phone right now. But for the first time in nearly a decade, my desk has an array of interesting smart phones on it again, and I’m optimistic that a couple of these new OSes might evolve to the point where those of us who aren’t willing to trade our privacy for the right to make and receive phone calls might finally have some choices again.

The Next Web

The tech bros have declared a new era of the Internet; they call it Web 3.0, or just web3. They claim this new era is about decentralization, but their claims are suspiciously linked to non-web-specific ideas like blockchain, cryptocurrency, and NFTs. I object to all of it.

I consider myself a child of Web 1.0 — I cut my teeth on primitive web development just as it was entering the mainstream. But I consider myself a parent of Web 2.0. Not that I was ever in the running to be a dot-com millionaire, but my career has largely been based on the development and employment of technologies related to the read/write web, and many of my personal projects, such as this website, are an exploration of that era of the Internet. Web 2.0 is the cyberspace I am at home in. So one might accuse me of becoming a stodgy old man, refusing to don my VR headset and embrace a new world of DeFi and crypto — and time will tell; maybe I am…

But it’s my personal opinion that web3 is on the fast track to the Trough of Disillusionment, and that when the crypto bubble bursts, there will be much less actual change left in the rubble than the dot-com bust left us with.

Before I dive in, let’s clear up some terms. “Crypto” has always been short for cryptography. Only recently has the moniker been re-purposed as shorthand for cryptographic currency. Equating crypto with currency is a bit of a slight to a technology space with a wide variety of applications. But fine, language evolves, so let’s talk about crypto using its current meaning. Crypto includes Bitcoin, Dogecoin, Litecoin, Ethereum, and all the other memetic attempts at re-inventing currency. Most of these are garbage, backed by nothing except hype. The people who are excited by them are probably the same people holding them, desperate to drive up their value. There are literally no “fundamentals” to look to in crypto — it’s all speculative. But that doesn’t mean they don’t have value; the market is famously irrational, and things are worth what people think they’re worth. Bitcoin, the original cryptocurrency, has fluctuating, but real, value, because virtual though it may be, it has increasing scarcity built in.
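That built-in scarcity falls out of the protocol’s issuance schedule: new coins enter circulation as a per-block subsidy that started at 50 BTC and halves every 210,000 blocks (roughly every four years), so total issuance converges on a hard cap. A quick sketch of the math:

```python
# Bitcoin's issuance schedule: the block subsidy starts at 50 BTC and
# halves every 210,000 blocks, so total supply converges on ~21 million.
def total_supply():
    subsidy = 50.0               # initial block reward, in BTC
    blocks_per_halving = 210_000
    total = 0.0
    while subsidy >= 1e-8:       # one satoshi, the smallest unit
        total += subsidy * blocks_per_halving
        subsidy /= 2             # the "halving"
    return total

print(f"{total_supply():,.4f}")  # just under 21 million BTC
```

(The real protocol does this arithmetic in integer satoshis with rounding, so the true cap is a hair lower still, but the shape is the same: a geometric series that can never exceed 21 million.)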

I stole this image from somewhere on the web. I assume someone else owns the NFT for it.

I’ve owned Bitcoin twice — and sold it twice. The first time was fairly early on, when I mined it myself on a Mac I rescued. What I mined was eventually worth about $800: enough to buy myself a newer and more powerful computer.

My next Bitcoin experiment was last year, when I bought a dip and sold when it went high. I made about $300. I made more money buying and selling meme stocks with real companies behind them, so I have no plans to continue my Bitcoin experiments in the near future.

As a nerd who’s profited (lightly) off it, you’d think I’d be more of a proponent of digital currency, but the reality is that none of its purported benefits are actually real — and all of its unique dangers actually are. I found an excellent missive on the topic that’s much better written than anything I could manage, so rather than just link to it, here’s a relevant excerpt — go read the rest:

Bitcoin is touted as both a secure and non-inflationary asset, which brushes aside the fact that those things have never simultaneously been true. As we’ve seen, Bitcoin’s mining network, and therefore its security, is heavily subsidized by the issuance of new bitcoin, i.e. inflation. It’s true that bitcoin will eventually be non-inflationary (beginning in 2140), but whether it will remain secure in that state is an open question…

The security of bitcoin matters for two reasons. First, because bitcoin’s legitimacy as the crypto store of value stems from it. Out of a sea of coins, the two things that bitcoin indisputably ranks first in are its security and age. In combination, these make bitcoin a natural Schelling point for stored value. If halvings cause bitcoin’s security to fall behind another coin, its claim to the store of value throne becomes a more fragile one of age and inertia alone.

The second reason is the tail risk of an actual attack. I consider this a remote possibility over the timeframe of a decade, but with every halving it gets more imaginable. The more financialized bitcoin becomes, the more people who could benefit from a predictable and sharp price move, and an attack on the chain (and ensuing panic) would be one way for an unscrupulous investor to create one.

Paul Butler – Betting Against Bitcoin

I think a few cryptocurrencies will survive, and that blockchain as a technology may find a few persistent use-cases. But I’m also convinced that the current gold rush will eventually be seen as just that. Crypto isn’t much better than our current banking system: it’s not faster, it doesn’t offer any real privacy, it isn’t truly decentralized, and blockchain itself is grossly inefficient. But that doesn’t mean people won’t try to find uses for it. Just look at NFTs!

NFTs, or Non-Fungible Tokens, employ blockchain technology to assert ownership of digital assets that are, by their very nature, fungible. You could, right now, take a screenshot of this website and share it with everyone. You could claim it was your website, and I couldn’t stop you. The website itself would continue to be mine, and the DNS records for the domain will be under my ownership as long as I pay the annual renewal. But that screenshot you took can belong to anyone and everyone. This fact has caused no end of grief for media companies, who have been trying, almost since the dawn of the web, to control the distribution of digital media. No wonder NFTs are taking the world by storm right now: both the tech and investment community (who desperately need a reason to justify their blockchain investments) and the media conglomerates want them to be the answer. They’re not — but that reality won’t impact the hype cycle.

An NFT is an immutable record of ownership of a digital asset. The asset may be freely distributed, but the record of ownership remains, preserved in a blockchain that can be added to — but not removed from. It’s like a receipt showing that you bought a JPG or GIF… except that this receipt imparts no rights. I can still freely copy that JPG, host it, post it, email it, and share it. The purchase record only serves as evidence that someone stupidly spent money (sometimes incredible amounts of money) on it. Everyone else just gets to benefit from the asset as before. If the stock market indicates the perceived value of holding a share of a company, the NFT market indicates that some people are willing to assign value to holding only the perception itself. It’s ludicrous.
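To make the “receipt” point concrete, here’s a deliberately toy sketch — not how any real chain works (actual NFTs are typically ERC-721 tokens on a blockchain like Ethereum, and the names and URL below are invented): the ledger records who “owns” a token that merely points at the asset, while the asset itself remains freely copyable by anyone.

```python
# Toy model of an NFT "receipt": an append-only ledger entry mapping a
# token to an owner and a pointer at the asset. The asset bytes live
# elsewhere and can be copied by anyone; only the entry is "owned".
from dataclasses import dataclass

@dataclass(frozen=True)              # immutable, like a confirmed record
class OwnershipRecord:
    token_id: int
    owner: str
    asset_url: str                   # a pointer, not the asset itself

ledger: list[OwnershipRecord] = []   # append-only: added to, never removed from

# "Minting": someone pays for an entry asserting ownership.
ledger.append(OwnershipRecord(1, "alice", "https://example.com/ape.jpg"))

# Meanwhile, the asset itself is infinitely copyable by everyone else;
# copying creates no entry in the ledger.
asset = "...the actual JPG bytes..."
my_copy = asset

print(ledger[0].owner)               # the only thing the receipt proves
```

The record and the asset live entirely separate lives — which is the whole problem.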

You can start to understand why some are calling web3 a giant Ponzi scheme. People who bought into ephemeral perceptions of value need to increase the perceived value of their non-existent holdings, so they hype them up and try to get others to buy in, to raise that perceived value — until at some point the bubble bursts, and not only does all the perception disappear, but the basis of those perceptions mostly goes away too. Unlike the dot-com bubble, which left behind useful ideas to be gainfully employed in less foolish ways, the web3 bubble has created nothing of real value, and little useful technological innovation will remain.

All that said, I do despair for the state of Web 2.0 — it has been almost completely commercialized, and the flood of individual expression the “eternal September” has wrought clearly indicates that our Civilization of the Mind needs some better guardrails. I also believe in some of the concepts of decentralization being espoused for the next web. But I’d argue that hyped-up ideas of virtual ownership, meta realities, and the mistaken assumption of privacy that comes with cryptographic currency are not the changes we need. The barrier to entry for the freedom of exchange shouldn’t be how much money you’re willing to gamble on the dubious claims of hyped-up technologies…

I’m not sure I know what a fair and equitable approach to Web 3.0 would look like. Web 1.0’s barriers were around a user’s ability to understand technology — a bar that ensured most of its participants were relatively intelligent. Web 2.0’s democratization lowered that bar, enabling more diverse conversation and viewpoints (both good and bad), but it also enabled a level of exploitation that even the original denizens of the web were unable to stop. The next web can’t be allowed to be more exploitative — and its participants shouldn’t be more easily misled. We need to learn from our mistakes and make something better.

Maybe I’m too old, or too disconnected from the metaverse and tech bro culture, to influence much. But I observe that there are still wonderful communities on the web. They’re a little harder to find (sometimes deliberately so), but there are still (relatively small) groups of people who can converse civilly on a topic of shared interest. Since leaving Facebook, I’ve created and joined a few, and it gives me hope that technology can still be an enabler of real communication and community. For some people, wearing a funny headset might be a part of that, and that’s OK. For others, it may mean speculating on stocks or new forms of stored value, and if they’re doing that responsibly, maybe that’s OK too. But if Silicon Valley decides that web3 requires cryptocurrency, NFTs, and the metaverse, I think I’m going to skip that update… forever.

In Defense of Elizabeth Holmes

The tech world’s latest villain is a woman named Elizabeth Holmes. You may have heard about her trial on the news, or maybe seen the documentary about her fall from grace. Maybe you even read the Wall Street Journal exposé that started her downward spiral. We don’t know much about her as a person, save for the carefully curated public image she presented to the world during her glory days — when she was called a female Steve Jobs, and her company the Apple Computer of the biotech world. But if the media is to be believed, we now know that she is definitely evil.

Her defense at her trial attempted to position her as a victim: she was manipulated by the men in her life, and allegedly abused sexually, which somehow explains why she lied to investors and carried on the shell game that was Theranos for so long. As I write this, the jury is literally still out on whether or not her sob story will garner her any leniency. But the facts of the case are clear: her tech didn’t work, she knew it didn’t work, she lied about it, and — for a time — got rich and famous off of those lies. Eventually her sin, and that of her company, was discovered, and it all unraveled. Her company was destroyed, she was destroyed, and the hope she’d created of a new and easier way to perform medical testing was shown to be a pipe dream. She sold snake oil with the best of them, then got caught. She should be punished.

But I object to the notion that Elizabeth Holmes is some kind of super villain, the first of her kind to come up with this scheme and pull one over on an innocent and trusting stock market. Heck, Silicon Valley is the home of “fake it till you make it” — an entire culture built on selling B.S. to complicit investors, with the hope of making some unlikely vision a reality. No, I think Elizabeth Holmes’ real crime is that she played the Tech Bro game while being female. She did literally the same thing that a thousand other “entrepreneurs” have done before her, and she did it really, really well for years. She had people believing not just in her idea, but in her. And like so many before her, she failed. But instead of the embarrassment of that failure being limited to herself, her employees, and her shareholders, the embarrassment blew back on a woke culture that needed her to be some kind of proof point, some kind of vindicating example of how feminism can destroy the patriarchy. Elizabeth Holmes wasn’t just a tech hero, she was a culture war hero — and she betrayed those culture warriors by being… the same species as the men she was emulating. Not better than, not some kind of feminist ideal; just another flawed human being. Someone who tried to do something really important, and really hard, and who pushed herself and everyone around her to reach for something insanely great… then when it became evident that it wasn’t working, tried to buy time, stretch the truth, change the goalposts — anything to avoid admitting defeat.

Was she wrong? Definitely.

Was her behavior completely human? Totally.

These are her notes to herself. Comments online suggest these are proof of her manipulative nature. But I don’t see the notes of a conniving con artist, set on deceiving and destroying her way to the top. I see the notes of a struggling human being, trying to discipline her own flaws, work hard enough to be taken seriously, and actually accomplish something for herself and those who depend on her — someone trying to do all of that in a place that is disproportionately male-oriented. She didn’t set out to deceive; she set out to succeed. And the fact that she didn’t doesn’t change her worth as a person. The fact that she couldn’t bear to face defeat and lied about it doesn’t mean she has betrayed her gender and should be burned at the stake; it just means she made the same set of mistakes most of us would make in her shoes. The same mistakes that literally hundreds of wannabe Silicon Valley founders have made before her. She did all of that while female, but she should face no greater consequences than her male counterparts — many of whom are lauded for the same kinds of failure.

I’m not opposed to feminism: I have two daughters and I want them to be and do whatever they want with their lives. I am opposed to the backlash she’s receiving from the culture she embodied. I’m saying that gender — and culture — factored into the job she was trying to do, and unfairly so. And I’m saying neither should factor into the punishment she faces for failing at that job. She doesn’t owe society some special penance because she didn’t succeed at “leaning in.” She’s just a human being who tried something really hard, and failed at it ungracefully. Here’s a list of mostly male-founded companies that fared no better. Let’s not judge her any more harshly for playing the same game while female.

The Year of Linux on the Desktop

One of the first times I heard about Linux was during my freshman year at college. The first-year curriculum for Computer Programmer/Analyst was not particularly challenging — they used it as a level-set year, and assumed everyone was new to computers and programming. I’d been trying to teach myself programming since I was 11, and I was convinced I knew everything (a conviction solidified when one of the profs asked me to help him update his curriculum). I don’t even remember what class it was for, but we were each to present some technology, and this kid, who looked to be straight out of the movie Hackers, got up and preached a sermon about a new OS that was going to change the world: Linux. Now there was something I didn’t know…

I’d actually downloaded a distro once before, and tried to get it going — I figured it would complete the set, along with OS/2 Warp and Windows NT 4 — but the hardware support wasn’t there, so I gave up. But after that class, I went home, downloaded Slackware, and tried again. Everything went swimmingly until I figured out that to get the proper resolution out of my graphics card, I would have to recompile the Linux kernel myself! I decided this was another level of nerdy, and that I had better things to do. Besides, I already had an alternative-OS religion I was faithful to: the barely-hanging-on Mac OS.

It was around the same time that the “second coming” of Steve Jobs was occurring, and the battered Mac community was excited about their salvation. The iMac wowed everyone, and OS X provided the real possibility that Macintosh might survive to see the 21st century after all. I went irresponsibly into debt buying myself an iMac SE — an offering to the fruit company that would recur with almost every generation of Apple technology. I sold all my toys and stood in line for the first iPhone. I bent over backward to get someone to buy me the first MacBook Pro with a 64-bit Intel processor. My first Mac ever was rescued from a dumpster, and I’ve saved many more since.

My loving bride has since (mostly) cured me of my irresponsible gadget spending, and I’ve made something of a hobby of fixing up high-end Apple computers a few years past their prime and pressing them back into service. I edit video on a Mac Pro from 2009 — it would have retailed for more than $20k new, but I got it for $800, and have since put maybe $300 of upgrades into it. It’s a fantastic machine that runs 5 different operating systems, and I will keep it useful for as long as I can. Similarly, for the past 4 years I’ve done hobby development on a 2008 MacBook Pro; another fantastic machine, and while I’ve had better work laptops, I’ve loved none of them like I love that one.

But even I’m having trouble keeping it useful. It’s been hacked to run a newer OS than it was ever supposed to, but Apple’s treadmill continues, and one by one, its useful capabilities have dropped off: first I couldn’t update Office, then I couldn’t sync Dropbox, OneDrive came next, and now Firefox is about to drop support. I need special software to do thermal management, since the OS doesn’t support the hardware, and despite my having torn it down to re-apply thermal paste multiple times, it runs hot, the fans spin loudly, and it eats through batteries. It could be argued that all you really need is a web browser, a text editor, and a Terminal window… but a few more modern accoutrements would be nice. After 14 years of service, I have to admit, it’s time to let the old girl go…

But the Apple of today is not the company I so slavishly followed in my youth. The Intel years were some of their best: upgradable, repairable hardware in beautiful cases. Now Apple exclusively makes disposable computers, increasingly dumbed down for the masses. What’s a nerd to do?

It’s a bit of a joke in the geek world that the year of “Linux on the Desktop” is right around the corner. Since I first played with it in the late 90s, it’s come a long way — never quite winning popular support amongst the aforementioned masses, but finding its way in other places. If your phone doesn’t run iOS, it runs a version of Linux. Chromebooks also run Linux. And the Raspberry Pi, my favorite home-automating/web-serving/hardware-hacking/education-enabling $30 computer, runs Linux. In fact, I’ve been running Linux for the past 10 years — just never on the desktop.

So for this nerd, 2022 will be my year of Linux on the Desktop. My faithful 2008 MacBook Pro has found a worthy successor in the form of a 2018 ThinkPad X1 Carbon. Purchased for less than $400 on eBay, and needing only new M.2 storage, this thing screams. Its soft-touch carbon-fibre case is almost as sexy as a Mac’s, but the insides are twice as capable — while running silently most of the time. It’s light but solid, with an excellent keyboard with the perfect amount of travel and a full array of useful ports. Its inner bits are easily accessed via a couple of captive screws, making it simple to repair, and it was designed with Linux in mind, fully supported by the manufacturer on the latest Ubuntu.

So far, I’ve found no downside to Ubuntu’s flavor of Linux on the desktop — at least for development use. All my favorite programming environments are supported: Python, C#, JavaScript, and the LAMP stack. Ubuntu is nicely polished, and the native Bash shell lets me do anything I want to the underlying OS, or any remote server. It connects to my old gadgets and my new ones, and running full out, it still gets 5 hours of battery life.

I’ll continue experimenting with alternative phone OSes too. webOS won’t survive the 3G apocalypse, but Sailfish OS is promising, and having newer mobile hardware is nice — not to mention that it allows me to opt out of Google’s data-collection ecosystem. We’ll see if this machine makes it the full 14 years that my MacBook Pro did — but I’m willing to bet it outlasts any Mac sold in the past year…

The 3G Apocalypse

There are a number of topics that I’d like to blog about, but I’m afraid our current cultural climate doesn’t leave room for the kind of nuance they’d require. Things like our conflicted feelings about getting our youngest vaccinated. We’re pro-vaccine in this house, and 4 out of 5 of us got our shots as soon as responsibly possible once we were eligible. But our 10-year-old, who’s already had and recovered from Covid, is a different discussion. It requires consideration and nuance — but saying such things out loud invites input from crazy people waiting to seize on any apparent endorsement of their anti-vax stupidity. If we were to admit some trepidation about vaccinating our youngest, that might somehow give credence to the horse-paste crowd…

Here’s another such topic: 5G cellular networks. I’m going to write about the cluster that is telecommunication technology at the end of 2021, but I don’t want my technical mumbo jumbo to be interpreted as any sort of nod to the “5G networks are a secret plan by Bill Gates to control the minds of people who got microchipped by the Covid vaccine” nut jobs. There’s stupidity, and then there’s conspiracy nut stupidity. I just want to talk about the first kind.

Here in the States, all of the big cellular networks are in the process of shutting down their 2G (aka “Edge”) and 3G infrastructure, to make room for more 5G coverage. On the surface, maybe this sounds like a good plan. I remember laughing at a distant relative still clinging to their “car phone” as Canada shuttered their analog cellular infrastructure late last decade. I’m sure I thought something snarky, like “get with the times, old man.” But sometimes there’s good reason not to rush into euthanizing old technology…

What most people don’t know is that 5G is not a singular replacement for previous networks — and won’t be for a long time. In fact, there are two kinds of 5G, and there will probably have to be a third. The high-frequency, millimeter-wave technology touted by your cell phone carrier as being “next gen” doesn’t actually work inside most buildings…basically at all. So all those cell towers out there, poised to be stripped of their 2G and 3G antennas, won’t be able to help you make a call or update your Instagram if you happen to be inside a building.

Instead, a second 5G infrastructure needs to be created, one that runs on much lower frequencies that can penetrate building walls. The unfortunate downside of these low frequencies is that the bandwidth sucks — they’re barely faster than the 4G we already have! So we need two 5Gs, two sets of antennas, two sets of hardware in our phones, and we need to seamlessly move between them. Seems simple enough, but wait!

Turns out there’s a third scenario: dense, open areas that are too spread out to blanket with short-range millimeter-wave, but where low-band can’t deliver the promised speeds. Outdoor shopping malls, stadiums, and even “urban canyons” — downtown areas surrounded by tall buildings. These environments may require a different infrastructure again: a “medium range,” mid-band 5G that needs different antennas and different hardware in your cell phone.
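The trade-offs above can be sketched as a toy model. To be clear, the tier names, frequencies, and ranges below are rough illustrative orders of magnitude, not any carrier’s actual specs:

```javascript
// Illustrative model of the three 5G tiers described above.
// Numbers are rough orders of magnitude, not carrier specifications.
const tiers = [
  { name: "low-band", freqGHz: 0.7, rangeKm: 10,  penetratesWalls: true  },
  { name: "mid-band", freqGHz: 3.5, rangeKm: 2,   penetratesWalls: false },
  { name: "mmWave",   freqGHz: 28,  rangeKm: 0.3, penetratesWalls: false },
];

// Pick the fastest tier that can still reach a user at a given distance.
// Higher frequency roughly means more bandwidth, so prefer it when reachable.
function bestTier(distanceKm, indoors) {
  const usable = tiers.filter(t =>
    t.rangeKm >= distanceKm && (!indoors || t.penetratesWalls));
  usable.sort((a, b) => b.freqGHz - a.freqGHz);
  return usable.length ? usable[0].name : "no coverage";
}

console.log(bestTier(0.1, false)); // close to the tower, outdoors → "mmWave"
console.log(bestTier(0.1, true));  // indoors → only "low-band" penetrates
console.log(bestTier(5, false));   // far from the tower → "low-band"
```

The point of the sketch is the mismatch: no single tier wins on speed, range, and wall penetration at once, which is why the rollout needs all three overlapping networks.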

Now you start to see why they need to cannibalize the older, but still functional, 3G networks — they need room on the towers to run multiple kinds of 5G networks, they need more towers, and they need more people carefully stacking the house of cards that may eventually fulfill the promise of 5G’s faster speeds. Three of everything, and then it’ll be great!

But hey, time marches on, progress is inexorable, and we should just push through these challenges, so we can be better connected, right? We gotta have our Instagram stories as fast as possible, so “get with the times, old man” and all that…

Except there’s another problem. It turns out that millions of “field devices” that depend on 3G are being killed off: 2G- or 3G-enabled medical monitoring devices that give patients freedom, connected-car technologies that help send life-saving aid after accidents, soil sensors that help farmers make decisions to optimize crop yields, and pipeline and oil field sensors that keep our addiction to fossil fuels fed while helping avoid ecological disasters. All of these will have to be replaced with 5G-enabled devices — and maybe replaced multiple times, until 5G finally stabilizes. 5G won’t make things better right away; in fact, it’s going to make them worse for a long time…

But even with all these concerns, it’s going to happen anyway. AT&T has already shut down their 2G network, and those customers are out of luck; their 3G customers lose service in December (including all 3 of our cars, which can never be upgraded to 4G). Verizon and T-Mobile will follow suit later in 2022. When you ask a carrier what’s to be done about your old equipment, their only answer is to sell you a new phone — despite the fact that consumer smart phones are only part of the equation, all they can offer is to take your money for something you probably don’t need.

Smart phone upgrades haven’t been worth lining up for since 2007

And let’s talk about that, because selling you a new, slightly different phone every 12-24 months is big business. Stupid people, who don’t understand that technology evolution inevitably plateaus, are still convinced by Apple that they have to have the latest iPhone. And carriers are happy to help create that need. Our kids each have an iPhone 6s — a phone released in September of 2015. It’s a great phone, has all the features they need, and can be purchased for about $90 used; the only real reason to replace it will come when our cell carrier refuses to support it.

Tiny HP Veer

My personal phone is a svelte HP Veer from 2011. It’s a total of 4 inches diagonal, doesn’t track my location or report my data to Google, doesn’t wake me up with notifications or bother me with ads. And if it weren’t for the 3G shutdown, I’d be the crazy uncle using that thing until the hardware falls apart.

It’s too late to stop the Frankenstein’s monster that is 5G. It’s not going to control your brain or give you cancer — but it will continue to be a mess of a rollout. There’s not much we can do as consumers to stop this one, but there is something you can do to help break the cycle: stop buying new things you don’t need.

The good news is that if your phone has 4G, it won’t be shut down, or rendered obsolete by 5G. Because of the crappy state of 5G, 4G is likely to be around for another 10 years. So vote with your wallet, and refuse to buy a new smart phone in 2022 — from any company. If your battery is getting old, $80 will fix that. If you drop your phone in a toilet, head over to eBay and buy the same model used. Don’t be tempted by the dubious promise of “faster networks” or indistinguishable improvements to cameras. Technology isn’t jewelry or a fashion trend: it’s a tool. Insist on tools that work, that last for years, and that don’t trap you on a treadmill of constant replacements.

Maintainers of Lightning – Part 2

When I was a young computer nerd, back in the early days of a consumer-friendly Internet, formal computer education in high school was limited to whatever rudimentary understanding our middle-aged teachers could piece together for themselves to pass on. But there was one tool that taught me more about web development than any book or class or teacher ever could: View > Source.

View Source Menu

You can still do it now, although the output is significantly less meaningful or helpful on the modern web. Dig around your browser’s menus and it’s still there somewhere: the opportunity to examine the source code of this, and other, websites.

It was from these menus that I learned my first simple tricks in HTML, like the infamous blink tag, and later the more complex capabilities offered through a simple programming language called JavaScript that increasingly became part of the Web.

JavaScript and the web’s DOM (Document Object Model) taught me Object Oriented Programming when my “college level” computer science course was still teaching procedural BASIC. Object Oriented Programming, and the web as a development platform, have defined my career in the 25 years since. I owe an unpayable debt to the View > Source menu. It was the equivalent of “popping the hood” on a car, and it allowed a generation to learn how things work. Your iPhone doesn’t do that – in fact, today’s Apple would prefer it if people stopped trying to open their hardware or software platforms at all.
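For the uninitiated, here’s a tiny sketch of the kind of JavaScript a curious kid could lift straight out of View > Source back then: constructor functions and prototypes, which is how the language did object-oriented programming before the `class` keyword arrived. The `Marquee` example itself is mine, invented for illustration:

```javascript
// Pre-2015 JavaScript OOP: a constructor function plus a shared prototype,
// much like the scrolling-text scripts View > Source revealed in that era.
function Marquee(text) {
  this.text = text; // per-instance state
}

// Methods live on the prototype and are shared by every instance.
Marquee.prototype.scroll = function () {
  // Rotate the string one character to the left on each call.
  this.text = this.text.slice(1) + this.text[0];
  return this.text;
};

var banner = new Marquee("HELLO ");
banner.scroll(); // "ELLO H"
banner.scroll(); // "LLO HE"
```

That prototype chain — objects delegating to other objects — was a working lesson in OOP, free for anyone willing to read the source.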

This, I assert, is why people fell in love with the Apple II, and why it endures after 40 years. It was a solid consumer device that could be used without needing to be built or tinkered with, but its removable top invited you to explore. “Look at the amazing things this computer can do… then peek inside and learn about how!”

As soon as our son was old enough to hold a screwdriver, he was taking things apart to look inside. I’m told I was the same way, but as parents Nicole and I had a plan: we’d take our kid to Goodwill and let him pick out junk to disassemble — rather than working electronics from around the home! I consider it a tragedy that some people outgrow such curiosity. Dr. Buggie has not – in fact, in my brief conversation with him, I think curiosity might be one of his defining characteristics…

Have PhD, Will Travel

Some time in the early 70s, Stephen Buggie completed his PhD in Psychology at the University of Oregon. Finding a discouraging job market in America, Buggie decided to seek his fortunes elsewhere. The timeline from there is a little confusing: he spent 12 years in Africa, teaching in both Zambia during its civil war (he vividly recalls military helicopters landing beside his house) and Malawi. He was in America for a sabbatical in 1987, where he saw computers being used in a classroom, and at some point in there he taught at the University of the Americas in Mexico, which was an early adopter of the Apple II for educational use. His two sons were born in Africa, and despite their skin color, their dual citizenship makes them African Americans – a point he was proud to make. He taught 5 years in South Carolina before joining the University of New Mexico on a tenure track. He maintains an office there as a Professor Emeritus (although he rarely visits — especially in the age of COVID) and still has his marking program and years of class records on 5.25” floppy disks.

Apple II Classroom Computers – Seattle’s Living Computer Museum

Buggie found community amongst Apple II users in the 80s and 90s. Before the World Wide Web there were trade magazines, Bulletin Board Systems and mail-in catalogs. Although there was no rating system for buyers and sellers like eBay has today, good people developed a reputation for their knowledge or products. As the average computer user moved to newer platforms, the community got smaller and more tight-knit. Buggie found that a key weakness in maintaining older hardware was the power supplies, so he scavenged for parts that were compatible, or could be modified to be compatible, and bought out a huge stock – one he is still working through as he ships them to individual hobbyists and restorers decades later. His class schedule made it difficult for him to go to events, but retirement allowed him to start attending — and writing about — KansasFest, and he still talks excitedly about the fun of those in-person interactions, and the value he got from trading stories, ideas and parts with other users.

To be certain, Buggie’s professional skills are in the psychology classroom. He doesn’t consider himself to be any sort of computer expert — he’s more like a pro user who taught himself the necessary skills to maintain his tools. And maybe this is the second theme that emerged in our conversation: like the virtue of curiosity, there is an inherent value in durability. This isn’t commentary on the disposable nature of consumer products so much as it is on the attitudes of consumers.

We shape our tools, and thereafter our tools shape us

My dad worked with wood my whole life. Sometimes as a profession, when he taught the shop program at a school in town. Sometimes as a hobby, when he built himself a canoe. Sometimes out of necessity, when my parents restored our old house to help finance their missionary work (and the raising of three kids.) His tools were rarely brand new, but they were always well organized and maintained. He instilled in me a respect for the craft and in the artifacts of its execution.

Computers are tools too – multi-purpose ones that can be dedicated to many creative works. When we treat them like appliances, or use them up like consumable goods, the disrespect is to ourselves, as well as our craft. Our lack of care and understanding, our inability to maintain our tools beyond a year or two, renders us stupid and helpless. We cease to be users and we become the used – by hardware vendors who just want us to buy the latest versions, or credit card companies that want to keep us in debt, or by some imagined social status competition…

In those like Dr. Buggie who stubbornly refuse to replace a computer simply because it’s old, I see both nostalgia and virtue.

The nostalgia isn’t always positive – I’ve seen many Facebook posts from “collectors” who have storage rooms full of stacked, dead Macintosh computers that they compulsively buy despite having no use for them — perhaps desperate to recapture some prior stage of life, or to claim a part in the computing revolution of yesteryear that really did “change the world” in ways we’re still coming to grips with. There is a certain strangeness to this hobby…

But there is also virtue in preserving those sparks of curiosity, those objects that first invited us to learn more, or to create, or to share. And there is virtue in respecting a good tool, in teaching the next generation to steward well the things we use to propel and shape our passions or careers or the homes we build for our families.

Fading Lightning

I joke with my wife that our rec room is like a palliative care facility for dying electronics: my 1992 Laserdisc player is connected to a 2020-model flatscreen TV and still regularly spins movies for us. I still build apps for a mobile platform that was discontinued in 2011 using a laptop sold in 2008. In fact, these two posts about the Apple II were written on a PowerBook G3 from 2001. Old things aren’t necessarily useless things… and in fact, in many ways they provide more possibility than the increasingly closed computing platforms of the modern era.

In an age where companies sue customers who attempt to repair their own products, there is something to be said for an old computer that still invites you to pop the top off and peer inside. It says that maybe the technology belongs to the user – and not the other way around. It says that maybe those objects that first inspired us to figure out how things work are worthwhile monuments that can remind us to keep learning – even after we retire. It says that maybe the lessons of the past are there to help us teach our kids to shape their world, and not to be shaped by it…

Thanks for visiting! If you missed Part 1, check it out here!

Maintainers of Lightning – Part 1

Of the subset of people who are into computers, there is a smaller subset (although bigger than you might think) that are into vintage computers. Of that subset, there’s a significant proportion with a specific affinity for old Apple hardware. Of that subset, there’s a smaller subset that are into restoring and preserving said hardware. For that group of people, something like a Lisa (the immediate precursor to the Macintosh) or a NeXT Cube (Steve Jobs’ follow-up to the Macintosh, after he was kicked out of Apple) holds special value. Something like an original Apple I board holds practically infinite value.

If you are in this sub-sub-sub-group of people, then you have undoubtedly come across the eBay auctions of one Dr. Stephen Buggie. You’ll know you’ve purchased something from him, because whether it’s as big as a power supply or as small as a single key from a keyboard, it comes packed with memorabilia of either the Apple II, or his other hobby, hunting for radioactive material in the New Mexico desert. Hand-written, photocopied, or on a floppy disk or DVD, he always includes these small gifts of obscura.

Last year, after the fifth or sixth time one of his eBay auctions helped me rescue an Apple II machine I was restoring, I reached out to Dr. Buggie and asked him if I could interview him for my blog. His online presence outside of eBay is relatively small. One of the few places you can find him is on YouTube, presenting at KansasFest, an annual meeting of sub-sub-sub-group nerds entirely focused on the Apple II line-up of computers. Not all of these folks are into restoration. Some are creating brand new inventions that extend the useful life of this 40+ year-old hardware. Some have been using the hardware continuously for 40+ years, and are still living the early PC wars, full of upstarts like Atari, Commodore and Amiga, gearing up for battle against IBM. Many still see the Macintosh as a usurper, and pledge more allegiance to the machine that Wozniak wrought than the platform that begot the MacBook, iPhone and other iJewelry that all the cool kids line up for today. If you search really hard, you may even find references to Dr. Buggie’s actual, although former, professional life: that of a world-traveling psychology professor. What you won’t find is any sort of social media presence — Facebook and Twitter don’t work on an Apple II, so Dr. Buggie doesn’t have much use for them.

The 5 Whys

What I wanted to know most from Buggie was why Apple II? Why, after so many generations of computer technology had come and gone, is a retired professor still fixing up, using and sourcing parts for a long-dead computer platform? I don’t think I got a straight answer to that: he likes bringing the Apple //c camping, because the portable form-factor and external power supply make it easy to connect and use in his camper. He likes the ][e because he used it in his classroom for decades and has specialized software for teaching and research. But as for why Apple II at all… well, mostly his reasons are inexplicable — at least in the face of the upgrade treadmill that Intel, Apple and the rest of the industry have conspired to set us all on. He uses his wife’s Dell to post things on eBay, and while still connected to the University, he acquiesced to having a more modern iMac installed when the I.T. department told him they would no longer support his Apple II. But if it were up to him, all of Dr. Buggie’s computing would probably happen on an Apple II…

OK, But What is an Apple II?

It’s probably worth pausing for a little history lesson, for the uninitiated. The Apple II was the second computer shipped by Steve Jobs and Steve Wozniak in the late 70s. Their first computer, the Apple I (or just Apple) was a bare logic board intended for the hobbyist. It had everything you needed to make your own computer… except a case, power supply, keyboard, storage device or display! The Apple II changed everything. When it hit the market as a consumer-friendly device complete in a well-made plastic case, with an integrated power supply and keyboard, that could be hooked up to any household TV set, it was a revolution. With very little room for debate, most would agree that the Apple II was the first real personal computer.

Apple I, with aftermarket “accessories” (like a power supply!)

As with any technology, there were multiple iterations improving on the original. But each retained a reasonable degree of compatibility with previous hardware and software, establishing the Apple II as a platform with serious longevity – lasting more than a decade of shipping product. As it aged, Apple understood it would eventually need a successor, and embarked on multiple projects to accomplish that goal. The Apple III flopped, and Steve Jobs’ first pet project, the graphical-UI-based Lisa, was far too expensive for the mainstream. It wasn’t until Jobs got hold of a skunkworks project led by Jef Raskin that the Macintosh as we know it was born. By then, Apple II was an institution; an aging, but solid bedrock of personal computing.

The swan song of the Apple II platform was called the IIGS (the letters stood for Graphics and Sound), which supported most of the original Apple II software line-up, but introduced a GUI based on the Macintosh OS — except in color! The GS was packed with typical Wozniak wizardry, relying less on brute computing power (since Jobs insisted its clock speed be limited to avoid competing with the Macintosh) and more on clever tricks to coax almost magical capabilities out of the hardware (just try using a GS without its companion monitor, and you’ll see how much finesse went into making the hardware shine!)

But architecture, implementation, and company politics aside, there was one very big difference between the Apple II and the Macintosh. And although he may not say it outright, I believe it’s this difference that kept the flame alive for Stephen Buggie, and so many other like-minded nerds.

Image stolen from someone named Todd. I bet he and Dr. Buggie would get along famously.

That key difference? You could pop the top off.

I’ll cover why that’s important — both then and now — and more about my chat with Dr. Buggie in Part 2