Maintainers of Lightning – Part 2

When I was a young computer nerd, back in the early days of a consumer-friendly Internet, formal computer education in high school was limited to whatever rudimentary understanding our middle-aged teachers could piece together for themselves to pass on. But there was one tool that taught me more about web development than any book or class or teacher ever could: View > Source.

View Source Menu

You can do it now, although the output is significantly less meaningful or helpful on the modern web. Dig around your browser’s menus and it’s still there somewhere: the opportunity to examine the source code of this and other websites.

It was from these menus that I learned my first simple tricks in HTML, like the infamous blink tag, and later the more complex capabilities offered through a simple programming language called JavaScript that increasingly became part of the Web.

JavaScript and the web’s DOM (Document Object Model) taught me Object Oriented Programming when my “college level” computer science course was still teaching procedural BASIC. Object Oriented Programming, and the web as a development platform, have defined my career in the 25 years since. I owe an unpayable debt to the View > Source menu. It was the equivalent of “popping the hood” on a car, and it allowed a generation to learn how things work. Your iPhone doesn’t do that – in fact, today’s Apple would prefer that people stop trying to open its hardware or software platforms at all.

This, I assert, is why people fell in love with the Apple II, and why it endures after 40 years. It was a solid consumer device that could be used without needing to be built or tinkered with, but its removable top invited you to explore. “Look at the amazing things this computer can do… then peek inside and learn how!”

As soon as our son was old enough to hold a screwdriver, he was taking things apart to look inside. I’m told I was the same way, but as parents Nicole and I had a plan: we’d take our kid to Goodwill and let him pick out junk to disassemble — rather than working electronics from around the home! I consider it a tragedy that some people outgrow such curiosity. Dr. Buggie has not – in fact, in my brief conversation with him, I think curiosity might be one of his defining characteristics…

Have PhD, Will Travel

Sometime in the early 70s, Stephen Buggie completed his PhD in Psychology at the University of Oregon. Finding a discouraging job market in America, Buggie decided to seek his fortunes elsewhere. The timeline from there is a little confusing: he spent 12 years in Africa, teaching in both Zambia during its civil war (he vividly recalls military helicopters landing beside his house) and Malawi. He was in America for a sabbatical in 1987, where he saw computers being used in a classroom, and at some point in there he taught at the University of the Americas in Mexico, an early adopter of the Apple II for educational use. His two sons were born in Africa, and despite their skin color, their dual citizenship makes them African Americans – a point he was proud to make. He taught for five years in South Carolina before joining the University of New Mexico on a tenure track. He maintains an office there as a Professor Emeritus (although he rarely visits — especially in the age of COVID) and still has his marking program and years of class records on 5.25” floppy disks.

Apple II Classroom Computers – Seattle’s Living Computer Museum

Buggie found community amongst Apple II users in the 80s and 90s. Before the World Wide Web there were trade magazines, Bulletin Board Systems and mail-in catalogs. Although there was no rating system for buyers and sellers like eBay has today, good people developed a reputation for their knowledge or products. As the average computer user moved to newer platforms, the community got smaller and more tight-knit. Buggie found that a key weakness in maintaining older hardware was the power supply, so he scavenged for parts that were compatible, or could be modified to be compatible, and bought out a huge stock – one he is still working through as he ships units to individual hobbyists and restorers decades later. His class schedule made it difficult for him to go to events, but he did finally make it to KansasFest in recent years, and still talks excitedly about the fun of those in-person interactions, and the value he got from trading stories, ideas and parts with other users.

To be certain, Buggie’s professional skills are in the psychology classroom. He doesn’t consider himself to be any sort of computer expert — he’s more like a pro user who taught himself the necessary skills to maintain his tools. And maybe this is the second theme that emerged in our conversation: like the virtue of curiosity, there is an inherent value in durability. This isn’t commentary on the disposable nature of consumer products so much as it is on the attitudes of consumers.

We shape our tools, and thereafter our tools shape us.

My dad worked with wood my whole life. Sometimes as a profession, when he taught the shop program at a school in town. Sometimes as a hobby, when he built himself a canoe. Sometimes out of necessity, when my parents restored our old house to help finance their missionary work (and the raising of three kids.) His tools were rarely brand new, but they were always well organized and maintained. He instilled in me a respect for the craft and in the artifacts of its execution.

Computers are tools too – multi-purpose ones that can be dedicated to many creative works. When we treat them like appliances, or use them up like consumable goods, the disrespect is to ourselves, as well as our craft. Our lack of care and understanding, our inability to maintain our tools beyond a year or two, renders us stupid and helpless. We cease to be users and we become the used – by hardware vendors who just want us to buy the latest versions, or credit card companies that want to keep us in debt, or by some imagined social status competition…

In those like Dr. Buggie who stubbornly refuse to replace a computer simply because it’s old, I see both nostalgia and virtue.

The nostalgia isn’t always positive – I’ve seen many Facebook posts from “collectors” who have storage rooms full of stacked, dead Macintosh computers that they compulsively buy despite having no use for them, perhaps desperate to recapture some prior stage of life, or to claim a part in the computing revolution of yesteryear that really did “change the world” in ways we’re still coming to grips with. There is a certain strangeness to this hobby…

But there is also virtue in preserving those sparks of curiosity, those objects that first invited us to learn more, or to create, or to share. And there is virtue in respecting a good tool, in teaching the next generation to steward well the things we use to propel and shape our passions or careers or the homes we build for our families.

Fading Lightning

I joke with my wife that our rec room is like a palliative care facility for dying electronics: my 1992 Laserdisc player is connected to a 2020-model flatscreen TV and still regularly spins movies for us. I still build apps for a mobile platform that was discontinued in 2011 using a laptop sold in 2008. In fact, these two posts about the Apple II were written on a PowerBook G3 from 2001. Old things aren’t necessarily useless things… and in fact, in many ways they provide more possibility than the increasingly closed computing platforms of the modern era.

In an age where companies sue customers who attempt to repair their own products, there is something to be said for an old computer that still invites you to pop the top off and peer inside. It says that maybe the technology belongs to the user – and not the other way around. It says that maybe those objects that first inspired us to figure out how things work are worthwhile monuments that can remind us to keep learning – even after we retire. It says that maybe the lessons of the past are there to help us teach our kids to shape their world, and not to be shaped by it…

Thanks for visiting! If you missed Part 1, check it out here!

Maintainers of Lightning – Part 1

Of the subset of people who are into computers, there is a smaller subset (although bigger than you might think) that is into vintage computers. Of that subset, there’s a significant proportion with a specific affinity for old Apple hardware. Of that subset, there’s a smaller subset that is into restoring and preserving said hardware. For that group of people, something like a Lisa (the immediate precursor to the Macintosh) or a NeXT Cube (Steve Jobs’ follow-up to the Macintosh, after he was kicked out of Apple) holds special value. Something like an original Apple I board holds practically infinite value.

If you are in this sub-sub-sub-group of people, then you have undoubtedly come across the eBay auctions of one Dr. Stephen Buggie. You’ll know you’ve purchased something from him, because whether it’s as big as a power supply or as small as a single key from a keyboard, it comes packed with memorabilia of either the Apple II, or his other hobby, hunting for radioactive material in the New Mexico desert. Hand-written, photocopied, or on a floppy disk or DVD, he always includes these small gifts of obscura.

Last year, after the fifth or sixth time one of his eBay auctions helped me rescue an Apple II machine I was restoring, I reached out to Dr. Buggie and asked him if I could interview him for my blog. His online presence outside of eBay is relatively small. One of the few places you can find him is on YouTube, presenting at KansasFest, an annual meeting of sub-sub-sub-group nerds entirely focused on the Apple II line-up of computers. Not all of these folks are into restoration. Some are creating brand new inventions that extend the useful life of this 40+ year-old hardware. Some have been using the hardware continuously for 40+ years, and are still living the early PC wars, full of upstarts like Atari, Commodore and Amiga, gearing up for battle against IBM. Many still see the Macintosh as a usurper, and pledge more allegiance to the machine that Wozniak wrought than the platform that begot the MacBook, iPhone and other iJewelry that all the cool kids line up for today. If you search really hard, you may even find references to Dr. Buggie’s actual, although former, professional life: that of a world-traveling psychology professor. What you won’t find is any sort of social media presence — Facebook and Twitter don’t work on an Apple II, so Dr. Buggie doesn’t have much use for them.

The 5 Whys

What I wanted to know most from Buggie was: why Apple II? Why, after so many generations of computer technology have come and gone, is a retired professor still fixing up, using and sourcing parts for a long-dead computer platform? I don’t think I got a straight answer to that: he likes bringing the Apple //c camping, because the portable form-factor and external power supply make it easy to connect and use in his camper. He likes the ][e because he used it in his classroom for decades and has specialized software for teaching and research. But as for why Apple II at all… well, mostly his reasons are inexplicable — at least in the face of the upgrade treadmill that Intel, Apple and the rest of the industry have conspired to set us all on. He uses his wife’s Dell to post things on eBay, and while still connected to the University, he acquiesced to having a more modern iMac installed when the I.T. department told him they would no longer support his Apple II. But if it were up to him, all of Dr. Buggie’s computing would probably happen on an Apple II…

OK, But What is an Apple II?

It’s probably worth pausing for a little history lesson, for the uninitiated. The Apple II was the second computer shipped by the two Steves, Jobs and Wozniak, in the late 70s. Their first computer, the Apple I (or just Apple), was a bare logic board intended for the hobbyist. It had everything you needed to make your own computer… except a case, power supply, keyboard, storage device or display! The Apple II changed everything. When it hit the market as a consumer-friendly device, complete in a well-made plastic case with an integrated power supply and keyboard, and able to be hooked up to any household TV set, it was a revolution. With very little room for debate, most would agree that the Apple II was the first real personal computer.

Apple I, with aftermarket “accessories” (like a power supply!)

As with any technology, there were multiple iterations improving on the original. But each retained a reasonable degree of compatibility with previous hardware and software, establishing the Apple II as a platform with serious longevity – lasting more than a decade of shipping product. As it aged, Apple understood it would eventually need a successor, and embarked on multiple projects to accomplish that goal. The Apple III flopped, and Steve Jobs’ first pet project, the graphical-UI-based Lisa, was far too expensive for the mainstream. It wasn’t until Jobs got hold of a skunkworks project led by Jef Raskin that the Macintosh as we know it was born. By then, the Apple II was an institution: an aging but solid bedrock of personal computing.

The swan song of the Apple II platform was called the IIGS (the letters stood for Graphics and Sound), which supported most of the original Apple II software line-up, but introduced a GUI based on the Macintosh OS — except in color! The GS was packed with typical Wozniak wizardry, relying less on brute computing power (since Jobs insisted its clock speed be limited to avoid competing with the Macintosh) and more on clever tricks to coax almost magical capabilities out of the hardware (just try using a GS without its companion monitor, and you’ll see how much finesse went into making the hardware shine!)

But architecture, implementation, and company politics aside, there was one very big difference between the Apple II and the Macintosh. And although he may not say it outright, I believe it’s this difference that kept the flame alive for Stephen Buggie, and so many other like-minded nerds.

Image stolen from someone named Todd. I bet he and Dr. Buggie would get along famously.

That key difference? You could pop the top off.

I’ll cover why that’s important — both then and now — and more about my chat with Dr. Buggie in Part 2.

What's Happening to Sony

John Dvorak is far from the most astute technology reporter out there. His years of trolling about the imminent death of Apple never really panned out, and most of his other “observations” trail actual news by a few weeks. So it’s no surprise that he’s oblivious to why Sony has been under attack for the past few weeks. It would be a surprise, however, if Sony were equally confused. If they haven’t gotten the message by now, they probably never will: their view of the marketplace is dying. The Internet is killing it.
When Sony launched the PlayStation 3, they pre-empted the homebrew scene, whose efforts to unlock the Dreamcast and the original Xbox drastically increased the functionality of those devices, but put those platforms’ go-to-market strategies under strain. Sony decided to include a feature called OtherOS, which allowed tinkerers a “sandbox” in which they could install Linux (or another OS) and experiment with the powerful hardware in the game machine. It was a good move.
So a few years after the fact, when they released a software update that added no new features, but did permanently disable the OtherOS feature, you can understand why it was pretty much universally considered a bad move.
When a brilliant but harmless at-home tinkerer, the young GeoHot, exploited a gaping hole in the PlayStation 3’s hardware architecture, re-enabling the ability to launch Linux, Sony didn’t respond by patching the hole, or by re-activating the feature. Instead they sued the young college student for all he was worth, filing legal injunctions to freeze his PayPal account, take down his website and require social media websites, like YouTube, to turn over identifying information on anyone who might have viewed GeoHot’s posts about how he unlocked the hardware that he had purchased with his own money.
To be clear, GeoHot owned the hardware, used his own tools and his own observations to learn how the device worked, and then enabled it to be more useful. I’m pretty sure inventors have been doing this for centuries. Only now it’s illegal.
Unfortunately for Sony, that legal regime is a relic of an era they have failed to outgrow. An era when only big companies owned innovation and ideas, when only bureaucracies had the authority to publish information, and when the group with the most money usually won. Yes, GeoHot settled and agreed never again to publish information about the PS3, but that doesn’t mean Sony won.
Sure, the giant company squashed the tiny individual. But now tiny individuals all over the Internet have responded in kind, exploiting more gaping holes in Sony’s old-world technology offerings. Long-known backdoors left unsealed, unencrypted username and password lists, credit card information stored in plain text on exposed servers… Sony is a stupid, plodding behemoth, swatting at a million flies who, combined, are smarter, more innovative and more effective than any iron-fisted control over information and ideas.
We see it happening over and over again: a shift of power from organizations to individuals, whether it’s a thousand hackers turning from curious exploration to retaliation against a company that thinks it owns ideas and the people who buy its wares, or individuals across the nation of Egypt sparking a Jasmine Revolution over the Internet and overthrowing a dictator who’d held onto power for decades. The Internet connects people faster and across greater distances than ever before. A hierarchy is not needed to spark and shape ideas: a swarm can do it better. Loyalty is to ideas and to the activities that allow us to explore them, not to companies or boards of directors. A company or a government cannot afford to be stagnant — and they certainly cannot afford to move backward — because if we can’t find what we want, we can invent it ourselves: without a budget or a chairperson or an executive committee. The world is a smaller place, and whether you go by the name Sony or Mubarak, if you can’t change, you will die by a thousand cuts…

What is "The Cloud"?

It seems to be one of those buzzwords that everyone uses and no one can define. In fact, you couldn’t pick a more amorphous descriptor for a technology if you tried! Clouds, by definition, are hazy, ever-shifting things!
But despite the buzz and the confusion, the Cloud is real, and it represents the biggest shift in computing since the Internet itself. The Cloud changes everything — and it’s inevitable.
Actually, it’s been inevitable for about a decade now. Most of us saw it coming, but few could predict its scope. “Smart devices and the Cloud” and the “Post PC era” are sound bites from two competitors who understand the same reality: things are changing.
For years, computers were silos of data and processing power sitting on your desk or your lap. With early networks, and then the Internet, those computers began to be able to share their data, and sometimes even their compute cycles. Another class of computer, called a server, existed primarily for these sharing functions. Websites began changing from online newsprint to online services, where users could access the functionality of other computers to perform tasks, like uploading and encoding videos, or executing banking transactions. As both the servers and the computers accessing them (called “clients”) got smarter, web services started to become richer and more interactive. Soon the experience you got using a website began to look and behave a lot like the experience you got using a program you installed on your home computer. The so-called “Web 2.0” was born when a critical mass of technologies began to support this kind of interaction.
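To see that client/server split in miniature, here is a toy sketch in Python. The service name, port, and payload are invented for illustration (this isn’t any real vendor’s API): a tiny “service” does a bit of work on the server side, and a thin client hands the work off over HTTP and reads back the result.

```python
# A toy illustration of the client/server split described above: a tiny
# "web service" that does work on behalf of a client. The names, port,
# and payload are made up for the example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoUpperService(BaseHTTPRequestHandler):
    def do_POST(self):
        # The "server" side: receive data, do the work, return a result.
        length = int(self.headers.get("Content-Length", 0))
        text = self.rfile.read(length).decode("utf-8")
        body = json.dumps({"shouted": text.upper()}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 8080), EchoUpperService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The "client" side: a thin front end that hands the work to the service.
    req = urllib.request.Request(
        "http://127.0.0.1:8080/", data=b"hello, cloud", method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # {'shouted': 'HELLO, CLOUD'}

    server.shutdown()
```

Everything the Cloud does is, at heart, this pattern scaled up: the client stays simple, and the interesting work happens on somebody else’s computer.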
The Cloud represents the logical conclusion of that progress: if services on the web can be as rich and usable as programs on your home computer — and in fact richer, because of their connected nature — then why bother installing programs on your computer at all? No one has to download and install Facebook on their PC or Mac — but it’s still one of the most used applications in history. Even the Facebook “app” you install on your smartphone is really just a curated Internet — more a web application than a traditional application (and if you don’t believe me, just try using it with your WiFi and data connection off!)
More and more applications have a “web” version that runs in the browser. In some cases these versions aren’t quite as rich as their traditionally installed (aka “thick client”) counterparts, but many are quite usable and useful. Of course, some applications still need direct access to the client hardware. Photoshop won’t be in the Cloud any time soon — but it’s not unreasonable to predict that it could be enriched by the Cloud in the near future.
Cloud data centers offer more than just “software as a service” (SaaS). They also offer massive amounts of scalable compute power. Rendering 60 seconds of 30-frames-per-second 3D video for a Pixar movie may take one machine 10 hours, but farm that processing out to 100 servers in the Cloud to run in parallel, and suddenly you’ve got your result in about 10 minutes. And storing terabytes of Hollywood movies on hard drives at home would be both cost and space prohibitive, but if 300 computers in a Cloud data center store them, and you can access them from your Xbox for a small monthly fee, suddenly you have decades of entertainment at your fingertips. This is called “infrastructure as a service” (IaaS).
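For anyone who wants to check the math, here is a rough sketch in Python. The frame counts and timings come from the example above; render_frame is a made-up placeholder rather than any real rendering API, and the thread pool merely stands in for fanning work out to cloud servers. Perfect scaling works out to about 6 minutes, with coordination overhead accounting for the rest of the 10.

```python
# Back-of-the-envelope math for the render-farm example above, plus a
# simulated fan-out to 100 "servers".
from concurrent.futures import ThreadPoolExecutor

FRAMES = 60 * 30            # 60 seconds at 30 frames per second = 1800 frames
ONE_MACHINE_HOURS = 10
SECONDS_PER_FRAME = ONE_MACHINE_HOURS * 3600 / FRAMES   # = 20 s per frame
SERVERS = 100

ideal_minutes = FRAMES * SECONDS_PER_FRAME / SERVERS / 60
print(f"Ideal parallel time: {ideal_minutes:.0f} minutes")   # ~6 minutes

# A stand-in for handing each frame to a cloud worker. render_frame is a
# placeholder, not a real rendering API.
def render_frame(n: int) -> str:
    return f"frame-{n:04d}.png"

with ThreadPoolExecutor(max_workers=SERVERS) as pool:
    rendered = list(pool.map(render_frame, range(FRAMES)))

print(f"Rendered {len(rendered)} frames across {SERVERS} workers")
```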
The final step is called “platform as a service” (PaaS), and its ambitious goal is to begin augmenting and replacing the need for companies to own their own servers at all. When a mobile AIDS health station moves through Africa, its staff shouldn’t need to move computers and database servers with them — a smart, tablet-style device and a satellite Internet connection can give them access to a world of services, applications, and data with equipment that will fit in a backpack.
As PaaS evolves and more applications and services are moved into the Cloud, the technology needs of first individuals and small companies, and later of large companies and enterprises, decline. Software requirements are pushed into data centers, freeing up the client device manufacturers to innovate around more natural interfaces. Rather than pouring R&D money into making a computer 1% faster than last year’s, that money goes into understanding human speech, facial expressions or hand gestures. The processing and storage requirements are handled by the Cloud, while the interaction requirements are handled by the device the human being sees.
We’ve seen this already with touch-aware devices like the iPhone (and Android and Windows Phone) and with gesture-aware devices like the Kinect. Consumers are more interested in the experience than in installing and configuring applications. They don’t want to organize and train their computer to meet their needs; they want the device to be intuitive. In the past, software developers had a focus split between meeting the needs of the PC their software would run on, and the needs of the user accessing that software. In the Cloud, the software runs elsewhere, and the interface the user sees can be customized to the device and interaction model being used.
This transition won’t be a quick one — but it’s already begun. And analysts predict that by 2015, PaaS will be dominant in the industry. That means better devices, new kinds of software, and more connected experiences. For those who think Facebook is too invasive (or pervasive): prepare yourself. The always-on connected world is coming. And it’s gonna be cool.
The Terminator’s Skynet may not have become self-aware on April 19, 2011, but a massive network of intelligent computers running everything is closer to reality than science fiction…