Making My Own Cloud

Nic and I got our first camera together a few months before our wedding — it was also our last film camera. I had owned a few digital cameras, but at the time their quality was low and their prices still high, and we wanted really good pictures that would last. So we got a good scanner to go along with our film, and resolved to begin preserving memories.

A couple of kids at Dunn’s River Falls, 2001

The subsequent economies of scale for digital cameras, and later the rapidly improving quality of phone cameras, resulted in a proliferation of photographs — and many hours spent on a strategy for organizing and storing them. This got even more complicated when our kids got phones and began photographing and recording everything, adding three new digital archivists to the family.

The result is more than 140GB of photos over 20+ years, organized in folders by year, then by month. The brief metadata I can capture in a folder name is rarely enough to be able to quickly pinpoint a particular memory, but wading through the folders to find something is often a fun walk down memory lane anyway.
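That year-then-month filing scheme is simple enough to automate. Here's a minimal sketch (my own, not a tool we actually use) that sorts a folder of photos into YYYY/MM subfolders by file modification date — a real version would read the EXIF capture date instead:

```python
import shutil
import time
from pathlib import Path

def file_photos(src: Path, dest: Path) -> None:
    """Move every file in src into dest/YYYY/MM based on its
    modification time (a stand-in for the real capture date)."""
    for photo in src.iterdir():
        if not photo.is_file():
            continue
        taken = time.localtime(photo.stat().st_mtime)
        folder = dest / f"{taken.tm_year}" / f"{taken.tm_mon:02d}"
        folder.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(folder / photo.name))
```

The folder names here (`2001/06` and so on) are just one way to encode the scheme; the point is that a hierarchy you control outlives any one tool.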

Apple's Digital Hub strategy of yester-year

Since I first started archiving photos, a number of technology solutions have come along claiming to do it better. Flickr, Google Photos, iPhoto, and Amazon Photos all made bold assertions that they could automatically organize photos for you, in exchange for a small fee for storing them. The automatic organization always sucked, and the fees were usually part of an ecosystem lock-in play. It seems nothing has been able to beat hierarchical directory trees yet.

Still, 140GB is a lot, and 20 years of memories can't be saved on a single hard drive — there's too much risk. Some kind of bulk back-up mechanism is important. For the past 8 years, we've used Microsoft's OneDrive. Their pricing is the best, and their sync clients work well on most platforms. They don't try to force any organization on you; it kinda just works.

Lately, though, they’ve begun playing into the “ecosystem lock” trap. The macOS client is rapidly abandoning older (and more functional) versions of the OS. OneDrive is priced most attractively if you also subscribe to Office, which is also moving to the consumer treadmill model of questionable new features requiring newer hardware. It seems that in order to justify the subscription software model, vendors need us to abandon our hardware every 3-4 years. This artificial obsolescence must be the industry’s answer to the fact that consumer computing innovation has plateaued, and there’s no good reason to replace a computer less than 10 years old any more — and many good reasons not to.

Hidden in an unknown corner of Inner Mongolia is a toxic, nightmarish lake created by our thirst for smartphones, consumer gadgets and green tech.

In short, while I’m not unhappy with OneDrive, and consider the current incarnation of Microsoft to be one of the more ethical tech giants, it was high time to begin exploring alternatives. If you’re sensing a theme here lately, it stems from a realization that if we nerds willingly surrender all our data to big companies, then what hope does the average person have of privacy in this connected age?

Fortunately, as with most other tech, there are open source alternatives. Some are only for those who can DIY, but a significant number are available to the less tech-savvy willing to vote with their wallets. Both NextCloud and OwnCloud offer sync clients that are highly compatible with a range of operating systems and environments, and both are available as self-hosted or subscription systems from a range of providers. At least for now, I’ve decided to self-host OwnCloud in Azure, largely because I get some free Azure as a work perk. If that arrangement changes in the future, I’m very likely to subscribe to NextCloud provided by Hetzner, a privacy-conscious European service that costs less than $5.

My first digital camera, a Kodak DC20, circa 1997. Thanks mom and dad!

Right now, our total synced storage needs for the family are under 300GB. I have another terabyte of historical software, a selection of which will remain in a free OneDrive account. The complete 1.3TB backup is on dual hard drives, one always stored offsite. This relatively small size is due to the fact that we stopped downloading video as soon as streaming services became a viable paid alternative — although that appears to be changing.
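Dual drives only protect you if the two copies actually match. A minimal integrity check — my own sketch, not part of any particular backup tool — hashes every file on both drives and compares the results:

```python
import hashlib
from pathlib import Path

def manifest(root: Path) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    digests = {}
    for f in sorted(root.rglob("*")):
        if f.is_file():
            rel = str(f.relative_to(root))
            digests[rel] = hashlib.sha256(f.read_bytes()).hexdigest()
    return digests

def drives_match(drive_a: Path, drive_b: Path) -> bool:
    """True when both backup copies hold identical file sets and contents."""
    return manifest(drive_a) == manifest(drive_b)
```

Run periodically, something like this catches silent bit rot on either drive before you rotate the stale copy offsite.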

I started making money on computers as a teen by going around and fixing people’s computers for them. Most made the same simple mistakes, were grateful for help, and were generally eager to learn. In 2022, I’m afraid we’ve all just surrendered to big tech — we’ve decided it’s too hard to learn to manage our digital lives, so we let someone else do it; in exchange, we’ve stopped being customers and instead become the products. With our digital presence being so important, maybe it’s time we consumers decided we’re not for sale.

Smart Phones Strike Back

Smart phones are getting interesting again — and I’m not talking about next week’s Apple event, where they’re sure to announce yet another set of incremental changes to the iPhone, or Samsung getting caught throttling apps on their most recent set of samey Galaxy devices. I’m talking about choice re-emerging. Slowly, mind you, and without the participation of big data-gobbling companies. Instead, it’s brought to you by the same type of nerd that railed against the desktop duopoly in the 90s: the Linux crowd.

While Desktop Linux never really had its day, Linux has succeeded in a myriad of different ways: on embedded devices, web servers, and in tiny computers. It turns out there is a place for an alternate operating system — it just looks a little different from the computers you see at Walmart or Best Buy.

As consumers we’re fairly bereft of hardware choice — except for a few expensive experiments, smart phones all pretty much look the same these days. Software choice, too, is basically non-existent. Do you want to buy into Apple’s ecosystem, or Google’s? If you don’t want either, then you’re basically cut off from the world. No banking, no messaging, and thanks to the 3G shutdown, soon no phone calls either.

Those tackling the issue of hardware choice are a relatively brave few. The FairPhone isn’t cheap, or terribly different looking, but it’s a good effort at making a repairable handset. The PinePhone is a truly hacker-friendly device, with hardware switches to protect against snooping software, and an open bootloader capable of handing off to multiple operating systems. Both devices still look like rectangular slabs, but have important features that the hegemony is unwilling to consider.

It’s also interesting to see big manufacturers explore different form factors. I’m glad to see folding phones making a comeback — although they remain too rich for my blood. There’s nothing approaching the 3G era of creative and exploratory hardware design, though. When my Veer finally gets cut off from T-Mobile’s doomed 3G network this summer, there is unlikely to be a replacement quite so svelte.

But while it’s true that hardware remains mostly yawn-worthy, smart phone operating systems are another story. While Windows Phone still has a few holdouts, and there’s a handful of us holding on to webOS, most of the world has accepted either iOS or Android. But Linux is not to be counted out — and this is where things get really interesting.

This summer, I ran Sailfish OS for a couple of weeks. Based on open-source Linux, but with some proprietary software, Sailfish is built by a Finnish company called Jolla, which offers it as a viable alternative platform and has had success certifying it for use by some European governments. The free base OS is quite stable (although I don’t love some of their UI choices) and the paid version provides Microsoft Exchange enterprise email and a Google-free Android compatibility layer. The former is too buggy for serious use, in my opinion, and the latter suffers from some stability issues. But for basic use, on modern hardware, Sailfish is gradually becoming a legitimate contender.

Jolla is building a real alternative to Android

Other Linux-based offerings are at varying levels of maturity. Ubuntu Touch doesn’t quite have the backing it did when initially launched, but it was a breeze to install on my Pixel 3A. Manjaro Linux is fast becoming the OS of choice for the PinePhone crowd — but I can’t afford the hardware, so I haven’t tried it out. And my own favorite, LuneOS — an attempt to restore webOS to its original mobile glory by porting Open webOS (and other) components, along with a familiar UI and app support — continues its slow evolution.

Back in the infancy of the smart phone, I considered myself an early adopter. I stood in line for the first iPhone, having sold two of my most precious gadgets (my Newton 2000, and my iPod) to pay for it. While at Microsoft, I had virtually every Windows Phone 7 device available. And I’ve had numerous Android-based devices, and worked for 3 years on an open Android (AOSP) derived platform at Amazon. But the past 7-10 years have been depressingly boring. Watching people drool over a new iPhone that’s basically the same as the last iPhone is infuriating — it’s like society collectively agreed that we’re not interested in investing in innovation, only in rewarding repetition.


In early 2022, I can say unequivocally that most people don’t need, and won’t be satisfied with, a Linux-based phone right now. But for the first time in nearly a decade, my desk has an array of interesting smart phones on it again, and I’m optimistic that a couple of these new OSes might evolve to the point where those of us who aren’t willing to trade our privacy for the right to make and receive phone calls might finally have some choices again.

Maintainers of Lightning – Part 2

When I was a young computer nerd, back in the early days of a consumer-friendly Internet, formal computer education in high school was limited to whatever rudimentary understanding our middle-aged teachers could piece together for themselves to pass on. But there was one tool that taught me more about web development than any book or class or teacher ever could: View > Source.

View Source Menu

You can do it now, although the output is significantly less meaningful or helpful on the modern web. Dig around your browser’s menus and it’s still there somewhere: the opportunity to examine the source code of this, and other, websites.

It was from these menus that I learned my first simple tricks in HTML, like the infamous blink tag, and later the more complex capabilities offered through a simple programming language called JavaScript that increasingly became part of the Web.

JavaScript and the web’s DOM (Document Object Model) taught me Object Oriented Programming when my “college level” computer science course was still teaching procedural BASIC. Object Oriented Programming, and the web as a development platform, have defined my career for 25 years now. I owe an unpayable debt to the View > Source menu. It was the equivalent of “popping the hood” on a car, and it allowed a generation to learn how things work. Your iPhone doesn’t do that – in fact, today’s Apple would prefer that people stop trying to open their hardware or software platforms at all.

This, I assert, is why people fell in love with the Apple II, and why it endures after 40 years. It was a solid consumer device that could be used without needing to be built or tinkered with, but its removable top invited you to explore. “Look at the amazing things this computer can do… then peek inside and learn how!”

As soon as our son was old enough to hold a screwdriver, he was taking things apart to look inside. I’m told I was the same way, but as parents Nicole and I had a plan: we’d take our kid to Goodwill and let him pick out junk to disassemble — rather than working electronics from around the home! I consider it a tragedy that some people outgrow such curiosity. Dr. Buggie has not – in fact, in my brief conversation with him, I think curiosity might be one of his defining characteristics…

Have PhD, Will Travel

Some time in the early 70s, Stephen Buggie completed his PhD in Psychology at the University of Oregon. Finding a discouraging job market in America, Buggie decided to seek his fortunes elsewhere. The timeline from there is a little confusing: he spent 12 years in Africa, teaching in both Zambia during its civil war (he vividly recalls military helicopters landing beside his house) and Malawi. He was in America for a sabbatical in 1987, where he saw computers being used in a classroom, and at some point in there he taught at the University of the Americas in Mexico, an early adopter of the Apple II for educational use. His two sons were born in Africa, and despite their skin color, their dual citizenship makes them African Americans – a point he was proud to make. He taught 5 years in South Carolina before joining the University of New Mexico on a tenure track. He maintains an office there as a Professor Emeritus (although he rarely visits — especially in the age of COVID) and still has his marking program and years of class records on 5.25” floppy disks.

Apple II Classroom Computers – Seattle’s Living Computer Museum

Buggie found community amongst Apple II users in the 80s and 90s. Before the World Wide Web there were trade magazines, Bulletin Board Systems and mail-in catalogs. Although there was no rating system for buyers and sellers like eBay has today, good people developed a reputation for their knowledge or products. As the average computer user moved to newer platforms, the community got smaller and more tight-knit. Buggie found that a key weakness in maintaining older hardware was the power supplies, so he scavenged for parts that were compatible, or could be modified to be compatible, and bought out a huge stock – one he is still working through as he ships them to individual hobbyists and restorers decades later. His class schedule made it difficult for him to go to events, but retirement allowed him to start attending — and writing about — KansasFest, and he still talks excitedly about the fun of those in-person interactions, and the value he got from trading stories, ideas and parts with other users.

To be certain, Buggie’s professional skills are in the psychology classroom. He doesn’t consider himself to be any sort of computer expert — he’s more like a pro user who taught himself the necessary skills to maintain his tools. And maybe this is the second theme that emerged in our conversation: like the virtue of curiosity, there is an inherent value in durability. This isn’t commentary on the disposable nature of consumer products so much as it is on the attitudes of consumers.

We shape our tools, and thereafter our tools shape us

My dad worked with wood my whole life. Sometimes as a profession, when he taught the shop program at a school in town. Sometimes as a hobby, when he built himself a canoe. Sometimes out of necessity, when my parents restored our old house to help finance their missionary work (and the raising of three kids.) His tools were rarely brand new, but they were always well organized and maintained. He instilled in me a respect for the craft and in the artifacts of its execution.

Computers are tools too – multi-purpose ones that can be dedicated to many creative works. When we treat them like appliances, or use them up like consumable goods, the disrespect is to ourselves, as well as our craft. Our lack of care and understanding, our inability to maintain our tools beyond a year or two, renders us stupid and helpless. We cease to be users and we become the used – by hardware vendors who just want us to buy the latest versions, or credit card companies that want to keep us in debt, or by some imagined social status competition…

In those like Dr. Buggie who stubbornly refuse to replace a computer simply because it’s old, I see both nostalgia and virtue.

The nostalgia isn’t always positive – I’ve seen many Facebook posts from “collectors” with storage rooms full of stacked, dead Macintosh computers that they compulsively buy despite having no use for them, perhaps desperate to hold on to some prior stage of life, or to claim a part in the computing revolution of yester-year that really did “change the world” in ways we’re still coming to grips with. There is a certain strangeness to this hobby…

But there is also virtue in preserving those sparks of curiosity, those objects that first invited us to learn more, or to create, or to share. And there is virtue in respecting a good tool, in teaching the next generation to steward well the things we use to propel and shape our passions or careers or the homes we build for our families.

Fading Lightning

I joke with my wife that our rec room is like a palliative care facility for dying electronics: my 1992 Laserdisc player is connected to a 2020-model flatscreen TV and still regularly spins movies for us. I still build apps for a mobile platform that was discontinued in 2011 using a laptop sold in 2008. In fact, these two posts about the Apple II were written on a PowerBook G3 from 2001. Old things aren’t necessarily useless things… and in fact, in many ways they provide more possibility than the increasingly closed computing platforms of the modern era.

In an age where companies sue customers who attempt to repair their own products, there is something to be said for an old computer that still invites you to pop the top off and peer inside. It says that maybe the technology belongs to the user – and not the other way around. It says that maybe those objects that first inspired us to figure out how things work are worthwhile monuments that can remind us to keep learning – even after we retire. It says that maybe the lessons of the past are there to help us teach our kids to shape their world, and not to be shaped by it…

Thanks for visiting! If you missed Part 1, check it out here!

Maintainers of Lightning – Part 1

Of the subset of people who are into computers, there is a smaller subset (although bigger than you might think) that are into vintage computers. Of that subset, there’s a significant proportion with a specific affinity for old Apple hardware. Of that subset, there’s a smaller subset that are into restoring and preserving said hardware. For that group of people, something like a Lisa (the immediate precursor to the Macintosh) or a NeXT Cube (Steve Jobs’ follow-up to the Macintosh, after he was kicked out of Apple) holds special value. Something like an original Apple I board holds practically infinite value.

If you are in this sub-sub-sub-group of people, then you have undoubtedly come across the eBay auctions of one Dr. Stephen Buggie. You’ll know you’ve purchased something from him, because whether it’s as big as a power supply or as small as a single key from a keyboard, it comes packed with memorabilia of either the Apple II, or his other hobby, hunting for radioactive material in the New Mexico desert. Hand-written, photocopied, or on a floppy disc or DVD, he always includes these small gifts of obscura.

Last year, after the fifth or sixth time one of his eBay auctions helped me rescue an Apple II machine I was restoring, I reached out to Dr. Buggie and asked him if I could interview him for my blog. His online presence outside of eBay is relatively small. One of the few places you can find him is on YouTube, presenting at KansasFest, an annual meeting of sub-sub-sub-group nerds entirely focused on the Apple II line-up of computers. Not all of these folks are into restoration. Some are creating brand new inventions that extend the useful life of this 40+ year-old hardware. Some have been using the hardware continuously for 40+ years, and are still living the early PC wars, full of upstarts like Atari, Commodore and Amiga gearing up for battle against IBM. Many still see the Macintosh as a usurper, and pledge more allegiance to the machine that Wozniak wrought than the platform that begot the MacBook, iPhone and other iJewelry that all the cool kids line up for today. If you search really hard, you may even find references to Dr. Buggie’s actual, although former, professional life: that of a world-traveling psychology professor. What you won’t find is any sort of social media presence — Facebook and Twitter don’t work on an Apple II, so Dr. Buggie doesn’t have much use for them.

The 5 Whys

What I wanted to know most from Buggie was: why the Apple II? Why, after so many generations of computer technology have come and gone, is a retired professor still fixing up, using and sourcing parts for a long-dead computer platform? I don’t think I got a straight answer to that: he likes bringing the Apple //c camping, because the portable form-factor and external power supply make it easy to connect and use in his camper. He likes the ][e because he used it in his classroom for decades and has specialized software for teaching and research. But as for why the Apple II at all… well, mostly his reasons are inexplicable — at least in the face of the upgrade treadmill that Intel, Apple and the rest of the industry have conspired to set us all on. He uses his wife’s Dell to post things on eBay, and while still connected to the University, he acquiesced to having a more modern iMac installed when the I.T. department told him they would no longer support his Apple II. But if it were up to him, all of Dr. Buggie’s computing would probably happen on an Apple II…

OK, But What is an Apple II?

It’s probably worth pausing for a little history lesson, for the uninitiated. The Apple II was the second computer shipped by Steve Jobs and Steve Wozniak in the late 70s. Their first computer, the Apple I (or just Apple) was a bare logic board intended for the hobbyist. It had everything you needed to make your own computer… except a case, power supply, keyboard, storage device or display! The Apple II changed everything. When it hit the market as a consumer-friendly device, complete in a well-made plastic case with an integrated power supply and keyboard, that could be hooked up to any household TV set, it was a revolution. Most would agree that the Apple II was the first real personal computer.

Apple I, with aftermarket “accessories” (like a power supply!)

As with any technology, there were multiple iterations improving on the original. But each retained a reasonable degree of compatibility with previous hardware and software, establishing the Apple II as a platform with serious longevity – lasting more than a decade of shipping product. As it aged, Apple understood it would eventually need a successor, and embarked on multiple projects to accomplish that goal. The Apple III flopped, and Steve Jobs’ first pet project, the graphical-UI-based Lisa, was far too expensive for the mainstream. It wasn’t until Jobs got hold of a skunkworks project led by Jef Raskin that the Macintosh as we know it was born. By then, the Apple II was an institution; an aging but solid bedrock of personal computing.

The swan song of the Apple II platform was called the IIGS (the letters stood for Graphics and Sound), which supported most of the original Apple II software line-up, but introduced a GUI based on the Macintosh OS — except in color! The GS was packed with typical Wozniak wizardry, relying less on brute computing power (since Jobs insisted its clock speed be limited to avoid competing with the Macintosh) and more on clever tricks to coax almost magical capabilities out of the hardware (just try using a GS without its companion monitor, and you’ll see how much finesse went into making the hardware shine!)

But architecture, implementation, and company politics aside, there was one very big difference between the Apple II and the Macintosh. And although he may not say it outright, I believe it’s this difference that kept the flame alive for Stephen Buggie, and so many other like-minded nerds.

Image stolen from someone named Todd. I bet he and Dr. Buggie would get along famously.

That key difference? You could pop the top off.

I’ll cover why that’s important — both then and now — and more about my chat with Dr. Buggie in Part 2

What's Happening to Sony

John Dvorak is far from the most insightful technology reporter out there. His years of trolling about the imminent death of Apple never really panned out, and most of his other “observations” trail actual news by a few weeks. So it’s no surprise that he’s oblivious about why Sony has been under attack for the past few weeks. It would be a surprise, however, if Sony were equally confused. If they haven’t gotten the message by now, they probably never will: their view of the marketplace is dying. The Internet is killing it.
When Sony launched the PlayStation 3, they pre-empted the homebrew scene, whose efforts to unlock the Dreamcast and the original Xbox drastically increased the functionality of those devices, but put their go-to-market strategy under strain. Sony decided to include a feature called OtherOS, which allowed tinkerers a “sandbox” in which they could install Linux (or another OS) and experiment with the powerful hardware in the game machine. It was a good move.
So a few years after the fact, when they released a software update that added no new features, but did permanently disable the OtherOS feature, you can understand why it was pretty much universally considered a bad move.
When a brilliant but harmless at-home tinkerer, the young GeoHot, exploited a gaping hole in the PlayStation 3’s hardware architecture, re-enabling the ability to launch Linux, Sony didn’t respond by patching the hole, or by re-activating the feature. Instead they sued the young college student for all he was worth, filing legal injunctions to freeze his PayPal account, take down his website, and require social media websites like YouTube to turn over identifying information about anyone who might have viewed GeoHot’s posts on how he unlocked the hardware that he had purchased with his own money.
To be clear, GeoHot owned the hardware, used his own tools and his own observations to learn how the device worked, and then enabled it to be more useful. I’m pretty sure inventors have been doing this for centuries. Only now it’s illegal.
Unfortunately for Sony, that legality is a relic of an era they have failed to outgrow. An era when only big companies owned innovation and ideas, when only bureaucracies had the authority to publish information, and when the group with the most money usually won. Yes, GeoHot settled and agreed never again to publish information about the PS3, but that doesn’t mean Sony won.
Sure, the giant company squashed the tiny individual. But now tiny individuals all over the Internet have responded in kind, exploiting more gaping holes in Sony’s old-world technology offerings. Long-known backdoors left unsealed, unencrypted username and password lists, credit card information stored in plain text on exposed servers… Sony is a stupid, plodding behemoth, swatting at a million flies who, combined, are smarter, more innovative and more effective than any ironfisted control over information and ideas.
We see it happening over and over again: a shift of power from organizations to individuals. Whether it’s a thousand hackers turning from curious exploration to retaliation against a company that thinks it owns ideas and the people who buy its wares, or individuals across the nation of Egypt sparking a Jasmine Revolution over the Internet and overthrowing a dictator who’d held onto power for decades. The Internet connects people faster and across greater distances than ever before. A hierarchy is not needed to spark and shape ideas: a swarm can do it better. Loyalty is to ideas and the activities that allow us to explore them, not to companies or boards of directors. A company or a government cannot afford to be stagnant — and they certainly cannot afford to move backward — because if we can’t find what we want, we can invent it ourselves: without a budget or a chairperson or an executive committee. The world is a smaller place, and whether you go by the name Sony or Mubarak, if you can’t change, you can die by a thousand cuts…

What is "The Cloud"?

It seems to be one of those buzz words that everyone uses, and no one can define. In fact, you couldn’t pick a more amorphous descriptor for a technology if you tried! Clouds by their definition are hazy, ever-shifting things!
But despite the buzz and the confusion, the Cloud is real, and it represents the biggest shift in computing since the Internet itself. The Cloud changes everything — and it’s inevitable.
Actually, it’s been inevitable for about a decade now. Most of us saw it coming, but few could predict its scope. “Smart devices and the Cloud” and the “Post-PC era” are sound bites from two competitors who understand the same reality: things are changing.
For years computers were silos of data and processing power sitting on your desk or your lap. With early networks, and then the Internet, those computers began to be able to share their data, and sometimes even their compute cycles. Another class of computer, called a server, existed primarily for these sharing functions. Web sites began changing from online newsprint to online services, where users could access the functionality of other computers to perform tasks, like uploading and encoding videos, or executing banking transactions. As both the servers and the computers accessing them (called “clients”) got smarter, web services started to become richer and more interactive. Soon the experience you got using a website began to look and behave a lot like the experience you got using a program you installed on your home computer. The so-called “Web 2.0” was born when a critical mass of technologies began to support this kind of interaction.
The Cloud represents the logical conclusion of that progress: if services on the web can be as rich and usable as programs on your home computer — and in fact richer, because of their connected nature — then why bother installing programs on your computer at all? No one has to download and install Facebook on their PC or Mac — but it’s still one of the most used applications in history. Even the Facebook “app” you install on your smart phone is really just a curated Internet — more a web application than a traditional application (and if you don’t believe me, just try using it with your WiFi and data connection off!)
More and more applications have a “web” version that runs in the browser. In some cases these versions aren’t quite as rich as their traditional installed (aka “thick client”) counterparts, but many are quite usable and useful. Of course some applications still need direct access to the client hardware. Photoshop won’t be in the Cloud any time soon — but it’s not unreasonable to predict that it could be enriched by the Cloud in the near future.
Cloud data centers offer more than just “software as a service” (SaaS). They also offer massive amounts of scalable compute power. Rendering 60 seconds of 30-frames-per-second 3D video for a Pixar movie may take one machine 10 hours, but farm that processing out to 100 servers in the Cloud to run in parallel, and suddenly you’ve got your result in minutes. And storing terabytes of Hollywood movies on hard drives at home would be both cost and space prohibitive, but if 300 computers in a Cloud data center store them, and you can access them from your Xbox for a small monthly fee, suddenly you have decades of entertainment at your fingertips. This is called “infrastructure as a service” (IaaS).
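The arithmetic behind that render-farm example is worth a quick sanity check. Assuming ideal, overhead-free parallelism (which real render farms never quite achieve, thanks to scheduling and data transfer):

```python
# One machine takes 10 hours to render the whole clip.
single_machine_minutes = 10 * 60

# With N servers splitting the frames evenly, the ideal
# wall-clock time drops by a factor of N.
servers = 100
parallel_minutes = single_machine_minutes / servers
print(parallel_minutes)  # 6.0 — minutes instead of hours
```

The frame count itself cancels out: however many frames there are, 100-way parallelism turns a 600-minute job into roughly a 6-minute one, before overhead.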
The final step is called “platform as a service” (PaaS), and its ambitious goal is to begin augmenting and replacing the need for companies to own their own servers at all. When a mobile AIDS health station moves through Africa, it shouldn’t need to move computers and database servers with it — a smart tablet-style device and a satellite Internet connection can allow access to a world of services, applications, and data with equipment that will fit in a backpack.
As PaaS evolves and more applications and services move into the Cloud, the technology needs decline — first for individuals and small companies, and later for large companies and enterprises. Software requirements are pushed into data centers, freeing up client device manufacturers to innovate around more natural interfaces. Rather than pouring R&D money into making a computer 1% faster than last year’s, that money goes into understanding human speech, facial expressions or hand gestures. The processing and storage requirements are handled by the Cloud, while the interaction requirements are handled by the device the human being sees.
We’ve seen this already with touch-aware devices like the iPhone (and Android and Windows Phone) and with gesture-aware devices like the Kinect. Consumers are more interested in the experience than in installing and configuring applications. They don’t want to organize and train their computer to meet their needs, they want the device to be intuitive. In the past, software developers had a focus split between meeting the needs of the PC their software would run on, and the needs of the user accessing that software. In the Cloud, the software runs elsewhere, and the interface the user sees can be customized to the device and interaction model being used.
This transition won’t be a quick one — but it’s already begun. And analysts predict that by 2015 PaaS will be dominant in the industry. That means better devices, new kinds of software, and more connected experiences. For those who think Facebook is too invasive (or pervasive): prepare yourself. The always-on connected world is coming. And it’s gonna be cool.
The Terminator’s SkyNet may not have become self-aware on April 19, 2011, but a massive network of intelligent computers running everything is closer to reality than science fiction…