What's Happening to Sony

John Dvorak is far from the most insightful technology reporter out there. His years of trolling about the imminent death of Apple never really panned out, and most of his other “observations” trail actual news by a few weeks. So it’s no surprise that he’s oblivious to why Sony has been under attack for the past few weeks. It would be a surprise, however, if Sony were equally confused. If they haven’t gotten the message by now, they probably never will: their view of the marketplace is dying. The Internet is killing it.
When Sony launched the PlayStation 3, they pre-empted the homebrew scene, whose efforts to unlock the Dreamcast and the original Xbox had drastically increased the functionality of those devices while putting their makers’ go-to-market strategies under strain. Sony included a feature called OtherOS, which gave tinkerers a “sandbox” in which they could install Linux (or another OS) and experiment with the powerful hardware inside the game machine. It was a good move.
So a few years after the fact, when they released a software update that added no new features but permanently disabled the OtherOS feature, you can understand why it was pretty much universally considered a bad move.
When a brilliant but harmless at-home tinkerer, the young GeoHot, exploited a gaping hole in the PlayStation 3’s hardware architecture, re-enabling the ability to run Linux, Sony didn’t respond by patching the hole or by re-activating the feature. Instead they sued the young college student for all he was worth, filing legal injunctions to freeze his PayPal account, take down his website, and require social media sites like YouTube to turn over identifying information about anyone who might have viewed GeoHot’s posts explaining how he unlocked hardware he had purchased with his own money.
To be clear, GeoHot owned the hardware, used his own tools and his own observations to learn how the device worked, and then enabled it to do more. Inventors have been doing this for centuries. Only now it’s illegal.
Unfortunately for Sony, that legality is a relic of an era they have failed to outgrow: an era when only big companies owned innovation and ideas, when only bureaucracies had the authority to publish information, and when the group with the most money usually won. Yes, GeoHot settled and agreed never again to publish information about the PS3, but that doesn’t mean Sony won.
Sure, the giant company squashed the tiny individual. But now tiny individuals all over the Internet have responded in kind, exploiting more gaping holes in Sony’s old-world technology offerings. Long-known backdoors left unsealed, unencrypted username and password lists, credit card information stored in plain text on exposed servers… Sony is a stupid, plodding behemoth, swatting at a million flies that, combined, are smarter, more innovative, and more effective than its iron-fisted control over information and ideas.
We see it happening over and over again: a shift of power from organizations to individuals. Whether it’s a thousand hackers turning from curious exploration to retaliation against a company that thinks it owns ideas and the people who buy its wares, or individuals across Egypt sparking a revolution over the Internet and overthrowing a dictator who’d held onto power for decades. The Internet connects people faster and across greater distances than ever before. A hierarchy is not needed to spark and shape ideas: a swarm can do it better. Loyalty is to ideas and to the activities that let us explore them, not to companies or boards of directors. A company or a government cannot afford to be stagnant — and it certainly cannot afford to move backward — because if we can’t find what we want, we can invent it ourselves: without a budget or a chairperson or an executive committee. The world is a smaller place, and whether you go by the name Sony or Mubarak, if you can’t change, you can die by a thousand cuts…

What is "The Cloud"?

It seems to be one of those buzzwords that everyone uses and no one can define. In fact, you couldn’t pick a more amorphous descriptor for a technology if you tried! Clouds by their very definition are hazy, ever-shifting things!
But despite the buzz and the confusion, the Cloud is real, and it represents the biggest shift in computing since the Internet itself. The Cloud changes everything — and it’s inevitable.
Actually, it’s been inevitable for about a decade now. Most of us saw it coming, but few could predict its scope. “Smart devices and the Cloud” and “the post-PC era” are sound bites from two competitors who understand the same reality: things are changing.
For years computers were silos of data and processing power sitting on your desk or your lap. With early networks, and then the Internet, those computers began to share their data, and sometimes even their compute cycles. Another class of computer, called a server, existed primarily for these sharing functions. Web sites began changing from online newsprint to online services, where users could access the functionality of other computers to perform tasks like uploading and encoding videos or executing banking transactions. As both the servers and the computers accessing them (called “clients”) got smarter, web services became richer and more interactive. Soon the experience of using a website began to look and behave a lot like the experience of using a program installed on your home computer. The so-called “Web 2.0” was born when a critical mass of technologies began to support this kind of interaction.
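The client/server split described above can be sketched in a few lines of Python. This is only an illustrative toy, not any real service: one thread plays the “server,” exposing a tiny uppercasing endpoint (a stand-in for real work like video encoding), and the “client” calls it over HTTP to use functionality it doesn’t have locally.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class TinyServiceHandler(BaseHTTPRequestHandler):
    """A toy web service: uppercases whatever text follows the slash."""

    def do_GET(self):
        text = self.path.lstrip("/")          # e.g. "/hello" -> "hello"
        body = text.upper().encode("utf-8")   # the "work" the server performs
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# The server: listens on a free local port and serves requests in the background.
server = HTTPServer(("127.0.0.1", 0), TinyServiceHandler)  # port 0 = pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client: asks the server to do the work, exactly like a browser hitting a web service.
port = server.server_address[1]
result = urlopen(f"http://127.0.0.1:{port}/hello").read().decode("utf-8")
print(result)
server.shutdown()
```

The client never needs the uppercasing logic itself; swap that one handler method for video encoding or a banking transaction and you have the shape of every web service described above.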
The Cloud represents the logical conclusion of that progress: if services on the web can be as rich and usable as programs on your home computer — and in fact, richer because of their connected nature — then why bother installing programs on your computer at all? No one has to download and install Facebook on their PC or Mac — but it’s still one of the most used applications in history. Even the Facebook “app” you install on your smartphone is really just a curated view of the Internet — more a web application than a traditional application (and if you don’t believe me, just try using it with your WiFi and data connection off!)
More and more applications have a “web” version that runs in the browser. In some cases these versions aren’t quite as rich as their traditionally installed (a.k.a. “thick client”) counterparts, but many are quite usable and useful. Of course, some applications still need direct access to the client hardware. Photoshop won’t move into the Cloud any time soon — but it’s not unreasonable to predict that it could be enriched by the Cloud in the near future.
Cloud data centers offer more than just “software as a service” (SaaS). They also offer massive amounts of scalable compute power. Rendering 60 seconds of 30-frames-per-second 3D video for a Pixar movie may take one machine 10 hours, but farm that processing out to 100 servers in the Cloud running in parallel, and suddenly you’ve got your result in six minutes. And storing terabytes of Hollywood movies on hard drives at home would be both cost- and space-prohibitive, but if 300 computers in a Cloud data center store them, and you can access them from your Xbox for a small monthly fee, suddenly you have decades of entertainment at your fingertips. This is called “infrastructure as a service” (IaaS).
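The back-of-envelope math behind that rendering speedup is simple linear scaling: frames can be rendered independently (the job is “embarrassingly parallel”), so in the ideal case, ignoring scheduling and data-transfer overhead, wall-clock time divides by the number of servers:

```python
def wall_clock_minutes(serial_hours: float, servers: int) -> float:
    """Ideal linear speedup: the total work is fixed, so wall-clock
    time divides evenly across the servers rendering in parallel.
    (Ignores real-world scheduling and data-transfer overhead.)"""
    return serial_hours * 60 / servers

print(wall_clock_minutes(10, 1))    # one machine: 600.0 minutes (10 hours)
print(wall_clock_minutes(10, 100))  # 100 cloud servers: 6.0 minutes
```

Real clusters land below this ideal because of coordination and data movement, but the order-of-magnitude gain is exactly why render farms were among the first workloads to move into the Cloud.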
The final step is called “platform as a service” (PaaS), and its ambitious goal is to begin augmenting, and eventually replacing, the need for companies to own servers at all. When a mobile AIDS health station moves through Africa, it shouldn’t need to haul computers and database servers along — a smart, tablet-style device and a satellite Internet connection can give it access to a world of services, applications, and data with equipment that fits in a backpack.
As PaaS evolves and more applications and services move into the Cloud, the technology needs first of individuals and small companies, and later of large companies and enterprises, decline. Software requirements are pushed into data centers, freeing client device manufacturers to innovate around more natural interfaces. Rather than pouring R&D money into making a computer 1% faster than last year’s, that money goes into understanding human speech, facial expressions, or hand gestures. The processing and storage requirements are handled by the Cloud, while the interaction requirements are handled by the device the human being actually sees.
We’ve seen this already with touch-aware devices like the iPhone (and Android and Windows Phone) and with gesture-aware devices like the Kinect. Consumers are more interested in the experience than in installing and configuring applications. They don’t want to organize and train their computer to meet their needs; they want the device to be intuitive. In the past, software developers split their focus between the needs of the PC their software would run on and the needs of the user accessing that software. In the Cloud, the software runs elsewhere, and the interface the user sees can be customized to the device and interaction model being used.
This transition won’t be a quick one — but it’s already begun. And analysts predict that by 2015 PaaS will be dominant in the industry. That means better devices, new kinds of software, and more connected experiences. For those who think Facebook is too invasive (or pervasive): prepare yourself. The always-on connected world is coming. And it’s gonna be cool.
The Terminator’s SkyNet may not have become self-aware on April 19, 2011, but a massive network of intelligent computers running everything is closer to reality than science fiction…