The Long Eighties: What are computers even for?
The home computing craze of the 1980s left unanswered the question of what these machines were good for. As it turned out, the answer could be found in the earliest uses of networked computers...
This post is part of a series devoted to exploring what I am calling “The Long Eighties”. This is the extended decade from 1979 to 1993, which, as I read it, marked the last days of analog culture and our transition to a fully digital society. You can read the first post here. — ap
ONE DAY WHEN I WAS ELEVEN I was watching as my father flipped through a consumer catalogue, and at one point he stopped and pointed to a small, black lozenge that looked like a cheap typewriter. “What is it?” I asked him. “A computer,” he replied. “I’m thinking of getting one.” I remember glancing at the price — $99.95 — and thinking it was awfully expensive. “What can you do with it?” I asked him. “Anything,” he replied. “It will do whatever you program it to do.”
The machine was a Timex-Sinclair ZX81 home computer, and it was actually tremendously inexpensive by the standards of the day. Among the competing models on the market at the time were the Radio Shack TRS-80 (US$399), the Commodore VIC-20 ($260) and the Apple II Plus ($1330). It was also tremendously underpowered, with 1K of memory, a cassette tape for storage, and a barely readable screen resolution of 64x48 pixels. It was a computer, but not a very good one, and my father’s tech-bravado notwithstanding, there was very little you could do with it. And while my dad didn’t end up buying one, my best friend’s father did, and when I was over visiting we would sometimes poke at it a bit, trying to coax something interesting out of the single, monochrome command line, before giving up and heading to the TV room to play with his Atari 2600 game machine.
The home computer craze began slowly, starting mid-decade in 1975 when Bill Gates and Paul Allen sold their first piece of software under their newly named company, Micro-Soft (their earlier venture had been called Traf-O-Data). A year later, Steve Jobs and Steve Wozniak built a machine for hobbyists, called it an Apple, and gave their company the same name. The Apple II — the first PC to offer colour — came out in 1977, which was also the year the Atari VCS game system was released.
But things really caught fire in 1979, when USENET was created and the first BBSes came online, CompuServe provided email to the common man and woman, and work on the TCP/IP protocol that would kick-start the internet was well underway. And the hits kept coming: Tim Berners-Lee built ENQUIRE, his first hypertext system and a forerunner of the web, in 1980, the IBM PC (as well as the ZX81) came out in 1981, and Time magazine named the computer its 1982 “Machine of the Year”.
But the problem that afflicted the ZX81 went beyond its limited technological capacities, and the basic question — “what is a home computer for?” — was one that bedeviled the industry from the very start. And so one of the great paradoxes of the computer revolution that kicked off the Long Eighties lies in this: On the one hand, there was an astonishing amount of innovation crammed into a few very short years. But on the other, it was never really clear what the point of it all was.
This was actually a fundamental feature of the PC revolution that began in 1979 and culminated with the birth of the world wide web in 1993: Over the course of the 1980s, even as computers got smaller and more powerful and as mouse-driven graphical interfaces took over from command-line prompts (the GUI, pioneered at Xerox in the 1970s, went mainstream with the Macintosh in 1984), a lot of people, even the people building them, had no idea what to use a computer for. Indeed, convincing people that they needed a personal computer was an ongoing problem for the industry even into the 1990s.
As a result, personal computer sales were highly unpredictable in the early going, with huge swings from year to year in both sales and profitability. In January 1983, the industry was predicting that computer sales for the year would only be half what they had been in 1981. “Home computer makers face challenge of finding real uses” read one headline in the fall of 1983. “Home computers are hindered by high prices, dearth of uses” read another headline a year later.
Commodore, manufacturer of the popular VIC-20 and Commodore 64 home computers, tried to address this problem with a fun series of ads that started running in 1983, all based on a jingle that proclaimed “I adore my 64”. The song was an exercise in persuasion, and it basically listed all the various things you could do with the computer, including compose music, make art, book travel, do your taxes or personal finances, and so on. But the truth is that while the computer could do all of these things, it was not easy. The machine was underpowered, the various interfaces were far from plug and play, and getting it to do all these things was just difficult. We had one, and we mostly used it for games.
It was a similar story for most home computers of the era, and so people stuck with what the things could do natively — namely, word processing and video games. But for both these tasks there were strong competitors: dedicated consoles for games, and a surprisingly healthy typewriter industry for word processing. And so as late as 1990, Bill Gates could be found at a trade show complaining that consumers still didn’t really know why they needed computers.
Little did Gates (or just about anyone else) realize that computer users had already figured out what they wanted to do with the technology. From the earliest days of networked computers, there were two killer apps, neither of which was part of the original design of the system: electronic mail, and multiplayer games. Contrary to the prevailing notion that computers were alienating and isolating, machines for loners, it became clear that people were using the technology to overcome the most long-standing and pervasive obstacles to human connection, namely, the real-world limitations of speed and bandwidth. The computers were being used to eliminate that scarcity, but to do so effectively, they needed to be networked.
ACCORDING TO POPULAR MYTHOLOGY, the internet was created to protect communications in the event of a nuclear war. As the story goes, in the early 1960s the US military started to worry about preserving command and control in the face of an all-out nuclear exchange with the Soviet Union, and tasked its Advanced Research Projects Agency (ARPA, later renamed DARPA) to come up with a solution. The result was the ARPANET, a robust and redundant computer network that served as the basis for what eventually evolved into the full-fledged internet we have today. But while it is true that the ARPANET was designed from the start to be both a fast and a reliable computer network, it had very little to do with the needs of the military, and everything to do with the sheer limitations of computer technology of the day. *
In fact, it arose precisely out of a desire to get ARPA’s computer whizzes out of the business of brute-force calculations and of chasing every hare-brained ask that came out of the defence establishment. By the mid-1960s, the demands for computing resources from academics and researchers working on contract for ARPA were starting to get out of hand. One possible answer to the computing crunch was resource sharing over a computer network. But in order to work, it had to be fast and it had to be reliable.
The solution they hit upon was packet switching within a distributed network architecture. Unlike a centralized network (like the existing analog phone network), where every node was connected through a central routing switchboard, the nodes in a distributed network are connected only to a handful of their neighbours. There do not have to be many connections to make the system remarkably robust — as it turns out, if each node has as few as three or four connections, the network will be rugged enough to survive just about any reasonable amount of breakdown (yes, including a nuclear strike).
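If you want a feel for why that works, here is a small back-of-the-envelope simulation in Python. To be clear, this is my own toy model, not anything from Baran’s papers or the ARPANET design; the node counts and failure rates are arbitrary numbers picked for the demo. It wires up a thousand nodes so that each has at least three random links, knocks out ever larger fractions of them at random, and then checks how much of the surviving network can still reach itself.

```python
# Toy model: a sparse, randomly wired network survives heavy random damage.
# Not a reproduction of any historical design; parameters are arbitrary.
import random
from collections import deque

def build_network(n_nodes=1000, links_per_node=3, seed=42):
    """Wire each node to random neighbours until it has at least links_per_node links."""
    rng = random.Random(seed)
    neighbours = {node: set() for node in range(n_nodes)}
    for node in range(n_nodes):
        while len(neighbours[node]) < links_per_node:
            other = rng.randrange(n_nodes)
            if other != node:
                neighbours[node].add(other)
                neighbours[other].add(node)  # links run both ways
    return neighbours

def largest_component_fraction(neighbours, dead):
    """Fraction of surviving nodes that can still reach one another (largest connected piece)."""
    alive = set(neighbours) - dead
    if not alive:
        return 0.0
    best, unseen = 0, set(alive)
    while unseen:
        queue, size = deque([unseen.pop()]), 1
        while queue:                          # breadth-first search over survivors
            node = queue.popleft()
            for nxt in neighbours[node]:
                if nxt in unseen:             # dead nodes are never in `unseen`
                    unseen.remove(nxt)
                    queue.append(nxt)
                    size += 1
        best = max(best, size)
    return best / len(alive)

net = build_network()
rng = random.Random(0)
for failure_rate in (0.1, 0.3, 0.5):
    dead = set(rng.sample(sorted(net), int(failure_rate * len(net))))
    print(f"{failure_rate:.0%} of nodes knocked out -> "
          f"{largest_component_fraction(net, dead):.0%} of survivors still connected")
```

Run it and the pattern is hard to miss: even with half the nodes gone, nearly all of the survivors can still find a path to one another.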
The more theoretically interesting development was what came to be called packet switching. Like many revolutionary scientific advances (think of the calculus, discovered more or less simultaneously by Leibniz and Newton, or evolution by natural selection, hit upon independently by Darwin and Wallace), packet switching was invented in the early sixties by two researchers working independently: a Brit named Donald Davies and a Polish immigrant to the U.S. named Paul Baran.
For both men, the key insight was that, unlike analog voice communications, which needed more or less constant bandwidth, digital computer data was by nature fragmented and bursty, with periods of intense traffic interspersed with long idle breaks. And so they realized that it made sense to break digital messages into discrete blocks of uniform size. The advantage was that since it didn’t matter what order the blocks arrived in (they could be reassembled at the destination), each packet was free to follow the fastest route through the network.
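Here is a minimal sketch of that core idea in Python. Again, this is my own illustration rather than Baran’s or Davies’s actual scheme, and the packet size and message are made up for the demo: the message is chopped into numbered blocks of fixed size, the blocks arrive in whatever order the network happens to deliver them, and the destination puts them back together using the sequence numbers.

```python
# Illustration of the packet-switching idea: fixed-size, numbered blocks that
# can arrive in any order and still be reassembled. Parameters are arbitrary.
import random

PACKET_SIZE = 8  # bytes of payload per packet, chosen only for the demo

def packetize(message: bytes, size: int = PACKET_SIZE):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the original message regardless of arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Digital traffic is bursty, so share the lines packet by packet."
packets = packetize(message)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == message
print(f"{len(packets)} packets arrived out of order and were reassembled intact")
```

The sequence numbers are doing the real work: once every block carries its own address and position, the network no longer has to hold a dedicated circuit open for the whole conversation.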
Despite these theoretical advances, the idea of sharing computer resources over a network had more than its share of critics, and they came in all stripes. There were those who thought it wouldn’t work, and then there were those who thought that even if it did work, there wasn’t much use for it. And even those who thought it had certain narrow scientific uses didn’t think there would be much wider demand for it.
The skepticism was profound. IBM apparently thought it was too complicated, too expensive, and probably wouldn’t work. For its part, AT&T had been hostile to the idea of packet-switching from the very start, had jealously guarded its monopoly throughout the development of the ARPANET, and when DARPA pretty much offered to hand the entire thing – the whole internet! – over to AT&T free of charge in 1971, the phone company declined.
They weren’t entirely wrong to be skeptical. After all, the ostensible point of the thing, resource sharing, had turned out to be a bit of a damp squib. Instead, the scientists and researchers and grad students who became the heaviest users of the network started using it for the most mundane of purposes: to play games, to argue, and to just chat. Email in particular was the major catalyst for the internet’s early growth.
The internet, it turns out, was always destined to be a social platform; the killer apps were there from the very start. It just took decades for anyone to realize it. The turning point, as we will see, was 1993…
* For this section I’m drawing pretty heavily from Where Wizards Stay Up Late, the must-read history of the internet by Katie Hafner and Matthew Lyon
As always, thanks for reading and please, if you like this, I’d appreciate you sharing it with anyone who might find it enjoyable as well. — ap
Loved the historical background.
However, man has lost sight of the end goal. Computers were supposed to make tasks easier - i.e., help man to achieve a goal.
Today the roles seem to be reversed in that man is making the inputs to allow the computer to achieve a goal.
Modern airliners are designed to allow the computer to fly the aircraft - with man making the inputs. Next generation airliners are on the drawing boards with only one pilot in the cockpit - and multiple computer input methods.
What is next? Airliners with NO pilots and just a pre-loaded program?
"Ladies and gentlemen. Welcome aboard XX flight 936 from Los Angeles to New York. This is the first totally automated flight - there is no pilot on board. Please sit back and relax as absolutely nothing can go wrong - can go wrong - can go wrong - can go wrong"
Another great article and trip down memory lane. My first computer was a TI-99/4A in 1981 at age 9. My first goal was to write a rudimentary Pacman program, which I did, sort of.
I question some of the milestones. Hypertext predates the 80s: https://cs.wellesley.edu/~cs215/Lectures/L00-HistoryHypermedia/FromHypertextToWWW.html, as does the GUI: https://spectrum.ieee.org/graphical-user-interface.
The killer apps for PCs in the early days were games and word processing. Not having to recopy homework every time you made a spelling error or a poorly formed cursive letter was revolutionary.