Computers Are Bad is a newsletter on the history of the computer
and communications industry. It will be thrown directly at your doorstep on a
semi-regular schedule, to enlighten you as to why computers are that
way.
I have an MS in information security, several certifications, and ready
access to a keyboard. These are all properties which make me ostensibly
qualified to comment on issues of computer technology. I do my best to stay
away from my areas of professional qualification, though. Instead, I talk
about things that are actually interesting. Think mid-century
telecommunications history, legacies of the Cold War, and the rise and fall
of the technology industry's stranger bit players.
You can read here, on the information superhighway, but to keep your
neighborhood paperboy pedaling down that superhighway on a bicycle, please
subscribe. This also contributes enormously to my personal self-esteem. There
is an RSS feed for those who really want it. Fax delivery available upon
request.
Last but not least, consider supporting me on Ko-Fi. Monthly
supporters receive EYES ONLY, a special bonus
edition that is lower effort and higher sass, covering topics that don't
quite make it to a full article.
Light: it's the radiation we can see. The communications potential of light is
obvious, and indeed, many of the earliest forms of long-distance communication
relied on it: signal fires, semaphore, heliographs. You could say that we still
make extensive use of light for communications today, in the form of fiber
optics. Early on, some fiber users (such as AT&T) even preferred the term
"lightguide," a nice analogy to the long-distance waveguides that Bell
Laboratories had experimented with.
The comparison between lightguide and waveguide illuminates (heh) an important
dichotomy in radiation-based communication. We make wide use of radio frequency
in both free-space applications ("radio" as we think of it) and confined
applications (like cable television). We also make wide use of light in confined
fiber optic systems. That leaves us to wonder about the less-considered fourth
option: free-space optical (FSO) communications, the use of modulated light
without a confined guide.
Well, if I had written this two or three years ago, free-space optical might
have counted as quite obscure. The idea of using a modulated laser or LED light
source for communications over a distance is actually quite old. Commercial
products for Ethernet-over-laser have been available since the late 1990s and
achieved multi-gigabit speeds by 2010. Motivated mostly by Strategic Defense
Initiative and Ballistic Missile Defense Organization requirements for hardened
communications within satellite constellations, experiments on a gigabit laser
satellite-to-ground link were underway in 1998, although the system ultimately
only provided satisfactory performance at a rate of around 300 Mbps. As it
turns out, FSO computer networking is nearly as old as computer networking
itself, with a 1973 experimental system briefly put into use at Xerox PARC.
Despite the fact that FSO systems have been generally available and even quite
functional for decades, they remained a niche technology with very little public
profile until the phenomenon of low-orbit communications constellations (namely
Starlink) put the concept of inter-satellite laser communication into the
spotlight. There were various experimental satellite-to-satellite systems dating
back to the early 2000s, and more or less clandestine military applications
over the same period, but the first real production system is probably the EU's
EDRS, which went live in 2016. Starlink didn't really get the laser technology
working until 2022. That's one of the interesting things about FSO: it seems
intuitively like it should work, and it does work, but it's a technology that
has often sat dormant for many years at a time.
Years ago, when I was in college, I had one of those friends who never quite had
it together. You know the type; I'm talking lost a debit card and took three
months to get a new one because of some sort of "mixup" with the credit union
that I think consisted mostly of not calling them for three months. In the
meantime, our mutual friend ended up in a quandary: at Walmart, at one in the
morning, with a $2 purchase and no cash. Well, this was no problem for that
particular space case: he had his checkbook.
If you think about it, it's actually pretty remarkable that grocery stores
accept personal checks. It's a very high risk form of payment. Even if the check
is genuine, the customer could be writing it against an empty account. On top of
that, with modern printers and the declining use of MICR, forging checks is
trivial. When you offer a check, the retailer has very little to go on to
decide whether or not you're good for the money. Surely, fraud must run
rampant—and yet, just about every major grocer still accepts personal checks.
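To make the fraud problem concrete: about the only thing a point of sale can
verify on a paper check without talking to anyone is the checksum on the ABA
routing number, which is public. Here is a minimal sketch in Python (the
example numbers are just illustrative); note that passing the checksum says
nothing about whether the account exists, is funded, or belongs to the person
holding the pen.

    # Minimal sketch: the ABA routing number checksum is roughly the only
    # field on a check's MICR line that can be validated offline. Weights
    # 3, 7, 1 repeat across the nine digits; the sum must divide by 10.
    def routing_number_is_plausible(rn: str) -> bool:
        if len(rn) != 9 or not rn.isdigit():
            return False
        d = [int(c) for c in rn]
        total = (3 * (d[0] + d[3] + d[6])
                 + 7 * (d[1] + d[4] + d[7])
                 + 1 * (d[2] + d[5] + d[8]))
        return total % 10 == 0

    print(routing_number_is_plausible("021000021"))  # True: checksum passes
    print(routing_number_is_plausible("123456789"))  # False: checksum fails

Everything else (is the account real, is it funded, is the signature genuine)
requires a phone call or a network, which is exactly the gap an entire
industry grew up to fill.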
Retail point-of-sale acceptance of personal checks is the product of an
intriguing industry that handles all the challenges of checks at once: a
combination of digital payment network, credit reporting firm, insurer, and debt
collector known as a check guarantee service. The check guarantee is older than
the ATM, and depending on how you
squint, check guarantees are quite possibly the first form of real-time,
telecommunications-based point-of-sale payment processing.
Harry M. Flagg was born in Frankfurt in 1935, but spent most of his childhood in
Milwaukee, Wisconsin. He attended MIT, major unknown, and graduated in 1957. I
think he was probably an ROTC student, because some sort of Navy service took
him from Massachusetts to Hawaii, where just a few years later he was out of the
Navy and working as some sort of "management consultant." Flagg was
entrepreneurial to his core, so while I know few details about his consulting
work, it is unsurprising given the wide variety of business ventures he was soon
involved in. We can be fairly confident, though, that his clients included
retailers—retailers who struggled with personal checks. In 1964, Flagg quit
consulting to focus on checks alone.
I tend to focus on the origin of the computer within the military. Particularly
in the early days of digital computing, the military was a key customer, and
fundamental concepts of modern computing arose in universities and laboratories
serving military contracts. Of course, the war would not last forever, and
computing had applications in so many other fields—fields that, nonetheless,
started out as beneficiaries of military largesse.
Consider education. The Second World War had a profound impact on higher
education in the US. The GI Bill made college newly affordable to veterans, who
in the 1950s made up a large portion of the population. That was only the tip of
the iceberg, though: military planners perceived the Allied victory as a result
of technical and industrial excellence. Many of the most decisive innovations
of the war—radar and radionavigation, scientific management and operations
research, nuclear weapons—had originated in academic research laboratories at
the nation's most prestigious universities. Many of those universities (MIT,
Stanford, the University of California) created subsidiaries and spinoffs that
remain major defense contractors to this day.
Educational institutions bent themselves, to some degree, to the needs of the
military. The relationship was not at all one-sided. Besides direct funding for
defense-oriented research, in the runup to the Cold War the military started to
shower money on education itself. Research contracts from uniformed services and
grant programs from the young DoD supported all kinds of educational programs.
For the military, there were two general goals: first, it was assumed that
R&D in civilian education would lead to findings that directly improved the
military's own educational system. Weapons and tactics of war were increasingly
technical, even computer-controlled, and the military was acutely aware that
training a large number of 18-year-old enlistees to operate complex equipment
according to tactical doctrine under pressure was, well, to call it a challenge
would be an understatement.
In the United States, we are losing our fondness for cash. As in many other
countries, cards and other types of electronic payments now dominate everyday
commerce. To some, this is a loss. Cash represented a certain freedom from
intermediation, a comforting simplicity that you just don't get from Visa.
It's funny to consider, then, how cash is in fact quite amenable to automation.
Even Benjamin Franklin's face on a piece of paper can feel like a mere proxy
for a database transaction. How different is cash itself from "e-cash", when
it starts and ends its lifecycle through automation?
Increasing automation of cash reflects the changing nature of banking: decades
ago, a consumer might have interacted with banking primarily through a "passbook"
savings account, where transactions were so infrequent that the bank recorded
them directly in the patron's copy of the passbook. Over the years, increasing
travel and nationwide communications led to the ubiquitous use of inter-bank
money transfers, mostly in the form of the check. The accounts that checks
typically drew on—checking accounts—were made for convenience and ease of
access. You might deposit your entire paycheck into an account—it might even
be sent there automatically—and then when you needed a little walking around
money, you would withdraw cash with the assistance of a teller. By the time I was
a banked consumer, even the teller was mostly gone. Today, we get our cash from
machines so that it can be deposited into other machines.

Cash handling is fraught with peril. Bills are fairly small and easy to hide,
and yet quite valuable. Automation in the banking world first focused on solving
this problem, of reliable and secure cash handling within the bank branch. The
primary measure against theft by insiders was that the theft would be discovered,
as a result of the careful bookkeeping that typifies banks. But, well, that
bookkeeping was surprisingly labor-intensive, even in the bank of the 1950s.
The way I see it, few parts of American life are as quintessentially American
as buying gas. We love our cars, we love our oil, and an industry about as old
as automobiles themselves has developed a highly consistent, fully automated,
and fairly user friendly system for filling the former with the latter.
I grew up in Oregon. While these rules have since been relaxed, many know
Oregon for its long identity as one of two states where you could not pump
your own gas (the other being New Jersey). Instead, an attendant, an employee
of the gas station, operated the equipment. Like Portland's lingering indoor
gas station, Oregon's preference for "full service" is a holdover. It makes
sense, of course: all gas stations used to be full-service.
The front part of a gas station, where the pumps are and where you pull up your
car, is called the forecourt. The practicalities of selling gasoline, namely
that it is a liquid sold by volume, make the forecourt more complex than you
might realize. It's a set of devices that many of us interact with on a regular
basis, but we rarely think about the sheer number of moving parts and
long-running need for digital communications. Hey, that latter part sounds
interesting, doesn't it?
Electric vehicles are catching on in the US. My personal taste in vehicles
tends towards "old" and "cheap," but EVs have been on the market for long
enough that they now come in that variety. Since my daily driver is an EV,
I don't pay my dues at the Circle K nearly as often as I used to. One
of the odd little details of EVs is the complexity hidden in the charging
system, or "EVSE" (electric vehicle supply equipment), which requires
digital communications with the vehicle for protection reasons. As
consumers across the country install EVSE in
their garages, we're all getting more familiar with these devices and their
price tags. We might forget that, well, handling a fluid takes a lot of
equipment as well... we just don't think about it, having shifted the whole
problem to a large industry of loosely supervised hazardous chemical
handling facilities.
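Since I brought up the EVSE's digital communications: at least for AC
charging, the conversation is simpler than the price tag suggests. Here's a
minimal sketch of the duty-cycle-to-current convention from SAE J1772, the
control pilot standard used by most North American EVSE. The EVSE advertises
the circuit's capacity as the duty cycle of a 1 kHz square wave, and the
vehicle agrees not to draw more. The figures below follow commonly published
J1772 tables; treat the boundary handling as my assumption, not a reference
implementation.

    # Sketch of SAE J1772 pilot signaling: the EVSE encodes available
    # charging current as the duty cycle of a 1 kHz square wave on the
    # control pilot line; the vehicle limits its draw accordingly.
    def available_amps(duty_cycle_pct: float) -> float:
        if duty_cycle_pct == 5:
            # A 5% duty cycle means "switch to a high-level digital
            # protocol" (used for DC fast charging, for example).
            raise ValueError("5% duty: digital communication required")
        if 10 <= duty_cycle_pct <= 85:
            return duty_cycle_pct * 0.6         # 10%..85% -> 6 A..51 A
        if 85 < duty_cycle_pct <= 96:
            return (duty_cycle_pct - 64) * 2.5  # >85%..96% -> up to 80 A
        raise ValueError("duty cycle outside the defined J1772 range")

    # A 40 A circuit, derated to 32 A continuous, advertises ~53% duty:
    print(available_amps(53.3))  # ~32.0

The vehicle answers by pulling the pilot line to defined voltage levels
(connected, ready, charging), which is how the EVSE knows it is safe to
energize the plug. That, in essence, is the "protection reasons" above: no
handshake, no power.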