COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.
I have an M. S. in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in engineering for a small company in the healthcare sector. I have a background in security operations and DevOps, but also in things that are actually useful like photocopier repair.
You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.
Given the sources of most of my readers, some of you may have seen this
article about A/UX,
Apple's failed earlier effort towards delivering an Apple user experience on
a POSIX operating system. A/UX was driven primarily by demands for a "serious
workstation," which was a difficult category to obtain at any kind of reasonable
price in the 1980s. It is also an example of Apple putting a concerted effort
into attracting large enterprise customers (e.g. universities), something that
is not exactly part of the Apple brand identity today.
I wanted to expand on A/UX by discussing some of Apple's other efforts to make
serious inroads into the world of Big Business. In the 2000s and 2010s, Apple
was frequently perceived as focusing on consumers and small organizations
rather than large organizations. This was perhaps a result of Apple's
particular nexus to the personal computing ethos; Apple believed that computers
should be affordable and easy to use rather than massively integrated.
This doesn't seem to have been an intentional choice, though: through the
'90s Apple made many efforts to build a platform which would be attractive to
large enterprises. Many major events in the company's history are related to
these attempts, and they're formative to some aspects of Apple's modern
products... but they were never really successful. There's a reason that our
usual go-to for an integrated network computing environment is the much
maligned Microsoft Active Directory, and not the even more maligned Apple Open Directory.
Indeed, somewhere around 2010 Apple seems to have lost interest in enterprise
environments entirely, and has been slowly retiring their products aimed at
that type of customer. On the one hand, Apple would probably claim that their iCloud
offerings replace some of the functionality of their on-prem network products.
On the other hand, I think a far larger factor is that they just never sold
well, and Apple made a decision to cut their losses.
But that's starting at the end. Let's take a step back to the '80s, during the
project that would become A/UX, to Apple's first major step into the world of networking.
I have previously
mentioned the topic of
network operating systems, or NOS. NOS can be tricky because, at different
points in time, the term referred to two different things. During the '80s, an
NOS was an operating system that was intended specifically to be used with a
network. Keep in mind that at the time many operating systems had no support
for networking at all, so NOS were for a while an exciting new category. NOS
were expected to support file sharing, printer sharing, messaging, and other
features we now take for granted.
Apple never released an NOS as such (stories about the 'i' in iMac standing for
'internet' aside), but they were not ignoring the increasing development of
microcomputer networks. The Apple Lisa appears to have been the first Apple
product to gain network support, although only in the most technical sense.
Often referred to as AppleNet, this early Apple effort towards networking
was directly based on Xerox XNS (which will be familiar to long-time readers,
as many early PC network protocols were derived from XNS). AppleNet ran at
1 Mbps using coaxial cables, and failed to gain any meaningful traction. While
Apple announced Lisa-based AppleNet file and print servers, I'm not sure
whether they ever actually shipped a server product at all (at least the file
server was vaporware).
Apple's second swing at networking took the form of AppleBus. AppleBus was
fundamentally just a software expansion of the Macintosh serial controller into
an RS-422-like multi-drop serial network scheme. Because AppleBus also served
as the general-purpose peripheral interconnect for many Apple systems into the
'90s, it could be seen as an unusually sophisticated interconnect in terms of
its flexibility. On the other hand, consumers could find it confusing that a
disk drive and the network could be plugged into the same port, a confusion
that in my experience lasted into the '00s among some devoted Macintosh users.
Nonetheless, the idea that inter-computer networking was simply an evolution of
the peripheral bus was a progressive one that would continue to appear in
Apple-influenced interconnects like Firewire and to a lesser extent
Thunderbolt, although it would basically never be successful. In a different
world, the new iMac purchaser that thought USB and Ethernet were the same thing
might have been correct. Like the proverbial mad prophet, they have a rare
insight into a better way.
For all of my optimism about AppleBus, it was barely more successful than
AppleNet. AppleBus, by that name, came and went with barely any actual computer
inter-networking applications. AppleBus was short-lived to the extent that it
is barely worth mentioning, except for the interesting fact that AppleBus
development as a computer network was heavily motivated by the LaserWriter.
One of the first laser printers on the market, the LaserWriter was formidably
expensive. AppleBus was the obvious choice of interface since it was well
supported by the Macintosh, but the ability to share the LaserWriter between
multiple machines seemed a business imperative to sell them. Ipso facto, the
peripheral interconnect had to become a network interconnect.
In 1985, Apple rebranded the AppleBus effort to AppleTalk, which will likely be
familiar to many readers. Although AppleTalk was a fairly complete rebrand and
significantly more successful than AppleBus, it was technically not much
different. The main component of AppleTalk outside of the built-in serial
controller was an external adapter box which performed some simple level
conversions to allow for a longer range. Unfortunately this simple design led
to limitations, the major one being a speed of just 230 kbps, which was decidedly
slow even for the time.
As early as the initial development of AppleBus, token ring seemed to be the
leading direction for microcomputer networking and, indeed, Apple had been
involved in discussions with IBM over use of token ring for the Macintosh.
Unfortunately for IBM, their delays in having token ring ready for market led
somewhat directly to Apple's shift towards Ethernet. Since token ring was not
an option for the LaserWriter, Apple pushed forward with serial AppleBus for
several years, by which point it became clear that Ethernet would be the
victor. In 1987, AppleTalk shifted to Ethernet as a long-range physical layer
(called EtherTalk) and the formerly-AppleBus serial physical layer was
rebranded as LocalTalk.
AppleTalk was a massive success. For fear of turning this entire post into a
long discussion of Apple networking history, I will now omit most of the fate
of AppleTalk, but LocalTalk remained in use into the late '90s even as IP over
Ethernet became the norm. AppleTalk was for a time the most widely deployed
microcomputer networking protocol, and a third-party accessory ecosystem
flourished that included alternate physical media, protocol converters,
switches, etc. Of course, this all inevitably fell out of use for
inter-networking as IP proliferated.
The summation of this history is that Apple has offered networking for their
computers from the mid-'80s to the present. But what of network applications?
One of the great selling points of AppleTalk was its low-setup,
auto-configuring nature. As AppleTalk rose to prominence, Apple broke from
conventional NOS vendors by keeping to more of a peer-to-peer logical
architecture where basic services like file sharing could be hosted by any
Macintosh. This seems to have been an early manifestation of Apple's dominance
in zero-configuration network protocols, which is perhaps reduced today by
their heavy reliance on iCloud but is still evident in the form of Bonjour.
This is not at all to say that Apple did not introduce a server product,
although Apple was slow to the server market, and the first AppleTalk file and
print servers were third-party. In 1987, Apple released their own effort:
AppleShare. AppleShare started out as a file server (using the AFP protocol
that Apple designed for this purpose), but it gained print and mail support,
which put it roughly on par with the NOS landscape at the time.
But what of directory services? Apple lagged on development of their own
directory system, but had inherited NetInfo from their acquisition of NeXT.
NetInfo could be used via AppleTalk, and that seems to have been at least
somewhat common in universities, but it's hard to find much information on
this---in part because directory services have never been especially common
in Apple environments, compared to UNIX and Windows ones. The story of Apple
directory services becomes clearer with Apple Open Directory, an LDAP-based
solution introduced in 2002. Open Directory is largely similar to competitors
like MS AD and Red Hat IDM. As a result of the move to open standards, it has
occasionally, with certain versions of the relevant operating systems, been
possible to use OS X Server as an Active Directory domain controller or join OS
X to an Active Directory domain. I have previously worked with OS X machines
joined to an Oracle directory, which was fine except for all the Oracle parts
and the one time that Apple introduced a bug where OS X didn't check users'
passwords any more. Haha, wacky hijinx!
As I mentioned, Apple has been losing interest in the enterprise market. The
primary tool for management of Apple directory environments, Workgroup Manager,
was retired around 2016 and has not really received a replacement. It is still
possible to use Open Directory, but it no longer feels like a path that Apple
intends to support.
That of course brings us naturally to the entire topic of OS X Server and the
XServe hardware. Much like Windows Server, for many years OS X Server was a
variant of the operating system that included a variety of pre-configured
network services ranging from Open Directory to Apache. As of now, OS X Server
is no longer a standalone product and is instead offered as a set of apps to be
installed on OS X. This comes along with the end of the XServe hardware in
2011, meaning that it is no longer possible to obtain a rack-mount system which
legitimately runs OS X.
The industry norm now appears to be affixing multiple Mac Minis to a 19" sheet
of plywood with plumber's tape. And that, right there, is a summary of Apple's
place in the enterprise market. In actuality, Apple has come around somewhat
and now offers an official rack ear kit for the Mac Pro. That said, it's 5U for
specifications that other vendors fit in 1U (save the GPU which is not of
interest to conventional NOS applications), and lacks conventional server
usability features like IPMI or even a cable arm. In general, Apple continues
to be uninterested in the server market, even to support their own
workstations, and really offers the Mac Pro rack option only for media
professionals who have one of the desks with 19" racks integrated.
I originally set out to write this post about Apple's partnership with IBM,
which has delightful parallels to Microsoft's near simultaneous partnership
with IBM as both were attempting to develop a new operating system for the
business workstation market. I find great irony and amusement in the fact that
both Microsoft, which positioned itself as Not IBM, and Apple, which positioned
itself as even more Not IBM, both spent an appreciable portion of their
corporate histories desperately attempting to court IBM. While neither was
successful, Apple was even less successful than Microsoft. Let's take a look at
that next time---although I am about to head out on a vacation and might either
not write for a while or end up writing something strange related to said trip.
 Amusingly, despite many efforts by vendors, local file and printer sharing
can still be a huge pain if there are heterogeneous product families involved.
The more things change, the more they stay the same, which is to say that we
are somehow still fighting with SMB and NFS.
 AppleBus, AppleNet, and AppleTalk all refer to somewhat different things,
but for simplicity I am calling it AppleNet until the AppleTalk name was
introduced. This is really just for convenience and because it matches the
terminology I see other modern sources using. The relationship between AppleBus
and AppleNet was as confusing at the time as it is now, and it is common for
'80s articles to confuse and intermix the two.
 I would absolutely be using one of these if I had room for one.
Let's return, for a while, to the green-ish-sometimes pastures of GUI systems.
To get to one of my favorite parts of the story, delivery of GUIs to terminals
over the network, a natural starting point is to discuss an arcane, ancient
GUI system that came out of academia and became rather influential.
I am referring of course to the successor of W on V: X.
Volumes could be written about X, and indeed they have. So I'm not intending to
present anything like a thorough history of X, but I do want to address some
interesting aspects of X's design and some interesting applications. Before we
talk about X history, though, it might be useful to understand the broader
landscape of GUI systems on UNIX-like operating systems, because so far
we've talked about the DOS/VMS sort of family instead and there are some
significant differences.
Operating systems can be broadly categorized as single-user and multi-user.
Today, single-user operating systems are mostly associated only with embedded
and other very lightweight devices. Back in the 1980s, though, this divide
was much more important. Multi-user operating systems were associated with "big
iron," machines that were so expensive that you would need to share them for
budget reasons. Most personal computers were not expected to handle multiple
users, over the network or otherwise, and so the operating systems had no
features to support this (no concept of user process contexts, permissions, and so on).
Of course, you can likely imagine that the latter situation, single-user
operating systems, made the development of GUI software appreciably easier.
It wasn't even so much about the number of users, but rather about where they
were. Single-user operating systems usually only supported someone working
right at the console, and so applications could write directly to the graphics
hardware. A windowing system, at the most basic, only really needed to concern
itself with getting applications to write to the correct section of the framebuffer.
On a multi-user system, on the other hand, there were multiple terminals
connected by some method or other that almost certainly did not allow for
direct memory access to the graphics hardware. Further, the system needed to
manage what applications belonged on which graphics devices, as well as the
basic issue of windowing. This required a more complicated design. In
particular, server-client systems were extremely in at the time because they
had the same general shape as the computer-terminal architecture of the system.
This made them easier to reason about and implement.
So, graphics systems written for multi-user systems were often, but not always,
server-client. X is no different: the basic architecture of X is that of a
server (running within the user context generally) that has access to a
graphics device and input devices (that it knows how to use), and one or more
clients that want to display graphics. The clients, which are the applications
the user is using, tell the server what they want to display. In turn, the
server tells the clients what input they have received. The clients never
interact directly with the display or input hardware, which allows X to manage
multiple access and to provide abstraction.
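As a toy analogy (emphatically not the real X protocol, whose wire format is far more involved), the ownership split can be sketched as a "display server" process that alone holds the framebuffer, and a client that can only describe what it wants drawn:

```python
import socket
import threading

# Toy sketch of the X-style split, NOT the real X protocol: the server owns
# the framebuffer and hardware; clients only send drawing requests over a
# socket and never touch the display directly.

FRAMEBUFFER = {}  # pixel (x, y) -> color, standing in for real video memory

def display_server(sock):
    """Accept one client and apply its drawing requests to the framebuffer."""
    conn, _ = sock.accept()
    with conn:
        for line in conn.makefile():
            # Requests look like "DRAW x y color"; a real server would also
            # multiplex many clients and forward input events back to them.
            op, x, y, color = line.split()
            if op == "DRAW":
                FRAMEBUFFER[(int(x), int(y))] = color

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))  # the kernel picks a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]

thread = threading.Thread(target=display_server, args=(server_sock,))
thread.start()

# The "client" (the application) has no access to FRAMEBUFFER itself,
# only to the request stream.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"DRAW 10 20 black\nDRAW 11 20 white\n")
client.close()
thread.join()
server_sock.close()

print(FRAMEBUFFER[(10, 20)])  # -> black
```

The real protocol deals in windows, pixmaps, and events rather than raw pixels, but the division of labor is the same: only the server touches the hardware, which is what makes multiple access and abstraction possible.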
While X was neither the first graphics system for multi-user operating systems,
nor the first server-client graphics system, it rapidly spread between academic
institutions and so became a de facto standard fairly quickly. X's
dominance has lasted nearly to this day; Wayland has only recently begun to exceed
it in popularity, and it is based on essentially the same architecture.
X has a number of interesting properties and eccentricities. One of the first
interesting things many people discover about X is that its client-server
nature actually means something in practice: it is possible for clients to
connect to an X server running on a different machine via network sockets.
Combined with SSH's ability to tunnel arbitrary network traffic, this means
that nearly all Linux systems have a basic "remote application" (and even full
remote desktop) capability built in. Everyone is very excited when they first
learn this fact, until they give it a try and discover that the X protocol is
so hopelessly inefficient and modern applications so complex that X is utterly
unusable for most applications over most internet connections.
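The plumbing behind this is the DISPLAY environment variable, whose host:display.screen format dates to exactly this networked design. An X server reachable over TCP listens on port 6000 plus the display number, and OpenSSH's -X forwarding points DISPLAY at a forwarded display starting at 10 (its default X11DisplayOffset). A small illustrative helper (my own, not part of any X library):

```python
# DISPLAY follows the form "host:display.screen", e.g. "localhost:10.0".
# For TCP connections, the X server's port is 6000 + the display number.

def x_tcp_port(display: str) -> int:
    """Return the TCP port an X server for this DISPLAY would listen on."""
    host, _, rest = display.rpartition(":")
    display_num = rest.split(".")[0]  # drop the ".screen" suffix if present
    return 6000 + int(display_num)

print(x_tcp_port("localhost:10.0"))  # -> 6010, typical under ssh -X
print(x_tcp_port(":0"))              # -> 6000, a local console session
```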
This gets at the first major criticism of X: the protocol that clients use to
describe their output to X is very low level. Besides making the X protocol
fairly inefficient for conveying typical "buttons and forms" graphics, X's lack
of higher-level constructs is a major contributor to the inconsistent
look-and-feel and interface of typical Linux systems. A lot of basic
functionality that feels cross-cutting, like copy-and-paste, is basically
considered a client-side problem (and you can probably see how this leads to
the situation where Linux systems commonly have two or three distinct copy
and paste buffers).
But, to be fair, X never aimed to be a desktop environment, and that type of
functionality was always assumed to occur at a higher level.
One of the earliest prominent pseudo-standards built on top of X was Motif,
which was used as a basis for the pseudo-standard Common Desktop Environment
presented by many popular UNIX machines. Motif was designed in the late '80s
and shows it, but it was popular and both laid groundwork and matched the
existing designs (Apple Lisa etc) to an extent that a modern user wouldn't have
much trouble using a Motif system.
Motif could have remained a common standard into the modern era, and we can
imagine a scenario where Linux had a more consistent look-and-feel because
Motif rose to the same level of universality as X. But it didn't. There are a
few reasons, but probably the biggest is that Motif was proprietary and not
released as open source until well after it had fallen out of popularity. No
one outside of the big UNIX vendors wanted to pay the license fees.
There were several other popular GUI toolkits built on top of X in the early
days, but I won't spend time discussing them for the same reason I don't care
to talk about Gnome and KDE in this post. But rest assured that there is a
complex and often frustrating history of different projects coming and going,
with few efforts standing the test of time.
Instead of going on about that, I want to dig a bit more into some of the less
discussed implications of X's client-server nature. The client and server could
exist on different machines, and because of the hardware-independence of the X
protocol could also be running different operating systems, architectures, etc.
This gave X a rather interesting property: you could use one computer to "look
at" X software running on another computer almost regardless of what the two
computers were running.
In effect, this provided one of the first forms of remote application delivery.
Much like textual terminals had allowed the user to be physically removed from
the computer, X allowed the machine that rendered the application and collected
input to be physically removed from the actual computational resources running
the software. In effect, it created a new kind of terminal: a "dumb" machine
that did nothing but run an X server, with all applications running on another computer.
The terminology around this kind of thing can be confusing and is not well
agreed upon, but most of the time the term "thin terminal" refers to exactly
this: a machine that handles the basic mechanics of graphical output and user
input but does not run any application software.
Because of the relatively high complexity of handling graphics outputs, thin
terminals tend to be substantially similar to proper "computers," but have very
limited local storage (usually only enough for configuration) and local
processing and memory capacity that are just enough to handle the display task.
They're like really low-end computers, basically, that just run the display
server part of the system.
In a way that's not really very interesting, as it's conceptually very similar
to the block terminals used by IBM and other mainframes. In practice, though,
this GUI foray into terminals took on some very odd forms over the early
history of personal computers. The terminal was decidedly obsolete, but was
also the hot new thing.
Take, for example, DESQview. DESQview was a text-mode GUI for DOS that I
believe I have mentioned before. After DESQview, the same developer,
Quarterdeck, released DESQview/X. DESQview/X was just one of several X servers
that ran on DOS. X was in many ways a poor fit for DOS, given DOS's single task
nature and X's close association with larger machines running more capable
operating systems, but it was really motivated by cost-savings. DOS was cheap,
and running an X server on DOS allowed you to both more easily port
applications written for big expensive computers, and to use a DOS machine as
an X server for an application running on another machine. The cheap DOS PC
became effectively a hybrid thin terminal that could both run DOS software and
"connect to" software running on a more expensive system.
One way to take advantage of this functionality was reduced-cost workstations.
For example, at one time years ago the middle school I attended briefly had a
computer lab which consisted of workstations (passed down from another better
funded middle school) with their disks removed. The machines booted via PXE
into a minimal Linux environment. The user was presented with a specialized
display manager that connected to a central server over the network to start
a desktop environment.
The goal of this scheme was reduced cost. In practice, the system was dog slow
and the unfamiliarity of KDE, StarOffice, and the other common applications
on the SuSE server was a major barrier to adoption. Like most K-12 schools at
the time the middle school was already firmly in the grasp of Apple anyway.
Another interesting aspect of X is the way it relates to the user model. I will
constrain myself here to modern Linux systems, because this situation has
varied over time. What user does X run as?
On a typical Linux distribution (that still uses X), the init system starts a
display manager as root, which then launches an X server to use, still with root
privileges. The terminology for desktop things can get confusing, but a
display manager is responsible for handling logins and setting up graphical
user sessions. It's now common for the display manager to actually run as its
own user to contain its capabilities (by switching users after start), so it
may use setuid in order to start the X server with root capabilities.
Once a user authenticates to the display manager, a common display manager
behavior is to launch the user's desktop environment as the user and then hand
it the information necessary to connect to the existing X instance. X runs as
root the whole time.
It is completely possible to run X as a non-privileged user. The problem is
that X handles all of the hardware abstraction, so it needs direct write access
to hardware. For numerous reasons this is typically constrained to root.
You can imagine that the security concerns related to running X as root are
significant. There is work to change this situation, such as the kernel mode
setting feature, but of course there is a substantial problem of inertia: since
X has always run with root privileges, a great many things assume that X has
them, so there are a lot of little problems that need solving and this usage is
not yet well supported.
This puts X in a pretty weird situation. It is a system service because it
needs to interact directly with scarce hardware, but it's also a user
application because it is tied to one user session (ultimately by means of
password authentication, the so-called X cookie). This is an inherent tension
that arises from the nature of graphics cards as devices that expect very
low-level interaction. Unfortunately, continuously increasing performance
requirements for graphical software make it very difficult to change this
situation... as is, many applications use something like Mesa to actually
bypass the X server and talk directly to the graphics hardware instead.
I am avoiding talking about the landscape of remote applications on Windows,
because that's a topic that deserves its own post. And of course X is a fertile
field for technology stories, and I haven't even gotten into the odd politics of
Linux's multiple historic X implementations.
 Windows looks and feels like a single-user operating system to the extent
that I sometimes have to point out to people that Windows NT releases are fully
multi-user. In fact, in some ways Windows NT is "more multi-user" than Linux,
since it was developed later on and the multi-user concept is more thoroughly
integrated into the product. Eventually I will probably write about some
impacts of these differences, but the most obvious is screen locking: on Linux,
the screen is "locked" by covering it with a window that won't go away. On
Windows, the screen is "locked" by detaching the user session from the local
console. This is less prone to the bypasses which perennially appear in Linux screen lockers.
 The history of X, multi-user operating systems from which UNIX inherited,
and network computing systems in general is closely tied to major projects at a
small number of prominent American universities. These universities tended to
be very ambitious in their efforts to provide a "unified environment" which
led to the development of what we might now call a "network environment," in
the sense of shared resources across an institution. The fact that this whole
concept came out of prominent university CS departments helps to explain why
most of the major components are open source but hilariously complex and
notoriously hard to support, which is why everyone today just pays for
Microsoft Active Directory, including those universities.
 Given the project's small budget, I think the server was just under-spec'd
to handle 30 sessions of Firefox at once. The irony is, of course, that as
computers have sped up web browsers have as well, to the extent that running
tens of user sessions of Firefox remains a formidable task today.
Note: several corrections made, mostly minor, thanks to HN users smcameron,
yrro, segfaultbuser. One was a larger one: I find the startup process around X
to be confusing (you hopefully see why), and indeed I described it wrong. The
display manager starts first and is responsible for starting an X server to
use, not the other way around (of course, when you use xinit to start X after
login, it goes the way I originally said... I didn't even get into that).
The strategic and tactical considerations surrounding nuclear weapons went
through several major eras in a matter of a few decades. Today we view the
threat of nuclear war primarily through the "triad": the capability to deliver
a nuclear attack from land, sea, and air. This would happen primarily through
intercontinental ballistic missiles (ICBMs), so-called because they basically
launch themselves to the lower end of space before strategically falling
towards their targets (ballistic reentry). ICBMs are fast, taking about 30
minutes to arrive across the globe. The result is that we generally expect to
have very little warning of a nuclear first strike.
The situation in the early Cold War was quite different. ICBMs, and long-range
missiles in general, are complex and took some time to develop. From the end of
World War II to roughly the late '60s, the primary method of delivery for
nuclear weapons was expected to be by air: bombs, delivered by long-range
bombers. The travel time from the Soviet Union would be hours, allowing
significant warning and a real opportunity for air defense intervention.
The problem was this: we would have to know the bombers were coming.
Many people seem to assume that the United States has the capability to detect
and track all aircraft flying in US airspace. The reality is quite a bit
different. The problem of detecting and tracking aircraft is a surprisingly
difficult one, and even today our capabilities are limited. Nonetheless,
surveillance of airspace is considered a key element of "air sovereignty," or
our ability to maintain military and civil control of our airspace.
Let's take a look at the history of the United States' ability to monitor our airspace.
During the 1940s, it was becoming clear that airspace surveillance was an
important problem. Although the United States did not then face attacks on the
contiguous US (and never would), were the Axis forces to advance to the point
of bombing missions on CONUS it would be critical to be able to detect the
incoming aircraft. The military invested in a system for Aircraft Control and
Warning, or AC&W. Progress was slow: long-range radar was primitive and
expensive, and the construction of AC&W stations was not a high-level priority.
By the end of 1948, there were only a small number of AC&W stations, they
were considered basically experimental, and the ability to integrate data
coming from the several stations was very limited. Efforts to expand the air
surveillance system routinely failed due to lack of funding.
The Lashup project, launched in '48, was the first major effort to build an
air surveillance system. As the name suggests, Lashup was only intended to be
temporary, funded by Congress as a stopgap measure until a more complete
system could be designed. Over the next two years, 44 radar stations were built
focused around certain strategically important areas. Lashup provided nothing
near nationwide coverage, but was expected to detect bombing runs directed
towards the most important military targets. Lashup included three stations
surrounding my own Albuquerque, due to the importance of the Sandia and Manzano
Army Bases and the Z Division of Los Alamos.
Lashup sites used sophisticated radar sets for the time, but perhaps the most
important innovation of Lashup was the command and control infrastructure built
around it. Lashup stations were connected to the air defense command by
dedicated telephone lines, the air defense command was connected to the
continental air command by another dedicated telephone line, and ultimately
dedicated lines were connected all the way to the White House. This was the
first system built to allow a prompt nuclear response by informing the
commander in chief of an impending nuclear attack as quickly as possible.
If you've read any of my other material on the cold war, you might understand
that this is the core of my fascination with cold war defense history: the
threat of nuclear attack was, for the large part, the first thing to motivate
the development of a nationwide rapid communications system. For the first half
of the 20th century there simply wasn't a need to deliver a message from the
west coast to the President in minutes, but in the second half of the 20th
century there most definitely was.
The fear of a Soviet nuclear strike, and the resulting government funding, was
perhaps the largest single motivator of progress in communications and
computing technology from the 1940s to the 1980s. Most of the communications
technology we now rely on was originally built to meet the threat of a first strike.
We see this clearly in the case of air defense radar. While Lashup nominally
had the capability to deliver a prompt warning of nuclear attack, the entire
process was rather manual and thus not very reliable. Fortunately, Lashup was
temporary, and just as construction of the Lashup sites was complete, work
started on its replacement: the Permanent System.
The Permanent System consisted of a large number of radar stations, ultimately
over 100. More importantly, though, it consisted of a system of communications
and coordination centers intended to quickly confirm and communicate a nuclear attack.
It will help in understanding this system to understand the strategic principle
involved. The primary defense against a nuclear attack by bombers was a process
called ground-controlled intercept, or GCI. The basic concept of GCI was that
radar stations would provide up-to-date position and track information on
inbound enemy aircraft, which would be used to vector interceptor aircraft
directly towards the threat. The aid of ground equipment was critical to an
effective response, as fighter aircraft of the time lacked sophisticated
targeting radar and had no good way to search for bombers.
To this end, the Permanent System included Manual Air Defense Control Centers
(ADCC) (the "manual" was used to differentiate from automatic centers in the
later SAGE system). The ADCCs received information on radar targets from the
individual radar sites via telephone, and plotted them with wet erase marker on
clear plexiglass maps (perhaps the source of the clear whiteboard trope now
ubiquitous in films) in order to correlate multiple tracks. They then reported
these summarized formations and tracks to the Air Defense Command, at Ent AFB
in Colorado, for use in directing interceptors.
The Permanent System was extended beyond CONUS, although Alaska continued to
have a distinct air defense program. The biggest OCONUS extension of the
Permanent System was into Canada, with the Pinetree Line (the first of the
cross-Canadian early warning radar networks) roughly integrated into the
Permanent System. Perhaps most interestingly, the Permanent System also saw an
early effort at extension of early warning radar into the ocean. This took the
form of the Texas Towers, a set of three awkward offshore radar stations that
were later abandoned due to their poor durability against rough seas.
Technology was advancing extremely rapidly in the mid-20th century, and by the
time the Permanent System reached nearly 200 radar stations it had also become
nearly obsolete. For its vast scale, the capabilities of the Permanent System
were decidedly limited: it could only detect large aircraft, it performed
poorly at low altitudes (often requiring mitigation through "gap filler"
stations), and interpretation and correlation of radar data was a manual
process, costing precious minutes in the timeline of a nuclear reprisal.
Here in Albuquerque, Kirtland Air Force Base was host to the Kirtland Manual
ADCC, activated in 1951. 13 radar stations around New Mexico, eastern Arizona,
and western Texas reported to Kirtland AFB. Each of these 13 radar stations was
itself a manned Air Force Station including housing and cantonment. The
Continental Divide Air Force Station, for example, consisted of some fifty
people in remote McKinley County. The station included amenities like a library
and gym, housing and a trailer park, and two radars: an early warning radar and
a height-finding radar. Finally, a ground-air transmit-receive (GATR) radio
site provided a route for communications with interceptors.
Continental Divide AFS was deactivated in 1960. You can still see the remains
today, although there is little left other than roads and some foundations.
Like Continental Divide AFS, the Permanent System as a whole failed to make it
even a decade. In 1960, it was as obsolete as Lashup, having been replaced not
only by improved radar equipment but, more importantly, by a vastly improved
communications and correlation system: the Semi-Automatic Ground Environment,
or SAGE---by most measures, the first practical networked computer system.
We'll talk about SAGE later, but for now, check out a list of Permanent System
sites. There might be one in your
area. Pay it a visit some time; in many ways it's the beginning of the computer
revolution: a manual data collection network obsoleted in just a few years by
the development of the first nationwide computer network.
 The Texas Towers were connected to shore via troposcatter radio links, one
of my favorite communications technologies and something that will surely get
a full post in the future.
The use of electronics to administer elections has been controversial for some
time. Since the "hanging chads" of the 2000 election, there's been some degree
of public awareness of the use of technology for voting and its possible
impacts on the accuracy and integrity of the election. The exact nature of the
controversy has been through several generations, though, reflecting both
changes in election technology and changes in the political climate.
Voting is a topic of great interest to me. The administration of elections is
critical to a functioning democracy, and raises a variety of interesting
security and practical challenges. In particular, the introduction of
automation into elections presents great opportunity for cost savings and
faster reporting, but also a greater risk of intentional and accidental
interference in the voting process. Back when I was in school, I focused some
of my research on election administration. Today, I continue to research the
topic, and have added the practical experience of being a poll worker in two
states and for many elections.
Given my general propensity to have opinions, it will come as no surprise that
this has all left me with strong opinions on the role of computer technology in
election administration. But before we get to any of that, I want to talk a bit
about the facts of the matter.
The thing that most frustrates me about controversies surrounding electronic
voting is the generally very poor public understanding of what electronic
voting is. If you follow me on Twitter, you may have seen a thread about this
recently, and it's a ramble I go on often. There is a great deal of public
misconception about the past, present, and future role of electronics in
elections. These misunderstandings constantly taint debate about electronic voting.
In an on-and-off series of posts, I plan to provide an objective technical
discussion of election technology, "electronic voting," and security concerns
surrounding both. I will largely not be addressing recent "stolen election"
conspiracy theories for a variety of reasons, but will undoubtedly touch on
them occasionally. At the very least, because I can never turn down an
opportunity to talk about J. Hutton Pulitzer, an amazing wacko who has a
delightful way of appearing with a huge splash, making a fool of himself, and
then disappearing... to pop up again a couple years later in a completely different context.
I will restate that my goal here is to remain largely apolitical (mocking J.
Hutton Pulitzer aside), and as a result I will not necessarily respond to any
given election fraud or interference claim directly. But I do think anyone
interested in or concerned by these theories will find the technical context
that I can provide very useful.
Who runs elections?
One of the odd things about the US, compared to other countries, is the general
architecture of election administration. In the US, elections are mostly
administered by the county clerk, and the election process is defined by state
law. Federal law imposes only minimal requirements on election administration,
leaving plenty of room for variation between states.
Although election administration is directly performed by the county clerk, for
state-level elections (which is basically all the big ones) the secretary of
state performs many functions. It's also typical for the secretary of state to
provide a great deal of support and policy for the county clerks. So, while
county clerks run elections, it's common for them to do so using equipment,
software, and methods provided by the state. It's ultimately the responsibility
of states to pay for elections, which is probably the greatest single problem
with US election integrity, because states are poor.
While it seems a little odd that, say, a presidential election is run by the
county clerks, it can also be odd the other way. Entities like municipalities,
school districts, higher education districts, flood control districts, all
kinds of sub-county entities may also have elected offices and the authority to
issue bond and tax measures. These are typically (but not always) administered
by the county or counties as well, usually on a contract basis.
What is electronic voting?
Debate around electronic voting tends to focus purely on "voting machines,"
a broad category that I will define more later. The reality is that voting
machines are only a small portion of the overall election apparatus, and are
not always the most important part. So before I get into the world of election
security theory, I want to talk a bit about the moving parts of an election,
and where technology is used.
The general timeline of an election looks like this:
Registration of voters
Registration/certification of candidates
Preparation of ballots
Preparation of pollbooks
Election day: use of pollbooks, issuance of ballots, collection of ballots,
possibly continuous tallying.
Election night: rapid reporting of totalized results
Canvassing: review of problem ballots, investigation of provisional ballots,
final tallying of votes.
Certification: final audit and approval of election process and results.
To meet these ends, election administrators use several different systems.
There's a great deal of mix-and-match between these systems: many vendors offer
a "complete solution," but it's still common for election administrators to use
products from multiple vendors.
Voter registration management system (long-term)
Ballot printing system
Ballot marker or direct-recording electronic machine
Totalizing/tallying system (election management system)
Canvassing support systems - ballot adjudication, bulk scanning, etc.
Each of these systems poses various integrity and security concerns. However,
election systems can be roughly divided into two categories: tabulating systems
and non-tabulating systems.
Tabulating systems, such as tabulators and direct recording electronic (DRE)
machines, directly count votes which they record in various formats for later
totalization. Tabulating systems tend to be the highest-risk element of an
election because they are the key point at which the outcome of an election
could be altered by, for example, changing votes.
Non-tabulating systems perform support functions such as design of ballots,
registration of voters, and totalizing of tabulated votes. These systems tend
to be less security critical because they produce artifacts which are
relatively easy to audit after the fact. For example, a fault in ballot design
will be fairly obvious and easy to check for. Similarly, totalizing of
tabulated votes can fairly easily be repeated using the original output of
the tabulators (and tabulators typically output their results in multiple
independent formats to facilitate this verification).
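To make the auditability point concrete, here's a toy sketch of re-verifying totalization. The precincts, candidates, and counts are all invented for illustration; real totalization involves far more structure, but the underlying check really is just "sum it again and compare."

```python
# Toy illustration: re-verifying totalization from tabulator outputs.
# All precinct names and vote counts here are invented.

def totalize(precinct_results):
    """Sum per-precinct tabulator counts into race-wide totals."""
    totals = {}
    for counts in precinct_results.values():
        for candidate, votes in counts.items():
            totals[candidate] = totals.get(candidate, 0) + votes
    return totals

# The same counts as reported in two independent formats -- say, a
# printed results tape and a removable media file -- should totalize
# identically. A mismatch flags the precinct for review during canvassing.
tape = {"P1": {"A": 100, "B": 80}, "P2": {"A": 55, "B": 70}}
media = {"P1": {"A": 100, "B": 80}, "P2": {"A": 55, "B": 70}}

assert totalize(tape) == totalize(media) == {"A": 155, "B": 150}
```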
This is not to say that tabulating systems are not subject to audit. When a
paper form of the voter's selections exists (a ballot or paper audit trail),
it's possible to manually recount the paper form in order to verify the
correctness of the tabulation. However, this is a much more labor intensive and
costly operation than auditing the results of other systems. In the case of DRE
systems with no paper audit trail, an audit may not be possible.
We will be discussing all of these systems in more detail in the future.
Why electronic voting
There is one fundamental question about electronic voting that I want to
address up front, in this overview. That is: why electronic voting at all?
Most of the fervor around electronic voting has centered around direct
recording electronic (DRE) machines that lack a voter verifiable paper audit
trail (VVPAT) . These machines, typically touchscreens, record the voter's
choices directly to digital media without producing any paper form. As a
result, there is typically no acceptable way to audit the tabulation performed
by these machines. Software bugs or malicious tampering could result in an
incorrect tabulation that could not be readily detected or corrected after the fact.
It's fairly universally accepted that these machines are a bad idea. Basically
no one approves of them at this point. So why are they so common?
Well, this is the first major misconception about the nature of electronic
voting: DRE machines with no VVPAT are rare. Only ten states still use them,
and most of those states only use them in some polling places. Year by year,
the number of DRE w/o VVPAT machines in use decreases as they are generally
being replaced with other solutions.
The reason is simple: they are extremely unpopular.
So why did anyone ever have DRE machines? And why do we use machines at all
instead of paper ballots placed in a simple box?
The answer is the Help America Vote Act of 2002 (HAVA). The HAVA was written
with a primary goal of addressing the significant problems that occurred with
older mechanical voting systems in the 2000 election, including accessibility
problems. Accessibility is its biggest enduring impact: the HAVA requires that
all elections offer a voting mechanism which is accessible to individuals with
various disabilities including impaired or no vision.
In 2002, there were few options that met this requirement.
The other key ingredient is, as we discussed earlier, the nature of election
administration in the US. Elections are not just administered but funded at the
state and county level. State budgets for elections have typically been very
slim, and suddenly, in 2002, most states faced a requirement that
they replace their voting systems.
The result was that, in the years shortly after 2002, basically the entire
United States replaced its voting systems on a shoestring budget. Many states
were forced to go for the cheapest possible option. Because paper handling adds
an appreciable amount of complexity, the cheapest option was to do it in
software: "paperless," or non-auditable, DRE machines.
To the extent that DRE w/o VVPAT machines are still in use in 2021, we are still
struggling with the legacy of the HAVA's good intentions combined with the US's
decentralized and tiny budget for the fundamental administration of democracy.
We don't have non-auditable voting systems because someone likes them. We have
them because they were all we could afford in 2003, and because we haven't since
been able to afford to replace them.
Basically the entire electronic voting landscape revolves around this single
issue: there is enormous pressure in the US to perform elections as cheaply as
possible, while still meeting sometimes stringent but often lax standards.
The driver on selection of election technology is almost never integrity, and
seldom speed or efficiency. It is nearly always price.
In upcoming posts, I will be expanding on this with (at least!) the following topics:
The philosophy of the "Australian" or "Massachusetts" ballot
Tabulating systems - central tabulation vs precinct tabulation vs DRE
Electronic pollbooks, voter identification, and ballot preparation
Administration of voter registration and the practical issues around access
to the polls
Election reporting ("unofficial" results) and canvassing ("official" results)
 I highly recommend that anyone with an interest in election administration
step up as a poll worker. You will learn more than you could imagine about the
practical considerations around elections.
 We will talk more about VVPAT and how it compares to a paper ballot in the future.
programming note update: the ongoing reliability problems with computer.rip
have been tracked down to a piece of hardware which is Not My Problem, and so
I anxiously await the DC installing a replacement. Hopefully the problem will
be resolved shortly.
And now for more about telephones, because I am on vacation in Guadalajara and
telephones are decidedly a recreational topic. If you follow me on
Twitter, I am probably about to provide an
over-length thread on some Mexican telephone trivia.
Back when I was talking about turrets,
I mentioned their relationship to key systems. While largely forgotten today,
key systems were an important step in the evolution of business telephone
systems and remain influential on business telephony today. Let's talk a bit
about key systems, including some particularly notable ones.
But first, it would be helpful to understand the landscape of business
telephony systems. I'm writing this from the perspective of today, but I think
this overview will be helpful in understanding the context in which the key
system was invented and became popular.
Most businesses have a simple problem: they have, say, ten employees, each with
a phone, but they do not want to pay the considerable expense of having ten
telephone lines in service. It would be much better to have, say, two telephone
lines, which were shared among the employees. The first and most obvious
solution was the private branch exchange, often abbreviated PBX. In a
classic PBX arrangement, one or more outside lines terminate at a small manual
exchange (the type with operators that insert plugs to connect lines). The PBX
can provide the same services as a telco exchange, including answering incoming
calls and directing them to inside lines, but comes at the significant
disadvantage of requiring an operator.
Today, it's not unusual for a front-desk receptionist or other similar employee
to serve as the de facto telephone operator (usually today called an
"attendant" to differentiate from the older position of a dedicated operator),
answering incoming calls and directing them appropriately. The design of a
manual telephone exchange made this impractical, though, as even small manual
exchanges were pretty large and nearly required wearing a headset... wearing
a headset and sitting behind a plugboard was not amenable to greeting guests or
other typical receptionist tasks, so a dedicated, full-time telephone operator
was basically required. This made PBXs very expensive to operate, in addition
to the considerable expense of purchasing one.
The solution here seems obvious: the Private Automated Branch Exchange, or
PABX. A PABX uses automatic switching rather than manual. Outbound calls can be
made by dialing, while inbound calls can be managed by various techniques like
DID or an automated attendant. In the case of DID, Direct Inward Dialing, the
telephone company assigns a unique telephone number to each employee of a
company even though the company does not have that many lines (for practical
reasons related to how mechanical switches hunted for available lines, in early
cases these numbers usually had to be sequential). When the telco connected a
call to the PABX, it used some technique to indicate the number the call had
been dialed to originally---early on this was often the delightfully named
Revertive Pulsing, where once the PABX "answered" the line the exchange
pulse-dialed back to the PABX, often with the last n digits of the called number.
In the case of an automated attendant (AA), the PABX answers and plays an audio
recording prompting the caller to enter an extension. It then connects the call
appropriately. The AA may optionally provide a menu of usually single-digit
options, although this is a bit more complicated to implement and was not as
common on early PABXs.
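The two inbound-routing approaches can be sketched in a few lines. The digit-to-extension mapping and extension numbers below are invented for the example:

```python
# Toy sketch of PABX inbound call routing (all numbers invented).
# With DID, the telco signals the last digits of the originally dialed
# number (e.g. by revertive pulsing) and the PABX maps them to an
# extension. Without DID, an automated attendant prompts the caller.

EXTENSIONS = {"21", "22"}                  # inside extensions
DID_MAP = {"5521": "21", "5522": "22"}     # last 4 dialed digits -> extension

def route_inbound(did_digits=None, aa_choice=None):
    if did_digits is not None:             # DID signaling present
        return DID_MAP.get(did_digits, "attendant")
    if aa_choice in EXTENSIONS:            # caller keyed an extension at the AA
        return aa_choice
    return "attendant"                     # fall back to a human

assert route_inbound(did_digits="5521") == "21"
assert route_inbound(aa_choice="22") == "22"
assert route_inbound() == "attendant"
```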
DID and AA are both ubiquitous today. The use of telephone extensions inside
of businesses has generally decreased over the years as DID has become easier
and cheaper to implement, but AAs remain common for telephone menus, which may
straddle the line between a "mere" AA and the more complicated interactive
voice response (IVR) system.
Here's the problem, though: in the early days of business telephony, DID and
AAs were both very complex to implement. Early PABXs were mechanical, even
Strowger (also called step-by-step or SXS), and the introduction of DID
significantly complicated the switching matrix. The lack of good, reliable
audio playback devices and the lack of universal DTMF signaling made AAs
impractical for quite some time.
So, here is the problem: for smaller organizations, which could not justify the
expense of employing a telephone operator during business hours, there were few
practical options. PABXs were too expensive and too limited, often still
requiring a full-time operator to handle incoming calls.
The key system was introduced as a compromise. Like a PABX, it does not require
an operator. But, a key system is substantially less complex and expensive than
a PABX. What's the trick? A key system makes everyone act as the operator.
When I previously mentioned key systems I put it like this: a PABX connects
many users to each line. A key system connects many lines to each user.
Let's say again that you are a small organization with about ten employees and you
want to pay for two lines. When you install a key system, you connect the two
outside lines to a Key Service Unit (KSU). The KSU is then connected to each of
the ten telephones by a large, multi-pair cable, often a 25-pair Amphenol type
connector. Superficially, it may look like a PABX, but the use of the
multi-pair cable is a big hint to what's going on: the KSU only provides very
minimal electrical conversions and mostly just acts as a jumper matrix. All of
the actual logic is in the telephones, each of which has all of the outside
lines connected directly to them.
The "key" in "key system" refers to the "line keys" on each phone. In our
notional two-line system, each phone has two buttons labeled "line 1" and "line
2." Whenever a line is in use, the button lights. When a line is ringing, the
light flashes and the phone may ring depending on configuration (ringing can
usually be enabled/disabled per line to provide a simple concept of "call
groups" if the outside lines have different numbers).
To place a call, a user presses a line key that is not lit, which connects
their phone directly to that outside line. They then dial normally. To answer a
call, the user presses the flashing line key and then picks up the phone. All
they really have is a phone that is connected to all of the outside lines, the
key system just makes it possible to have many phones connected this way at once.
Of course, early on key systems sprouted additional features. Even the earliest
key systems started to offer an "intercom" feature, in which one or more pairs
on each phone were connected to an "intercom bridge" in the KSU. This provided
a feature that is superficially like a PABX's inside calling: a user can press
an intercom key and then dial a number, which causes another phone on the system
to ring. When that person answers, they can have a conversation. Of course the
simple design of the feature imposes a lot of limitations, and generally only one
intercom call can be made for each assigned intercom bridge on the system. This
was often only one or two.
You can also see that key systems pose a significant risk of "collisions."
Later key systems often included a "privacy" feature that locked out phones
from connecting to a line when it was currently in use, so that other users
could not eavesdrop on your calls. The feature could similarly prevent someone
trying to make an intercom call from suddenly being placed in an existing call. Of
course these features meant that if all outside lines or all intercom bridges
were in use, it was simply not possible to make a call. The line key lights
served an important purpose in showing users when a line was available for use.
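The behavior described above amounts to a little state machine per outside line. Here's a minimal sketch, with invented line names, of line keys plus the privacy lockout:

```python
# Minimal model of key-system line selection (line names invented).
# Each outside line is idle, ringing, or busy; the line key lamp shows
# the state (off / flashing / lit), and the "privacy" feature refuses
# to connect a second phone to a line that is already in use.

IDLE, RINGING, BUSY = "idle", "ringing", "busy"

class KeySystem:
    def __init__(self, lines):
        self.state = {line: IDLE for line in lines}

    def press_line_key(self, line):
        """Connect to a line: answer it if ringing, seize it if idle."""
        if self.state[line] == BUSY:
            return False            # privacy lockout: line already in use
        self.state[line] = BUSY
        return True

    def hang_up(self, line):
        self.state[line] = IDLE

ksu = KeySystem(["line1", "line2"])
ksu.state["line1"] = RINGING            # an incoming call
assert ksu.press_line_key("line1")      # answering connects us
assert not ksu.press_line_key("line1")  # a second phone is locked out
ksu.hang_up("line1")
assert ksu.press_line_key("line1")      # free again for an outbound call
```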
Perhaps the quintessential key system is the Western Electric 1A and
descendants, which were in widespread use for decades around the mid century.
Later revisions of the 1A such as the 1A2 supported as many as 29 lines to each
phone (this required multiple 25-pair cables per phone!) and advanced (for the
time) features such as attended transfer and music on hold.
Key systems were often designed flexibly to reduce cost of installation. For
example, outside lines might be allocated to different departments. Most phones
would only need to be connected to the lines for their department, but a
receptionist might have a "call director" phone that presented all lines so
that they could answer calls for multiple departments.
My favorite key system, though, is the AT&T Merlin. The Merlin was a late
digital key system, introduced in 1983, and so began to blur the line between
key system and PABX. Most importantly, though, the Merlin telephone instruments
were beautiful. Seriously, look them up.
An advertising campaign including product placement in films and television
reinforced the aesthetic cachet of the Merlin. The campaign is said to have been
so successful that the Merlin instruments became something of a status symbol,
and client-contact organizations like law firms would upgrade from 1A2 to
Merlin just for the desk decorations. I recall having read once that the Merlin
was a key inspiration for the design of the NeXT Cube under Steve Jobs, but I
cannot find a source on this now, so perhaps I just made it up. I certainly hope not.
It might seem that key systems would be an artifact of history today, entirely
outmoded by the availability of inexpensive PABX systems. There were a lot of
disadvantages to key systems. Besides the issue of users having to manually
select lines, and limited logic on ring groups, the large multi-pair cables
required to telephone instruments made key systems expensive to install and
not amenable to reuse of existing phone cabling in a building.
The funny thing is that sort of the opposite happened. The low-cost PABXs that
became readily available in the 1990s were actually more descended from key
systems than from the earlier electromechanical PABXs. The small business PABX I
have in my house, for example, the Comdial DX80, is basically an overgrown
key system. Yet it has many of the advantages of an earlier PABX!
Here's the trick: the availability of computer-controlled digital switching and
communications allowed for implementing a "key system" using a standard
two-pair line to each telephone. Small businesses were usually upgrading from
key systems and expected similar behavior. So it just made sense to take a
suite of PABX features and shove them into a key system, using digital
signaling to simplify the installation of the system.
So the DX80 for example works like this: the KSU communicates with the phones
using a digital protocol over a single-pair telephone line. Each telephone
instrument can be equipped with a full set of line keys for the KSU's up-to-16
outside lines, but the KSU is also capable of automatically selecting outside
lines and automated incoming call routing based on DID or an auto-attendant.
Internal calling between phones is managed digitally and is not limited to one
or two intercom lines. All this adds up to flexibility: you can use the DX80 as
either a key system or a PABX, depending on how you configure it. You can
leave automated line selection un-configured and present line keys on the
phones, or you can remove the line keys from phones (reallocating them to other
uses) and set up fully automatic call handling.
Many organizations ended up doing both!
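That configurability can be sketched roughly like this. The option names are invented for illustration; they are not actual DX80 settings:

```python
# Rough sketch of a hybrid key system/PABX outbound path (option names
# invented for illustration; not real DX80 configuration).

def handle_outbound(config, requested_line=None):
    """Pick an outside line: honor a pressed line key, else select one."""
    if config["line_keys_on_phones"] and requested_line:
        return requested_line               # key-system style: user chose
    for line, busy in config["outside_lines"].items():
        if not busy:
            return line                     # PABX style: automatic selection
    return None                             # all trunks busy

config = {
    "line_keys_on_phones": False,           # PABX-style operation
    "outside_lines": {"line1": True, "line2": False},
}
assert handle_outbound(config) == "line2"

config["line_keys_on_phones"] = True        # key-system-style operation
assert handle_outbound(config, requested_line="line1") == "line1"
```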
A lot of '90s to '00s PABXs were like this. They had sort of an identity crisis
between key system and PABX where they wanted to present the convenience of a
PABX without removing the familiar line keys for direct access to outside
lines. Those line keys could be important, after all, as not all businesses had
a DID arrangement (or even disconnect supervision) from their telco, so the use
of the line keys allowed for connecting the PABX directly to a "normal"
telephone line without needing to get the telco to enable additional features.
Today, most business telephone systems are being converted to VoIP which can
provide additional flexibility and features, and basically obsoletes the
concept of a key system since the "number of lines" on a VoIP trunk is a
largely synthetic concept. Nonetheless, most VoIP systems can be configured for
key-system-like behavior if you really want it.
 I have omitted from this discussion the Centrex and other forms of telco-
operated PABXs. I will probably do a full post on these in the future. For a
short time I worked for a large organization which owned a formerly
AT&T-operated 5ESS as their PABX and had the pleasure of getting an extensive
tour of the system from one of its few remaining on-site technicians. It has
since been decommissioned. As a basic hint, when an organization is large enough
to have one or more exchange codes to itself (often seen with universities and
older large corporations), it's likely that they had an on-site PABX provided
by the telco. If an organization had a set of sequential numbers but no on-site
switch, they probably used Centrex, which was basically the same arrangement
except that the switch was located in a telco office (and often "virtualized" on
an existing ESS). Centrex was also popular with organizations that were very
large but had multiple facilities, like school districts, since the existing
telco exchange office was as convenient of a central location as anywhere else.
That said, the nature of their close relationship to government meant that
school districts often found it convenient to run their own private trunk lines
between buildings, and so they may have still used an on-site switch.
 The term "call director" is still sometimes used today to refer to phones
with an unusually large number of line buttons, often on a device like a
"receptionist sidecar". The terminology is confused by "Call Director" also
being the name of various PABX products and features.