Many readers may know that, historically, telephone instruments (i.e. the thing
you hold up to your face and talk into) have been considered part of the
telephone system proper. In practice, that meant that up until the '70s most
people leased their phones from the telephone company, and the telephone
company maintained everything from the exchange office to the telephone in
your home.
The biggest landmark in the shift of telephone instruments from "part of AT&T"
to personal property that you can buy at WalMart is the "Carterfone decision."
The Carterfone precedent, that telephone users could connect anything they want
to the telephone network as long as it is reasonably well-functioning and
compatible, created the entire modern industry of telephones and accessories.
What's only remembered a little hazily, though, is what the Carterfone actually
was---the original device that fought for telephone interconnection.
The Carterfone was an acoustic coupler that allowed a telephone handset to be
connected to a two-way radio. Essentially, it was the first form of telephone
patch, although it was very simple and did not provide the automation that
would later be expected from phone patches. A typical use-case for the
Carterfone was to allow a dispatcher to connect someone in the field (using a
mobile radio) to someone by telephone, with the dispatcher doing all of the
dialing and supervision of the call.
The Carterfone was a long way from the wireless telephony systems we use today,
but it is a common ancestor to most of them. It, and several other early
telephone radio patch systems, introduced the concept that telephones could be
divorced from their wires. Of course this led to the development of
radiotelephony, cellular phones, etc., but great distances weren't required for
the radiotelephone to be useful.
Well into the '90s it was common to see people walking around their homes
trailing 50' of telephone wire. All manner of couplers and long cables were
available for the purpose, and it either replaced or complemented (depending on
budget) the also common practice of providing a telephone jack in each room.
Wouldn't it be easier to avoid the need for in-wall, under-carpet,
along-baseboard, and trip-hazard wiring entirely by using some sort of local,
short-range radio link?
I am, of course, describing the cordless phone, and early cordless phones were
not much more complex than low-power two-way radios. There is one primary way
in which a telephone is different from a radio: telephones are full-duplex,
while "radio" is generally taken to refer to a half-duplex system in which it
is necessary to use a push-to-talk or other control to switch to transmit.
Nowadays we tend to take full-duplex communications somewhat for granted,
because it is reasonably easy to simulate it through packetization. In the '80s,
though, when cordless phones became popular, packetized digital voice
technology was still some ways from being sufficiently small and cheap to put
in a consumer phone. Instead, full-duplex communication required that the handset
and base station both transmit and receive simultaneously.
So, a cordless handset contained two radios, a transmitter and a receiver, and
the base station contained two radios as well. There is a basic problem with
operating two radios like this: the receive radio will typically be overwhelmed
by the signal emitted by the transmit radio, and so unable to receive anything.
There are various ways to solve this problem, but it becomes far easier if the
simultaneous transmit and receive are at very different frequencies. For this
reason, early cordless phone systems made use of frequency pairs.
A typical design was this: the base station transmitted to the handset at
around 1.7MHz, and the handset transmitted back to the base station at around
27MHz. The use of two different bands made it relatively easy for each receiver
to filter out the signal from the nearby transmitter. Of course, splitting the
system between two bands can now cause confusion, as it's not always clear what
a "27MHz phone," for example, even means.
Signaling was extremely simple, generally constrained to something like the
phone emitting one fixed tone to indicate that the base should go off-hook, and
another fixed tone to indicate it should go on-hook. Since DTMF was generated
on the handset, no other signaling was required. This allowed the
implementation to be almost entirely analog.
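Just to illustrate how simple this sort of in-band signaling is, here is a
sketch of a single-tone detector using the Goertzel algorithm. To be clear,
this is an illustration rather than the design of any actual phone (which, as
noted, did this with analog filters), and the tone frequency and threshold are
made-up values:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power at one frequency (a single-bin DFT) via the Goertzel algorithm."""
    n = len(samples)
    k = int(0.5 + n * freq / sample_rate)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# Hypothetical supervision scheme: a sustained tone means "go off-hook."
SEIZE_TONE_HZ = 1300  # made-up value for illustration
THRESHOLD = 1e6       # would be tuned to the radio's audio levels

def base_should_go_off_hook(audio_block, sample_rate=8000):
    return goertzel_power(audio_block, sample_rate, SEIZE_TONE_HZ) > THRESHOLD
```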
Actual modulation was FM, and on early cordless phones channel selection was
manual with only a small number of discrete channels available. This meant
that interference and crosstalk between different cordless phones was a common
problem, one worsened by cordless phones sharing frequency allocations with
some devices like baby monitors. Security was a very real problem: people did
indeed listen in on other people's cordless phone calls by repurposing consumer
devices like baby monitors or by using wideband receivers.
Later cordless phones continued to use this basic scheme, although a 49MHz band
was added into the mix. The use of the higher frequency bands had the major
benefit of making the antennas smaller; 1.7MHz phones had required a telescopic
whip antenna while 27MHz/49MHz phones could generally make do with a "rubber
ducky" antenna more typical of cordless phones today.
More interestingly, though, these later phones in the VHF low-band were capable
of quite a few more channels and so introduced automatic channel selection. The
need to implement link negotiation led naturally to a more sophisticated
signaling system between the handset and the base. Digital signaling allowed
the base and handset to exchange commands, first enabling the "find handset"
button on the base and later allowing the handset to control an answering
machine integrated into the base.
The introduction of 900MHz phones (900MHz is a common ISM band used by a
variety of devices) occurred somewhat in a transitional period, as there are
both analog and digital phones made for 900MHz with various combinations of
signaling and security features. Digital encoding and the use of spread
spectrum tended to improve not only audio quality but also security: even if
there was no encryption, decoding required a specialized device... and spread
spectrum is itself a form of security if the hopping sequence is determined in a
secure fashion. Digital cordless phones largely eliminated eavesdropping except by
particularly determined opponents, but generally only did so by raising the
attack cost (requiring a decoder) rather than by employing strong security.
The 900MHz band is crowded with devices, as are 2.4GHz and 5.8GHz, where
various late-'90s and early-'00s digital cordless phones operated. This often
translated into disappointing range and reliability, and moreover there was
a serious lack of standardization. Standardization tends to be less important
for cordless phones because the base and handsets are sold together, to the
extent that consumers have no real expectation of them being interchangeable
(technically they often are, but it is difficult to determine which devices are
compatible). A bigger issue was a lack of consumer understanding of the different
features that cordless phones carried. Was any given phone secure (in that it
employed some type of encryption)? Was it going to be more or less subject to
interference from the microwave oven? What about range?
All of these questions would be best addressed by the industry concentrating
on a standard cordless phone implementation, and the allocation of a dedicated
band for these devices would significantly reduce interference problems. The
confluence of these two interests led to a major standards effort in the early
'00s that culminated in the adoption of a European standard which had been the
norm in Europe since the '90s: DECT.
DECT is a very interesting beast. While DECT is now strongly associated with
consumer cordless phones, it was always intended for much more ambitious
use-cases. DECT could serve as the entire wireless plane for a corporate PBX
system, for example, using centralized base stations (ceiling mount perhaps) to
set up calls and signaling between a variety of handsets carried by
employees. DECT was even contemplated as a cellular telephony standard, with
urban-area base stations managing large numbers of subscriber handsets.
In practice, though, DECT has a much more modest role in the US. First, there
is a matter of naming. DECT in the US is referred to as DECT 6.0, which has
created a false impression that it is either version 6 or operates at 6.0 GHz.
In fact, DECT 6.0 operates in a 1.9GHz band allocated for that purpose, and the
name is just marketing fluff because 6 is a bigger number than the 5.8GHz
cordless phones that were on the market at the time.
DECT makes use of frequency-hopping techniques and digital encoding, and its
time-division multiplexing allows interference to be actively mitigated by
moving calls between carriers and time slots. The takeaway is that, generally
speaking, DECT phones will not
interfere with other DECT phones. This addressed the biggest performance
problems seen in congested areas.
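The core mechanism here is usually called dynamic channel selection: before
transmitting, a DECT device surveys the available carrier/time-slot
combinations and settles on the quietest one. Here's a toy model of the idea;
the figures of 10 carriers and 24 slots per frame are the European DECT
numbers, and the RSSI measurement is obviously faked:

```python
import random

CARRIERS = 10         # European DECT; US DECT 6.0 uses fewer carriers
SLOTS_PER_FRAME = 24  # 12 duplex pairs

def measure_rssi(carrier, slot):
    # Stand-in for a real received-signal-strength measurement, in dBm.
    return random.uniform(-100, -40)

def pick_channel():
    """Survey every carrier/slot pair and take the least-interfered one."""
    candidates = ((measure_rssi(c, s), c, s)
                  for c in range(CARRIERS)
                  for s in range(SLOTS_PER_FRAME))
    rssi, carrier, slot = min(candidates)
    return carrier, slot

print("using carrier %d, slot %d" % pick_channel())
```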
DECT is, of course, packetized, and took many cues from the simultaneous work
on ISDN in Europe. The DECT network protocol, LAPC, is based on the ISDN data
link layer. Both are based on HDLC, an ISO protocol that was derived from
IBM's SDLC. So, in a vague historical sense, modern cordless phones speak
the same language as late '70s big iron, but over a wireless medium.
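The most recognizable part of that HDLC inheritance is the framing. HDLC
delimits frames with the flag sequence 01111110 and keeps payloads from
imitating the flag by "bit stuffing," inserting a 0 after any run of five 1
bits. DECT's TDMA framing differs in the details, so take this as a sketch of
the ancestral mechanism rather than of DECT itself:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s, so the payload
    can never contain the HDLC flag sequence 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out

# Six 1s in a row get broken up by a stuffed 0:
assert bit_stuff([0, 1, 1, 1, 1, 1, 1, 0]) == [0, 1, 1, 1, 1, 1, 0, 1, 0]
```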
LAPC is a fairly complete network protocol, even from a modern perspective, and
provides both connection-oriented and connectionless communications. Like most
network protocols from the telecom industry, DECT supports defined-bandwidth
connections by means of allocating "slots" in the network scheduling.
On top of LAPC a variety of functionality has been implemented, but most
importantly basic call control (supervision) functionality and set up of
real-time media channels using various codecs. Common codecs are ADPCM and
u-law PCM, both of which provide call quality superior to the cellular network
(at least when HD voice doesn't come into play).
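For a sense of what u-law actually does: it's logarithmic companding, spending
more of the 8-bit code space on quiet samples than on loud ones. A sketch of
the continuous u-law curve follows; real G.711 codecs use a segmented 8-bit
approximation of this, so treat it as illustrative:

```python
import math

MU = 255  # the "u" in u-law

def ulaw_compress(x):
    """Map a linear sample in [-1, 1] logarithmically, allocating
    more resolution to quiet signals."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def ulaw_expand(y):
    """Inverse of ulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A quiet sample keeps far more of the scale than linear coding gives it:
print(round(ulaw_compress(0.01), 3))  # ~0.228
```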
Because of DECT's greater ambitions, there is also a management protocol
defined which allows handsets to register with base stations and exchange
subscriber information. This allows cellular-like behavior that supports
environments where there are multiple base stations with handsets roaming
between them.
DECT supports authentication and encryption, but implementation is very
inconsistent. The standard authentication and encryption protocol until around
a decade ago was one with known weaknesses. As a practical matter, the use of
FHSS and TDMA in DECT makes active attacks difficult, and so DECT "exploits" do
not seem to have ever been particularly common in the wild... but it certainly
is possible to intercept DECT traffic, which led to a change in standards to
128-bit AES encryption with improved key negotiation. Unfortunately it's not
always easy to tell if devices make use of the newer encryption capability (or
any at all), so security issues remain in practical DECT systems.
That's a lot about what DECT actually does, but what about the weird things it
could do and usually doesn't? Those are my favorite kinds of things.
Protocols have been defined to run IP on top of DECT. IP-over-DECT was actually
very competitive with 802.11 WiFi---it was slow (0.5mbps) but comparatively
very reliable. A particularly strong point for DECT was its origin as a possible
cellular standard: DECT has always handled roaming between base stations very
well, something that multi-AP WiFi systems struggle with in practice to this
day. This made DECT a particularly compelling option for data networking in
large, industrial environments. DECT IP networking was integrated into some
industrial hand-held PC systems but was never common, despite efforts to
commercialize a very WiFi-like DECT system under the name Net3. Another DECT
computer networking initiative ran under the name PADcard and seems to have
been launched on a few consumer products, but fizzled out very quickly.
What about DECT as a public cellular telephony standard? This doesn't seem to
have really materialized anywhere. DECT had success in large-area applications
like mines, but these were all still private corporate systems. DECT
development for large systems ran roughly concurrently with the development of
GSM, which quickly gained more traction in DECT's European stronghold.
Despite DECT's failure to achieve its full potential, it has brought a
surprising level of sophistication to the humble cordless phone. Modern
cordless phones are almost exclusively DECT and take advantage of DECT's
capabilities to offer multiple handsets per base station, intercom calling
between handsets (often both dialed and "page" or broadcast), and a solid-state
answering machine integrated into the base station and controllable from
handsets. All cool features that no one uses, because now we all have
cellphones anyway.
The corporate PBX use-case has obtained some success in the US in retail
environments.
Pacific Northwest retailer Fred Meyer, for example, at least prior to the
Kroger acquisition, armed each staff member with a DECT handset and earpiece at
most stores. The advantages of this type of setup are clear: DECT handsets can
be similarly priced to two-way radios but allow full-duplex communication and a
"hybrid" model of radio vs. telephony behavior, by either selecting an intercom
channel or dialing a number to reach a specific person. DECT in retail is
likely giving way to IP-based solutions like Theatro (in use at some Walgreens
locations), but a great many retailers (most Wal-Marts, for example) still use
basic two-way radios on MURS or color dot channels. PBX DECT systems are still
available though, and I remain tempted to buy the DECT gateway for the '90s
digital PBX that runs in my office closet. More relevant in the modern era,
because DECT is better established than WiFi for telephony applications, there
are a lot of
"Cordless IP phones" that handle all the IP in the base station and use DECT to
reach the cordless phones. These are basically just IP evolutions of the older
approach of a DECT module connected to analog accessory ports or a dedicated
board in the PBX.
A special bonus addendum for the Hacker News crowd: After I wrote this, I
somehow ran into the Japanese Personal Handy-Phone
System. It's a
moderately successful (for a time) cellular service using a technology that was
very similar to DECT. Despite the Wikipedia article sort of making it sound
like it, PHS does not seem to have actually been based on DECT in any way.
It looks like a case of parallel evolution, at the least.
Given the sources of most of my readers, some of you may have seen this
article about A/UX,
Apple's failed earlier effort towards delivering an Apple user experience on
a POSIX operating system. A/UX was driven primarily by demands for a "serious
workstation," which was a difficult category to obtain at any kind of reasonable
price in the 1980s. It is also an example of Apple putting a concerted effort
into attracting large enterprise customers (e.g. universities), something that
is not exactly part of the Apple brand identity today.
I wanted to expand on A/UX by discussing some of Apple's other efforts to make
serious inroads into the world of Big Business. In the 2000s and 2010s, Apple
was frequently perceived as focusing on consumers and small organizations
rather than large organizations. This was perhaps a result of Apple's
particular nexus to the personal computing ethos; Apple believed that computers
should be affordable and easy to use rather than massively integrated.
This doesn't seem to have been a choice made by intention, though: through the
'90s Apple made many efforts to build a platform which would be attractive to
large enterprises. Many major events in the company's history are related to
these attempts, and they're formative to some aspects of Apple's modern
products... but they were never really successful. There's a reason that our
usual go-to for an integrated network computing environment is the much
maligned Microsoft Active Directory, and not the even more maligned Apple
Open Directory.
Indeed, somewhere around 2010 Apple seems to have lost interest in enterprise
environments entirely, and has been slowly retiring their products aimed at
that type of customer. On the one hand, Apple probably claims that their iCloud
offerings replace some of the functionality of their on-prem network products.
On the other hand, I think a far larger factor is that they just never sold
well, and Apple made a decision to cut their losses.
But that's starting at the end. Let's take a step back to the '80s, during the
project that would become A/UX, to Apple's first major step into the world of
networking.
I have previously
mentioned the topic of
network operating systems, or NOS. NOS can be tricky because, at different
points in time, the term referred to two different things. During the '80s, an
NOS was an operating system that was intended specifically to be used with a
network. Keep in mind that at the time many operating systems had no support
for networking at all, so NOS were for a while an exciting new category. NOS
were expected to support file sharing, printer sharing, messaging, and other
features we now take for granted.
Apple never released an NOS as such (stories about the 'i' in iMac standing for
'internet' aside), but they were not ignoring the increasing development of
microcomputer networks. The Apple Lisa appears to have been the first Apple
product to gain network support, although only in the most technical sense.
Often referred to as AppleNet, this early Apple effort towards networking
was directly based on Xerox XNS (which will be familiar to long-time readers,
as many early PC network protocols were derived from XNS). AppleNet ran at
1mbps using coaxial cables, and failed to gain any meaningful traction. While
Apple announced Lisa-based AppleNet file and print servers, I'm not sure
whether they ever actually shipped a server product at all (at least the file
server was vaporware).
Apple's second swing at networking took the form of AppleBus. AppleBus was
fundamentally just a software expansion of the Macintosh serial controller into
an RS-422 like multi-drop serial network scheme. Because AppleBus also served
as the general-purpose peripheral interconnect for many Apple systems into the
'90s, it could be seen as an unusually sophisticated interconnect in terms of
its flexibility. On the other hand, consumers could find it confusing that a
disk drive and the network could be plugged into the same port, a confusion
that in my experience lasted into the '00s among some devoted Macintosh users.
Nonetheless, the idea that inter-computer networking was simply an evolution of
the peripheral bus was a progressive one that would continue to appear in
Apple-influenced interconnects like Firewire and to a lesser extent
Thunderbolt, although it would basically never be successful. In a different
world, the new iMac purchaser that thought USB and Ethernet were the same thing
might have been correct. Like the proverbial mad prophet, they have a rare
insight into a better way.
For all of my optimism about AppleBus, it was barely more successful than
AppleNet. AppleBus, by that name, came and went with barely any actual computer
inter-networking applications. AppleBus was short-lived to the extent that it
is barely worth mentioning, except for the interesting fact that AppleBus
development as a computer network was heavily motivated by the LaserWriter.
One of the first laser printers on the market, the LaserWriter was formidably
expensive. AppleBus was the obvious choice of interface since it was well
supported by the Macintosh, but the ability to share the LaserWriter between
multiple machines seemed a business imperative to sell them. Ipso facto, the
peripheral interconnect had to become a network interconnect.
In 1985, Apple rebranded the AppleBus effort to AppleTalk, which will likely be
familiar to many readers. Although AppleTalk was a fairly complete rebrand and
significantly more successful than AppleBus, it was technically not much
different. The main component of AppleTalk outside of the built-in serial
controller was an external adapter box which performed some simple level
conversions to allow for a longer range. Unfortunately this simple design led
to limitations, the major one being a speed of just 230kbps which was decidedly
slow even for the time.
As early as the initial development of AppleBus, token ring seemed to be the
leading direction for microcomputer networking and, indeed, Apple had been
involved in discussions with IBM over use of token ring for the Macintosh.
Unfortunately for IBM, their delays in having token ring ready for market led
somewhat directly to Apple's shift towards Ethernet. Since token ring was not
an option for the LaserWriter, Apple pushed forward with serial AppleBus for
several years, by which point it became clear that Ethernet would be the
victor. In 1987, AppleTalk shifted to Ethernet as a long-range physical layer
(called EtherTalk) and the formerly-AppleBus serial physical layer was
rebranded as LocalTalk.
AppleTalk was a massive success. For fear of turning this entire post into a
long discussion of Apple networking history, I will now omit most of the fate
of AppleTalk, but LocalTalk remained in use into the late '90s even as IP over
Ethernet became the norm. AppleTalk was for a time the most widely deployed
microcomputer networking protocol, and a third-party accessory ecosystem
flourished that included alternate physical media, protocol converters,
switches, etc. Of course, this all inevitably fell out of use for
inter-networking as IP proliferated.
The summation of this history is that Apple has offered networking for their
computers from the mid-'80s to the present. But what of network applications?
One of the great selling points of AppleTalk was its low-setup,
auto-configuring nature. As AppleTalk rose to prominence, Apple broke from
conventional NOS vendors by keeping to more of a peer-to-peer logical
architecture where basic services like file sharing could be hosted by any
Macintosh. This seems to have been an early manifestation of Apple's dominance
in zero-configuration network protocols, which is perhaps reduced today by
their heavy reliance on iCloud but is still evident in the form of Bonjour.
This is not at all to say that Apple did not introduce a server product,
although Apple was slow to the server market, and the first AppleTalk file and
print servers were third-party. In 1987, Apple released their own effort:
AppleShare. AppleShare started out as a file server (using the AFP protocol
that Apple designed for this purpose), but it gained print and mail support,
which put it roughly on par with the NOS landscape at the time.
But what of directory services? Apple lagged on development of their own
directory system, but had inherited NetInfo from their acquisition of NeXT.
I've had a hard time finding much about NetInfo, and apparently the one fact
I used to state here was wrong. It seems to have been used in universities but
not especially widely.
The story of Apple directory services becomes clearer with Apple Open
Directory, an LDAP-based solution introduced in 2002. Open Directory is largely
similar to competitors like MS AD and Red Hat IDM. As a result of the move to
open standards, it has occasionally, with certain versions of the relevant
operating systems, been possible to use OS X Server as an Active Directory
domain controller or join OS X to an Active Directory domain. I have previously
worked with OS X machines joined to an Oracle directory, which was fine except
for all the Oracle parts and the one time that Apple introduced a bug where OS
X didn't check users' passwords anymore. Haha, wacky hijinx!
As I mentioned, Apple has been losing interest in the enterprise market. The
primary tool for management of Apple directory environments, Workgroup Manager,
was retired around 2016 and has not really received a replacement. It is still
possible to use Open Directory, but no longer feels like a path that Apple
intends to support.
That of course brings us naturally to the entire topic of OS X Server and the
XServe hardware. Much like Windows Server, for many years OS X Server was a
variant of the operating system that included a variety of pre-configured
network services ranging from Open Directory to Apache. As of now, OS X Server
is no longer a standalone product and is instead offered as a set of apps to be
installed on OS X. This comes along with the end of the XServe hardware in
2011, meaning that it is no longer possible to obtain a rack-mount system which
legitimately runs OS X.
The industry norm now appears to be affixing multiple Mac Minis to a 19" sheet
of plywood with plumber's tape. And that, right there, is a summary of Apple's
place in the enterprise market. In actuality, Apple has come around somewhat
and now offers an official rack ear kit for the Mac Pro. That said, it's 5U for
specifications that other vendors fit in 1U (save the GPU which is not of
interest to conventional NOS applications), and lacks conventional server
usability features like IPMI or even a cable arm. In general, Apple continues
to be uninterested in the server market, even to support their own
workstations, and really offers the Mac Pro rack option only for media
professionals who have one of the desks with 19" racks integrated.
I originally set out to write this post about Apple's partnership with IBM,
which has delightful parallels to Microsoft's near simultaneous partnership
with IBM as both were attempting to develop a new operating system for the
business workstation market. I find great irony and amusement in the fact that
both Microsoft, which positioned itself as Not IBM, and Apple, which positioned
itself as even more Not IBM, both spent an appreciable portion of their
corporate histories desperately attempting to court IBM. While neither was
successful, Apple was even less successful than Microsoft. Let's take a look at
that next time---although I am about to head out on a vacation and might either
not write for a while or end up writing something strange related to said trip.
 Amusingly, despite many efforts by vendors, local file and printer sharing
can still be a huge pain if there are heterogeneous product families involved.
The more things change, the more they stay the same, which is to say that we
are somehow still fighting with SMB and NFS.
 AppleBus, AppleNet, and AppleTalk all refer to somewhat different things,
but for simplicity I am calling it AppleNet until the AppleTalk name was
introduced. This is really just for convenience and because it matches the
terminology I see other modern sources using. The relationship between AppleBus
and AppleNet was as confusing at the time as it is now, and it is common for
'80s articles to confuse and intermix the two.
 I would absolutely be using one of these if I had room for one.
Update: A reader sent me a correction. I had said that NetInfo functioned on
AppleTalk, but it did not, it was IP only. The reader also raised questions
about the capability of OS X Server to function as an Active Directory DC.
Apple does state this in the sell sheet for OS X Server 10.4, but I agree
that there are reasons to question how extensive that capability was,
and it seems to have disappeared shortly thereafter. I'm going to do some
more research into that aspect.
Let's return, for a while, to the green-ish-sometimes pastures of GUI systems.
To get to one of my favorite parts of the story, delivery of GUIs to terminals
over the network, a natural starting point is to discuss an arcane, ancient
GUI system that came out of academia and became rather influential.
I am referring of course to the successor of W on V: X.
Volumes could be written about X, and indeed they have. So I'm not intending to
present anything like a thorough history of X, but I do want to address some
interesting aspects of X's design and some interesting applications. Before we
talk about X history, though, it might be useful to understand the broader
landscape of GUI systems on UNIX-like operating systems, because so far we've
talked about the DOS/VMS sort of family instead, and there are some significant
differences.
Operating systems can be broadly categorized as single-user and multi-user.
Today, single-user operating systems are mostly associated only with embedded
and other very lightweight devices. Back in the 1980s, though, this divide
was much more important. Multi-user operating systems were associated with "big
iron," machines that were so expensive that you would need to share them for
budget reasons. Most personal computers were not expected to handle multiple
users, over the network or otherwise, and so the operating systems had no
features to support this (no concept of user process contexts, permissions,
and so on).
Of course, you can likely imagine that the latter situation, single-user
operating systems, made the development of GUI software appreciably easier.
It wasn't even so much about the number of users, but rather about where they
were. Single-user operating systems usually only supported someone working
right at the console, and so applications could write directly to the graphics
hardware. A windowing system, at the most basic, only really needed to concern
itself with getting applications to write to the correct section of the
framebuffer.
On a multi-user system, on the other hand, there were multiple terminals
connected by some method or other that almost certainly did not allow for
direct memory access to the graphics hardware. Further, the system needed to
manage what applications belonged on which graphics devices, as well as the
basic issue of windowing. This required a more complicated design. In
particular, server-client systems were extremely in at the time because they
had the same general shape as the computer-terminal architecture of the system.
This made them easier to reason about and implement.
So, graphics systems written for multi-user systems were often, but not always,
server-client. X is no different: the basic architecture of X is that of a
server (running within the user context generally) that has access to a
graphics device and input devices (that it knows how to use), and one or more
clients that want to display graphics. The clients, which are the applications
the user is using, tell the server what they want to display. In turn, the
server tells the clients what input they have received. The clients never
interact directly with the display or input hardware, which allows X to manage
multiple access and to provide abstraction.
While X was neither the first graphics system for multi-user operating systems,
nor the first server-client graphics system, it rapidly spread between academic
institutions and so became a de facto standard fairly quickly. X's
dominance lasts nearly to this day; Wayland, which is based on essentially the
same architecture, has only recently begun to exceed it in popularity.
X has a number of interesting properties and eccentricities. One of the first
interesting things many people discover about X is that its client-server
nature actually means something in practice: it is possible for clients to
connect to an X server running on a different machine via network sockets.
Combined with SSH's ability to tunnel arbitrary network traffic, this means
that nearly all Linux systems have a basic "remote application" (and even full
remote desktop) capability built in. Everyone is very excited when they first
learn this fact, until they give it a try and discover that the X protocol is
so hopelessly inefficient and modern applications so complex that X is utterly
unusable for most applications over most internet connections.
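You can see the network transparency directly with a few lines of python-xlib
(assumptions here: the python-xlib package is installed, and $DISPLAY names a
reachable X server, whether local or an SSH-forwarded tunnel). The client
below neither knows nor cares whether the server is on the same machine:

```python
from Xlib import X, display

d = display.Display()  # connects to whatever $DISPLAY names,
screen = d.screen()    # possibly over a network socket
win = screen.root.create_window(
    0, 0, 300, 100, 1, screen.root_depth,
    background_pixel=screen.white_pixel,
    event_mask=X.ExposureMask | X.KeyPressMask)
win.map()   # ask the (possibly remote) server to display the window
d.sync()

while True:  # the server sends input back to the client as events
    event = d.next_event()
    if event.type == X.KeyPress:
        break
```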
This gets at the first major criticism of X: the protocol that clients use to
describe their output to X is very low level. Besides making the X protocol
fairly inefficient for conveying typical "buttons and forms" graphics, X's lack
of higher-level constructs is a major contributor to the inconsistent
look-and-feel and interface of typical Linux systems. A lot of basic
functionality that feels cross-cutting, like copy-and-paste, is basically
considered a client-side problem (and you can probably see how this leads to
the situation where Linux systems commonly have two or three distinct copy
and paste buffers).
But, to be fair, X never aimed to be a desktop environment, and that type of
functionality was always assumed to occur at a higher level.
One of the earliest prominent pseudo-standards built on top of X was Motif,
which was used as a basis for the pseudo-standard Common Desktop Environment
presented by many popular UNIX machines. Motif was designed in the late '80s
and shows it, but it was popular, and it both laid groundwork and matched
existing designs (the Apple Lisa, etc.) to such an extent that a modern user
wouldn't have much trouble using a Motif system.
Motif could have remained a common standard into the modern era, and we can
imagine a scenario where Linux had a more consistent look-and-feel because
Motif rose to the same level of universality as X. But it didn't. There are a
few reasons, but probably the biggest is that Motif was proprietary and not
released as open source until well after it had fallen out of popularity. No
one outside of the big UNIX vendors wanted to pay the license fees.
There were several other popular GUI toolkits built on top of X in the early
days, but I won't spend time discussing them for the same reason I don't care
to talk about Gnome and KDE in this post. But rest assured that there is a
complex and often frustrating history of different projects coming and going,
with few efforts standing the test of time.
Instead of going on about that, I want to dig a bit more into some of the less
discussed implications of X's client-server nature. The client and server could
exist on different machines, and because of the hardware-independence of the X
protocol could also be running different operating systems, architectures, etc.
This gave X a rather interesting property: you could use one computer to "look
at" X software running on another computer almost regardless of what the two
computers were running.
In effect, this provided one of the first forms of remote application delivery.
Much like textual terminals had allowed the user to be physically removed from
the computer, X allowed the machine that rendered the application and collected
input to be physically removed from the actual computational resources running
the software. In effect, it created a new kind of terminal: a "dumb" machine
that did nothing but run an X server, with all applications running on another
machine.
The terminology around this kind of thing can be confusing and is not well
agreed upon, but most of the time the term "thin terminal" refers to exactly
this: a machine that handles the basic mechanics of graphical output and user
input but does not run any application software.
Because of the relatively high complexity of handling graphics outputs, thin
terminals tend to be substantially similar to proper "computers," but have very
limited local storage (usually only enough for configuration) and local
processing and memory capacity that are just enough to handle the display task.
They're like really low-end computers, basically, that just run the display
server part of the system.
In a way that's not really very interesting, as it's conceptually very similar
to the block terminals used by IBM and other mainframes. In practice, though,
this GUI foray into terminals took on some very odd forms over the early
history of personal computers. The terminal was decidedly obsolete, but was
also the hot new thing.
Take, for example, DESQview. DESQview was a text-mode GUI for DOS that I
believe I have mentioned before. After DESQview, the same developer,
Quarterdeck, released DESQview/X. DESQview/X was just one of several X servers
that ran on DOS. X was in many ways a poor fit for DOS, given DOS's single task
nature and X's close association with larger machines running more capable
operating systems, but it was really motivated by cost-savings. DOS was cheap,
and running an X server on DOS allowed you to both more easily port
applications written for big expensive computers, and to use a DOS machine as
an X server for an application running on another machine. The cheap DOS PC
became effectively a hybrid thin terminal that could both run DOS software and
"connect to" software running on a more expensive system.
One way to take advantage of this functionality was reduced-cost workstations.
For example, at one time years ago the middle school I attended briefly had a
computer lab which consisted of workstations (passed down from another better
funded middle school) with their disks removed. The machines booted via PXE
into a minimal Linux environment. The user was presented with a specialized
display manager that connected to a central server over the network to start
a desktop environment.
The goal of this scheme was reduced cost. In practice, the system was dog
slow, and the unfamiliarity of KDE, StarOffice, and the other common
applications
on the SuSE server was a major barrier to adoption. Like most K-12 schools at
the time the middle school was already firmly in the grasp of Apple anyway.
Another interesting aspect of X is the way it relates to the user model. I will
constrain myself here to modern Linux systems, because this situation has
varied over time. What user does X run as?
On a typical Linux distribution (that still uses X), the init system starts a
display manager as root, which then launches an X server to use, still with
root privileges. The terminology for desktop things can get confusing, but a
display manager is responsible for handling logins and setting up graphical
user sessions. It's now common for the display manager to actually run as its
own user to contain its capabilities (by switching users after start), so it
may use setuid in order to start the X server with root capabilities.
Once a user authenticates to the display manager, a common display manager
behavior is to launch the user's desktop environment as the user and then hand
it the information necessary to connect to the existing X instance. X runs as
root the whole time.
It is completely possible to run X as a non-privileged user. The problem is
that X handles all of the hardware abstraction, so it needs direct write access
to hardware. For numerous reasons this is typically constrained to root.
You can imagine that the security concerns related to running X as root are
significant. There is work to change this situation, such as the kernel mode
setting feature, but of course there is a substantial problem of inertia: since
X has always run with root privileges, a great many things assume that X has
them, so there are a lot of little problems that need solving and this usage is
not yet well supported.
This puts X in a pretty weird situation. It is a system service because it
needs to interact directly with scarce hardware, but it's also a user
application because it is tied to one user session (ultimately by means of
password authentication, the so-called X cookie). This is an inherent tension
that arises from the nature of graphics cards as devices that expect very
low-level interaction. Unfortunately, continuously increasing performance
requirements for graphical software make it very difficult to change this
situation... as is, many applications use something like Mesa to actually
bypass the X server and talk directly to the graphics hardware instead.
I am avoiding talking about the landscape of remote applications on Windows,
because that's a topic that deserves its own post. And of course X is a fertile
field for technology stories, and I haven't even gotten into the odd politics of
Linux's multiple historic X implementations.
 Windows looks and feels like a single-user operating system to the extent
that I sometimes have to point out to people that Windows NT releases are fully
multi-user. In fact, in some ways Windows NT is "more multi-user" than Linux,
since it was developed later on and the multi-user concept is more thoroughly
integrated into the product. Eventually I will probably write about some
impacts of these differences, but the most obvious is screen locking: on Linux,
the screen is "locked" by covering it with a window that won't go away. On
Windows, the screen is "locked" by detaching the user session from the local
console. This is less prone to bypasses, which perennially appear in Linux
screen lockers.
 The history of X, multi-user operating systems from which UNIX inherited,
and network computing systems in general is closely tied to major projects at a
small number of prominent American universities. These universities tended to
be very ambitious in their efforts to provide a "unified environment" which
led to the development of what we might now call a "network environment," in
the sense of shared resources across an institution. The fact that this whole
concept came out of prominent university CS departments helps to explain why
most of the major components are open source but hilariously complex and
notoriously hard to support, which is why everyone today just pays for
Microsoft Active Directory, including those universities.
 Given the project's small budget, I think the server was just under-spec'd
to handle 30 sessions of Firefox at once. The irony is, of course, that as
computers have sped up web browsers have as well, to the extent that running
tens of user sessions of Firefox remains a formidable task today.
Note: several corrections made, mostly minor, thanks to HN users smcameron,
yrro, segfaultbuser. One was a larger one: I find the startup process around X
to be confusing (you hopefully see why), and indeed I described it wrong. The
display manager starts first and is responsible for starting an X server to
use, not the other way around (of course, when you use xinit to start X after
login, it goes the way I originally said... I didn't even get into that).
The strategic and tactical considerations surrounding nuclear weapons went
through several major eras in a matter of a few decades. Today we view the
threat of nuclear war primarily through the "triad": the capability to deliver
a nuclear attack from land, sea, and air. This would happen primarily through
intercontinental ballistic missiles (ICBMs), so-called because they basically
launch themselves to the lower end of space before strategically falling
towards their targets (ballistic reentry). ICBMs are fast, taking about 30
minutes to arrive across the globe. The result is that we generally expect to
have very little warning of a nuclear first strike.
The situation in the early Cold War was quite different. ICBMs, and long-range
missiles in general, are complex and took some time to develop. From the end of
World War II to roughly the late '60s, the primary method of delivery for
nuclear weapons was expected to be by air: bombs, delivered by long-range
bombers. The travel time from the Soviet Union would be hours, allowing
significant warning and a real opportunity for air defense intervention.
The problem was this: we would have to know the bombers were coming.
Many people seem to assume that the United States has the capability to detect
and track all aircraft flying in US airspace. The reality is quite a bit
different. The problem of detecting and tracking aircraft is a surprisingly
difficult one, and even today our capabilities are limited. Nonetheless,
surveillance of airspace is considered a key element of "air sovereignty," or
our ability to maintain military and civil control of our airspace.
Let's take a look at the history of the United States' ability to monitor our
airspace.
During the 1940s, it was becoming clear that airspace surveillance was an
important problem. Although the United States did not then face attacks on the
contiguous US (and never would), were the Axis forces to advance to the point
of bombing missions on CONUS it would be critical to be able to detect the
incoming aircraft. The military invested in a system for Aircraft Control and
Warning, or AC&W. Progress was slow: long-range radar was primitive and
expensive, and the construction of AC&W stations was not a high-level priority.
By the end of 1948, there were only a small number of AC&W stations; they
were considered basically experimental, and the ability to integrate data
coming from the several stations was very limited. Efforts to expand the air
surveillance system routinely failed due to lack of funding.
The Lashup project, launched in '48, was the first major effort to build an
air surveillance system. As the name suggests, Lashup was only intended to be
temporary, funded by Congress as a stopgap measure until a more complete
system could be designed. Over the next two years, 44 radar stations were built
focused around certain strategically important areas. Lashup provided nothing
near nationwide coverage, but was expected to detect bombing runs directed
towards the most important military targets. Lashup included three stations
surrounding my own Albuquerque, due to the importance of the Sandia and Manzano
Army Bases and the Z Division of Los Alamos.
Lashup sites used sophisticated radar sets for the time, but perhaps the most
important innovation of Lashup was the command and control infrastructure built
around it. Lashup stations were connected to the air defense command by
dedicated telephone lines, the air defense command was connected to the
continental air command by another dedicated telephone line, and ultimately
dedicated lines were connected all the way to the White House. This was the
first system built to allow a prompt nuclear response by informing the
commander in chief of an impending nuclear attack as quickly as possible.
If you've read any of my other material on the cold war, you might understand
that this is the core of my fascination with cold war defense history: the
threat of nuclear attack was, for the large part, the first thing to motivate
the development of a nationwide rapid communications system. For the first half
of the 20th century there simply wasn't a need to deliver a message from the
west coast to the President in minutes, but in the second half of the 20th
century there most definitely was.
The fear of a Soviet nuclear strike, and the resulting government funding, was
perhaps the largest single motivator of progress in communications and
computing technology from the 1940s to the 1980s. Most of the communications
technology we now rely on was originally built to meet the threat of a first
strike.
We see this clearly in the case of air defense radar. While Lashup nominally
had the capability to deliver a prompt warning of nuclear attack, the entire
process was rather manual and thus not very reliable. Fortunately, Lashup was
temporary, and just as construction of the Lashup sites was complete work
started on its replacement: the Permanent System.
The Permanent System consisted of a large number of radar stations, ultimately
over 100. More importantly, though, it consisted of a system of communications
and coordination centers intended to quickly confirm and communicate a nuclear
attack.
It will help in understanding this system to understand the strategic principle
involved. The primary defense against a nuclear attack by bombers was a process
called ground-controlled intercept, or GCI. The basic concept of GCI was that
radar stations would provide up-to-date position and track information on
inbound enemy aircraft, which would be used to vector interceptor aircraft
directly towards the threat. The aid of ground equipment was critical to an
effective response, as fighter aircraft of the time lacked sophisticated
targeting radar and had no good way to search for bombers.
To this end, the Permanent System included Manual Air Defense Control Centers
(ADCC) (the "manual" was used to differentiate from automatic centers in the
later SAGE system). The ADCCs received information on radar targets from the
individual radar sites via telephone, and plotted them with wet erase marker on
clear plexiglass maps (perhaps the source of the clear whiteboard trope now
ubiquitous in films) in order to correlate multiple tracks. They then reported
these summarized formations and tracks to the Air Defense Command, at Ent AFB
in Colorado, for use in directing interceptors.
The Permanent System was extended beyond CONUS, although Alaska continued to
have a distinct air defense program. The biggest OCONUS extension of the
Permanent System was into Canada, with the Pinetree Line (the first of the
cross-Canadian early warning radar networks) roughly integrated into the
Permanent System. Perhaps most interestingly, the Permanent System also saw an
early effort at extension of early warning radar into the ocean. This took the
form of the Texas Towers, a set of three awkward offshore radar stations that
were later abandoned due to their poor durability against rough seas.
Technology was advancing extremely rapidly in the mid-20th century, and by the
time the Permanent System reached nearly 200 radar stations it had also become
nearly obsolete. For its vast scale, the capabilities of the Permanent System
were decidedly limited: it could only detect large aircraft, it performed
poorly at low altitudes (often requiring mitigation through "gap filler"
stations), and interpretation and correlation of radar data was a manual
process, costing precious minutes in the timeline of a nuclear reprisal.
Here in Albuquerque, Kirtland Air Force Base was host to the Kirtland Manual
ADCC, activated in 1951. Thirteen radar stations around New Mexico, eastern
Arizona, and western Texas reported to Kirtland AFB. Each of these stations was
itself a manned Air Force Station including housing and cantonment. The
Continental Divide Air Force Station, for example, consisted of some fifty
people in remote McKinley County. The station included amenities like a library
and gym, housing and a trailer park, and two radars: an early warning radar and
a height-finding radar. Finally, a ground-air transmit-receive (GATR) radio
site provided a route for communications with interceptors.
Continental Divide AFS was deactivated in 1960. You can still see the remains
today, although there is little left other than roads and some foundations.
Like Continental Divide AFS, the Permanent System as a whole failed to make it
even a decade. In 1960, it was as obsolete as Lashup, having been replaced not
only by improved radar equipment but, more importantly, by a vastly improved
communications and correlation system: the Semi-Automatic Ground Environment,
or SAGE---by most measures, the first practical networked computer system.
We'll talk about SAGE later, but for now, check out a list of Permanent System
sites. There might be one in your
area. Pay it a visit some time; in many ways it's the beginning of the computer
revolution: a manual data collection network obsoleted in just a few years by
the development of the first nationwide computer network.
 The Texas Towers were connected to shore via troposcatter radio links, one
of my favorite communications technologies and something that will surely get
a full post in the future.
The use of electronics to administer elections has been controversial for some
time. Since the "hanging chads" of the 2000 election, there's been some degree
of public awareness of the use of technology for voting and its possible
impacts on the accuracy and integrity of the election. The exact nature of the
controversy has been through several generations, though, reflecting both
changes in election technology and changes in the political climate.
Voting is a topic of great interest to me. The administration of elections is
critical to a functioning democracy, and raises a variety of interesting
security and practical challenges. In particular, the introduction of
automation into elections presents great opportunity for cost savings and
faster reporting, but also a greater risk of intentional and accidental
interference in the voting process. Back when I was in school, I focused some
of my research on election administration. Today, I continue to research the
topic, and have added the practical experience of being a poll worker in two
states and for many elections.
Given my general propensity to have opinions, it will come as no surprise that
this has all left me with strong opinions on the role of computer technology in
election administration. But before we get to any of that, I want to talk a bit
about the facts of the matter.
The thing that most frustrates me about controversies surrounding electronic
voting is the generally very poor public understanding of what electronic
voting is. If you follow me on Twitter, you may have seen a thread about this
recently, and it's a ramble I go on often. There is a great deal of public
misconception about the past, present, and future role of electronics in
elections. These misunderstandings constantly taint debate about electronic
voting.
In an on-and-off series of posts, I plan to provide an objective technical
discussion of election technology, "electronic voting," and security concerns
surrounding both. I will largely not be addressing recent "stolen election"
conspiracy theories for a variety of reasons, but will undoubtedly touch on
them occasionally. At the very least, because I can never turn down an
opportunity to talk about J. Hutton Pulitzer, an amazing wacko who has a
delightful way of appearing with a huge splash, making a fool of himself, and
then disappearing... to pop up again a couple years later in a completely
different context.
I will restate that my goal here is to remain largely apolitical (mocking J.
Hutton Pulitzer aside), and as a result I will not necessarily respond to any
given election fraud or interference claim directly. But I do think anyone
interested in or concerned by these theories will find the technical context
that I can provide very useful.
Who runs elections?
One of the odd things about the US, compared to other countries, is the general
architecture of election administration. In the US, elections are mostly
administered by the county clerk, and the election process is defined by state
law. Federal law imposes only minimal requirements on election administration,
leaving plenty of room for variation between states.
Although election administration is directly performed by the county clerk, for
state-level elections (which is basically all the big ones) the secretary of
state performs many functions. It's also typical for the secretary of state to
provide a great deal of support and policy for the county clerks. So, while
county clerks run elections, it's common for them to do so using equipment,
software, and methods provided by the state. It's ultimately the responsibility
of states to pay for elections, which is probably the greatest single problem
with US election integrity, because states are poor.
While it seems a little odd that, say, a presidential election is run by the
county clerks, it can also be odd the other way. Entities like municipalities,
school districts, higher education districts, flood control districts, all
kinds of sub-county entities may also have elected offices and the authority to
issue bond and tax measures. These are typically (but not always) administered
by the county or counties as well, usually on a contract basis.
What is electronic voting?
Debate around electronic voting tends to focus purely around "voting machines,"
a broad category that I will define more later. The reality is that voting
machines are only a small portion of the overall election apparatus, and are
not always the most important part. So before I get into the world of election
security theory, I want to talk a bit about the moving parts of an election,
and where technology is used.
The general timeline of an election looks like this:
Registration of voters
Registration/certification of candidates
Preparation of ballots
Preparation of pollbooks
Election day: use of pollbooks, issuance of ballots, collection of ballots,
possibly continuous tallying.
Election night: rapid reporting of totalized results
Canvassing: review of problem ballots, investigation of provisional ballots,
final tallying of votes.
Certification: final audit and approval of election process and results.
To meet these ends, election administrators use various different systems.
There's a great deal of mix-and-match between these systems; many vendors offer
a "complete solution," but it's still common for election administrators to use
products from multiple vendors.
Voter registration management system (long-term)
Ballot printing system
Ballot marker or direct-recording electronic machine
Totalizing/tallying system (election management system)
Canvassing support systems - ballot adjudication, bulk scanning, etc.
Each of these systems poses various integrity and security concerns. However,
election systems can be roughly divided into two categories: tabulating systems
and non-tabulating systems.
Tabulating systems, such as tabulators and direct recording electronic (DRE)
machines, directly count votes which they record in various formats for later
totalization. Tabulating systems tend to be the highest-risk element of an
election because they are the key point at which the outcome of an election
could be altered by, for example, changing votes.
Non-tabulating systems perform support functions such as design of ballots,
registration of voters, and totalizing of tabulated votes. These systems tend
to be less security critical because they produce artifacts which are
relatively easy to audit after the fact. For example, a fault in ballot design
will be fairly obvious and easy to check for. Similarly, totalizing of
tabulated votes can fairly easily be repeated using the original output of
the tabulators (and tabulators typically output their results in multiple
independent formats to facilitate this verification).
This is not to say that tabulating systems are not subject to audit. When a
paper form of the voter's selections exists (a ballot or paper audit trail),
it's possible to manually recount the paper form in order to verify the
correctness of the tabulation. However, this is a much more labor intensive and
costly operation than auditing the results of other systems. In the case of DRE
systems with no paper audit trail, an audit may not be possible.
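Conceptually the audit is nothing more than a per-precinct comparison of the
machine tabulation against an independent hand count; the comparison is
trivial in software, and it's producing the hand count that's expensive. A
minimal sketch, with invented data:

```python
machine_totals = {"Precinct 1": {"Candidate A": 412, "Candidate B": 388},
                  "Precinct 2": {"Candidate A": 230, "Candidate B": 241}}
hand_counts    = {"Precinct 1": {"Candidate A": 412, "Candidate B": 388},
                  "Precinct 2": {"Candidate A": 229, "Candidate B": 241}}

# Flag any precinct/candidate pair where the tabulator and the hand
# count disagree; real audits also define how large a discrepancy
# triggers escalation to a full recount.
for precinct, totals in machine_totals.items():
    for candidate, machine in totals.items():
        hand = hand_counts[precinct][candidate]
        if machine != hand:
            print(f"{precinct} / {candidate}: machine {machine}, hand {hand}")
```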
We will be discussing all of these systems in more detail in the future.
Why electronic voting?
There is one fundamental question about electronic voting that I want to
address up front, in this overview. That is: why electronic voting at all?
Most of the fervor around electronic voting has centered around direct
recording electronic (DRE) machines that lack a voter verifiable paper audit
trail (VVPAT). These machines, typically touchscreens, record the voter's
choices directly to digital media without producing any paper form. As a
result, there is typically no acceptable way to audit the tabulation performed
by these machines. Software bugs or malicious tampering could result in an
incorrect tabulation that could not be readily detected or corrected after the
fact.
It's fairly universally accepted that these machines are a bad idea. Basically
no one approves of them at this point. So why are they so common?
Well, this is the first major misconception about the nature of electronic
voting: DRE machines with no VVPAT are rare. Only ten states still use them,
and most of those states only use them in some polling places. Year by year,
the number of DRE w/o VVPAT machines in use decreases as they are generally
being replaced with other solutions.
The reason is simple: they are extremely unpopular.
So why did anyone ever have DRE machines? And why do we use machines at all
instead of paper ballots placed in a simple box?
The answer is the Help America Vote Act of 2002 (HAVA). The HAVA was written
with a primary goal of addressing the significant problems that occurred with
older mechanical voting systems in the 2000 election, including accessibility
problems. Accessibility is its biggest enduring impact: the HAVA requires that
all elections offer a voting mechanism which is accessible to individuals with
various disabilities including impaired or no vision.
In 2002, there were few options that met this requirement.
The other key ingredient is, as we discussed earlier, the nature of election
administration in the US. Elections are not just administered but funded at the
state and county level. State budgets for elections have typically been very
slim, and suddenly, in 2002, most states faced a requirement that
they replace their voting systems.
The result was that, in the years shortly after 2002, basically the entire
United States replaced its voting systems on a shoestring budget. Many states
were forced to go for the cheapest possible option. Because paper handling adds
an appreciable amount of complexity, the cheapest option was to do it in
software: "paperless," or non-auditable, DRE machines.
To the extent that DRE w/o VVPAT machines are still in use in 2021, we are still
struggling with the legacy of the HAVA's good intentions combined with the US's
decentralized and tiny budget for the fundamental administration of democracy.
We don't have non-auditable voting systems because someone likes them. We have
them because they were all we could afford in 2003, and because we haven't since
been able to afford to replace them.
Basically the entire electronic voting landscape revolves around this single
issue: there is enormous pressure in the US to perform elections as cheaply as
possible, while still meeting sometimes stringent but often lax standards.
The driver on selection of election technology is almost never integrity, and
seldom speed or efficiency. It is nearly always price.
In upcoming posts, I will be expanding on this with (at least!) the following
topics:
The philosophy of the "Australian" or "Massachusetts" ballot
Tabulating systems - central tabulation vs precinct tabulation vs DRE
Electronic pollbooks, voter identification, and ballot preparation
Administration of voter registration and the practical issues around access
to the polls
Election reporting ("unofficial" results) and canvassing ("official"
 I highly recommend that anyone with an interest in election administration
step up as a poll worker. You will learn more than you could imagine about the
practical considerations around elections.
 We will talk more about VVPAT and how it compares to a paper ballot in the
future.