I got mad at my commercial landlord for running shoddy political advertising
on their buildings and trying to block a homeless shelter and in general
being exceptionally bourgeois, and so didn't renew my office lease. My mailing
address has changed; the new one is in the footer and other normal places.
Righteous outrage is convenient like that. It's a PO box now, so the good news
is that I'll just keep it regardless of the office situation; the bad news is that I
will need to routinely enter one of the most depressing places on earth: a New
Mexico post office. They did fix the asbestos finally but we'll see about the
rat problem. And thanks to everyone who has sent me letters, and sorry for
taking so long to respond to them.
After taking a long time to overcome my electromagnetic hypersensitivity that
only reacts to SIP, I spent an afternoon fixing the PBX and will finally
resume fax delivery to the vanishingly small list of people who have requested
it. One day I'll write a post on T.38.
I have found that Apple Mail often rejects emails from my AWS SES setup. The
SMTP error directs me to a help page with absolutely no useful information,
so clearly they're learning from Google's expertise in running a major email
service. The funny thing is that I am having no delivery issues with gmail, but
I'm pretty sure if I change anything at this point I will. So if you use Apple
Mail, I'm sorry, for many reasons. Maybe try fax? I think certain LaserWriters
could take a fax modem, if you really want to stay in-ecosystem.
where we left off
the Emergency Alert System (EAS) had been "replaced," at least in name, by
IPAWS: the Integrated Public Alert and Warning System. In fact, it's more
accurate to say that EAS is now just one component of IPAWS, and the task of
originating alerts (and much of the bureaucracy) now rests on IPAWS.
IPAWS was particularly motivated by Hurricane Katrina, as this large-scale
disaster had made it apparent how limited the existing emergency alert
infrastructure was. A large portion of people do not receive EAS alerts because
they are not listening to the radio or watching television. There are other
avenues that exist to deliver alert information but the infrastructure was not
in place to get alerts into these channels.
So, IPAWS took the fragmented landscape of miscellaneous government
communications options and combined them into one beautiful, happy family that
works together in flawless harmony. Let's just pretend.
There are several major components of IPAWS which had existed, at least in some
form, prior to IPAWS but had not been unified into one network. These were
EAS, NAWAS, WEA, and NOAA Weather Radio. More ambitiously, IPAWS is intended to
be easily extensible to include other government and non-government alerting
systems, but first, let's talk about the core.
The EAS we have already discussed. Another emergency communications system
which dates back to the Cold War is NAWAS, the National Warning System.
Wikipedia asserts that NAWAS was established in 1978, but this can't be correct
as it's described in an AT&T standard a full decade earlier as an already
existing system, with much the same capabilities it has today. 1978 may have
been a significant overhaul of the system; it's hard to find out much about
NAWAS, as it was historically classified and remains obscure today.
NAWAS provides alerting, and more general communications, between government
authorities. It is essentially a system of four-wire leased telephone lines
that links FEMA and other federal locations with state emergency authorities.
Within states, there is typically a subsidiary NAWAS network for which the
state authority acts as control and local authorities are connected.
An older operating manual for NAWAS
has become public and you can read a great deal about it there, but the basic
concept is that it functions as an intercom system over which federal centers
such as NORAD or the National Weather Service can read voice messages, which
will be heard in all state emergency operations centers. This provides a very
rapid way of spreading basic information on a national emergency, and NAWAS
is both a descendant and component of systems intended to trigger air raid
sirens as quickly as possible after a NORAD alert (more about siren control
will likely be a future topic).
Although NAWAS has seen technical improvement in the equipment, it still
functions more or less the exact same way it did decades ago, and operating
procedures are very simple. If you have ever used a good-quality, multi-station
commercial intercom system with a visual alert feature, such as is often used
in the theater industry for cues, you would find NAWAS unsurprising... except
that the stations span thousands of miles.
NAWAS functions primarily as a party line intercom, but it does support dialing
between stations to alert a specific location to start listening. Dialing is
based on FIPS codes, and while that's not too strange of a choice from a
federal system in general, it's probably not a coincidence that NAWAS stations
are alerted using a similar numbering scheme to SAME headers... typically a
station like the NWS would be issuing EAS messages and calling state EOCs to
advise of the possible damage simultaneously.
The next core component of IPAWS in arbitrary Wikipedia ordering is WEA, the
Wireless Emergency Alert system. WEA is a long-in-development partnership
between the FCC and mobile carriers ("partnership" in the sense that
participation is now mandatory) which allows short, textual emergency alerts
to be sent to mobile phones throughout a region. This relies on a component of
the 3GPP
protocol stack that is not widely used (or really used at all) in the US, which
essentially allows a cellular tower to send a true "broadcast" message which
will be handled by every phone associated with that cell. In this way,
addressing is roughly geographical rather than based on station identities.
These broadcast messages trigger special handling in the cell phone operating
system, which generally feels a bit awkward and roughly implemented. Typically
the old EBS Attention Tone is used as an audible alert and the message is
displayed immediately over other applications.
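The difference between this and ordinary per-subscriber messaging can be put in a toy sketch. Everything here is invented for illustration (real Cell Broadcast lives down in the 3GPP protocol stack, not in application code): the network selects towers by geography, each selected tower transmits the message once, and every phone camped on it receives it. No subscriber identity appears anywhere.

```python
from dataclasses import dataclass, field

@dataclass
class Tower:
    """A cell site and the phones currently camped on it."""
    name: str
    lat: float
    lon: float
    phones: list = field(default_factory=list)

def cell_broadcast(towers, center, radius, message):
    """Toy model of cell broadcast: pick towers inside the target area,
    deliver one message per tower, received by every camped phone."""
    deliveries = []
    for tower in towers:
        # Addressing is purely geographic: is this tower in the area?
        if (tower.lat - center[0]) ** 2 + (tower.lon - center[1]) ** 2 <= radius ** 2:
            deliveries.extend((phone, message) for phone in tower.phones)
    return deliveries

downtown = Tower("downtown", 35.08, -106.65, ["phone-a", "phone-b"])
rural = Tower("rural", 36.50, -104.00, ["phone-c"])
out = cell_broadcast([downtown, rural], (35.1, -106.6), 0.5, "TORNADO WARNING")
```

Only the phones on the in-area tower get the message; the network never needs to know which phones those are.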
Use of WEA has traditionally been rather heavily restricted, in practice to
presidential alerts (e.g. the test conducted some years ago) and AMBER alerts.
One might think that there's sort of an odd disparity in severity, between
essentially "nuclear attack" and "child abducted somewhere in the same state,"
and indeed it is a major criticism of the AMBER alert system that emotionally-
motivated handling of AMBER alerts as top-priority induces alarm fatigue that
may lead to people ignoring or downplaying an actual nationwide civil
emergency. If you own a cell phone and live in a state that participates in
AMBER alerts you're probably inclined to agree, or maybe our child abduction
rates here in the land of enchantment are just substantially elevated.
The final major component is NOAA Weather Radio, more properly called NOAA
Weather Radio All Hazards and often referred to as NOAA All Hazards Radio.
This last one, which makes the most sense, is of course unofficial. A great
many US residents are amazingly unaware of the NOAA Weather Radio infrastructure,
which has been steadily expanded to substantial nationwide coverage. Weather
Radio normally transmits a computer-synthesized voice describing the current
weather and upcoming forecast, on one of a list of VHF frequencies around
162MHz. The full forecast generally repeats every fifteen minutes. This loop,
updated regularly, is occasionally supplemented by outlook statements and other special announcements.
When the NWS issues a weather warning or alert, however, Weather Radio stations
immediately play the alert with SAME headers and footers... much the same as
EAS. Special-purpose radio receivers, popular in tornado-prone regions, parse
the SAME headers and sound an audible alarm when an alert is issued for the
correct region. In fact, the SAME protocol was originally designed for this
purpose and was adopted for EAS after its widespread use for Weather Radio.
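The region-matching logic those receivers implement is simple enough to sketch. Assuming the standard PSSCCC layout of SAME location codes (P marks a portion of the county, 0 meaning the whole county; SS is the state FIPS code; CCC the county, with 000 meaning the entire state), a minimal version looks like:

```python
def alert_applies(receiver_fips: str, alert_locations: list[str]) -> bool:
    """Decide whether a SAME alert covers a receiver's configured county.

    receiver_fips is a 5-digit state+county FIPS code; each alert
    location is a 6-digit PSSCCC code.
    """
    state, county = receiver_fips[:2], receiver_fips[2:]
    for loc in alert_locations:
        loc_state, loc_county = loc[1:3], loc[3:6]
        # County 000 is a statewide alert; otherwise counties must match.
        if loc_state == state and loc_county in ("000", county):
            return True
    return False
```

A receiver in Bernalillo County, NM (FIPS 35001) would sound for `035001` and for a statewide `035000`, but stay silent for a Los Angeles County alert.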
The relationship between Weather Radio and EAS is substantial. Since the
development of EAS, Weather Radio stations now transmit all EAS alerts, not
just those issued by the NWS. This is why "All Hazards" was awkwardly appended
to the name: it functions as a general purpose emergency radio network,
complete with a ready supply of specialized alarm receivers. In a way it is the
NEAR concept deployed more successfully, but... well, success is relative.
Weather radio receivers are uncommon nationally, despite their low cost.
So these are the four basic channels of IPAWS: broadcast radio and television,
inter-agency telephone, cellular phones, and the dedicated radio network. IPAWS
allows an alert to be simultaneously, and quickly, issued to all of these
services. This is particularly important because WEA alerts, although they are
length constrained, can encourage people in affected areas to turn on a radio
to receive more extensive information via EAS.
All of that said, the full scope of IPAWS is considerably more ambitious, which
leads to IPAWS-OPEN. IPAWS-OPEN often gets rather grand descriptions, as if it
were some enterprise machine-learning blockchain artificial intelligence, but
I'm here to cut through the bullshit: it's just a set of servers that broker
XML documents.
Specifically, those XML documents are the Common Alerting Protocol, or CAP.
CAP is essentially the same concept as SAME but in XML form rather than FSK,
and including extensive capabilities to provide multiple representations of an
alert, intended for different languages and media. CAP supports encryption and
signing, which provides an authentication mechanism as well.
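A minimal CAP 1.2 alert can be sketched with Python's standard XML tooling. This is an illustrative subset of the schema, not a validated document, and the specific values (identifier, sender, area) are invented for the example:

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def make_cap_alert(identifier, sender, sent, event, headline, same_code):
    """Build a minimal, illustrative CAP 1.2 alert document as a string."""
    ET.register_namespace("", CAP_NS)

    def child(parent, tag, text=None):
        e = ET.SubElement(parent, f"{{{CAP_NS}}}{tag}")
        e.text = text
        return e

    alert = ET.Element(f"{{{CAP_NS}}}alert")
    for tag, text in [("identifier", identifier), ("sender", sender),
                      ("sent", sent), ("status", "Actual"),
                      ("msgType", "Alert"), ("scope", "Public")]:
        child(alert, tag, text)
    info = child(alert, "info")
    for tag, text in [("category", "Met"), ("event", event),
                      ("urgency", "Immediate"), ("severity", "Severe"),
                      ("certainty", "Observed"), ("headline", headline)]:
        child(info, tag, text)
    area = child(info, "area")
    child(area, "areaDesc", "Bernalillo County, NM")
    geocode = child(area, "geocode")
    child(geocode, "valueName", "SAME")  # same FIPS-based codes as EAS
    child(geocode, "value", same_code)
    return ET.tostring(alert, encoding="unicode")
```

Note the `geocode` block: CAP carries the same FIPS-derived location codes as SAME, which is what makes gatewaying between the two tractable.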
IPAWS-OPEN consists of servers which receive CAP documents and then distribute
them onwards. That's basically it, but it is designed to allow for flexible
expansion of IPAWS as a wide variety of alerting media can simply participate
in the IPAWS-OPEN network. For example, a state DOT's changeable message
highway signs could repeat alerts automatically if the control system's vendor
implemented an IPAWS-OPEN client.
Although IPAWS, in theory, fully integrates all alerting channels, this
obviously has not worked out in practice. Many agencies still operate
fundamentally different alerting systems, most notably the NWS which has an old
and extensive one, and so various sets of gateways, converters, and sometimes
manual processes are required for a message to cascade from IPAWS-OPEN to all
alerting channels. That said, in theory IPAWS will complete the EAS vision of
flexible origination and targeting. A state governor, for example, can take
full advantage of federal systems to deliver an emergency message to their
state by using a CAP origination tool to send the message into IPAWS-OPEN.
Public and private organizations are able to access IPAWS-OPEN either through
authority of a government agency or via a "public" (after extensive paperwork)
data feed. This can be used to put alerting wherever you want; the government
has somewhat comically pursued an internet-based alerting system, for example,
for well over a decade without any real progress made. There seems to have been
a somewhat fundamental misunderstanding of the way the internet is used, as
government officials have often imagined an internet alerting capability as
looking exactly like EAS on television stations---that is, the worst popup
ever. What the infrastructure to deliver that would look like has remained
mysterious, although perennial proposals have ranged from silly to alarming.
That said, Windows tray icon tools to pop up IPAWS alerts are out there,
digital signage vendors offer the capability to automatically display alerts,
and Google has tossed IPAWS into the Google Now pile. There is some progress,
but it is uneven and not often seen in the real world.
For reasons that are partly political and partly historical (that then turned
into political), the United States has surprisingly weak infrastructure for the
distribution of emergency information when compared to other developed nations.
Much of this is a simple result of the lack of a state-owned broadcasting
authority that operates domestic media. All national communications necessarily
pass through the complex network of commercial journalism; while this may have
ideological advantages it is not especially fast or reliable.
The trouble is that, in a way, any centralized, federally-operated system of
delivering information to a large portion of the citizenry would be perceived
as---and probably be---an instrument of propaganda, in violation of long-held
American principles. For this reason, it seems likely that we will always have
a fragmented and seldom-used alerting infrastructure.
On the other hand, much of the modern state of affairs---exemplified by the
ridiculous, years-long effort to deploy WEA---is a result of systematic
underfunding and
deprioritization of civil defense in the United States. For the nation with the
world's greatest defense budget and a very high, although not first-place,
military budget as portion of GDP, civil defense has always been an
afterthought. Our preparedness against emergency---whether natural, civil, or
warfare---has routinely been judged less important than offensive capability.
During the Cold War, this was a cause of a surprising amount of strife even
within the military. Robert McNamara, Secretary of Defense during the key
period of the 1960s, routinely argued that fallout shelters and relocation
preparations were a better investment than additional missiles or even missile
defense systems. Today, absent the specter of the Soviet Union's sausage ICBMs,
there is less interest in civil defense as a military strategy than ever.
Instead, most modern civil defense efforts are motivated by the political
embarrassment subsequent to a series of hurricanes, most notably Katrina.
Unfortunately, public and political reaction to these events tends to end up
down very strange rabbit holes and has seldom led to serious, systematic
review of civil defense capabilities. What political will has come about is
repeatedly captured by the defense industrial complex and transformed into yet
another acquisition project that costs billions and delivers next to nothing.
What I'm saying is that nothing is likely to change. A single successful
national presidential alert will continue to be regarded as a major
achievement, and the most capable, reliable technology will continue to be
mild evolutions of systems developed prior to 1980.
All of this pessimism aside, next time I return to the topic of civil defense I
would like to look at its most pessimistic aspect---the part that McNamara
believed to be worth the money. We'll learn about the Federal Relocation Arc
and the National Relocation Program. Naturally with a focus on telecom.
 An obviously interesting question is "what came before NAWAS?" It's hard to
say, and very likely there is no one answer, as the Civil Defense
Administration, DOD, and various state and regional authorities had all stood
up various private-line telephone networks. This includes federal initiatives
such as the "lights and bells" warning system by AT&T which are fairly well
documented, but also a lot of things only vaguely referred to by historians who
seem to actually know very little about the context. Case in point, this
piece from the
Kansas Historical Society which repeats the myth of the Washington-Moscow
hotline as a red phone while giving no useful information about the artifact.
It appears very much like an early 1A2 key system instrument, and the pre-911
emergency number sticker strongly suggests it was just used with the plain-old
telephone system. At the time, a red handset was commonly used to indicate a
"hotline" in the older sense of the term, that is, a no-dial point-to-point
link. This wasn't a feature of the 1A2 system but 1A2 did offer an intercom
feature that this phone may have been left connected to.
 In a four-wire telephone line, audio in and out (microphone and speaker)
are carried on separate pairs. This is generally superior and has long been
used within telephone exchanges and long-distance lines, because the "hybrid"
transformer which allows for both functions on one pair is a source of
distortion and is prone to issues with echos and signal path loops. Moreover,
it inevitably mixes the audio each way. On a typical telephone this just leads
to "sidetone" which is now considered a desirable property, but for an intercom
system with many stations simultaneously active it becomes a tremendous problem
as not just the signal but its poorer-quality "echo" from each hybrid
transformer ends up being amplified. Two-wire lines are generally run to homes
and businesses simply due to the lower materials cost, but for "large-area
intercom" systems such as NAWAS, four-wire connections are used. Really the
whole thing is somewhat technical and requires some EE, but in general
four-wire private lines tend to be used for either very quality-critical
applications (e.g. between radio studios) or intercom/squawk box installations
(e.g. between control rooms). Obviously intercom over private line is not very
common due to the high cost, but emergency operations are a common application.
This whole issue of two-wire vs. four-wire telephone connections becomes
extremely important in the broadcasting industry, where "hybrid" has its own
specific meaning to refer to a sort of "un-hybrid" transformer which separates
the inbound and outbound audio again to help isolate the voice of the host from
returning via the inbound telephone path. Of course doing this by simple
electrical means never works perfectly, and modern broadcast hybrids employ DSP
methods to further reduce the problem. This is all another reason that ISDN
telephones have found an enduring niche in radio journalism.
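The echo problem described above can be put in rough numbers. This is a deliberately crude first-order model (real trans-hybrid loss varies with line balance, and the leakage paths are not perfectly in phase): assume each station's hybrid leaks the talker's signal back at some fixed loss, and that on a bridged two-wire party line those leakage paths simply add.

```python
import math

def party_line_echo_db(n_stations, trans_hybrid_loss_db=20.0):
    """Toy worst-case echo level on a bridged two-wire party line,
    in dB relative to the talker, assuming each of n_stations'
    hybrids leaks at -trans_hybrid_loss_db and the leaks sum."""
    leak = 10 ** (-trans_hybrid_loss_db / 20)  # amplitude ratio per hybrid
    return 20 * math.log10(n_stations * leak)  # summed leakage, in dB
```

With a single far-end hybrid the echo sits a tolerable 20 dB down; with ten bridged stations the summed leakage reaches talker level in this toy model. That, in a nutshell, is why large-area intercoms like NAWAS are built four-wire.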
Weather alerts aren't always a matter of life and death; sometimes they're
simply practical. I've twice had cars damaged by the severe hail storms we are
prone to, and prompt attention to a severe thunderstorm alert gives an
opportunity to move cars under cover. Considering the cost of bodywork the
Weather Radio receiver can pay for itself very quickly.
A little while ago I talked about
CONELRAD, and how its active
denial component was essentially too complex to actually be implemented, so it
was reduced to only serving as an emergency broadcasting system. This is not to
say that CONELRAD was a failure, or at least not entirely. CONELRAD is the
direct ancestor of today's Emergency Alert System, which does serve an
important and useful role.
Like most government initiatives, though, it is tremendously complex and has
had a very rocky path to its present capability. Let's take a look at the post-
CONELRAD history of emergency broadcasting in the US, and how it works today.
It was not always obvious that radio was the best way to disseminate emergency
information. It had two main shortcomings: first, there were tactical
disadvantages to operating radio stations during a military emergency.
Second, receiving an alert by radio required that there be a radio turned on
somewhere nearby. This was not at all guaranteed, and in a case where minutes
mattered presented a significant problem.
"Minutes", after all, was generous. Military and Civil Defense officials
prominently demanded an alerting timeline (from origination to the entire
public) of just thirty seconds.
alternatives to radio
Two major alternate emergency warning strategies have existed to overcome these
downsides of radio: First, sirens. Sirens require no special equipment or
preparation to receive and so are an ideal wide-area alerting system, but they
were very expensive to maintain in the civil defense administration era (in
especially more sparsely populated areas, some sirens were even driven by
diesel engines... you can imagine the maintenance headaches). As a result,
while many larger towns and cities had siren systems at the peak of the Cold
War, today wide-area siren systems are uncommon outside of regions prone to
tornadoes, and more recently, parts of the West Coast due to tsunami hazard.
The second strategy is a wired system. We have previously talked about wired
radio in the
context of public broadcasting. A very limited wired broadcast system was
proposed for the US, called the National Emergency Alarm Repeater or NEAR. NEAR
consisted of a small box plugged into an outlet. In the case of an emergency,
an extra 270Hz tone was modulated onto the normal 60Hz AC power lines, which
would cause the NEAR 'repeaters' to sound a buzzer.
That's it. Not much of a broadcast system, really, but rather a supplement to
sirens that would allow coverage in rural areas and ensure that they were
clearly audible indoors.
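The signaling concept is easy to demonstrate digitally, even though the actual NEAR repeaters were simple analog tone detectors. The 270Hz figure is the one given above; the sample rate and pilot amplitude are arbitrary choices for the sketch. The Goertzel algorithm computes the signal power in a single DFT bin, which is exactly the "is the pilot tone present?" question:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power in the single DFT bin nearest target_hz."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)     # nearest integer bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def mains_waveform(seconds, rate, pilot=False):
    """Simulated 60Hz mains, optionally with a small 270Hz pilot."""
    return [math.sin(2 * math.pi * 60 * t / rate)
            + (0.1 * math.sin(2 * math.pi * 270 * t / rate) if pilot else 0.0)
            for t in range(int(seconds * rate))]

RATE = 8000
quiet = goertzel_power(mains_waveform(0.5, RATE), RATE, 270)
alert = goertzel_power(mains_waveform(0.5, RATE, pilot=True), RATE, 270)
```

The 270Hz bin power is enormous with the pilot present and essentially zero without it, despite the pilot being buried 20 dB under the mains fundamental.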
Although NEAR reached an early implementation stage, with testing in small
areas and manufacturing of repeaters underway, it was never deployed at large
scale. Radio emergency broadcasting was viewed as superior, mainly because of
the ability to deliver instructions. The problem of radio broadcasting not
reaching the many individuals who were not presently listening to the radio is,
to be honest, one that was never meaningfully addressed until the last few
years. But I am getting ahead of myself.
the Emergency Broadcasting System
In 1963, the Emergency Action Notification System (EANS) was activated. EANS is
almost exclusively referred to by its later name, the Emergency Broadcast
System, but it's important to know that it was originally named EANS. In the
context of the United States Government, "Emergency Action" has long been
specifically a euphemism for nuclear war. Emergency action was first, and other
types of emergency were added to the national alerting regime only later.
There is some ambiguity as to whether EBS was a Federal Communications
Commission (FCC) system or a Civil Defense Administration (CD) system. The
answer is some of both; the system was designed and operated by the FCC based
on a requirement, and under authority, from CD. This ambiguity in emergency
alert systems remains to this day, although the Civil Defense Administration
has, through a very circuitous path, become a component of the Federal
Emergency Management Agency (FEMA). A good portion of the ongoing
problems with these initiatives traces back to this fact: the Federal
Government has never done an adequate job of placing emergency alerting under
a central authority, which has always led to competing interests and resource
conflicts.
That's a lot about the bureaucracy, but what about the Emergency Broadcasting
System itself?
The EBS was organized into a tree-like structure. At the top were two
"origination points," originally a primary and alternate but later equal.
The identity of the origination points varied over the life of the system but
were typically a relevant military center (Air Defense Command, CONAD, NORAD)
and a relevant civilian center (CD, FEMA, and the many acronyms that came in
between). We are talking, here, about physical locations---two of them. In the
early '60s both the culture of national defense and the technology were not
amenable to a substantially redundant system.
At the time, the two origination points were not intended to issue alerts on
their own, but rather on the behalf of the President. So, in a way, there was
one true origination point: the President, wherever they were, would issue
the order, via the White House Communications Agency, to one of the origination
points. This is one of the reasons (the more significant being reprisal itself)
that the President, as they traveled, was always to be in real-time
communication with the WHCA.
The origination points, upon receiving a bona fide order from the President,
would retrieve a codebook and use a teletype network (dedicated to this
purpose) to send the message and an authentication codeword to a number of
major radio and television networks. The same message, called an Emergency
Action Notification, was repeated onto the teletype networks of wire agencies
such as the Associated Press for further distribution.
Upon receiving such a message an operator at each of these networks would tear
open a red envelope issued to the networks quarterly and find the codeword for
the day. If the codewords matched, nuclear attack was imminent.
Activation details from this point varied somewhat by network and technology,
but in general these national media networks would initiate a corporate
procedure to direct all of their member stations to switch their program audio
(and video as relevant) to a leased line or radio link from the national
control center. This process was at least partially automated so that it could
be performed very quickly. These now-live national networks would then
broadcast an Attention Tone.
The Attention Tone used later on, a combination of 853 and 960 Hz, is still
instantly recognizable by most Americans today. Although its purpose was, as we
will see, mostly technical, it was intentionally made to be unpleasant and very
distinctive so that listeners would associate it with the Emergency Broadcast
System and start to pay attention. This worked so well that the same Attention
Tone is still widely used by emergency alerting systems today (even as a
ringtone for WEA on most smartphones), although changes in the technology have
rendered it vestigial.
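For the curious, the tone itself is trivial to synthesize: it's just the two sinusoids summed at equal amplitude. A sketch in Python (the 8kHz sample rate and half-amplitude scaling are arbitrary choices for the example):

```python
import cmath
import math

RATE = 8000
TONES = (853, 960)  # the two Attention Tone frequencies, in Hz

def attention_tone(seconds=1.0, rate=RATE):
    """Equal-amplitude sum of the two Attention Tone sinusoids."""
    n = int(seconds * rate)
    return [sum(0.5 * math.sin(2 * math.pi * f * t / rate) for f in TONES)
            for t in range(n)]

def bin_magnitude(samples, rate, freq):
    """Magnitude of a single DFT bin, to check what the signal contains."""
    return abs(sum(x * cmath.exp(-2j * math.pi * freq * t / rate)
                   for t, x in enumerate(samples)))
```

The beat between the two closely spaced frequencies is what gives the tone its distinctive warbling harshness.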
The Attention Tone was recognizable not just to humans, but to electronics.
These national networks were only the first stage of the broadcast component of
EBS. Radio and television stations not associated with one of these major
national networks would have, at their control points, a dedicated receiver
(often more than one) tuned to stations operated by national networks. This
receiver's purpose was to recognize the Attention Tone and at least sound an
alarm in the control room, and later on automatically switch program audio (and
in some cases video) to the received station in order to simply repeat the message.
In this way, the activation of the major national networks cascaded through the
radio and television industry until every AM, FM, and OTA television station
was broadcasting the same message.
The national networks were expected to broadcast pre-scripted messages until
they received more specific instructions; a typical script went: "We interrupt
this program. This is a national emergency. The President of the United States
or his designated representative will appear shortly over the Emergency Broadcast System."
EBS was functional and, besides one major gaffe involving an accidental
activation due to an operator's mistake, encountered few serious problems. As
a result it
had a long life, remaining in service well into the computer age. The major
limitation of EBS was its highly centralized structure: messages were to
originate only with the President. This was a logistical challenge for alerts
besides nuclear war, and prevented the use of the system to address major
emergencies in smaller areas. The similarly named Emergency Alert System made
use of similar technology, but more flexible policy, to address these limitations.
the Emergency Alert System
In 1997, the Emergency Alert System replaced EBS. Like EBS, EAS was a project
of the FCC and FEMA, but added the National Oceanic and Atmospheric
Administration (NOAA). NOAA's involvement, as the parent agency of the
National Weather Service, was the foundation of EAS's larger scope: EAS was
intended not only for military conflict but also for non-military civil
emergencies such as severe weather.
Technologically, the EAS is largely similar to the EBS, but it expands the
use of digital signaling and adopts a more flexible hierarchy that allows
messages to be distributed in a targeted way.
When you think of the Attention Tone today, you probably think of it as
accompanied by three buzzes. You can hear an example
here. Those three buzzes,
like the Attention Tone originally, are not intended for human consumption.
They're actually brief FSK packets containing a digital message in the
Specific Area Message Encoding, or SAME. As the name suggests, the main
feature of SAME is that it contains a list of locations---expressed as
FIPS state and county IDs---to which the alert applies. This allows the
dedicated receivers in "downstream" stations to intelligently decide whether
or not the alert is applicable to the location they serve.
In addition, SAME headers include a code identifying the type of disaster,
which can be used for a variety of purposes such as for tornado siren
controllers to determine whether or not they should activate.
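The header layout is compact enough to parse with a regular expression: `ZCZC-ORG-EEE-PSSCCC(-PSSCCC...)+TTTT-JJJHHMM-LLLLLLLL-`, where ORG is the originator, EEE the event code, each PSSCCC a FIPS-based location, TTTT the purge time, and JJJHHMM the issue time as UTC Julian day, hour, and minute. A sketch in Python, with an invented example header (the callsign is hypothetical):

```python
import re

SAME_RE = re.compile(
    r"ZCZC-(?P<org>[A-Z]{3})-(?P<event>[A-Z]{3})-"
    r"(?P<locations>\d{6}(?:-\d{6})*)"
    r"\+(?P<purge>\d{4})-(?P<issued>\d{7})-(?P<sender>[^-]+)-")

def parse_same(header: str) -> dict:
    """Split a SAME header string into its fields."""
    m = SAME_RE.match(header)
    if not m:
        raise ValueError("not a SAME header")
    return {
        "originator": m["org"],    # e.g. WXR = National Weather Service
        "event": m["event"],       # e.g. TOR = tornado warning
        "locations": [             # PSSCCC: portion, state FIPS, county FIPS
            {"portion": loc[0], "state": loc[1:3], "county": loc[3:6]}
            for loc in m["locations"].split("-")
        ],
        "purge": m["purge"],
        "issued": m["issued"],
        "sender": m["sender"],
    }
```

A tornado warning for two New Mexico counties would come apart like so: `parse_same("ZCZC-WXR-TOR-035001-035043+0030-1051700-KABQ/NWS-")`.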
EAS also adds more flexible options for broadcast stations. The technical
device used by stations to receive and inject EAS messages, called an ENDEC, is
computerized and configurable. It can be combined with other equipment to allow
some stations to inject only a brief message (which may be in the form of a
text crawl over the normal program feed for television stations) directing
listeners to a different station to receive more detailed information.
The biggest change in EAS, though, is the origination of messages. EAS messages
enter the broadcast realm through Primary Entry Point radio stations, which are
typically major network-operated radio stations with high transmit powers and
modest hardening against attack and disaster. PEP stations are fitted with
special equipment that can automatically receive an alert (and override the
program feed to transmit it) through various methods, but originally through
FNARS. FNARS is the FEMA National Radio System, a network of HF radio stations
(using
the hybrid digital ALE protocol also used by the military) located at various
emergency command points. The primary control station for FNARS is located at
Mount Weather, FEMA's primary hardened bunker, and state OEMs and many
better-equipped county and city OEMs are connected to FNARS either directly or
through regional radio networks.
In modern applications, FNARS is complemented by IP delivery of messages, but
that's getting into a future topic.
This nationwide, multi-organization network allows EAS
messages to be originated by different Alerting Authorities at different
scopes. The President still has the ability to issue EAS messages to the entire
nation, but so can certain federal agencies and military centers under certain
circumstances (e.g. NORAD). Importantly, though, alerts can be issued for
entire states by the governor or a designee (such as a state director of
emergency operations), or at the county or city level by a relevant executive
or emergency operations official.
This makes EAS suitable for a wide variety of situations: not just nuclear
attack, but civil unrest, severe weather, major transportation disasters,
infrastructure emergencies (e.g. contaminated municipal water), etc.
By far the largest user of EAS is the National Weather Service, whose forecast
offices routinely issue EAS alerts. While these types of weather alerts are
usually associated with tornadoes, in my part of the country they more often
relate to flash flooding, large hail, or particularly severe wind and
lightning. The National Weather Service estimates that dozens of lives are
routinely saved by timely warnings of imminent severe weather.
the internet age
In most meaningful senses, EAS remains in service today. However, in a
technical sense of government funding, it has been replaced by something more
ambitious. The reality is that the expectation that alertees have a radio
turned on nearby has always been a problematic one, and broadcast radio and
television are generally declining in popularity.
To achieve rapid alerting, alerts must now be disseminated through more
channels than just broadcast stations. That's exactly the goal of the
Integrated Public Alert and Warning System, or IPAWS. I've already gone on
long enough, so let's talk about IPAWS next.
Teaser: there's even more radio involved!
 This is because civilian radio stations could be used as navigation aids by
enemy aircraft, helping them to locate major cities despite blackout. This
concern became obsolete as air navigation technology improved.
 To some degree tsunamis are a retrospective explanation: the state of
Hawaii and the city of San Francisco have maintained siren systems since the
Cold War and only more recently began to discuss tsunamis as a purpose. Mostly
they're still worried about "radiological attack," to quote the SF OEM.
 In Great Britain, a more complete wired broadcast system---including voice
messages---called HANDEL was installed in various government buildings, but was
not extended to homes or businesses. A rather accurate depiction of HANDEL is
seen in the 1984 film Threads, and in this YouTube
clip at 1:07 and again, in alert,
at 2:17, but if you are interested in the topics of civil defense and nuclear
war the entire film is required, albeit difficult, viewing.
 At the time, war, civil unrest, and weather represented essentially the
scope of the system. Earthquakes have only begun to fall into the scope of
emergency alerting very recently, which is interesting because the earthquake
scenario is actually much more challenging than nuclear attack: the potential
for lifesaving through early warning is tremendous, but seismic methods of
detecting earthquakes give warning only seconds before the destructive shaking
starts. Although some parts of the US have had earthquake warning systems for
a couple decades, they have seldom ever been backed by an alerting system
capable of delivering the warning before it is pointless.
Something I have long been interested in is time. Not some wacky
philosophical or physical detail of time, but rather the simple logistics
of the measurement and dissemination of time. How do we know what time it is?
I mean, how do we really know?
There are two basic problems in my proprietary model of time logistics: first
is the measurement of time. This is a complicated field because "time," when
examined closely, means different things to different people. These competing
needs for timekeeping often conflict in basic ways, which results in a number
of different precise definitions of time that vary from each other. The
simplest of these examples would be to note the competing needs of astronomy
and kinematics: astronomers care about definitions of time that are directly
related to the orientation of Earth compared to other objects, while kinematic
measurements care about time that advances at a fixed rate, allowing for
comparison of intervals.
These two needs directly conflict. And on top of this, most practical astronomy
also requires working with intervals, which has the inevitable result that most
astronomical software must convert between multiple definitions of time, e.g.
sidereal and monotonic. Think about that next time you are irked by time zones.
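As a concrete taste of these conversions, here is a minimal Python sketch of the published USNO approximation for Greenwich Mean Sidereal Time, which maps a civil (UTC) timestamp onto an Earth-orientation timescale. The function name is mine; the constants are the standard approximation.

```python
from datetime import datetime, timezone

def gmst_hours(dt: datetime) -> float:
    """Greenwich Mean Sidereal Time from UTC, via the USNO
    approximation GMST = 18.697374558 + 24.06570982441908 * D,
    where D is days since the J2000.0 epoch (2000-01-01 12:00 UTC)."""
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (dt - j2000).total_seconds() / 86400.0
    # The coefficient is slightly more than 24 because a sidereal
    # day is about 3m56s shorter than a solar day.
    return (18.697374558 + 24.06570982441908 * d) % 24.0
```

Note that the input is itself a compromise: UTC is neither purely sidereal nor purely monotonic, which is exactly the sort of thing astronomical software has to keep straight.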
The second problem is the dissemination of time. Keeping an extremely accurate
measurement of time in one place (historically generally by use of astronomical
means like a transit telescope) is only so useful. Time is far more valuable
when multiple people in multiple places can agree. This can obviously be
achieved by setting one clock to the correct time and then moving it, perhaps
using it to set other clocks. The problem is that the accuracy of clocks is
actually fairly poor, and so without regular synchronization they will drift
away from each other.
Today, I am going to talk about just a small portion of that problem: time
dissemination within a large building or campus. There is, of course, so much
more to this topic that I plan to discuss in the future, but we need to make
a beachhead, and this is one that is currently on my mind.
There are three spaces where the problem of campus-scale time dissemination is
clear: schools, hospitals, and airports. Schools often operate on fairly
precise schedules (the start and end of periods), and so any significant
disagreement of clocks could lead to many classes starting late. Hospitals rely
on fairly accurate time in keeping medical records, and disagreement of clocks
in different rooms could create inconsistencies in patient charts. And in
airports, well, frankly it is astounding how many US airports lack a sufficient
number of clearly visible, synchronized clocks, but at least some have figured
out that people on the edge of making a flight care about consistent clocks.
It is no surprise, then, that these types of buildings and campuses are three
major applications of central clock systems.
In a central, master, or primary clock system, there is one clock which
authoritatively establishes the correct time. Elsewhere, generally throughout a
building, are devices variously referred to as slave clocks, synchronized
clocks, secondary clocks, or repeater clocks. I will use the term secondary
clock just to be consistent.
A secondary clock should always indicate the exact same time as the central
clock. The methods of achieving this provide a sort of cross section of
electrical communications technologies, and at various eras have been typical
of the methods used in other communications systems as well. Let's take a look.
The earliest central clocks to achieve widespread use were manufactured by a
variety of companies (many around today, such as Simplex and GE) and varied in
details, but there are enough common ideas between them that it is possible to
talk about them generally. Just know that any given system likely varies a bit
in the details from what I'm about to describe.
Introduced at the turn of the 20th century, the typical pulse-synchronized
clock system was based on a primary clock, which was a fairly large case clock
using a pendulum as this was the most accurate movement available at the time.
The primary clock was specially equipped so that, at the top of each minute, a
switch momentarily closed. Paired with a transformer, this allowed for the
production of a control voltage pulse, which was typically 24 volts, either DC
or AC.
In the simplest systems, the secondary clocks then consisted of a clock with a
much simplified movement. Each pulse actuated a solenoid, which advanced the
movement by one minute exactly, usually using an escapement mechanism to ensure
accurate positioning on each minute.
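A toy model makes the division of labor clear: all of the intelligence lives in the primary clock, and the secondary movement just counts pulses. The class below is an illustrative sketch, not any vendor's actual design.

```python
class SecondaryClock:
    """Toy model of a pulse-synchronized secondary movement: it has
    no idea what time it is, it only advances one minute per pulse
    from the primary clock (the method standing in for the solenoid
    and escapement). Names and structure are illustrative only."""

    def __init__(self, hour=0, minute=0):
        self.hour, self.minute = hour, minute

    def minute_pulse(self):
        """Advance exactly one minute, as the solenoid would."""
        self.minute += 1
        if self.minute == 60:
            self.minute = 0
            self.hour = (self.hour + 1) % 12
```

This also shows the system's core limitation: synchronization without accuracy. A clock set wrong at installation stays exactly that wrong, pulse after pulse.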
This system met the basic need: left running, the secondary clocks would
advance at the same rate as the master clock and thus could remain perfectly in
sync. However, only synchronization was ensured, not accuracy. This meant that
installation of a new system and then every power outage (or DST adjustment)
required a careful process of correctly setting each clock before the next
minute pulse. The system provided synchronization, but not automatic setting.
The next advancement made on this system was the hour pulse. A different pulse,
of a different polarity in DC systems or on AC systems often using a separate
wire, was sent at the top of each hour. In the secondary clocks, this pulse
energized a solenoid which "pulled" the minute hand directly to the 00
position. Thus, any accumulated minute error should be corrected at the top of
the hour. The clocks still needed to be manually set to the correct hour, but
the minutes could usually take care of themselves. This was an especially
important innovation, because it could "cover up" the most common failure mode
of secondary clocks, which was a gummed up mechanism that caused some minute
pulses to fail to advance the minute hand.
Some of these systems offered semi-automatic DST handling by either stopping
pulses for one hour or pulsing at double rate for one hour, as appropriate.
This mechanism was of course somewhat error prone.
The next obvious innovation was a similar mechanism to correct the hour hand,
and indeed later generations of these systems added a 12-hour pulse which used
a similar mechanism to the hour pulse to reset the hour hand to the 12 position
twice each day. This, in theory, allowed any error in a clock to be completely
corrected at midnight and noon.
Of course, in practice, the hour and 12-hour solenoids could only pull a hand
(or really gear) so far, and so both mechanisms were usually only able to
correct an error within a certain range. This kept slightly broken clocks on
track but allowed severely de-synchronized clocks to stay that way, often
behaving erratically at the top of the hour and at noon and midnight as the
correction pulses froze up the mechanisms.
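The limited pull range can be sketched as a simple function: a correction pulse zeroes the minute hand only if the accumulated error is small enough. The five-minute range here is an illustrative figure, not a real specification.

```python
def hour_pulse(minute, pull_range=5):
    """Model of the hour-correction solenoid: it pulls the minute
    hand to the 00 position, but only if the hand is already within
    pull_range minutes of it. A clock further out stays wrong.
    (pull_range=5 is an invented figure for illustration.)"""
    error = min(minute, 60 - minute)  # distance from the 00 position
    return 0 if error <= pull_range else minute
```

A slightly fast or slow clock gets snapped back to :00; a severely de-synchronized one just jitters and stays bad, matching the erratic top-of-the-hour behavior described above.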
One of the problems with this mechanism is that the delivery of minute, hour,
and 12-hour pulses required at least three wires generally (minute and hour can
use polarity reversal), and potentially four (in the case of an AC system,
minute, hour, 12-hour, and neutral). These multiple wires increased the
installation cost of new systems and made it difficult to upgrade old two-wire
systems to perform corrections.
A further innovation addressed this problem by using a simple form of frequency
modulation. Such "frequency-synchronized" clocks had a primary clock which
emitted a continuous tone of a fixed frequency which was used to drive the
clock mechanism to advance the minute hand. For hour and 12-hour corrections,
the tone was varied. The secondary clocks detected the different frequency and
triggered correction solenoids.
Of course, this basically required electronics in the primary and secondary
clocks. In earlier versions these were tube-based, and that came with its own
set of maintenance challenges. However, installation was cheaper and it
provided an upgrade path.
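In software terms, the secondary clock's electronics amount to a frequency classifier. This sketch uses invented frequencies purely for illustration; real systems had their own tone plans.

```python
def classify_tone(freq_hz, run_hz=60.0, hour_hz=50.0, twelve_hz=40.0, tol=2.0):
    """Toy dispatcher for a frequency-synchronized secondary clock:
    the normal drive tone advances the movement, while shifted tones
    fire the correction solenoids. All frequencies and the tolerance
    here are made up for illustration."""
    actions = (("run", run_hz), ("hour_correct", hour_hz),
               ("twelve_hour_correct", twelve_hz))
    for name, f in actions:
        if abs(freq_hz - f) <= tol:
            return name
    return "ignore"
```

The appeal of the scheme is visible even in the sketch: everything travels on one pair of wires, with the "protocol" living in the tone plan rather than in extra conductors.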
These systems, pulse-synchronized and frequency-synchronized, were widely
installed in institutional buildings from around 1900 to 1980. Simplex systems
are especially common in schools, and many middle school legends of haunted
clocks can be attributed to Simplex secondary clocks with damaged mechanisms
that ran forwards and backwards at odd speeds at each correction pulse. Many of
these systems remain in service today, usually upgraded with a solid-state
primary clock. Reliability is generally very good if the secondary clocks are
well-maintained, but given the facilities budgets of school districts they are
unfortunately often in poor condition and cause a good deal of headache.
As a further enhancement, a lot of secondary clocks gained a second hand. The
second hand was usually driven by an independent and fairly conventional clock
mechanism, and could either be completely free-running (e.g. had no particular
relation to the minute hand, which was acceptable since the second hand is
typically used only for interval measurements) or corrected by the minute
pulse. In frequency-synchronized systems, the second hand could be driven by
the same mechanism running at the operating frequency, which was a simple
design that produced an accurate second hand at the cost of the second hand
sometimes having odd behavior during correction pulses.
The use of 24 volt control circuits was very common throughout the 20th century
and is still widespread today. For example, thermostats and doorbells typically
operate at 24 vac. One 24 vac control application not usually seen today is
low-voltage light switches, which actuate a central relay rack to turn
building lighting on and off. These were somewhat popular around the
mid-century because the 24vac control wiring could be small gauge and thus very
inexpensive, but are rare today outside of commercial systems (which are more
often digital anyway).
Another interesting but less common pre-digital central clock technology relied
on higher frequency, low voltage signals superimposed on the building
electrical wiring, either on the hot or neutral. Tube-based circuits could
detect these tones and activate correction solenoids or motors. The advantage
of not running dedicated clock wiring was appealing, but these are not widely
seen... perhaps because of the more complex installation and code implications
of connecting the primary clock to the building mains.
Finally, something which is not quite a central clock system but has some of
the flavor is the AC-synchronized clock. These clocks, which were very common
in the mid-century, use a synchronous AC motor instead of an escapement. They
rely on the consistent 60Hz or 50Hz of the electrical supply to keep time.
These are no longer particularly common, probably because the decreasing cost
of quartz crystal oscillators made it cheaper to keep the whole clock mechanism
DC powered and electronically controlled. They can be somewhat frustrating
today because they often date to an era when the US was not yet universally on
60Hz, and so like the present situation in Japan, they may not run correctly if
they were originally made for a 50Hz market. Still, they're desirable in my
mind because many flip clocks were made this way, and flip clocks are great.
Semiconductors offered great opportunities for central clock systems. While
systems conveying digital signals over wires did exist, they quickly gave way
to wireless systems. These wireless systems usually use some sort of fairly
simple digital modulation which sends a complete timestamp over some time
period. The period can be relatively long since these more modern secondary
clocks were universally equipped with a local oscillator that drove the clock,
so they could be left to their own devices for as much as a day at a time
before a correction was applied. In practice, a complete timestamp every minute
is common, perhaps both because it is a nice round period and because it
matches WWVB (a nationwide time correction radio service which I am considering
out of scope for these purposes, and which is not often used for commercial
clock synchronization because indoor reception is inconsistent).
A typical example would be the Primex system, in which a controller transmits a
synchronization signal at around 72 MHz and 1 watt of power. The signal
contains a BPSK encoded timestamp. When Primex clocks are turned on, they
search for a transmitter and correct themselves as soon as they find one---and
then at intervals (such as once a day) from then on.
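To illustrate the idea (and only the idea: this is not Primex's actual air protocol), here's a toy timestamp frame and the BPSK phase mapping:

```python
import math

def minute_frame(hour, minute):
    """Pack hour and minute into a toy 11-bit frame, MSB first:
    5 bits of hour, 6 bits of minute. The layout is invented for
    illustration; the real frame format differs."""
    value = (hour << 6) | minute
    return [(value >> i) & 1 for i in range(10, -1, -1)]

def bpsk_phases(bits):
    """BPSK maps each bit to one of two carrier phases 180 degrees
    apart; the receiver recovers the bits by tracking phase flips."""
    return [0.0 if b == 0 else math.pi for b in bits]
```

Because each secondary clock free-runs on its own oscillator between frames, even a slow, simple modulation like this is plenty: the frame only needs to arrive occasionally.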
More in line with the 21st century, central clock systems can operate over IP.
In the simplest case, a secondary clock can just operate as an NTP client to
apply corrections periodically. These systems do certainly exist, but seem to
be relatively unpopular. I suspect the major problem is the need to run
Ethernet or deal with WiFi and the high energy cost and complexity of a network
stack and NTP client.
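For the curious, the NTP correction itself is simple at its core. This is a minimal SNTP sketch (client mode per RFC 4330) that ignores fractional seconds and round-trip compensation; real secondary clock firmware would do more, but not that much more.

```python
import socket
import struct

# Seconds between the NTP epoch (1900) and the Unix epoch (1970).
NTP_UNIX_OFFSET = 2_208_988_800

def sntp_request():
    """48-byte SNTP client packet: LI=0, version 3, mode 3."""
    return b"\x1b" + 47 * b"\x00"

def parse_transmit(reply):
    """Extract the server transmit timestamp (whole seconds only)
    from a 48-byte SNTP reply and convert to Unix time."""
    seconds = struct.unpack("!I", reply[40:44])[0]
    return seconds - NTP_UNIX_OFFSET

def sntp_time(server, timeout=2.0):
    """One-shot query, roughly what an IP secondary clock might
    periodically do. Error handling omitted for brevity."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(sntp_request(), (server, 123))
        reply, _ = s.recvfrom(48)
    return parse_transmit(reply)
```

The protocol is trivial; the cost is everything around it: Ethernet or WiFi, a network stack, and the power budget to keep them running in a wall clock.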
Today, secondary clocks are generally available with both digital and analog
displays. This can be amusing. Digital displays manufactured as retrofit for
pulse-synchronized systems must essentially simulate a mechanical clock
mechanism in order to observe the correct time. Analog displays manufactured
for digital systems use position switches or specialized escapements to
establish a known position for the hands (homing) and then use a stepper motor
or encoder and servo to advance them to show the time, thus simulating a
mechanical clock mechanism in their own way.
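The homing process can be sketched abstractly: step blindly until the position switch fires at the known hand position, then count steps out to the target time. The callables here are hypothetical stand-ins for real hardware.

```python
def home_and_advance(at_home, step, target_steps, max_steps=43200):
    """Sketch of homing an analog display: step the movement until
    the position switch (at_home) reports the known reference
    position, then advance target_steps to show the current time.
    at_home and step are hypothetical hardware callables; max_steps
    bounds the search at one full sweep of the mechanism."""
    steps = 0
    while not at_home():
        step()
        steps += 1
        if steps > max_steps:
            raise RuntimeError("position switch never triggered")
    for _ in range(target_steps):
        step()
```

Once homed, the controller can track hand position purely by counting steps, which is the "simulated mechanical movement" in the other direction.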
In the latter half of the 20th century and continuing today, central clock
systems are often integrated with PA or digital signage systems. Schools built
today, for example, are likely to have secondary clocks which are just a
feature of the PA system and may just be LCD displays with an embedded
computer. The PA system and a tone generator or audio playback by computer
often substitute for bells, as well, which had previously usually been
activated by the central clock---sometimes using the same 24vac wiring as the
clocks themselves.
Going forward, there are many promising technologies for time dissemination
within structures. LoRa, for example, seems to have obvious applications for
centralized clocks. However, the development of new central clock systems
seems fairly slow. It's likely that the ubiquity of cellphones has reduced the
demand for accurate wall clocks, and in general widespread computers make the
spread of accurate time a lot less impressive than it once was... even as the
mechanisms used by computers for this purpose are quite a bit more complicated.
Time synchronization within milliseconds is now something we basically take for
granted, and in a future post I will talk a bit about how that is
conventionally achieved today in both commercial IT environments and in more
specialized scientific and engineering applications. The keyword is PNT, or
Position, Navigation, Time, as multilateration-based systems such as GPS rely
on a fundamental relationship of correct location and correct time, and thus
can be used to determine either given the other... or to determine both using
an awkward bootstrapping process which is thankfully both automated and fast in
modern GPS receivers (although only because they cheat).
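The core of that relationship fits in a short sketch: given four satellite positions and four pseudoranges, Newton iteration solves simultaneously for receiver position and clock bias (expressed here in meters, i.e. c times the time error). This is a toy illustration that ignores everything a real receiver must handle: orbits, the atmosphere, relativity, and so on.

```python
import math

def _solve4(A, b):
    """Solve a 4x4 linear system by Gaussian elimination with
    partial pivoting (kept dependency-free on purpose)."""
    n = 4
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def solve_pnt(sats, pseudoranges, iters=15):
    """Newton iteration for (x, y, z, clock bias) from four satellite
    positions and pseudoranges, all in meters. Position and time fall
    out of the same solve: you get both or neither."""
    x = y = z = b = 0.0
    for _ in range(iters):
        H, r = [], []
        for (sx, sy, sz), pr in zip(sats, pseudoranges):
            d = math.dist((x, y, z), (sx, sy, sz))
            r.append(pr - (d + b))  # residual of the range equation
            H.append([(x - sx) / d, (y - sy) / d, (z - sz) / d, 1.0])
        dx, dy, dz, db = _solve4(H, r)
        x, y, z, b = x + dx, y + dy, z + dz, b + db
    return x, y, z, b
```

Four unknowns, four measurements: this is why a GPS fix needs at least four satellites, and why a receiver that already knows the time can get by with three.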
 This seems like a somewhat bold statement to make so generally, considering
the low cost of fairly precise quartz oscillators today, but consider this: as
clocks have become more accurate, so too have the measurements made with them.
It seems like a safe assumption that we will never reach a point where the
accuracy of clocks is no longer a problem, because the precision of other
measurements will continue to increase, maintaining the clock as a meaningful
source of error.
 Because, due to a winding path from an idea I had months ago, I recently
bought some IP managed, NTP synchronized LED wall clocks off of eBay. They are
unreasonably large for my living space and I love them.
 This is all the more true in train stations, which generally operate on
tighter and more exact schedules, and train stations are indeed another major
application of central clock systems. The thing is that I live in the Western
United States, where we have read about passenger trains in books but seldom
seen them. Certainly we have not known them to keep to a timetable, Amtrak.
 This was not exactly true in practice; for example, Simplex systems
performed the hour and 12-hour pulses a bit early because it simplified the
design of the secondary clock mechanism. A clock behaving erratically right
around the 58th minute of the hour is characteristic of pulse-synchronized
Simplex systems applying hour and 12-hour corrections.
 Because the relatively low frequency of the 72MHz commercial band
penetrates building materials well, it is often used for paging systems in
hospitals. The FCC essentially considers Primex clocks to be a paging system,
and indeed newer iterations allow the controller to send out textual alerts
that clocks can display.
The New York Times once described a software package as "among the first of an
emerging generation of software making extensive use of artificial intelligence
techniques." What is this deep learning, data science, artificial intelligence
they speak of? Ansa Paradox, of 1985.
Obviously the contours of "artificial intelligence" have shifted a great deal
over the years. At the same time, though, basic expectations of computers have
shifted as well.
One of the most obvious applications of computer technology is the storage of
data.
Well, that's both obvious and general, but what I mean specifically here is
data which had previously or conventionally been stored on hardcopy. Business
records, basically: customer accounts, project management reports, accounts
payable, etc etc. The examples are numerous, practically infinite.
I intend to show that, counter-intuitively, computers have in many ways gotten
worse at these functions over time. The reasons are partially technical,
but for the most part economic. In short, capitalism ruins computing once
again.
To get there, though, we need to start a ways back, with the genesis of the
database.
Early computers were generally not applied to "data storage" tasks. A simple
explanation is that storage technology developed somewhat behind computing
technologies; early computers, over a reasonable period of time, could process
more data than they could store. This is where much of the concept of a
"computer operator" comes from: the need to more or less continuously feed
new data to the computer, retrieved from paper files or prepared (e.g. on
punched cards) on demand.
As the state of storage changed, devices included simple, low-capacity types of
solid state memory such as core memory, and higher capacity media such as paper
or magnetic tape. Core memory was random access, but very expensive. Tape was
relatively inexpensive on a capacity basis, but it was extremely inefficient to
access it in a nonlinear (e.g. random) fashion. This is essentially the origin
of mainframe computers being heavily based around batch processing: for
efficiency purposes, data needed to be processed in large volumes, in fixed
order, simply to facilitate the loading of tapes.
The ability to efficiently use a "database" as we think of them today
effectively required a random-access storage device of fairly high capacity,
say, a multi-MB hard drive (or more eccentrically, and more briefly, something
like a film or tape magazine).
Reasonably large capacity hard disk drives were available by the '60s, but were
enormously expensive and just, well, enormous. Still, these storage devices
basically created the modern concept of a "database:" a set of data which could
be retrieved not just in linear order but also arbitrarily based on various
criteria.
As a direct result of these devices, IBM researcher E. F. Codd published a
paper in 1970 describing a formalized approach to the storage and retrieval of
complex-structured data on large "shared data banks." Codd called his system
"relational," and described the primary features seen in most modern databases.
Although it was somewhat poorly received at the time (likely primarily due to
the difficulty of implementing it on existing hardware), by the '90s the
concept of a relational database had become so popular that it was essentially
assumed that any "database" was relational in nature, and could be queried by
SQL or something similar to it.
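Python's built-in sqlite3 gives a compact demonstration of Codd's idea: rows are retrieved by declarative query over their attributes, not by storage order. The table and data below are invented for illustration.

```python
import sqlite3

# An in-memory relational store: we declare structure, insert rows,
# and retrieve by attribute rather than by position on the medium.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
con.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Ashton", "Torrance"), ("Tate", "Culver City"), ("Codd", "San Jose")])

# The query says *what* we want; the engine decides how to find it.
rows = con.execute(
    "SELECT name FROM customers WHERE city LIKE '%City%' ORDER BY name"
).fetchall()
```

On tape, answering this query would mean reading the whole reel; on a random-access disk with indexes, it can be nearly free. That hardware shift is what made the relational model practical.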
A major factor in the rising popularity of databases was the decreasing cost of
storage, which encouraged uses of computers that required this kind of
flexible, structured data storage. By the end of the 1980s, hard disk drives
became a common option on PCs, introducing the ingredients for a database
system to the consumer market.
This represented, to a degree which I do not wish to understate, a
democratization of the database. Nearly as soon as the computers and storage
became available, it was a widespread assumption that computer users of all
types would have a use for databases, from the home to the large enterprise.
Because most computer users did not have the desire to learn a programming
language and environment in depth, this created a market for a software genre
almost forgotten today: the desktop database.
I hesitate to make any claims of "first," but an early and very prominent
desktop database solution was dBase II (they called the first version II, a
particularly strong form of the Xbox 360 move) from Ashton-Tate. dBase was
released in around 1980, and within a period of a few years the field
proliferated. FoxPro (actually a variant of dBase) and Paradox were other major
entrants from the same time period that may be familiar to older readers.
dBase was initially offered on CP/M, which was a popular platform at the time
(and one that was influential on the design of DOS), but was ported to DOS (of
both Microsoft and IBM variants) and Apple II, the other significant platforms
of the era.
Let's consider the features of dBase, which was typical of these early desktop
database products. dBase was textmode software, and while it provided tabular
(or loosely "graphical") views, it was primarily what we would now call a REPL
for the dBase programming language. The dBase language was fairly flexible but
also intended to be simple enough for end-users to learn, so that they could
write and modify their own dBase programs---this was the entire point of the
software, to make custom databases accessible to non-engineers.
It wasn't necessarily expected, though, that the dBase language and shell would
be used on an ongoing basis. Instead, dBase shipped with tools called ASSIST
and APPGEN. The purpose of these tools was to offer a more user-friendly
interface to a dBase database. ASSIST was a sort of general-purpose client to
the database for querying and data management, while APPGEN allowed for the
creation of forms, queries, and reports linked by a menu system---basically the
creation of a CRUD app.
In a way, the combination of dBase and APPGEN is thus a way to create common
CRUD applications without the need for "programming" in its form at the time.
This capability is referred to as Rapid Application Development (RAD), and RAD
and desktop databases are two peas in a pod. The line between the two has
become significantly blurred, and all desktop databases offer at least basic
RAD capabilities. More sophisticated options were capable of generating client
applications for multiple platforms which could operate over the network.
As I mentioned, there are many of these. A brief listing that I assembled,
based mostly on Wikipedia with some other sources, includes: DataEase, Paradox,
dBase, FoxPro, Kexi, Approach, Access, R:Base, OMNIS,
StarOffice/OpenOffice/LibreOffice/NeoOffice Base (delightfully also called
Starbase), PowerBuilder, FileMaker, and I'm sure at least a dozen more. These
include some entrants from major brands recognizable today, such as Access
developed by Microsoft, FileMaker acquired by Apple, and Approach acquired by
Lotus.
These products were highly successful in their time. dBase propelled
Ashton-Tate to the top of the software industry, alongside IBM, in the 1980s.
FileMaker has been hugely influential in Apple business circles. Access was the
core of many small businesses for over a decade. It's easy to see why: desktop
databases, and their companion of RAD, truly made the (record-keeping) power of
computers available to the masses by empowering users to develop their own
applications.
You didn't buy an inventory, invoicing, customer management, or other solution
and then conform your business practices to it. Instead, you developed your own
custom application that fit your needs exactly. The development of these
database applications required some skill, but it was easier to acquire than
general-purpose programming, especially in the '90s and '00s as desktop
databases made the transition to GUI programs with extensive user assistance.
The expectation, and in many cases reality, is that a business clerk could
implement a desktop database solution to their record-keeping use case with
only a fairly brief study of the desktop database's manual... no coding
required.
Nonetheless, a professional industry flowered around these products with many
third-party consultants, integrators, user groups, and conferences. Many of
these products became so deeply integrated into their use-cases that they
survive today, now the critical core of a legacy system. Paradox, for example,
has become part of the WordPerfect suite and remains in heavy use in
WordPerfect holdout industries such as law and legislation.
And yet... desktop databases are all but gone today. Many of these products are
still maintained, particularly the more recent entrants such as Kexi, and there
is a small set of modern RAD solutions such as Zoho Creator. All in all,
though, the desktop database industry has entirely collapsed since the early
'00s. Desktop databases are typically viewed today as legacy artifacts, a sign
of poor engineering and extensive technical debt. Far from democratizing, they
are seen as constraining.
I posit that the decline of desktop databases reflects a larger shift in the
software industry: broadly speaking, an increase in profit motive, and a
decrease in ambition.
In the early days of computing, and extending well into the '90s in the correct
niches, there was a view that computers would solve problems in the most
general case. From Rear Admiral Hopper's era of "automatic programming" to
"no-code" solutions in the '00s, there was a strong ambition that the field of
software engineering existed only as a stopgap measure until "artificial
intelligence" was developed to such a degree that users were fully empowered to
create their own solutions to their own problems. Computers were infinitely
flexible, and with a level of skill decreasing every day they could be made to
perform any function.
Today, computers are not general-purpose problem-solving machines ready for the
whims of any user. They are merely a platform to deliver "apps," "SAAS," and
in general special-purpose solutions delivered on a subscription model.
The first shift is economic: the reality of desktop databases is that they were
difficult to monetize to modern standards. After a one-time purchase of the
software, users could develop an unlimited number of solutions without any
added cost. In a way, the marketers of desktop databases sealed their own fate
by selling, for a fixed fee, the ability to not be dependent on the software
industry going forward. While not achieved, this was at least the ideal of the
desktop database.
The second shift is cultural: the mid-century to the '90s was a heady time in
computer science when the goal was flexibility and generality. To be somewhat
cynical (not that that is new), the goal of the '10s and '20s is monetization
and engagement. Successful software today must be prescriptive, rather than
general, in order to direct users to the behaviors which are most readily
converted into a commercial advantage for the developer.
Perhaps more deeply though, software engineers have given up.
The reality is that generality is hard. I am, hopefully obviously, presenting
a very rosy view of the desktop database. In practice, while these solutions
were powerful and flexible, they were perhaps too flexible and often led to
messy applications which were unreliable and difficult to maintain. Part of
this was due to limitations in the applications, part of it was due to the
inherent challenge of untrained users who were effectively developing software
without a practical or academic knowledge of computer applications (although
one could argue that this sentence describes many software engineers today...).
One might think that this is one of the most important challenges that a
computer scientist, software engineer, coder, etc. could take on. What needs to
be done, what needs to be changed to make computers truly the tools of their
owners? Truly a flexible, general device able to take on any challenge, as IBM
marketing promised in the '50s?
But, alas, these problems are hard, and they are hard in a way that is not
especially profitable. We are, after all, talking about engineering software
vendors entirely out of the problem.
The result is that the few RAD solutions that are under active development
today are subscription-based and usage-priced, effectively cloud platforms.
Even despite this, they are generally unsuccessful. Yet, the desire for a
generalized desktop database remains an especially strong one among business
computer users. Virtually everyone who has worked in IT or software in an
established business environment has seen the "Excel monstrosity," a tabular
data file prepared in spreadsheet software which is trying so very hard to be
a generalized RDBMS in a tool not originally intended for it.
As professionals, we often mock these fallen creations of a sadistic mind as
evidence of users run amok, of the evils of an untrained person empowered by a
keyboard. We've all done it, certainly I have; making fun of a person who has
created a twenty-sheet, instruction-laden Excel workbook to solve a problem
that clearly should have been solved with software, developed by someone
with a computer science degree or at least a certificate from a four-week
fly-by-night NodeJS bootcamp.
And yet, when we do this, we are mocking users for employing computers as they
were once intended: general-purpose.
I hesitate to sound like RMS, particularly considering what I wrote a few
messages ago. But, as I said, he is worthy of respect in some regards. Despite
his inconsistency, perhaps we can learn something from his view of software as
user-empowering versus user-subjugating. Desktop databases empowered users.
Do applications today empower users?
The software industry, I contend, has fallen from grace. It is hard to place
when this change occurred, because it happened slowly and by degrees, but it
seems to me like sometime during the late '90s to early '00s the software
industry fundamentally gave up. Interest in solving problems was abandoned
and replaced by a drive to engage users, a vague term that is nearly always
interpreted in a way that raises fundamental ethical concerns. Computing is no
longer a lofty field engaged in the salvation of mankind; it is a field of
mechanical labor engaged in the conversion of people into money.
In short, capitalism ruins computing once again.
If I have a manifesto at the moment, this is it. I don't mean to entirely
degrade the modern software industry, I mean, I work in it. Certainly there
are many people today working on software that solves generalized problems for
any user. But if you really think about it, on the whole, do you feel that the
modern software industry is oriented towards the enablement of all computer
users, or towards the exploitation of those users?
There are many ways in which this change has occurred, and here I have focused
on just one minute corner of the shift in the software industry. But we can see
the same trend in many other places: from a distributed to centralized internet,
from open to closed platforms, from up-front to subscription, from
general-purpose to "app store." And yet, after it all, there is still "dBase
2019... for optimized productivity!"
 I found this amazing quote courtesy of some Wikipedia editor, but just
searching a newspaper archive for "artificial intelligence" in the 1970-1990
timeframe is a ton of fun and will probably lead to a post one day.
I have said before that I believe that teaching modern students the OSI model
as an approach to networking is a fundamental mistake that makes the concepts
less clear rather than more. The major reason for this is simple: the OSI model
was prescriptive of a specific network stack designed alongside it, and that
network stack is not the one we use today. In fact, the TCP/IP stack we use
today was intentionally designed differently from the OSI model for practical
reasons.
Teaching students about TCP/IP using the OSI model is like teaching students
about small engine repair using a chart of the Wankel cycle. It's nonsensical
to the point of farce. The OSI model is not some "ideal" model of networking,
it is not a "gold standard" or even a "useful reference." It's the architecture
of a specific network stack that failed to gain significant real-world
adoption.
Well, "failed to gain real-world adoption" is one of my favorite things, so
today we're going to talk about the OSI model and the OSI network stack.
The story of the OSI model basically starts in the late '70s with a project
between various standards committees (prominently ISO) to create a standardized
network stack which could be used to interconnect various systems. An Open
Systems Interconnection model, if you will.
This time period was the infancy of computer networking, and most computer
networks operated on vendor-specific protocols that were basically overgrown
versions of protocols designed to connect terminals to mainframes. The IBM
Systems Network Architecture was perhaps the most prominent of these, but
there were more of them than you could easily list.
Standardized network protocols that could be implemented across different
computer architectures were relatively immature. X.25 was the most popular, and
continues to be used as a teaching example today because it is simple and easy
to understand. However, X.25 had significant limitations, and was married to
the telephone network in uncomfortable ways (both in that it relied on leased
lines and in that X.25 was in many ways designed as a direct analog to the
telephone network). X.25 was not good enough, and just as soon as it gained
market share people realized they needed something that was more powerful, but
also not tied to a vendor.
The OSI network stack was designed in a very theory-first way. That is, the OSI
conceptual model of seven layers was mostly designed before the actual
protocols that implemented those layers. This puts the OSI model in an unusual
position of having always, from the very start, been divorced from actual
working computer networks. And while this is a matter of opinion, I believe the
OSI model to have been severely over-engineered from that beginning.
Unlike most practical computer networks which aim to provide a simple channel
with few bells and whistles, the OSI model attempted to encode just about every
aspect of what we now consider the "application" into the actual protocols.
This results in the OSI model's top four layers, which today are all
essentially "Application" spelled in various strange ways. Through a critical
eye, this could be viewed as a somewhat severe example of design over function.
History had, even by this time, shown that what was needed from computer
networks was usually ease of implementation and ease of use, not power.
Unfortunately, the OSI model, as designed, was powerful to a fault.
From the modern perspective, this might not be entirely obvious, but only
because most CS students have been trained to simply ignore a large portion of
the model. Remember, the OSI model is:
Please (Physical)
Do (Data Link)
Not (Network)
Throw (Transport)
Sausage (Session)
Pizza (Presentation)
Away (Application)
Before we get too much into the details of these layers, let's remember what a
layer is. The fundamental concept that the OSI model is often used to introduce
is the concept that I call network abstraction: each layer interacts only with
the layer below it, and by doing so provides a service to the layer above it.
Each layer has a constrained area of concern, and the protocol definitions
create a contract which defines the behavior of each layer. Through this
sort of rigid, enforced abstraction, we gain flexibility: the layers become
"mix and match." As long as layers implement the correct interface for above
and expect the correct interface from below, we can use any implementation of a
given layer that we want.
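As a toy sketch (not any real protocol; the class names and framing schemes here are invented for illustration), the layering contract can be expressed as objects that only ever call the layer directly below them:

```python
# Each layer talks only to the layer below it and offers a service to the
# layer above. Any implementation with the same interface can be swapped in.

class Layer:
    def __init__(self, lower=None):
        self.lower = lower

    def send(self, payload: bytes) -> bytes:
        raise NotImplementedError

class PhysicalLayer(Layer):
    def send(self, payload: bytes) -> bytes:
        return payload  # bottom of the stack: "transmit" the bytes as-is

class FramingLayer(Layer):
    # one hypothetical data link implementation: length-prefixed frames
    def send(self, payload: bytes) -> bytes:
        frame = len(payload).to_bytes(2, "big") + payload
        return self.lower.send(frame)

class DelimitedLayer(Layer):
    # an interchangeable alternative: newline-delimited frames
    def send(self, payload: bytes) -> bytes:
        return self.lower.send(payload + b"\n")

# Either data link implementation satisfies the same contract, so whatever
# sits above it does not care which one is underneath.
stack_a = FramingLayer(PhysicalLayer())
stack_b = DelimitedLayer(PhysicalLayer())
print(stack_a.send(b"hi"))  # b'\x00\x02hi'
print(stack_b.send(b"hi"))  # b'hi\n'
```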
This matters in practice. Consider the situation of TCP and UDP: TCP and UDP
can both be dropped on top of IP because they both expect the same capabilities
from the layer under them. Moreover, to a surprising extent TCP and UDP are
interchangeable. While they provide different guarantees, the interface for the
two is largely the same, and so switching which of the two software uses is
trivial (in the simple case where we do not require the guarantees which TCP
provides).
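A quick sketch of that interchangeability using Python's socket module. An AF_UNIX socketpair stands in for a real network connection so the example is self-contained (on a real network this would be AF_INET with connect and bind), but the point survives: the transfer code is identical for stream and datagram sockets.

```python
import socket

# TCP and UDP expose nearly the same interface, so in the simple case,
# switching between them is a one-constant change: the socket type.

def transfer(sock_type):
    a, b = socket.socketpair(socket.AF_UNIX, sock_type)
    a.send(b"hello")        # same call for stream and datagram sockets
    data = b.recv(1024)     # same call on the receiving side
    a.close(); b.close()
    return data

print(transfer(socket.SOCK_STREAM))  # b'hello'
print(transfer(socket.SOCK_DGRAM))   # b'hello'
```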
So, having hopefully grasped this central concept of networking, let's apply it
to the OSI model, which is likely how it was originally taught to us. The
presentation layer depends on the session layer, and provides services to the
application layer. That's, uhh, cool. Wikipedia suggests that serializing data
structures is an example of something which might occur at this layer. But this
sort of presupposes that the session layer does not require any high-level data
structures, since it functions without the use of the presentation layer. It
also seems to suggest that presentation is somehow dependent on session, which
makes little sense in the context of serialization.
In fact, it's hard to see how this "fundamental concept" of the presentation
layer applies to computing systems because it does not. Session and
presentation are both "vestigial layers" which were not implemented in the IP
stack, and so they have no real modern equivalent. Most teaching of the session
and presentation layers consists of instructors grasping for examples---I have
heard of things like CHAP as the session layer---which undermine the point they
are making by violating the actual fundamental concept of layered networking.
Now that we all agree that the OSI model is garbage which does not represent
the real world, let's look at the world it does represent: the OSI protocols,
which were in fact designed explicitly as an implementation of the OSI model.
No one really defines layer 1, the physical layer, because it is generally a
constraint on the design of the protocols rather than something that anyone
gets to design intentionally. The physical layer, in the context of the OSI
stack, could generally be assumed to be a simple serial channel like a leased
telephone line, using some type of line coding and other details which are not
really of interest to the network programmer.
Layer 2, the data link layer, provides the most fundamental networking
features. Today we often talk of layer 2 as being further subdivided into the
MAC (media access control) and LLC (logical link control) sublayers, but to a
large extent this is simply a result of trying to retcon the OSI model onto
modern network stacks, and the differentiation between MAC and LLC is not
something which was contemplated by the actual designers of the OSI model.
The data link layer is implemented primarily in the form of X.212. In a major
change from what you might expect if you were taught the IP stack via the OSI
model, the OSI data link layer, and thus X.212, provides reliability features
including checksumming and resending. Optionally, it provides guaranteed order
of delivery. X.212 further provides a quality of service capability.
Specifically related to order of delivery, X.212 provides a connection-oriented
mode and a connectionless mode. This is very similar (but not quite the same)
to the difference between TCP and UDP, but we are still only talking about
layer 2! Keep in mind here that layer 2 is essentially defined within the
context of a specific network link, and so these features are in place to
contend with unreliable links or links that are themselves implemented on
other high-level protocols (e.g. tunnels), and not to handle routed networks.
X.212 addressing is basically unspecified, because the expectation is that
addresses used at layer 2 will be ephemeral and specific to the media in use.
Because layer 2 traffic cannot directly span network segments, there is no need
for any sort of standardized addressing.
As with most layers, there are alternative implementations available for the
data link layer, including implementations that transport it over other
protocols.
OSI layer 3, the network layer, provides a more sophisticated service which is
capable of moving bytes between hosts with basically the same semantics we
expect in the IP world. Layer 3 is available in connection-oriented and
connectionless modes, much like layer 2, but now provides these services across
a routed network.
The two typical layer 3 protocols are Connectionless Network Protocol and
Connection Oriented Network Protocol, which are basically exactly what they
sound like.
OSI addressing at these layers is based on Network Service Access Point
addresses, or NSAPs. Or, well, it's better to say that NSAPs are the current standard for
addressing. In fact, the protocols are somewhat flexible and historically other
schemes were used but have been largely replaced by NSAP. NSAP addresses are 20
bytes in length and have no particular structure, although there are various
norms for allocation of NSAPs that include embedding of IP addresses. Unlike
IP addresses, NSAPs do not include routing information, and so the
process of routing traffic to a given NSAP includes the "translation" of NSAPs
into more detailed addressing types which may be dependent on the layer 2 in
use. All in all, OSI addressing is confusing and in modern use depends very
much on the details of the specific application.
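For a rough illustration (the grouping below is a display convention I have assumed for readability, not anything the protocols mandate), an NSAP can be treated as up to 20 opaque octets whose first octet is the AFI, the Authority and Format Identifier that determines how the rest is interpreted:

```python
# Hypothetical rendering of an NSAP address. Per ISO 8348 / X.213, the first
# octet is the AFI; everything after that depends on the addressing
# authority. Real-world hex groupings vary; this one is arbitrary.

def format_nsap(addr: bytes) -> str:
    assert len(addr) <= 20, "NSAPs are at most 20 octets"
    hexstr = addr.hex().upper()
    # AFI first, then the remainder in 4-hex-digit chunks
    return ".".join([hexstr[:2]] +
                    [hexstr[i:i + 4] for i in range(2, len(hexstr), 4)])

nsap = bytes([0x47]) + bytes(19)   # AFI 0x47 followed by 19 zero octets
print(format_nsap(nsap))
print(f"AFI: {nsap[0]:#04x}")      # 0x47
```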
Layer 4, the transport layer, adds additional features over layer 3 including
multiplexing of multiple streams, error recovery, flow control, and connection
management (e.g. retries and reconnects). There are a variety of defined layer
4 protocol classes called TP0 through TP4, which vary in the features that they
offer in ways that do not entirely make sense from the modern perspective.
Because layer 4 offers general messaging features, it is perhaps the closest
equivalent to the TCP and UDP protocols in the IP stack, but of course this is
a confusing claim since there are many elements of UDP and TCP found at lower
levels as well.
The selection of one of the five transport protocol classes depends basically
on application requirements, and can range from very high reliability even over
poor networks (TP4) to minimal overhead and relaxed guarantees over networks
that are already reliable (TP0 or TP1).
The session layer adds management of associations between two hosts and the
status of the connection between them. This is a bit confusing because the IP
model does not have an equivalent, but it might help to know that, in the OSI
model, connections are "closed" at the session layer (which causes actions
which cascade down to the lower layers). The OSI session layer, defined by
X.215, serves some of the roles we associate with link setup.
More interestingly, though, the session layer is responsible for very
high-level handling of significant network errors by gracefully restarting a
network dialog. This is not a capability that the IP stack offers unless it is
explicitly included in an application.
The session layer manages conversations through a token mechanism, which is
somewhat similar to that of token-ring networking or the general "talking
stick" concept. Multiple tokens may be in use, allowing for half-duplex or
duplex interactions between hosts.
Like basically every layer below it, Layer 5 comes in connection-oriented and
connectionless flavors. The connectionless flavor is particularly important
since it provides powerful features for session management without the
requirement for an underlying prepared circuit---something which is likewise
often implemented at the application layer over UDP.
Layer 6, the presentation layer, is another which does not exist in the IP
stack. The session layer is a bit hard to understand from the view of the IP
stack, but the presentation layer is even stranger.
The basic concept is this: applications should interact using abstract
representations rather than actual wire-encoded values. These abstract values
can then be translated to actual wire values based on the capabilities of
the underlying network.
Why is this even something we want? Well, it's important to remember that this
network stack was developed in a time period when text encoding was even more
poorly standardized than now, and when numeric representation was not
especially well standardized either (with various types of BCD in common use).
So, for two systems to be able to reliably communicate, they must establish an
acceptable way to represent data values... and it is likely that a degree of
translation will be required. The OSI presentation layer, defined by X.216,
nominally adjusts for these issues by the use of an abstract representation
transformed to and from a network representation. There are actually a number
of modern technologies that are similar in concept, but they are seldom viewed
as network layers.
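The idea can be sketched with modern standard-library pieces: pretend the two hosts have negotiated a transfer syntax, with EBCDIC (Python's cp500 codec) standing in for a negotiated character encoding, while the application code deals only in the abstract value:

```python
# The application works with an abstract value (a Python string); the
# negotiated transfer syntax decides its representation on the wire.
# Neither endpoint's application code needs to know which was chosen.

NEGOTIATED = "cp500"   # pretend the two hosts agreed on EBCDIC

def to_wire(abstract_value: str, transfer_syntax: str) -> bytes:
    return abstract_value.encode(transfer_syntax)

def from_wire(octets: bytes, transfer_syntax: str) -> str:
    return octets.decode(transfer_syntax)

msg = "HELLO"
wire = to_wire(msg, NEGOTIATED)
print(wire.hex())                   # EBCDIC octets, not ASCII
print(from_wire(wire, NEGOTIATED))  # HELLO
```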
Finally, the application layer is actually where, you know, things are done.
While the application layer is necessarily flexible and not strongly defined,
the OSI stack nonetheless comes with a generous number of defined application
layer protocols. While it's not particularly interesting to dig into these all,
it is useful to note a couple that remain important today.
X.500, the directory service application protocol, can be considered the
grandparent of LDAP. If you think, like all sane people, that LDAP is
frustratingly complicated, boy you will love X.500. It was basically too
complex to live, but too important to die, and so it was pared down to the
Lightweight Directory Access Protocol we know today.
Although X.500 failed to gain widespread adoption, one component of X.500 lives
on today, nearly intact: X.509, which describes the cryptographic certificate
feature of the X.500 ecosystem. The X.509 certificate format and concepts are
directly used today by TLS and other cryptographic implementations, including
its string representations (length-prefixed) which were a decent choice at the
time but now seem quite strange considering the complete victory of
null-terminated strings.
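The length-prefixed convention is easy to see in ASN.1 DER's tag-length-value encoding, which X.509 certificates use throughout. A minimal sketch that handles only short-form lengths:

```python
# Every DER value is tag, length, contents (TLV). This sketch handles only
# short-form lengths (under 128 bytes), which is enough to show the idea;
# real parsers must also handle long-form lengths and nested structures.

def parse_tlv(data: bytes):
    tag = data[0]
    length = data[1]
    assert length < 0x80, "long-form lengths not handled in this sketch"
    value = data[2:2 + length]
    return tag, value

# 0x13 is the DER tag for PrintableString
tag, value = parse_tlv(b"\x13\x05hello")
print(hex(tag), value)  # 0x13 b'hello'
```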
X.400, the messaging service protocol, is basically the OSI version of email.
As you would expect, it is significantly more powerful and complicated than
email as we know it today. For a long time, Microsoft Exchange was better
described as an X.400 implementation than an email implementation, which is
part of why it is a frightening monstrosity. The other part is everything
about modern email.
And that is a tour of the OSI network protocols. I could go into quite a bit
more depth, but I have both a limited budget to buy ISO standards and a limited
attention span to read the ones I could get copies of. If you are interested,
though, the OSI stack protocols are all well defined by ITU standards available
in the US from ISO or from our Estonian friends for much cheaper. For a fun
academic project, implement them: you will be perhaps the only human alive who
truly understands the OSI model your data communications professor rambled on
about.
 Contrast SCTP, which provides an interface which is significantly different
from the UDP and TCP bytestream, due to features such as multiple streams. Not
unrelatedly, SCTP has never been successful on the internet.
 I think that this is actually a clue to the significant limitations of the
OSI model for teaching. The OSI model tends to create a perception that there
is one "fixed" set of layers with specified functions, when in actual modern
practice it is very common to have multiple effective layers of what we would
call application protocols.