COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.
I have an M. S. in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in engineering for a small company in the healthcare sector. I have a background in security operations and DevOps, but also in things that are actually useful like photocopier repair.
You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.
I've mentioned LDAP several times as of late. Most recently, when I said I
would write about it. And here we are! I will not provide a complete or
thorough explanation of LDAP because doing so would easily fill a book, and
I'm not sure that I'm prepared to be the kind of person who has written a
book on LDAP. But I will try to give you a general understanding of what
LDAP is, how it works, and why it is such a monumental pain in the ass.
I've also mentioned it, though, in the context of the OSI protocols. This is
because LDAP is a direct descendent of one of the great visions of the OSI
project: a grand, unified directory infrastructure with global addressability
and integration with the other OSI protocols. This is an example of the
ambition and failure of the OSI concept: in practice, directory services have
proven to be fairly special-purpose, limited to enterprise environments, and
intentionally limited in scope (e.g. kept internal for security reasons). OSI
contemplated a directory infrastructure which was basically the opposite in
every regard. It did not survive to the modern age, except in various bits
and pieces which are still widely used in... once again, crypto infrastructure.
Common crypto certificate formats are ASN.1 serialized (as we mentioned last
week) because they are from the OSI directory service, X.500.
Before we get into the weeds, though, let's understand the high level
objectives. What even is a directory, or a directory service?
It's a digital telephone directory.
This answer is so simple and naive that it almost cannot be true, and yet it
is. Remember that the whole OSI deal was in many ways a product of the
telephone industry, and that the telephone industry has always favored more
complex, powerful, integrated solutions over simpler, independent, but
composable solutions. One thing the telephone industry knew well, and had
a surprisingly sophisticated approach to, was the white pages.
If you think about it, the humble telephone directory was a surprisingly
central component of the bureaucracy of the typical 1970s enterprise. Today,
historians often review archived institutional and corporate telephone
directories as a way to figure out the timelines of historical figures.
Corporate histories often use the telephone directory as a main organizing
source, since it documents both the changing staff and the changing structure
of the organization (traditional corporate directories often had an org chart
in the front pages to boot!).
Across the many functional areas of a business, the telephone directory was a
unifying source of truth---or authority---for the structure and membership of
the organization. For consumer telephone service, directories had a less
complex structure but were an undertaking in their own way due to the sheer
number of subscribers. Telephone providers put computers to work at the job of
collecting, sorting, and printing their subscribers' directory entries very
early on. The information in the published white pages was an excerpt or
report from the company's subscriber rolls, and so was closely tied to other
important functions like billing and service management.
Inside the industry, the directory referred to all this and more: the unified,
authoritative information on the users of the system.
This concept was extended to the world of computing in the form of X.500 and
its accompanying OSI network protocols for access to X.500 information. At its
root, LDAP is an alternative protocol to access X.500, and so there are
substantial similarities between X.500 and the X.500-like substance that we now
refer to as an LDAP server. In fact, there is no such thing as an "LDAP server"
in the sense that LDAP remains a protocol to access an X.500 compliant
directory, but in practice LDAP is now usually used with backends that were
designed specifically for LDAP and avoid much of the complexity of X.500 in the
sense of the OSI model. The situation today is such that "X.500" and "LDAP" are
closely related concepts which are difficult to fully untangle; X.500 is very
much alive and well if you accept the caveat that it is only used in the
constrained form of corporate directories accessed by alternative methods.
The basic structure of X.500 is called the Directory Information Tree, or DIT.
The DIT is a hierarchical database which stores objects that possess
attributes, which are basically key-value pairs belonging to the object.
Objects can be queried for based on their attributes, using a form called the
Distinguished Name. DNs are made up of a set of attributes which uniquely
identify an object at each level of the hierarchy. For example, an idealized
X.500 DN, in the same notation as used by LDAP/LDIF (notation for DNs varies by
X.500 protocol), looks like this: cn=J. B. Crawford,ou=Blogger,o=Seventh
Standard,c=us. This DN identifies an object by, from top to bottom, country,
organization, organizational unit, and common name. Common name is an attribute
which contains a human-readable name for the object and is, conventionally,
widely used for the identification of that object.
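To make the structure concrete, here is a minimal Python sketch (mine, not
anything from the X.500 specs) that splits that example DN into its per-level
attribute assertions. Real DN parsing has to handle escaping and multi-valued
RDNs, which this deliberately ignores.

    # Naive decomposition of a DN into relative distinguished names (RDNs).
    # Ignores escaping and multi-valued RDNs (cn=Foo+sn=Bar) entirely.
    dn = "cn=J. B. Crawford,ou=Blogger,o=Seventh Standard,c=us"

    rdns = [rdn.split("=", 1) for rdn in dn.split(",")]
    for attr, value in reversed(rdns):  # print from the root down
        print(f"{attr}: {value}")
    # c: us
    # o: Seventh Standard
    # ou: Blogger
    # cn: J. B. Crawford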
Note some things about this concept: first, the structure is rooted in the
US. How does the namespace work, exactly? Who determines organizations under
countries? Originally, X.500 was intended to be operated much like DNS, as a
distributed system of many servers operating a shared namespace. Space in that
namespace would be managed through a registry, which would be SRI or Network
Solutions or whatever.
Second, this whole concept of identifying objects by attributes seems like it's
very subject to conventions. It is, but you must resist the urge to hear
"hierarchical store of objects with attributes" and think of X.500 as being a
lightly-structured, flexible data store like a modern "NoSQL." In reality it is
not: X.500 is highly structured through the use of schemas.
We mostly use the term "schema" when talking about relational databases or
markup languages. X.500 schemas serve the same function of describing the
structure of objects in the DIT but look and feel different because they are
highly object-oriented. That is, an X.500 schema is made up of classes. Classes
can be inherited from other classes, in which case their attributes are merged.
As a result there is not only a hierarchy of data, but of types. Objects can
be instances of multiple classes, in which case they must provide the
attributes of all of those classes, which may overlap. It's seemingly simple
but can get confusing very fast.
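If it helps, here is a toy Python model of that merging behavior. The class
names are real, but the attribute sets are abbreviated, so treat this as an
illustration rather than the actual schema machinery.

    # Toy model: an object with several classes must satisfy the merged
    # requirements of all of them. Attribute sets abbreviated for clarity.
    person = {
        "must": {"objectClass", "cn", "sn"},
        "may": {"telephoneNumber", "description"},
    }
    organizationalPerson = {
        "must": person["must"],                  # inherits Person's requirements
        "may": person["may"] | {"title", "ou"},  # merged; overlaps are fine
    }

    entry_classes = [person, organizationalPerson]
    required = set().union(*(c["must"] for c in entry_classes))
    print(sorted(required))  # ['cn', 'objectClass', 'sn']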
Let's illustrate this by taking a look at a common X.500 class:
organizationalPerson, or 2.5.6.7. What's up with that number? Remember the
whole SNMP thing? Yes, X.500 makes use of OIDs to, among other things, identify
classes. That said, we commonly (and especially in the case of LDAP) deal only
with their names.
While organizationalPerson does not require any attributes, it suggests things
like:
title
ou (organizational unit)
st and l (state and locality)
postalAddress (along with street, postOfficeBox, postalCode, and
physicalDeliveryOfficeName)
telephoneNumber
facsimileTelephoneNumber
telexNumber
teletexTerminalIdentifier
x121Address
internationalISDNNumber
preferredDeliveryMethod
destinationIndicator
registeredAddress
You will notice that this list is dated, and missing obvious things like name.
The former is because it is in fact very old, the latter is because
organizationalPerson is an auxiliary class and so is intended to be applied to
objects only in addition to other classes. Namely, organizationalPerson is
usually applied to objects alongside Person, which has some basics like:
cn (common name, required)
sn (surname, required)
telephoneNumber
assistant (as in, reference to this person's assistant)
You will notice that this class both overlaps with organizationalPerson on
telephoneNumber, but also has some odd things like assistant that seem to be
specific to an organization. Why the two different classes, then? Conway
observed that the structure of systems resembles the structure of their
creators; X.500 is no exception. organizationalPerson was written more as part
of an effort to represent organizations than as part of an effort to represent
people, and the two efforts were not as well harmonized as you would hope.
An object has a "primary" or "core" type. This is referred to as its structural
class, and the class itself must be specially marked as structural. This is
important for several reasons that are mostly under the hood of the X.500
implementation, but it's useful to know that Person is a structural class... so
an X.500 entry representing a human being should have a core type of Person,
but in most cases will have multiple auxiliary types bolted on to provide
additional attributes.
That's a lot about the conceptual design of X.500... or really just the core
concept of the data structure, ignoring basically the entire transactional
concept which is more complicated than you could ever imagine. It's enough to
get more into LDAP, though.
Before we go fully into the LDAPverse, though, it's useful to understand how
LDAP is really used. This swerves right from OSI to one of my other favorite
topics, Network Operating Systems.
For a group of computers to act like a unified computing environment, they must
have a central concept of a user. This is most often thought of in the context
of authentication and authorization, but a user directory is also necessary to
enable features like messaging. Further, the user directory itself (e.g. the
ability to use the computer as a telephone directory) is considered a feature
of a network computing environment in its own right.
In almost all network computing environments, this user directory is descended
from X.500. This is seen in the form of Microsoft Active Directory for Windows
(modern Windows does not actually use LDAP to interact with the AD domain
controller, instead relying on Microsoft's own protocols, among them the NTLM
and Kerberos authentication protocols), and LDAP for Linux and MacOS (we will not discuss NIS
for Linux now, but perhaps in the future).
In these systems, the directory server acts as the source of basic information
on the user. Consider another important LDAP class, PosixAccount. PosixAccount
adds attributes like uid, homeDirectory, and gecos that reflect the user
account metadata expected by POSIX. It is possible to perform
authentication against LDAP as well, but it comes with limitations and security
concerns that make it uncommon in practice for operating systems. Both Windows
and Unix-like environments now generally use Kerberos for authentication.
Many things have changed in the transition from the grand vision of X.500 to
the reality of LDAP for information on user accounts. First, the concept of a
single unified X.500 namespace has been wholly abandoned. It's complex to
implement, and it's not clear that it's something anyone ever wanted, anyway,
as federation of directories between organizations brings significant security
and compliance concerns.
Instead, modern directories usually use DNS as their root organizational
hierarchy. This basically involves cramming shim objects into the DIT that
reflect the DNS hierarchy. The example DN I mentioned earlier would more often
be seen today as cn=J. B. Crawford,dc=computer,dc=rip. dc here is Domain
Component, and domain components are represented in the same order as in DNS
because LDAP uses the same confused right-to-left hierarchical representation
(AD does it the correct way around).
Another major change has been to the structure. The original intention was that
the X.500 hierarchy should represent the structure of the organization. This is
uncommon today, because it introduced a maintenance headache (moving objects
around the directory as people changed positions) and didn't have a lot of
advantages in practice. Instead LDAP objects are more commonly grouped by their
high-level purpose. For example, user accounts are often placed in an OU called
"accounts" or "users." All in all, this marks a more general trend that LDAP
has become a system only for software consumption, and there is minimal concern
today about LDAP being browseable by human users.
So let's consider some details of how LDAP works. First off, LDAP is a binary
protocol that uses a representation based on ASN.1. That said, LDAP is almost
always used with LDAP Data Interchange Format, or LDIF, which is a textual
representation. So it's very common to talk about LDAP "data" and "objects"
in LDIF format, but understand that LDIF is just a user aid and is not how
LDAP data is represented "in actuality."
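As an illustration, here is roughly what an entry looks like in LDIF, wrapped
in a Python list with a naive parse attached. The entry itself is made up, and
real LDIF adds complications (line continuations, base64 values) that this
ignores.

    # A hypothetical entry in LDIF: the DN first, then attributes, one per
    # line, with multi-valued attributes simply repeated.
    entry_ldif = [
        "dn: cn=J. B. Crawford,ou=users,dc=computer,dc=rip",
        "objectClass: person",
        "objectClass: organizationalPerson",
        "cn: J. B. Crawford",
        "sn: Crawford",
        "telephoneNumber: +1 505 555 0100",
        "title: Blogger",
    ]

    # LDIF is trivially line-oriented (ignoring continuations and base64).
    attrs = {}
    for line in entry_ldif:
        key, value = line.split(": ", 1)
        attrs.setdefault(key, []).append(value)
    print(attrs["objectClass"])  # ['person', 'organizationalPerson']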
LDAP provides more or less the verbs you would expect: ADD, DELETE, MODIFY.
These are not especially interesting. The SEARCH operation, however, is where
much of the in-use complexity of LDAP resides. SEARCH is a general-purpose
verb to retrieve information from an LDAP DIT, and it is built to be very
flexible. At its simplest, SEARCH can be invoked with a baseObject (a DN)
and a scope of BaseObject, which just causes the server to return exactly the
object identified by the DN.
In a more complex application, SEARCH can be invoked with a base path
representing a subtree, a scope of wholeSubtree (means what it says), and a
filter. The filter is a prefix-notation conditional statement that is applied
to each candidate object; objects are only returned if the filter evaluates to
true.
We can put these SEARCH concepts together into a very common LDAP SEARCH
application, which is locating a user in a directory. A common configuration
for a piece of software using LDAP for authentication would be a base DN like
ou=users,dc=example,dc=com, a scope of wholeSubtree, and a filter along the
lines of (&(objectClass=posixAccount)(uid=$user)).
The $user here is a substitution tag which will be replaced by the user's
username. Confusingly, in the PosixAccount class, uid refers to the user name
while uidNumber is the value we usually refer to as uid.
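To make that concrete, here is a sketch of the same lookup using the
third-party ldap3 library; the hostname, base DN, and attribute list are all
placeholders for whatever your directory actually looks like.

    # User lookup via SEARCH, sketched with ldap3. In real code the username
    # must be escaped before interpolation into the filter, or you get the
    # LDAP equivalent of SQL injection.
    from ldap3 import Server, Connection, SUBTREE

    conn = Connection(Server("ldap.example.com"))  # anonymous, if allowed
    conn.bind()

    username = "jcrawford"
    conn.search(
        search_base="ou=users,dc=example,dc=com",
        search_filter=f"(&(objectClass=posixAccount)(uid={username}))",
        search_scope=SUBTREE,
        attributes=["cn", "uidNumber", "homeDirectory"],
    )
    for entry in conn.entries:
        print(entry.entry_dn, entry.uidNumber)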
A real headache comes about with groups. In authorization applications like
RBAC, you commonly want to get the list of groups a user is a member of to make
authorization decisions. There are multiple norms for representing groups in
LDAP. Groups can have a list of accounts which are members, or accounts can
have a list of groups they are a member of. Both are in common use, generally
the former for Windows and the latter for UNIX-likes. This is where the
flexibility of the filter expression becomes important: whatever "direction"
the LDAP server represents the relationship, it's possible to go "the other
way" by querying for the object type that contains the list with a filter
expression that the list must contain the thing you're looking for. Because
finding all users in a group is a less common requirement than finding all
groups a user is in, a lot of LDAP clients in practice make somewhat narrow
assumptions about how to find users but provide a more general (but also more
irritating) configuration for finding group information.
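Concretely, the two directions look something like this (DNs and class names
illustrative):

    # Finding the groups a user belongs to, asked in both directions.

    # Groups hold member lists (the usual Windows/AD shape): search for
    # group objects whose member attribute contains the user's DN.
    by_group = ("(&(objectClass=group)"
                "(member=cn=J. B. Crawford,ou=users,dc=example,dc=com))")

    # Users are listed by name in posixGroup objects (the usual UNIX shape).
    by_member_uid = "(&(objectClass=posixGroup)(memberUid=jcrawford))"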
Another complexity of LDAP in practice is authentication. A last important
LDAP verb is BIND, which is used to assume the identity of a user in the
directory. While anonymous access to LDAP is common, modern directory servers
implement access control and limit access to sensitive values like password
hashes to the users they belong to, for obvious reasons. This means that the
formerly common approach of anonymously querying for a user to get their
password hash and then checking the password should never be seen or heard of
today. Instead, user authentication is done via BIND: the LDAP client attempts
to BIND to the user (as an LDAP object) using the password provided by the
user. If the server allows it, the user apparently provided the correct
password. If the server doesn't allow it, the user better try again. In this
way, the actual authentication method is the authentication method of the LDAP
server itself.
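Sketched with ldap3 again (hostname hypothetical), the whole authentication
dance reduces to a bind attempt:

    # Password checking via BIND: if the server lets us bind as the user's
    # DN with the supplied password, the password was correct.
    from ldap3 import Server, Connection

    def check_password(user_dn: str, password: str) -> bool:
        # Reject empty passwords explicitly: an empty password requests an
        # anonymous bind, which "succeeds" on many servers.
        if not password:
            return False
        conn = Connection(Server("ldap.example.com"),
                          user=user_dn, password=password)
        ok = conn.bind()  # returns False on invalid credentials
        if ok:
            conn.unbind()
        return ok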
There's a problem, though. Or rather, two. First, for security reasons, it's
not necessarily a great idea to allow users to query for complete group
information, and depending on how group membership is represented it is not
necessarily practical to use access controls to allow a user to access only
the group information they should know about. Second, applications often have
a need to access directory information at points other than when a user is
actively logging in and the application has access to their password. For
obvious reasons it is not a good idea for the application to store the user's
password in plaintext for this purpose.
The solution is an irritating invention usually called a "manager." The manager
is a non-person account (also called a system account) that an LDAP client uses
in order to BIND to the LDAP server so that it is permitted to read information
that is not available for anonymous query. Most commonly this is used for
getting a user's group memberships. This is a particularly common setup because
a lot of applications need access to user group information fairly frequently
and do not strongly abstract their user information access, so they "cache"
group information and update it from the LDAP server periodically---outside of
the context of an authenticating user.
Very frequently this takes the form of periodically "synchronizing" the
application's existing local user database with the LDAP server, a lazy bit
of engineering that causes endless frustration for administrators but is also
difficult to avoid as the reality is that the concepts of "user" and "group"
simply vary far too widely between applications to completely centralize all
user information in one place.
As mentioned earlier, all of the methods of authenticating against LDAP have
appreciable limitations. For this reason, Kerberos is generally considered the
superior authentication method and "real" LDAP authentication is not common at
the OS level. That said, Kerberos configuration and clients are relatively
complex, which is probably the main reason that many non-OS applications still
use direct LDAP authentication.
In practice, directory servers are not usually set up as a standalone package.
Usually they are one facet of a larger directory system or identity management
system. Popular options are Microsoft Active Directory and Red Hat IDM (based
on FreeIPA), but there are a number of other options out there. Each of these
generally implements a directory service alongside a dedicated authentication
service (usually Kerberos because it is powerful and well researched), a name
service (DNS), and some type of policy engine. DNS might initially be
surprising here, as it does not at first glance seem like a related concern.
However, in practice, directory systems represent devices just as much as
people. Because each host needs to have a corresponding directory entry
(particularly important with Kerberos where hosts need the ability to
authenticate to other network services on their own), it's already necessary to
maintain host information in the directory service which makes it a natural
place to implement DNS. DHCP is also sometimes implemented as part of the
directory service because there is overlap between the directory management
functions and basic host management functions of DHCP, but this seems to be
less common today because in enterprise orgs DHCP is more often part of an IPAM
solution (e.g. Infoblox).
You might be surprised to hear that there are all of these inconsistencies and
differences in LDAP implementations considering my claim that X.500 is strongly
typed against schemas. The nature of this contradiction will be obvious to any
DBA: for any non-trivial application, the schema will always be both too
complex and not complex enough. The well-established X.500 and LDAP schemas,
published for example in RFCs, don't have enough fields to express the full
scope of information about users needed in any given application.
Simultaneously, though, they provide so many types and attributes that there
are multiple ways to solve a given problem. Any attempt to reduce one problem
will inevitably make the other worse.
The long history of these systems only makes the problem more complicated, as
there are multiple and sometimes conflicting historic schemas and approaches
and it's hard to get rid of any of them now. For this reason identity
management solutions often come with some sort of "quick ref" documentation
explaining the important aspects of the LDAP schema as they use it, to be used
as an aid in configuring other LDAP clients.
I'm going to call this enough on the topic of LDAP for now... but there will be
a followup coming. For me, this whole discussion of complex enterprise
directory solutions raises a question: can we have the advantages of a directory
service, namely a unified sense of identity, in a consumer environment?
The answer is yes, through the transformation of all software into a monthly
subscription, but I want to talk a bit about the history of attempts at
bringing the dream of the NOS to the home. Microsoft has tried at least a
half dozen times and it has never really worked.
 As an example of this ontological complexity, Microsoft Active Directory is
sometimes referred to as being an LDAP server or LDAP implementation. This is
not true, but it's also not untrue. It is perhaps more accurate to say that
"Active Directory is an implementation of a modified form of X.500 which is
commonly accessed using LDAP for interoperability" but that's a mouthful and
probably still not quite correct.
 Have I written about this here before? While IANA was long operated by Jon
Postel, who was famously benevolent, the function now performed by ICANN was
tossed around
defense contractors for a while and then handed to Network Solutions, who
turned out to be so comically evil that the power had to be taken away from
them. ICANN didn't turn out much better. It's a whole story.
 Requisite explanatory footnote about network operating systems (NOS): the
term has basically changed in definition midway through computer history.
Today NOS generally refers to operating systems written for network appliances,
like Cisco IOS. Up to the mid-'90s, though, it more commonly referred to a
general-purpose operating system that was built specifically to be used as part
of a network environment, such as Novell Netware. The salient features of NOS
such as centralized user directories, inter-computer messaging, and shared
access to storage and printers are present in all modern operating systems
(sometimes with implementations borrowed from historic NOS) and so the use of
the term NOS in this sense has faded away.
 This whole thing gets into some weird UNIX history, particularly the gecos
field and LDAP's UNIX-nerd cousin NIS. Maybe that'll be a post some day.
 For how closely connected the concept of users and groups seems to be, this
issue of the user->group query being irritatingly difficult is remarkably
common in identity systems, even many modern "cloud" ones. Despite being a
common requirement and one of the conceptually simpler options for
authorization, RBAC does not generally seem to be a first-class concern to the
designers of directories.
 It's possible to use a wide variety of network services for authentication
in this way, by just passing the user's credentials on and seeing if it works.
I have seen a couple of web applications offer "IMAP authentication" in that
way, presumably because small organizations are more likely to have central
email than LDAP.
Very early on in my career as an "IT person," when my daily work consisted
primarily of photocopier and laptop warranty service with a smattering of
Active Directory administration (it was an, uh, weird job), I was particularly
intimidated by SNMP. It always felt like one of those dark mysteries of
computing that existed far beyond my mortal knowledge, like distributed
systems.
The good news is that SNMP is actually, as the name suggests, quite simple.
The reason for my SNMP apprehensions is a bit silly from the perspective of
computer science: SNMP makes extensive use of long, incomprehensible numbers.
That is, of course, basically a description of all of computing, but SNMP
exposes them to users in a way that modern software generally tries to avoid.
Today, we're going to learn about SNMP and those numbers. Surprise: they're
an emanation of an arcane component of the OSI stack, like at least 50% of
the things I talk about.
But let's step back and just talk about SNMP at a high level. SNMP was designed
to offer a portable and simple to implement method for a manager (e.g. an
appliance or administrator's workstation) to inspect the state of various
devices and potentially change their configuration. It's intended to be
amenable to implementation on embedded systems, and while it's most classically
associated with network appliances there is a virtually unlimited number of
devices and software packages which expose an SNMP interface.
SNMP often acts as a "lowest common denominator:" it's a simple and old
protocol, so just about everything supports it. This makes it very handy for
getting heterogeneous devices (especially in terms of vendor) into one
monitoring solution, and sometimes allows for centralized configuration as
well, although that gets a lot trickier.
At its core, SNMP belongs to a category of protocols which I refer to as remote
memory access protocols (this is my taxonomy and does not necessarily reflect
that of academic work or your employer). These are protocols which allow a
remote host to read and (possibly subject to access controls) write an emulated
memory address space. This does not necessarily (and often doesn't) have
anything to do with the actual physical or virtual memory of the service, and
the addressing scheme used for this memory space might be eccentric, but the
basic idea is there: the "server" has memory addresses, and the protocol allows
you to read and write them.
These remote memory access protocols, as a category, tend to be very common
with embedded systems because if they do happen to align with physical
memory, they are very simple to implement. A prominent example is Modbus, a
common industrial automation protocol that consists of reading and writing
registers, coils, etc., which are domain-specific terms for addresses in the
typed memory of PLCs (historically these were physical addresses in the PLC's
unusually structured memory, but today it's generally just a software construct
running on a more general-purpose architecture).
Unsurprisingly, then, the basic SNMP "verbs" are get and set, and these take
parameters of an address and, if setting, a value. On top of this very simple
principle, SNMP adds a more sophisticated feature called a "trap," but we'll
talk about that later. Let's call it an "advanced topic," although it's
actually one of the most useful parts of SNMP in practical situations.
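As a preview of how simple the basic operations really are, here is a GET
sketched with the third-party pysnmp library. The target host is made up, and
1.3.6.1.2.1.1.3.0 is the standard sysUpTime object.

    # SNMP GET of sysUpTime (1.3.6.1.2.1.1.3.0) over SNMPv2c.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                  # v2c "authentication"
        UdpTransportTarget(("192.0.2.10", 161)),  # example address
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),
    ))
    if not error_indication:
        for var_bind in var_binds:
            print(var_bind)  # e.g. SNMPv2-MIB::sysUpTime.0 = 1234567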
What is perhaps most interesting to consider, as far as arcane details of SNMP,
is the structure of the addresses. This is the scary part of SNMP: just about
the first time you have to interact directly with SNMP you will encounter an
address, called a variable or more properly object identifier (OID) in SNMP
parlance, like .220.127.116.11.18.104.22.1685. It's like an IP address, if they were
substantially less user-friendly. That is to say, an IPv6 address .
These OIDs are in fact hierarchical addresses in a structure called the
Management Information Base (MIB). The MIB is an attempt to unify, into one
data structure, the many data points which could exist across devices in
a network. This idea of a grand unification of the domain of knowledge of
"configuration of network appliances" into one unpleasant numbered hierarchy
has a powerful smell of golden era Computer Science with a capital CS, and
indeed it is!
You see, from a very high level, the MIB is actually viewed as something akin
to a serialization format---it is, after all, fundamentally concerned with
packing the state of a device (Management Information) into a normalized,
strictly structured, interoperable format. To achieve this, the MIB is
described using something called SMI (e.g. RFC2578), which is best understood
as a simplified (or perhaps more formally "constrained") flavor of ASN.1.
ASN.1 is the most prominent of the interface description and serialization
formats developed for the OSI protocol suite. You might be tempted to call
ASN.1 an example of the "presentation layer," although like most invocations of
the OSI model, you would be misunderstanding the OSI model in saying so (the
OSI presentation layer protocols are, as the name suggests but is often
ignored, full on request-reply network protocols, not just serialization
formats). Nonetheless, people say this a lot, and at least ASN.1 truly dates
back to OSI, unlike a lot of things people relate to the OSI model.
You might be familiar with ASN.1 because it is widely used in cryptography, and
by this I mean that cryptography applications are widely saddled with ASN.1.
Most cryptographic certificates, the formats we tend to variously (and
confusingly) call X.509, PKCS#12, DER, PEM, etc, are ASN.1 serialized. This is
a whole lot of fun since ASN.1 is significantly divergent from modern computing
conventions, including the use of length-prefixed rather than terminated
strings (in some cases). I bring this up because it has led to a rather famous
series of vulnerabilities in TLS implementations, because apparently not even
the people implementing TLS have actually read the ASN.1 specification that
closely.
Anyway, back to SMI. Basically, SMI allows vendors of devices (or anyone
really) to write, in SMI, a description of an MIB "module." A "module" is
basically a list of OIDs (hierarchically structured) with their types and other
metadata. This SMI source is then compiled into the binary representation
actually used by SNMP clients. If you are unlucky, you may need to write SMI
yourself for devices whose vendors implemented SNMP but did not provide the
supporting materials. But, in most cases, device vendors provide a file
(commonly called an MIB file) which is the SMI description of the MIB module(s)
implemented by the device. This MIB file can then be fed to your SNMP tool to
be compiled into its "whole picture" binary MIB.
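In pysnmp terms, a compiled MIB is what lets you trade numbers for names; both
forms below identify the same object, and vendor MIBs can be pointed to
explicitly (the module name and path here are hypothetical).

    # With the relevant MIB available, symbolic names and numeric OIDs are
    # interchangeable; both of these resolve to 1.3.6.1.2.1.1.1.0.
    from pysnmp.hlapi import ObjectType, ObjectIdentity

    by_number = ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))
    by_name = ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))

    # Vendor MIBs compiled from their SMI source can be loaded from a
    # local directory:
    vendor = ObjectIdentity("SOME-VENDOR-MIB", "someObject",
                            0).addMibSource("/path/to/compiled/mibs")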
Knowing that it is a result of compiling together SMI produced by various
vendors, let's take a look at the structure of the MIB. Each dot-separated
number identifies a subtree, which for extra fun are called "arcs" in the
context of the MIB. At the very top of the OID hierarchy is a top level which
identifies the standards authority. This is 0, 1, or 2, which refer to ITU,
ISO, and ITU/ISO together, respectively. Of course these three parts of the
tree use different internal structures so I can't generalize past this point,
but I will focus on the ISO tree because it's the one most commonly used in
practice.
Under the .1 ISO hierarchy are arcs for ISO standard OIDs, registry authorities
(somewhat difficult to explain and also not widely used, basically a metadata
space), ISO member organizations by country (e.g. ANSI in the US), and then
identified organizations, which are just companies and organizations that have
asked for OID space. This can be somewhat confusing because many national ISO
member organizations also allocate OID space within their arcs, but major
vendors (e.g. Cisco) are often found at this top level instead.
So let's take a look at a somewhat arbitrary example, an MIB for Juniper's
Junos. I'm using this as an example rather than the more obvious Cisco IOS
because I got mad at Cisco's website while trying to get MIBs, which did not
appear to have seen an update in a decade. In any case, the MIB starts out at
.1.3.6.1.4.1.2636.
In terms of the hierarchy this means: ISO standard, identified organization,
DOD, internet, private projects, private enterprises, Juniper.
Haha, wait, that just goes against most of what I said. What's going on with
the DOD thing?
The entire Internet, big-I, TCP/IP world is considered to be a subset of the
DOD, for OID purposes. This .1.3.6.1.4.1 space is actually managed by IANA, and
if you would like your own .1.3.6.1.4.1 number they will be happy to give you
one upon application.
This is all particularly interesting historically, because unlike a lot of
protocols I talk about SNMP does not predate IP. It was designed specifically
for use on IP networks, over UDP. SNMP is based on several earlier protocols
also used with IP. So, where does this weird relegation of IP to a small subset
of the OID tree come from?
Well, it really has more to do with politics than technology. The MIB tree
essentially belongs to ITU and ISO, but ITU and ISO are both organizations
which are not especially known for swiftly and cheaply adopting standards
proposed by vendors. It was fairly obvious from an early stage that vendors
would need to produce MIB modules for their own devices fairly quickly, but ISO
and ISO member organizations were not especially enthusiastic about issuing a
large number of arcs to these vendors. So instead, IANA stepped in---but not
quite IANA yet, instead IANA's predecessor, Jon Postel. Postel, who was the
IANA for quite some time, worked on contract for DOD, and so he assigned OIDs
out of their space. There's no really good reason for it to be this way, but
if you work with SNMP a lot then typing .1.3.6.1.4.1 will have become
second nature.
Now, what is found inside of this Juniper space? Well, for example, there's an
OID under .1.3.6.1.4.1.2636 which is an integer value providing the average
power used, in watts, by whatever's plugged into a particular outlet of a
managed PDU. The MIB structure allows OIDs which contain other OIDs (object
identifier type OIDs) to actually contain tables of those OIDs, so another OID
under the same arc is a table of all of the outlets on the PDU, and each entry
within it is a list of useful properties of the outlet such as name, status,
and various useful electrical measurements like current and power factor.
After all of this talk of ASN.1 and MIBs and etc, these examples are actually
very useful and concrete. SNMP is, after all, actually a useful protocol for
real-world situations, such as centralized monitoring of your PDUs to identify
problems and catch your colo customers exceeding their power budgets.
And remember, SNMP even allows writing. So the OID for the status of the
outlet can not only be used to determine whether the outlet
is on or off but also to turn the outlet on or off, which is a fun move when
your colo customer doesn't pay their bill for months.
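A SET is the same call shape as a GET, just with a value attached. Sketched
with pysnmp; the OID suffix under the Juniper arc here is hypothetical, and
your write community is hopefully not actually "private."

    # SNMP SET sketched with pysnmp: write an Integer to an outlet-status
    # object to switch the outlet. The OID suffix is illustrative only.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, setCmd)
    from pysnmp.proto.rfc1902 import Integer

    outlet_status = "1.3.6.1.4.1.2636.1.2.3.4"  # hypothetical suffix
    error_indication, error_status, error_index, var_binds = next(setCmd(
        SnmpEngine(),
        CommunityData("private"),                 # write community
        UdpTransportTarget(("192.0.2.10", 161)),
        ContextData(),
        ObjectType(ObjectIdentity(outlet_status), Integer(0)),  # 0 = off
    ))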
SNMP is not limited to as concrete of devices as managed PDUs. For example,
RFC4113 provides an MIB for UDP. That is, it permits you to inspect a host's
UDP endpoints and statistics using SNMP, if that's a thing you really want to
do. In fact, the
entire concept of the MIB is far more general than SNMP, and ISO protocols and
standards often use MIB OIDs for identification purposes having little to do
with the application we're discussing here. For example, many MIME types have
an associated OID because the OSI email equivalent, X.435, uses OIDs to
identify the types of message parts. In general, OSI standards are lousy with
OIDs used as identifiers and, less frequently, to describe data structures and
formats.
The fact that you can set via SNMP, and get answers to all kinds of potentially
sensitive questions, raises the concern of security. Fortunately, SNMP provides
an airtight solution to this problem: "communities." A community is really just
a shared password: if the SNMP manager has the same community string as the SNMP
agent then it is allowed access. Even better, many SNMP agents have well-known
default community strings. Perfect. To be fair, SNMPv3 adds more rigorous
authentication support including support for different authentication methods,
but there are still plenty of SNMPv2 devices out there with community string
set to "public."
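In pysnmp terms the upgrade is just a different credentials object; the
usernames and keys below are placeholders.

    # SNMPv2c vs. SNMPv3 credentials; either drops into the same
    # getCmd/setCmd calls in place of the other.
    from pysnmp.hlapi import CommunityData, UsmUserData

    v2_creds = CommunityData("public")        # shared cleartext secret
    v3_creds = UsmUserData("monitor",         # per-user auth and privacy
                           authKey="auth-passphrase",
                           privKey="priv-passphrase")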
One final thing to complete our discussion of SNMP is to mention the trap. More
technically, I am going to conflate traps and inform requests which are
actually slightly different, but everyone conflates them so I feel okay about
it. A trap is an extremely useful feature of SNMP which allows you to configure
an agent (e.g. device) to immediately inform a manager when certain events
occur. This is essentially a basic alarm capability built in to many devices.
Traps are identified by OIDs, and can bind other OIDs, so that the generated
trap message includes not only which trap was triggered, but also some other
related data if so configured. To be complete, an inform request is really just
a trap whose receipt the manager acknowledges (this is not the case with normal
traps), so that the agent can resend if it is not acknowledged.
In order for traps to work, the manager first needs to listen for traps, which
is usually fairly straightforward to set up. Then, various OIDs are set on the
agent to enable traps and set the destination for those traps (e.g. the IP of
the manager). In some cases agents also provide a web interface or other more
convenient mechanisms to set these up, which is much appreciated since SNMP
is unpleasant to have to think about directly.
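From the agent's side, emitting a trap is a single operation. A pysnmp sketch,
with a made-up manager address; 1.3.6.1.6.3.1.1.5.3 is the standard linkDown
trap.

    # Sending a trap with pysnmp; swap "trap" for "inform" to request
    # acknowledgment from the manager.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, NotificationType, ObjectIdentity,
                              sendNotification)

    next(sendNotification(
        SnmpEngine(),
        CommunityData("public"),
        UdpTransportTarget(("manager.example.com", 162)),
        ContextData(),
        "trap",
        NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.3")),  # linkDown
    ))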
That's about it for SNMP. Simple, right? Well, it really is pretty simple, as
long as you agree to just take OIDs as magic numbers that come from wherever it
is computers do and not ask too many questions. Where SNMP can become rather
rough is when you run into issues with MIBs, or if you are using SNMPv3 where
the authentication and configuration can be amazingly, maddeningly complex for
what it accomplishes.
As an aside, the whole reason I'm talking about SNMP is because a reader asked
me to. For much the same reason, from the same reader, I'll be talking about
LDAP soon. LDAP is even more an out-of-place artifact of OSI than SNMP, and it
is basically impossible to describe as used in short form, but I will take a
shot at illustrating the odd historical components of LDAP and the ways they
matter today. It will at least serve as a teaser for my yet to be written book,
"Survival Under LDAP." LDAP is survivable for as many as 70% of Americans,
but you must know how to protect yourself!
 I continue to seriously question the merits of the complex address
representation used with IPv6. If we had stuck to decimalized bytes separated by
dots, we'd be doing a lot more typing, but we wouldn't be trying to remember
what :: means when it's there.
I got mad at my commercial landlord for running shoddy political advertising
on their buildings and trying to block a homeless shelter and in general
being exceptionally bourgeois, and so didn't renew my office lease. My mailing
address has changed, the new one is in the footer and other normal places.
Righteous outrage is convenient like that. It's a PO box now so the good news
is I'll just keep it regardless of the office situation, the bad news is that I
will need to routinely enter one of the most depressing places on earth: a New
Mexico post office. They did fix the asbestos finally but we'll see about the
rat problem. And thanks to everyone who has sent me letters, and sorry for
taking so long to respond to them.
After taking a long time to overcome my electromagnetic hypersensitivity that
only reacts to SIP, I spent an afternoon fixing the PBX and will finally
resume fax delivery to the vanishingly small list of people who have requested
it. One day I'll write a post on T.38.
I have found that Apple Mail often rejects emails from my AWS SES setup. The
SMTP error directs me to a help page with absolutely no useful information,
so clearly they're learning from Google's expertise in running a major email
service. The funny thing is that I am having no delivery issues with gmail, but
I'm pretty sure if I change anything at this point I will. So if you use Apple
Mail, I'm sorry, for many reasons. Maybe try fax? I think certain LaserWriters
could take a fax modem, if you really want to stay in-ecosystem.
Where we left off, the Emergency Alert System (EAS) had been "replaced," at
least in name, by
IPAWS: the Integrated Public Alert and Warning System. In fact, it's more
accurate to say that EAS is now just one component of IPAWS, and the task of
originating alerts (and much of the bureaucracy) now rests on IPAWS.
IPAWS was particularly motivated by Hurricane Katrina, as this large-scale
disaster had made it apparent how limited the existing emergency alert
infrastructure was. A large portion of people do not receive EAS alerts because
they are not listening to the radio or watching television. There are other
avenues that exist to deliver alert information but the infrastructure was not
in place to get alerts into these channels.
So, IPAWS took the fragmented landscape of miscellaneous government
communications options and combined them into one beautiful, happy family that
works together in flawless harmony. Let's just pretend.
There are several major components of IPAWS which had existed, at least in some
form, prior to IPAWS but had not been unified into one network. These were
EAS, NAWAS, WEA, and NOAA Weather Radio. More ambitiously, IPAWS is intended to
be easily extensible to include other government and non-government alerting
systems, but first, let's talk about the core.
The EAS we have already discussed. Another emergency communications system
which dates back to the Cold War is NAWAS, the National Warning System.
Wikipedia asserts that NAWAS was established in 1978, but this can't be correct
as it's described in an AT&T standard a full decade earlier as an already
existing system, with much the same capabilities it has today. 1978 may have
been a significant overhaul of the system; it's hard to figure out a whole lot
about NAWAS as it had historically been classified and today is obscure.
NAWAS serves the purpose of alerting, and more general communications, between
government authorities. It is essentially a system of four-wire leased
telephone lines that links FEMA and other federal locations with state emergency
authorities. Within states, there is typically a subsidiary NAWAS network for
which the state authority acts as control and local authorities are connected.
An older operating manual for NAWAS
has become public and you can read a great deal about it there, but the basic
concept is that it functions as an intercom system over which federal centers
such as NORAD or the National Weather Service can read voice messages, which
will be heard in all state emergency operations centers. This provides a very
rapid way of spreading basic information on a national emergency, and NAWAS
is both a descendant and component of systems intended to trigger air raid
sirens as quickly as possible after a NORAD alert (more about siren control
will likely be a future topic).
Although NAWAS has seen technical improvement in the equipment, it still
functions more or less the exact same way it did decades ago, and operating
procedures are very simple. If you have ever used a good-quality, multi-station
commercial intercom system with a visual alert feature, such as is often used
in the theater industry for cues, you would find NAWAS unsurprising... except
that the stations span thousands of miles.
NAWAS functions primarily as a party line intercom, but it does support dialing
between stations to alert a specific location to start listening. Dialing is
based on FIPS codes, and while that's not too strange of a choice from a
federal system in general, it's probably not a coincidence that NAWAS stations
are alerted using a similar numbering scheme to SAME headers... typically a
station like the NWS would be issuing EAS messages and calling state EOCs to
advise of the possible damage simultaneously.
The next core component of IPAWS in arbitrary Wikipedia ordering is WEA, the
Wireless Emergency Alert system. WEA is a long-in-development partnership
between the FCC and mobile carriers ("partnership" in that participation
is now mandatory) which allows short, textual emergency alerts to be sent to
mobile phones throughout a region. This relies on a component of the 3GPP
protocol stack that is not widely used (or really used at all) in the US, which
essentially allows a cellular tower to send a true "broadcast" message which
will be handled by every phone associated with that cell. In this way,
addressing is roughly geographical rather than based on station identities.
These broadcast messages trigger special handling in the cell phone operating
system, which generally feels a bit awkward and roughly implemented. Typically
the old EBS Attention Tone is used as an audible alert and the message is
displayed immediately over other applications.
Use of WEA has traditionally been rather heavily restricted, in practice to
presidential alerts (e.g. the test conducted some years ago) and AMBER alerts.
One might think that there's sort of an odd disparity in severity, between
essentially "nuclear attack" and "child abducted somewhere in the same state,"
and indeed it is a major criticism of the AMBER alert system that emotionally-
motivated handling of AMBER alerts as top-priority induces alarm fatigue that
may lead to people ignoring or downplaying an actual nationwide civil
emergency. If you own a cell phone and live in a state that participates in
AMBER alerts you're probably inclined to agree, or maybe our child abduction
rates here in the land of enchantment are just substantially elevated.
The final major component is NOAA Weather Radio, more properly called NOAA
Weather Radio All Hazards and often referred to as NOAA All Hazards Radio.
This last one, which makes the most sense, is of course unofficial. A great
many US residents are amazingly unaware of the NOAA Weather Radio infrastructure,
which has been steadily expanded to substantial nationwide coverage. Weather
Radio normally transmits a computer-synthesized voice describing the current
weather and upcoming forecast, on one of a list of VHF frequencies around
162MHz. The full forecast generally repeats every fifteen minutes. This loop,
updated regularly, is occasionally supplemented by outlook statements and other
products.
When the NWS issues a weather warning or alert, however, Weather Radio stations
immediately play the alert with SAME headers and footers... much the same as
EAS. Special-purpose radio receivers, popular in tornado-prone regions, parse
the SAME headers and sound an audible alarm when an alert is issued for the
correct region. In fact, the SAME protocol was originally designed for this
purpose and was adopted for EAS after its widespread use for Weather Radio.
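For a sense of what those receivers are parsing, here is a SAME header
assembled field by field in Python. The values are a made-up example; the
format is ZCZC, originator, event code, FIPS location(s), purge time, issue
time, and station ID.

    # Assembling a SAME header. Example values: a tornado warning (TOR)
    # from Weather Radio (WXR) for Bernalillo County, NM (035001), valid
    # 30 minutes, issued day 001 at 17:00 UTC.
    originator = "WXR"
    event = "TOR"
    locations = ["035001"]   # PSSCCC FIPS codes
    purge = "0030"           # TTTT, valid period
    issued = "0011700"       # JJJHHMM, Julian day plus UTC time
    station = "KABQ/NWS"     # LLLLLLLL, sender ID

    header = (f"ZCZC-{originator}-{event}-{'-'.join(locations)}"
              f"+{purge}-{issued}-{station}-")
    print(header)  # ZCZC-WXR-TOR-035001+0030-0011700-KABQ/NWS-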
The relationship between Weather Radio and EAS is substantial. Since the
development of EAS, Weather Radio stations now transmit all EAS alerts, not
just those issued by the NWS. This is why "All Hazards" was awkwardly appended
to the name: it functions as a general purpose emergency radio network,
complete with a ready supply of specialized alarm receivers. In a way it is the
NEAR concept deployed more successfully, but... well, success is relative.
Weather radio receivers are uncommon nationally, despite their low cost.
So these are the four basic channels of IPAWS: broadcast radio and television,
inter-agency telephone, cellular phones, and the dedicated radio network. IPAWS
allows an alert to be simultaneously, and quickly, issued to all of these
services. This is particularly important because WEA alerts, although they are
length constrained, can encourage people in affected areas to turn on a radio
to receive more extensive information via EAS.
All of that said, the full scope of IPAWS is considerably more ambitious, which
leads to IPAWS-OPEN. IPAWS-OPEN often gets rather grand descriptions as an
enterprise, machine learning, blockchain artificial intelligence, but I'm here
to cut through the bullshit: it's just a set of servers that broker XML
documents.
Specifically, those XML documents are the Common Alerting Protocol, or CAP.
CAP is essentially the same concept as SAME but in XML form rather than FSK,
and including extensive capabilities to provide multiple representations of an
alert, intended for different languages and media. CAP supports encryption and
signing, which provides an authentication mechanism as well.
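To show how unexciting this really is, here is a minimal CAP 1.2 alert
sketched in Python. The identifier, sender, and content are made up, and real
alerts carry much more in the info block (areas, times, instructions).

    import xml.etree.ElementTree as ET

    # A minimal CAP 1.2 alert document, built as a plain string.
    cap_alert = "\n".join([
        '<alert xmlns="urn:oasis:names:tc:emergency:cap:1.2">',
        '  <identifier>example-2022-001</identifier>',
        '  <sender>eoc@example.gov</sender>',
        '  <sent>2022-01-01T12:00:00-07:00</sent>',
        '  <status>Actual</status>',
        '  <msgType>Alert</msgType>',
        '  <scope>Public</scope>',
        '  <info>',
        '    <category>Met</category>',
        '    <event>Tornado Warning</event>',
        '    <urgency>Immediate</urgency>',
        '    <severity>Extreme</severity>',
        '    <certainty>Observed</certainty>',
        '    <headline>Tornado warning for Bernalillo County</headline>',
        '  </info>',
        '</alert>',
    ])

    # CAP is ordinary namespaced XML; any consumer can pick fields out.
    ns = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}
    root = ET.fromstring(cap_alert)
    print(root.find("cap:info/cap:headline", ns).text)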
IPAWS-OPEN consists of servers which receive CAP documents and then distribute
them onwards. That's basically it, but it is designed to allow for flexible
expansion of IPAWS as a wide variety of alerting media can simply participate
in the IPAWS-OPEN network. For example, a state DOT's changeable message
highway signs could repeat alerts automatically if the control system's vendor
implemented an IPAWS-OPEN client.
Although IPAWS, in theory, fully integrates all alerting channels, this
obviously has not worked out in practice. Many agencies still operate
fundamentally different alerting systems, most notably the NWS which has an old
and extensive one, and so various sets of gateways, converters, and sometimes
manual processes are required for a message to cascade from IPAWS-OPEN to all
alerting channels. That said, in theory IPAWS will complete the EAS vision of
flexible origination and targeting. A state governor, for example, can take
full advantage of federal systems to deliver an emergency message to their
state by using a CAP origination tool to send the message into IPAWS-OPEN.
Public and private organizations are able to access IPAWS-OPEN either through
authority of a government agency or via a "public" (after extensive paperwork)
data feed. This can be used to put alerting wherever you want; the government
has somewhat comically pursued an internet-based alerting system, for example,
for well over a decade without any real progress made. There seems to have been
a somewhat fundamental misunderstanding of the way the internet is used, as
government officials have often imagined an internet alerting capability as
looking exactly like EAS on television stations---that is, the worst popup
ever. What the infrastructure to deliver that would look like has remained
mysterious, although perennial proposals have ranged from silly to alarming.
That said, Windows tray icon tools to pop up IPAWS alerts are out there,
digital signage vendors offer the capability to automatically display alerts,
and Google has tossed IPAWS into the Google Now pile. There is some progress,
but it is uneven and not often seen in the real world.
For reasons that are partly political and partly historical (that then turned
into political), the United States has surprisingly weak infrastructure for the
distribution of emergency information when compared to other developed nations.
Much of this is a simple result of the lack of a state-owned broadcasting
authority that operates domestic media. All national communications necessarily
pass through the complex network of commercial journalism; while this may have
ideological advantages it is not especially fast or reliable.
The trouble is that, in a way, any centralized, federally-operated system of
delivering information to a large portion of the citizenry would be perceived
as---and probably be---an instrument of propaganda, in violation of long-held
American principles. For this reason, it seems likely that we will always have
a fragmented and seldom-used alerting infrastructure.
On the other hand, much of the modern state---primarily the ridiculous effort
over years taken to deploy WEA---is a result of systematic underfunding and
deprioritization of civil defense in the United States. For the nation with the
world's greatest defense budget and a very high, although not first-place,
military budget as portion of GDP, civil defense has always been an
afterthought. Our preparedness against emergency---whether natural, civil, or
warfare---has routinely been judged less important than offensive capability.
During the Cold War, this was a cause of a surprising amount of strife even
within the military. Robert McNamara, Secretary of Defense during the key
period of the 1960s, routinely objected to investment in missiles and even
missile defense systems rather than fallout shelters and relocation
preparations. Today, absent the specter of the Soviet Union's sausage ICBMs,
there is less interest in civil defense as a military strategy than ever.
Instead, most modern civil defense efforts are motivated by the political
embarrassment subsequent to a series of hurricanes, most notably Katrina.
Unfortunately, public and political reaction to these events tends to end up
down very strange rabbit holes and has seldom led to serious, systematic
review of civil defense capabilities. What political will has come about is
repeatedly captured by the defense industrial complex and transformed into yet
another acquisition project that costs billions and delivers next to nothing.
What I'm saying is that nothing is likely to change. A single successful
national presidential alert will continue to be regarded as a major
achievement, and the most capable, reliable technology will continue to be
mild evolutions of systems developed prior to 1980.
All of this pessimism aside, next time I return to the topic of civil defense I
would like to look at its most pessimistic aspect---the part that McNamara
believed to be worth the money. We'll learn about the Federal Relocation Arc
and the National Relocation Program. Naturally with a focus on telecom.
 An obviously interesting question is "what came before NAWAS?" It's hard to
say, and very likely there is no one answer, as the Civil Defense
Administration, DOD, and various state and regional authorities had all stood
up various private-line telephone networks. This includes federal initiatives
such as the "lights and bells" warning system by AT&T which are fairly well
documented, but also a lot of things only vaguely referred to by historians who
seem to actually know very little about the context. Case in point, this
piece from the
Kansas Historical Society which repeats the myth of the Washington-Moscow
hotline as a red phone while giving no useful information about the artifact.
It appears very much like an early 1A2 key system instrument, and the pre-911
emergency number sticker strongly suggests it was just used with the plain-old
telephone system. At the time, a red handset was commonly used to indicate a
"hotline" in the older sense of the term, that is, a no-dial point-to-point
link. This wasn't a feature of the 1A2 system but 1A2 did offer an intercom
feature that this phone may have been left connected to.
 In a four-wire telephone line, audio in and out (microphone and speaker)
are carried on separate pairs. This is generally superior and has long been
used within telephone exchanges and long-distance lines, because the "hybrid"
transformer which allows for both functions on one pair is a source of
distortion and is prone to issues with echos and signal path loops. Moreover,
it inevitably mixes the audio each way. On a typical telephone this just leads
to "sidetone" which is now considered a desirable property, but for an intercom
system with many stations simultaneously active it becomes a tremendous problem
as not just the signal but its poorer-quality "echo" from each hybrid
transformer ends up being amplified. Two-wire lines are generally run to homes
and businesses simply due to the lower materials cost, but for "large-area
intercom" systems such as NAWAS, four-wire connections are used. Really the
whole thing is somewhat technical and requires some EE, but in general
four-wire private lines tend to be used for either very quality-critical
applications (e.g. between radio studios) or intercom/squawk box installations
(e.g. between control rooms). Obviously intercom over private line is not very
common due to the high cost, but emergency operations are a common application.
This whole issue of two-wire vs. four-wire telephone connections becomes
extremely important in the broadcasting industry, where "hybrid" has its own
specific meaning to refer to a sort of "un-hybrid" transformer which separates
the inbound and outbound audio again to help isolate the voice of the host from
returning via the inbound telephone path. Of course doing this by simple
electrical means never works perfectly, and modern broadcast hybrids employ DSP
methods to further reduce the problem. This is all another reason that ISDN
telephones have found an enduring niche in radio journalism.
 Weather alerts aren't always a matter of life and death; sometimes they are
more simply practical. I've twice had cars damaged by the severe hail storms we are
prone to, and prompt attention to a severe thunderstorm alert gives an
opportunity to move cars under cover. Considering the cost of bodywork the
Weather Radio receiver can pay for itself very quickly.
A little while ago I talked about
CONELRAD, and how its active
denial component was essentially too complex to actually be implemented, so it
was reduced to only serving as an emergency broadcasting system. This is not to
say that CONELRAD was a failure, or at least not entirely. CONELRAD is the
direct ancestor of today's Emergency Alert System, which does serve an
important and useful role.
Like most government initiatives, though, it is tremendously complex and has
had a very rocky path to its present capability. Let's take a look at the post-
CONELRAD history of emergency broadcasting in the US, and how it works today.
It was not always obvious that radio was the best way to disseminate emergency
information. It had two main shortcomings: first, there were tactical
disadvantages to operating radio stations during a military emergency.
Second, receiving an alert by radio required that there be a radio turned on
somewhere nearby. This was not at all guaranteed, and in a case where minutes
mattered presented a significant problem.
"Minutes", after all, was generous. Military and Civil Defense officials
prominently demanded an alerting timeline (from origination to the entire
public) of just thirty seconds.
alternatives to radio
Two major alternate emergency warning strategies have existed to overcome these
downsides of radio: First, sirens. Sirens require no special equipment or
preparation to receive and so are an ideal wide-area alerting system, but they
were very expensive to maintain in the civil defense administration era
(especially in more sparsely populated areas, some sirens were even driven by
diesel engines... you can imagine the maintenance headaches). As a result,
while many larger towns and cities had siren systems at the peak of the Cold
War, today wide-area siren systems are uncommon outside of regions prone to
tornadoes, and more recently, parts of the West Coast due to tsunami hazard.
The second strategy is a wired system. We have previously talked about wired
radio in the
context of public broadcasting. A very limited wired broadcast system was
proposed for the US, called the National Emergency Alarm Repeater or NEAR. NEAR
consisted of a small box plugged into an outlet. In the case of an emergency,
an extra 270Hz tone was modulated onto the normal 60Hz AC power lines, which
would cause the NEAR 'repeaters' to sound a buzzer.
That's it. Not much of a broadcast system, really, but rather a supplement to
sirens that would allow coverage in rural areas and ensure that alerts were
clearly audible indoors.
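As an aside, the receiver logic NEAR required really is minimal. Here's a
sketch in Python of the sort of decision a NEAR-style repeater had to make,
using the Goertzel algorithm to estimate the energy at 270Hz in a block of
samples taken from the power line. The sample rate and threshold are my own
arbitrary choices for illustration, not anything from the actual NEAR design.

    import math

    def goertzel_power(samples, sample_rate, target_hz):
        """Estimate signal power at one frequency (Goertzel algorithm)."""
        n = len(samples)
        k = round(n * target_hz / sample_rate)  # nearest DFT bin
        coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    def near_buzzer_active(samples, sample_rate=6000, threshold=1000.0):
        """True if the 270Hz alert tone is present on the (filtered) line."""
        return goertzel_power(samples, sample_rate, 270.0) > threshold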
Although NEAR reached an early implementation stage, with testing in small
areas and manufacturing of repeaters underway, it was never deployed at large
scale. Radio emergency broadcasting was viewed as superior, mainly because of
the ability to deliver instructions. The problem of radio broadcasting not
reaching the many individuals who were not presently listening to the radio is,
to be honest, one that was never meaningfully addressed until the last few
years. But I am getting ahead of myself.
the Emergency Broadcasting System
In 1963, the Emergency Action Notification System (EANS) was activated. EANS is
almost exclusively referred to by its later name, the Emergency Broadcast
System, but it's important to know that it was originally named EANS. In the
context of the United States Government, "Emergency Action" has long been
specifically a euphemism for nuclear war. Emergency action was first, and other
types of emergency were added to the national alerting regime only later.
There is some ambiguity as to whether EBS was a Federal Communications
Commission (FCC) system or a Civil Defense Administration (CD) system. The
answer is some of both; the system was designed and operated by the FCC based
on a requirement, and under authority, from CD. This ambiguity in emergency
alert systems remains to this day, although the Civil Defense Administration
has, through a very circuitous path, become a component of the Federal
Emergency Management Administration (FEMA). A good portion of the ongoing
problems with these initiatives relates to this problem: the Federal Government
has never done an adequate job of placing emergency alerting under a central
authority, which has always led to competing interests and resource conflicts.
That's a lot about the bureaucracy, but what about the Emergency Broadcasting
System itself?
The EBS was organized into a tree-like structure. At the top were two
"origination points," originally a primary and alternate but later equal.
The identity of the origination points varied over the life of the system but
were typically a relevant military center (Air Defense Command, CONAD, NORAD)
and a relevant civilian center (CD, FEMA, and the many acronyms that came in
between). We are talking, here, about physical locations---two of them. In the
early '60s both the culture of national defense and the technology were not
amenable to a substantially redundant system.
At the time, the two origination points were not intended to issue alerts on
their own, but rather on behalf of the President. So, in a way, there was
one true origination point: the President, wherever they were, would issue
the order, via the White House Communications Agency, to one of the origination
points. This is one of the reasons (the more significant being reprisal itself)
that the President, as they traveled, was always to be in real-time
communication with the WHCA.
The origination points, upon receiving a bona fide order from the President,
would retrieve a codebook and use a teletype network (dedicated to this
purpose) to send the message and an authentication codeword to a number of
major radio and television networks. The same message, called an Emergency
Action Notification, was repeated onto the teletype networks of wire agencies
such as the Associated Press for further distribution.
Upon receiving such a message an operator at each of these networks would tear
open a red envelope issued to the networks quarterly and find the codeword for
the day. If the codewords matched, nuclear attack was imminent.
Activation details from this point varied somewhat by network and technology,
but in general these national media networks would initiate a corporate
procedure to direct all of their member stations to switch their program audio
(and video as relevant) to a leased line or radio link from the national
control center. This process was at least partially automated so that it could
be performed very quickly. These now-live national networks would then
broadcast an Attention Tone.
The Attention Tone used later on, a combination of 853 and 960 Hz, is still
instantly recognizable by most Americans today. Although its purpose was, as we
will see, mostly technical, it was intentionally made to be unpleasant and very
distinctive so that listeners would associate it with the Emergency Broadcast
System and start to pay attention. This worked so well that the same Attention
Tone is still widely used by emergency alerting systems today (even as a
ringtone for WEA on most smartphones), although changes in the technology have
rendered it vestigial.
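The tone itself is easy to reproduce: it is just the 853 and 960 Hz sine waves
mixed at equal amplitude. Here's a quick Python sketch that writes it to a WAV
file; the eight-second duration is arbitrary on my part, as actual activations
have used various lengths.

    import math
    import struct
    import wave

    RATE = 8000   # sample rate in Hz
    SECONDS = 8   # arbitrary; real activations varied in length

    with wave.open("attention.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(RATE)
        frames = bytearray()
        for i in range(RATE * SECONDS):
            t = i / RATE
            # Equal mix of the two Attention Signal frequencies.
            s = 0.5 * math.sin(2 * math.pi * 853 * t) + \
                0.5 * math.sin(2 * math.pi * 960 * t)
            frames += struct.pack("<h", int(s * 0.8 * 32767))
        w.writeframes(bytes(frames))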
The Attention Tone was recognizable not just to humans, but to electronics.
These national networks were only the first stage of the broadcast component of
EBS. Radio and television stations not associated with one of these major
national networks would have, at their control points, a dedicated receiver
(often more than one) tuned to stations operated by national networks. This
receiver's purpose was to recognize the Attention Tone and at least sound an
alarm in the control room, and later on automatically switch program audio (and
in some cases video) to the received station in order to simply repeat the
message.
In this way, the activation of the major national networks cascaded through the
radio and television industry until every AM, FM, and OTA television station
was broadcasting the same message.
The national networks were expected to broadcast pre-scripted messages until
they received more specific instructions; a typical script went: "We interrupt
this program. This is a national emergency. The President of the United States
or his designated representative will appear shortly over the Emergency
Broadcast System."
EBS was functional and, besides one major gaffe involving an accidental
activation due to operator error, encountered few serious problems. As a result it
had a long life, remaining in service well into the computer age. The major
limitation of EBS was its highly centralized structure: messages were to
originate only with the President. This was a logistical challenge for alerts
besides nuclear war, and prevented the use of the system to address major
emergencies in smaller areas. The similarly named Emergency Alert System made
use of similar technology, but more flexible policy, to address these
limitations.
the Emergency Alert System
In 1997, the Emergency Alert System replaced EBS. Like EBS, EAS was a project
of the FCC and FEMA, but added the National Oceanic and Atmospheric
Administration (NOAA). NOAA's involvement, being the parent agency of the
National Weather Service, was the foundation of EAS's larger scope: EAS was
intended not only for military conflict but also for non-military civil
emergencies such as severe weather.
Technologically, the EAS is largely similar to the EBS, but with expanded use
of digital signaling and a more flexible hierarchy that allows messages to be
distributed in a more targeted way.
When you think of the Attention Tone today, you probably think of it as
accompanied by three buzzes. You can hear an example
here. Those three buzzes,
like the Attention Tone originally, are not intended for human consumption.
They're actually brief FSK packets containing a digital message in the
Specific Area Message Encoding, or SAME. As the name suggests, the main
feature of SAME is that it contains a list of locations---expressed as
FIPS state and county IDs---to which the alert applies. This allows the
dedicated receivers in "downstream" stations to intelligently decide whether
or not the alert is applicable to the location they serve.
In addition, SAME headers include a code identifying the type of disaster,
which can be used for a variety of purposes such as for tornado siren
controllers to determine whether or not they should activate.
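To make the structure concrete, here's a sketch of a SAME header decoder in
Python. The field layout (originator, event code, PSSCCC location codes, purge
time, issue time, sender) follows the documented format, but the example
header and callsign at the bottom are made up for illustration.

    import re

    HEADER_RE = re.compile(
        r"ZCZC-(?P<org>\w{3})-(?P<event>\w{3})"
        r"(?P<locations>(?:-\d{6})+)"
        r"\+(?P<purge>\d{4})-(?P<issued>\d{7})-(?P<sender>[^-]+)-"
    )

    def parse_same(header):
        m = HEADER_RE.match(header)
        if not m:
            raise ValueError("not a SAME header")
        locs = []
        for code in m["locations"].strip("-").split("-"):
            # PSSCCC: P = county subdivision, SS = state FIPS, CCC = county
            locs.append({"part": code[0], "state": code[1:3],
                         "county": code[3:6]})
        return {
            "originator": m["org"],         # e.g. WXR = weather service
            "event": m["event"],            # e.g. TOR = tornado warning
            "locations": locs,
            "purge_hhmm": m["purge"],       # valid time, HHMM
            "issued_jjjhhmm": m["issued"],  # Julian day + UTC time of issue
            "sender": m["sender"],
        }

    # Hypothetical example: a tornado warning for Bernalillo County, NM.
    print(parse_same("ZCZC-WXR-TOR-035001+0030-1051700-KABQ/NWS-"))

A downstream ENDEC makes its carry/ignore decision by checking whether any of
those parsed location codes fall within the area its station serves.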
EAS also adds more flexible options for broadcast stations. The technical
device used by stations to receive and inject EAS messages, called an ENDEC, is
computerized and configurable. It can be combined with other equipment to allow
some stations to inject only a brief message (which may be in the form of a
text crawl over the normal program feed for television stations) directing
listeners to a different station to receive more detailed information.
The biggest change in EAS, though, is the origination of messages. EAS messages
enter the broadcast realm through Primary Entry Point radio stations, which are
typically major network-operated radio stations with high transmit powers and
modest hardening against attack and disaster. PEP stations are fitted with
special equipment that can automatically receive an alert (and override the
program feed to transmit it) through various methods, but originally through
FNARS.
FNARS is the FEMA National Radio System, a network of HF radio stations (using
the hybrid digital ALE protocol also used by the military) located at various
emergency command points. The primary control station for FNARS is located at
Mount Weather, FEMA's primary hardened bunker, and state OEMs and many
better-equipped county and city OEMs are connected to FNARS either directly or
through regional radio networks.
In modern applications, FNARS is complemented by IP delivery of messages, but
that's getting into a future topic.
This nationwide, multi-organization network allows EAS messages to be
originated by different Alerting Authorities at different
scopes. The President still has the ability to issue EAS messages to the entire
nation, but so can certain federal agencies and military centers under certain
circumstances (e.g. NORAD). Importantly, though, alerts can be issued for
entire states by the governor or a designee (such as a state director of
emergency operations), or at the county or city level by a relevant executive
or emergency operations official.
This makes EAS suitable for a wide variety of situations: not just nuclear
attack, but civil unrest, severe weather, major transportation disasters,
infrastructure emergencies (e.g. contaminated municipal water), etc.
By far the largest user of EAS is the National Weather Service, whose forecast
offices routinely issue EAS alerts. While these types of weather alerts are
usually associated with tornadoes, in my part of the country they more often
relate to flash flooding, large hail, or particularly severe wind and
lightning. The National Weather Service estimates that dozens of lives are
routinely saved by timely warnings of imminent severe weather.
the internet age
In most meaningful senses, EAS remains in service today. However, in the
technical sense of government funding, it has been replaced by something more
ambitious. The reality is that the expectation that alertees have a radio
turned on nearby has always been a problematic one, and broadcast radio and
television are generally declining in popularity.
To achieve rapid alerting, alerts must now be disseminated through more
channels than just broadcast stations. That's exactly the goal of the
Integrated Public Alert and Warning System, or IPAWS. I've already gone on
long enough, so let's talk about IPAWS next.
Teaser: there's even more radio involved!
 This is because civilian radio stations could be used as navigation aids by
enemy aircraft, helping them to locate major cities despite blackout. This
concern became obsolete as air navigation technology improved.
 To some degree tsunamis are a retrospective explanation: the state of
Hawaii and the city of San Francisco have maintained siren systems since the
Cold War and only more recently began to discuss tsunamis as a purpose. Mostly
they're still worried about "radiological attack," to quote the SF OEM.
 In Great Britain, a more complete wired broadcast system---including voice
messages---called HANDEL was installed in various government buildings, but was
not extended to homes or businesses. A rather accurate depiction of HANDEL is
seen in the 1984 film Threads, and in this YouTube
clip at 1:07 and again, in its alert state,
at 2:17, but if you are interested in the topics of civil defense and nuclear
war the entire film is required, albeit difficult, viewing.
 At the time, war, civil unrest, and weather represented essentially the
scope of the system. Earthquakes have only begun to fall into the scope of
emergency alerting very recently, which is interesting because the earthquake
scenario is actually much more challenging than nuclear attack: the potential
for lifesaving through early warning is tremendous, but seismic methods of
detecting earthquakes give warning only seconds before the destructive shaking
starts. Although some parts of the US have had earthquake warning systems for
a couple decades, they have seldom ever been backed by an alerting system
capable of delivering the warning before it is pointless.
Something I have long been interested in is time. Not some wacky
philosophical or physical detail of time, but rather the simple logistics
of the measurement and dissemination of time. How do we know what time it is?
I mean, how do we really know?
There are two basic problems in my proprietary model of time logistics: first
is the measurement of time. This is a complicated field because "time," when
examined closely, means different things to different people. These competing
needs for timekeeping often conflict in basic ways, which results in a number
of different precise definitions of time that vary from each other. The
simplest of these examples would be to note the competing needs of astronomy
and kinematics: astronomers care about definitions of time that are directly
related to the orientation of Earth compared to other objects, while kinematic
measurements care about time that advances at a fixed rate, allowing for
comparison of intervals.
These two needs directly conflict. And on top of this, most practical astronomy
also requires working with intervals, which has the inevitable result that most
astronomical software must convert between multiple definitions of time, e.g.
sidereal and monotonic. Think about that next time you are irked by time zones.
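For a taste of what that conversion burden looks like in practice, here's a
short sketch using the astropy library (assuming you have it installed; the
longitude is an arbitrary example), rendering one UTC instant both on a
uniform-rate scale and as sidereal time.

    from astropy.time import Time
    import astropy.units as u

    # One instant, defined in UTC.
    t = Time("2024-03-20 12:00:00", scale="utc")

    # The same instant on TAI, a uniform-rate scale with no leap seconds,
    # so differences between TAI stamps are true elapsed intervals.
    print(t.tai.iso)

    # And as local apparent sidereal time: an angle tracking the Earth's
    # rotation against the stars, which is what a telescope mount wants.
    print(t.sidereal_time("apparent", longitude=-106.6 * u.deg))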
The second problem is the dissemination of time. Keeping an extremely accurate
measurement of time in one place (historically generally by use of astronomical
means like a transit telescope) is only so useful. Time is far more valuable
when multiple people in multiple places can agree. This can obviously be
achieved by setting one clock to the correct time and then moving it, perhaps
using it to set other clocks. The problem is that the accuracy of clocks is
actually fairly poor, and so without regular synchronization they will drift
away from each other.
Today, I am going to talk about just a small portion of that problem: time
dissemination within a large building or campus. There is, of course, so much
more to this topic that I plan to discuss in the future, but we need to make
a beachhead, and this is one that is currently on my mind.
There are three spaces where the problem of campus-scale time dissemination is
clear: schools, hospitals, and airports. Schools often operate on fairly
precise schedules (the start and end of periods), and so any significant
disagreement of clocks could lead to many classes starting late. Hospitals rely
on fairly accurate time in keeping medical records, and disagreement of clocks
in different rooms could create inconsistencies in patient charts. And in
airports, well, frankly it is astounding how many US airports lack a sufficient
number of clearly visible, synchronized clocks, but at least some have figured
out that people on the edge of making a flight care about consistent clocks.
It is no surprise, then, that these types of buildings and campuses are three
major applications of central clock systems.
In a central, master, or primary clock system, there is one clock which
authoritatively establishes the correct time. Elsewhere, generally throughout a
building, are devices variously referred to as slave clocks, synchronized
clocks, secondary clocks, or repeater clocks. I will use the term secondary
clock just to be consistent.
A secondary clock should always indicate the exact same time as the central
clock. The methods of achieving this provide a sort of cross section of
electrical communications technologies, and at various eras have been typical
of the methods used in other communications systems as well. Let's take a look.
The earliest central clocks to achieve widespread use were manufactured by a
variety of companies (many around today, such as Simplex and GE) and varied in
details, but there are enough common ideas between them that it is possible to
talk about them generally. Just know that any given system likely varies a bit
in the details from what I'm about to describe.
Introduced at the turn of the 20th century, the typical pulse-synchronized
clock system was based on a primary clock, which was a fairly large case clock
using a pendulum, as this was the most accurate movement available at the time.
The primary clock was specially equipped so that, at the top of each minute, a
switch momentarily closed. Paired with a transformer, this allowed for the
production of a control voltage pulse, which was typically 24 volts, either DC
or AC.
In the simplest systems, the secondary clocks then consisted of a clock with a
much simplified movement. Each pulse actuated a solenoid, which advanced the
movement by one minute exactly, usually using an escapement mechanism to ensure
accurate positioning on each minute.
This system met the basic need: left running, the secondary clocks would
advance at the same rate as the master clock and thus could remain perfectly in
sync. However, only synchronization was ensured, not accuracy. This meant that
installation of a new system and then every power outage (or DST adjustment)
required a careful process of correctly setting each clock before the next
minute pulse. The system provided synchronization, but not automatic setting.
The next advancement made on this system was the hour pulse. A different pulse,
of a different polarity in DC systems or on AC systems often using a separate
wire, was sent at the top of each hour. In the secondary clocks, this pulse
energized a solenoid which "pulled" the minute hand directly to the 00
position. Thus, any accumulated minute error should be corrected at the top of
the hour. The clocks still needed to be manually set to the correct hour, but
the minutes could usually take care of themselves. This was an especially
important innovation, because it could "cover up" the most common failure mode
of secondary clocks, which was a gummed up mechanism that caused some minute
pulses to fail to advance the minute hand.
Some of these systems offered semi-automatic DST handling by either stopping
pulses for one hour or pulsing at double rate for one hour, as appropriate.
This mechanism was of course somewhat error prone.
The next obvious innovation was a similar mechanism to correct the hour hand,
and indeed later generations of these systems added a 12-hour pulse which used
a similar mechanism to the hour pulse to reset the hour hand to the 12 position
twice each day. This, in theory, allowed any error in a clock to be completely
corrected at midnight and noon.
Of course, in practice, the hour and 12-hour solenoids could only pull a hand
(or really gear) so far, and so both mechanisms were usually only able to
correct an error within a certain range. This kept slightly broken clocks on
track but allowed severely de-synchronized clocks to stay that way, often
behaving erratically at the top of the hour and at noon and midnight as the
correction pulses froze up the mechanisms.
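The behavior of these movements is simple enough to capture in a toy model.
This Python sketch is not any particular manufacturer's design, just an
illustration of the minute/hour/12-hour pulse scheme with the limited pull-in
range described above (the specific ranges are my own assumptions).

    class SecondaryClock:
        """Toy model of a pulse-synchronized secondary clock movement."""

        def __init__(self, shown_minutes=0, pull_in=5):
            self.shown = shown_minutes % 720  # hand position, min past 12:00
            self.pull_in = pull_in            # solenoid reach, in minutes

        def minute_pulse(self, stuck=False):
            # The common failure mode: a gummed-up movement skips a pulse.
            if not stuck:
                self.shown = (self.shown + 1) % 720

        def hour_pulse(self):
            # True time is now :00; pull the minute hand home if in reach.
            error = self.shown % 60
            if error > 30:
                error -= 60  # e.g. showing :58 means 2 slow, not 58 fast
            if abs(error) <= self.pull_in:
                self.shown = (self.shown - error) % 720

        def twelve_hour_pulse(self, max_hours_off=1):
            # True time is noon or midnight; pull the hour hand to 12.
            hours_off = min(self.shown, 720 - self.shown) / 60
            if hours_off <= max_hours_off:
                self.shown = 0

    clock = SecondaryClock(shown_minutes=3)  # running three minutes fast
    clock.hour_pulse()                       # top of the hour arrives...
    assert clock.shown == 0                  # ...and the hand is pulled home

A clock more than five minutes off, in this model as in the real systems,
simply stays wrong until a human intervenes.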
One of the problems with this mechanism is that the delivery of minute, hour,
and 12-hour pulses generally required at least three wires (minute and hour can
use polarity reversal), and potentially four (in the case of an AC system,
minute, hour, 12-hour, and neutral). These multiple wires increased the
installation cost of new systems and made it difficult to upgrade old two-wire
systems to perform corrections.
A further innovation addressed this problem by using a simple form of frequency
modulation. Such "frequency-synchronized" clocks had a primary clock which
emitted a continuous tone of a fixed frequency which was used to drive the
clock mechanism to advance the minute hand. For hour and 12-hour corrections,
the tone was varied. The secondary clocks detected the different frequency and
triggered correction solenoids.
Of course, this basically required electronics in the primary and secondary
clocks. In earlier versions these were tube-based, and that came with its own
set of maintenance challenges. However, installation was cheaper and it
provided an upgrade path.
These systems, pulse-synchronized and frequency-synchronized, were widely
installed in institutional buildings from around 1900 to 1980. Simplex systems
are especially common in schools, and many middle school legends of haunted
clocks can be attributed to Simplex secondary clocks with damaged mechanisms
that ran forwards and backwards at odd speeds at each correction pulse. Many of
these systems remain in service today, usually upgraded with a solid-state
primary clock. Reliability is generally very good if the secondary clocks are
well-maintained, but given the facilities budgets of school districts they are
unfortunately often in poor condition and cause a good deal of headache.
As a further enhancement, a lot of secondary clocks gained a second hand. The
second hand was usually driven by an independent and fairly conventional clock
mechanism, and could either be completely free-running (e.g. had no particular
relation to the minute hand, which was acceptable since the second hand is
typically used only for interval measurements) or corrected by the minute
pulse. In frequency-synchronized systems, the second hand could be driven by
the same mechanism running at the operating frequency, which was a simple
design that produced an accurate second hand at the cost of the second hand
sometimes having odd behavior during correction pulses.
The use of 24 volt control circuits was very common throughout the 20th century
and is still widespread today. For example, thermostats and doorbells typically
operate at 24vac. One 24vac control application that is not usually seen today
is the low-voltage light switch, which actuates a central relay rack to turn
building lighting on and off. These were somewhat popular around the
mid-century because the 24vac control wiring could be small gauge and thus very
inexpensive, but are rare today outside of commercial systems (which are more
often digital anyway).
Another interesting but less common pre-digital central clock technology relied
on higher frequency, low voltage signals superimposed on the building
electrical wiring, either on the hot or neutral. Tube-based circuits could
detect these tones and activate correction solenoids or motors. The advantage
of not running dedicated clock wiring was appealing, but these are not widely
seen... perhaps because of the more complex installation and code implications
of connecting the primary clock to the building mains.
Finally, something which is not quite a central clock system but has some of
the flavor is the AC-synchronized clock. These clocks, which were very common
in the mid-century, use a synchronous AC motor instead of an escapement. They
rely on the consistent 60Hz or 50Hz of the electrical supply to keep time.
These are no longer particularly common, probably because the decreasing cost
of quartz crystal oscillators made it cheaper to keep the whole clock mechanism
DC powered and electronically controlled. They can be somewhat frustrating
today because they often date to an era when the US was not yet universally on
60Hz, and so like the present situation in Japan, they may not run correctly if
they were originally made for a 50Hz market (a 50Hz movement on 60Hz power runs
20% fast, gaining about twelve minutes per hour). Still, they're desirable in my
mind because many flip clocks were made this way, and flip clocks are wonderful.
Semiconductors offered great opportunities for central clock systems. While
systems conveying digital signals over wires did exist, they quickly gave way
to wireless systems. These wireless systems usually use some sort of fairly
simple digital modulation which sends a complete timestamp over some time
period. The period can be relatively long since these more modern secondary
clocks were universally equipped with a local oscillator that drove the clock,
so they could be left to their own devices for as much as a day at a time
before a correction was applied. In practice, a complete timestamp every minute
is common, perhaps both because it is a nice round period and because it
matches WWVB (a nationwide time correction radio service which I am considering
out of scope for these purposes, and which is not often used for commercial
clock synchronization because indoor reception is inconsistent).
A typical example would be the Primex system, in which a controller transmits a
synchronization signal at around 72 MHz and 1 watt of power. The signal
contains a BPSK encoded timestamp. When Primex clocks are turned on, they
search for a transmitter and correct themselves as soon as they find one---and
then at intervals (such as once a day) from then on.
More in line with the 21st century, central clock systems can operate over IP.
In the simplest case, a secondary clock can just operate as an NTP client to
apply corrections periodically. These systems do certainly exist, but seem to
be relatively unpopular. I suspect the major problem is the need to run
Ethernet or deal with WiFi and the high energy cost and complexity of a network
stack and NTP client.
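For the curious, the exchange such a clock performs reduces to a single UDP
round trip. Here's a minimal SNTP sketch in Python; the server name is only an
example, and a real clock would handle timeouts and slew its display rather
than jumping it.

    import socket
    import struct
    import time

    # Seconds between the NTP epoch (1900) and the Unix epoch (1970).
    NTP_EPOCH_OFFSET = 2208988800

    def sntp_time(server="pool.ntp.org"):
        """Ask an NTP server for the time, as a Unix timestamp."""
        # 48-byte client packet: LI=0, version 3, mode 3 (client).
        packet = b"\x1b" + 47 * b"\x00"
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(packet, (server, 123))
            reply, _ = s.recvfrom(48)
        # The server's transmit timestamp is 32.32 fixed-point at offset 40.
        seconds, fraction = struct.unpack("!II", reply[40:48])
        return seconds - NTP_EPOCH_OFFSET + fraction / 2 ** 32

    print(time.ctime(sntp_time()))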
Today, secondary clocks are generally available with both digital and analog
displays. This can be amusing. Digital displays manufactured as retrofit for
pulse-synchronized systems must essentially simulate a mechanical clock
mechanism in order to track the correct time. Analog displays manufactured
for digital systems use position switches or specialized escapements to
establish a known position for the hands (homing) and then use a stepper motor
or encoder and servo to advance them to show the time, thus simulating a
mechanical clock mechanism in their own way.
In the latter half of the 20th century and continuing today, central clock
systems are often integrated with PA or digital signage systems. Schools built
today, for example, are likely to have secondary clocks which are just a
feature of the PA system and may just be LCD displays with an embedded
computer. The PA system and a tone generator or audio playback by computer
often substitute for bells, as well, which had previously usually been
activated by the central clock---sometimes using the same 24vac wiring as the
clock system.
Going forward, there are many promising technologies for time dissemination
within structures. LoRa, for example, seems to have obvious applications for
centralized clocks. However, the development of new central clock systems
seems fairly slow. It's likely that the ubiquity of cellphones has reduced the
demand for accurate wall clocks, and in general widespread computers make the
spread of accurate time a lot less impressive than it once was... even as the
mechanisms used by computers for this purpose are quite a bit more complicated.
Time synchronization within milliseconds is now something we basically take for
granted, and in a future post I will talk a bit about how that is
conventionally achieved today in both commercial IT environments and in more
specialized scientific and engineering applications. The keyword is PNT, or
Positioning, Navigation, and Timing, as multilateration-based systems such as
GPS rely
on a fundamental relationship of correct location and correct time, and thus
can be used to determine either given the other... or to determine both using
an awkward bootstrapping process which is thankfully both automated and fast in
modern GPS receivers (although only because they cheat).
 This seems like a somewhat bold statement to make so generally, considering
the low cost of fairly precise quartz oscillators today, but consider this: as
clocks have become more accurate, so too have the measurements made with them.
It seems like a safe assumption that we will never reach a point where the
accuracy of clocks is no longer a problem, because the precision of other
measurements will continue to increase, maintaining the clock as a meaningful
source of error.
 Because, due to a winding path from an idea I had months ago, I recently
bought some IP managed, NTP synchronized LED wall clocks off of eBay. They are
unreasonably large for my living space and I love them.
 This is all the more true in train stations, which generally operate on
tighter and more exact schedules, and train stations are indeed another major
application of central clock systems. The thing is that I live in the Western
United States, where we have read about passenger trains in books but seldom
seen them. Certainly we have not known them to keep to a timetable, Amtrak.
 This was not exactly true in practice, for example, Simplex systems
performed the hour and 12-hour pulses a bit early because it simplified the
design of the secondary clock mechanism. A clock behaving erratically right
around the 58th minute of the hour is characteristic of pulse-synchronized
Simplex systems applying hour and 12-hour corrections.
 Because the relatively low frequency of the 72MHz commercial band
penetrates building materials well, it is often used for paging systems in
hospitals. The FCC essentially considers Primex clocks to be a paging system,
and indeed newer iterations allow the controller to send out textual alerts
that clocks can display.