Here's an experiment: a briefer message about something of interest to me
that's a little different from my normal fare.
A day or two ago I was reading about something that led me to remember the
existence of this BuzzFeed News article
entitled "We Trained A Computer To Search For Hidden Spy Planes. This Is What
We Found."
I have several niggles about this article, but the thing that really put me
in a foul mood is their means of "training a computer." To wit:
Then we turned to an algorithm called the random forest, training it to distinguish between the characteristics of two groups of planes: almost 100 previously identified FBI and DHS planes, and 500 randomly selected aircraft.
The random forest algorithm makes its own decisions about which aspects of the data are most important. But not surprisingly, given that spy planes tend to fly in tight circles, it put most weight on the planes' turning rates. We then used its model to assess all of the planes, calculating a probability that each aircraft was a match for those flown by the FBI and DHS.
To describe this uncharitably: They wanted to identify aircraft that circle a
lot, so they used machine learning, which determined that airplanes that circle
a lot can be identified by how much they circle.
I try not to be entirely negative about so-called "artificial intelligence,"
but the article strikes me as a pretty depressing misapplication of machine
learning techniques. They went into the situation knowing what they were
looking for, and then used ML techniques to develop an over-complicated and not
especially reliable way to run the heuristic they'd already come up with.
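To underline just how little the machine learning is doing here, consider that the heuristic can be expressed as a one-line threshold. Everything below is invented toy data, not BuzzFeed's:

```python
# Toy data (entirely invented): mean turn rate in degrees per second.
surveillance_planes = [4.1, 3.8, 5.0, 4.6]
ordinary_planes = [0.3, 1.1, 0.7, 0.2]

# The "model": circling aircraft turn a lot, so threshold the turn rate.
def looks_like_spy_plane(turn_rate_deg_s, threshold=2.0):
    return turn_rate_deg_s > threshold

assert all(looks_like_spy_plane(t) for t in surveillance_planes)
assert not any(looks_like_spy_plane(t) for t in ordinary_planes)
```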
Anyway, this is an interesting problem for other reasons as well. The article
makes a case for the harm caused by persistent use of aerial surveillance by
police. If you'd seen NextDoor around here, you'd know that the
constant sound of the Albuquerque Police Department's helicopters is one of the
greatest menaces facing our society. It's an increasingly common complaint, and
one I sympathize with: although they're no longer keeping me up at night with
loudspeaker announcements, the frequency with which helicopters circle over my
house has been notably high.
Moreover, late last night I went for a walk and there was an APD helicopter
circling over me the entire time. You know, being constantly followed by
government helicopters used to be a delusion.
So, I decided to explore the issue a bit. I dropped a few dollars on
FlightAware's API, which they excitedly call "FlightXML" even though it returns
JSON by default, in order to retrieve the last week or so of flights made by
all three of APD's aircraft. I then trained a computer to identify circling.
No, actually, I wrote a very messy Python script that essentially follows the
aircraft's flight track dragging a 1.5nm x 1.5nm square around as the aircraft
bumps into the edges. Any time the aircraft spends more than six minutes in
this moving bounding rectangle, it deems the situation probable circling.
Experimentally I have found that these threshold values work well, although it
depends somewhat on your definition of circling (I chose to tune it so that
situations where the aircraft makes only one or two revolutions are
generally excluded). I plan to put this code up on GitHub but I need to
significantly clean it up first or no one will ever hire me to do work on
computers ever again.
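My actual script is, as I said, a mess, but the moving-box idea can be sketched like so. The track format and the quick flat-earth degree-to-nautical-mile conversion are simplifications I'm making for the sketch, not anything FlightAware gives you directly:

```python
import math

def _box_too_big(window, box_nm):
    # Size of the bounding box around the windowed points, in nautical miles
    # (one degree of latitude is 60 nm; longitude shrinks by cos(latitude)).
    lats = [p[1] for p in window]
    lons = [p[2] for p in window]
    lat_span = (max(lats) - min(lats)) * 60.0
    lon_span = (max(lons) - min(lons)) * 60.0 * math.cos(math.radians(lats[0]))
    return lat_span > box_nm or lon_span > box_nm

def detect_circling(track, box_nm=1.5, min_seconds=360):
    """track: chronological (unix_time, lat, lon) points.
    Returns merged (start, end) intervals of probable circling."""
    events = []
    window = []  # points currently inside the candidate box
    for point in track:
        window.append(point)
        # Drag the box along: drop old points until everything fits again.
        while _box_too_big(window, box_nm):
            window.pop(0)
        if window[-1][0] - window[0][0] >= min_seconds:
            start, end = window[0][0], window[-1][0]
            if events and start <= events[-1][1]:
                events[-1] = (events[-1][0], end)  # extend the last interval
            else:
                events.append((start, end))
    return events
```

Tuning box_nm and min_seconds is where the definition-of-circling judgment calls live.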
On the upside, maybe recruiters will stop emailing me because they "loved what
they saw on GitHub." Actually, maybe I should put it up right now, with a
readme which declares it to be my best work and a specimen of what I can achieve
for any employer who ever cold-calls me.
You can see the result.
Incidents of circling actually seem more evenly distributed through the city
than I had expected, although there is a notable concentration in the
International District (which would be unsurprising to any Burqueño on account
of longstanding economic and justice challenges in this area). Also interesting
are the odd outliers in the far northwest, almost Rio Rancho, and the total
lack of activity in the South Valley. I suspect this is just a result of where
mutual aid agreements are in place: Bernalillo County has its own aviation
department, but I don't think the Rio Rancho police do.
This is all sort of interesting, and I plan to collect more data over time (I
only seem to be able to get the last week or so of tracks from FlightAware, so
I'm just going to re-query every day for a few weeks to accumulate more). Maybe
the result will be informative as to what areas are most affected, but I think
it will match up with people's expectations.
On the other hand, it doesn't quite provide a full picture, as I've noticed
that APD aircraft often seem to fly up and down Central or other major streets
(e.g. Tramway to PdN) when not otherwise tasked. This may add to complaints of
low-flying helicopters from residents of the downtown area, but isn't quite
circling. Maybe I need to train a computer to recognize aircraft flying in a
straight line as well.
It would also be interesting to apply this same algorithm to aircraft in
general and take frequent circling as an indicator of an aircraft being owned
by a law enforcement or intelligence agency, which is essentially what BuzzFeed
was actually doing. I made a slight foray into this; the problem is just that,
as you would expect, it mostly identified student pilots. I need to add some
junk to exclude any detections near an airport or practice area.
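The "junk" in question doesn't need to be fancy; a great-circle distance check against a list of known airports and practice areas would do. The coordinates and radius here are rough values I'm using for illustration:

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    # Great-circle distance in nautical miles (mean Earth radius ~3440 nm).
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.065 * math.asin(math.sqrt(a))

# (lat, lon, exclusion radius in nm) -- rough illustrative values only.
EXCLUSIONS = [
    (35.040, -106.609, 5.0),  # KABQ, approximately
]

def near_airport(lat, lon):
    return any(haversine_nm(lat, lon, alat, alon) <= radius
               for alat, alon, radius in EXCLUSIONS)
```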
Anyway, just a little tangent about something I've been up to (combined with,
of course, some complaining about machine learning). Keep using computers to
answer interesting questions, just please don't write a frustrating puff piece
about how you've "trained a computer" to do arithmetic and branching logic.
With complete honesty, I hear a helicopter right now, and sure enough, it's
APD's N120PD circling over my house. I need to go outside to shake my fist at it.
 I would have preferred to use ADSBExchange, but their API does not seem to
offer any historical tracks. FlightAware has one of those business models that
is "collect data from volunteers and then sell it," which I have always found a
little distasteful.
 Some context here is that APD recently purchased a second helicopter
(N125PD). This seems to have originally been positioned as a replacement to
their older helicopter (N120PD), but in practice they're just using both now.
This has furthered complaints since it feels a little bit like they pulled a
ruse on the taxpayers by not selling the old one and instead just having more
black helicopters on the horizon. This is all in addition to their fixed-wing
aircraft.
 I will, from here on, be proclaiming in work meetings that I have "trained
a computer to rearrange the fields in this CSV file."
One of the most entertaining acronyms in the world of specialty computing
equipment is POS. Let's all get the chuckle out of the way now, in this
discussion we will be talking about Point of Sale.
For those not familiar with the term, POS refers to the equipment which is used
"at the point of sale" in retail and other businesses to track sales, handle
payment, and other business functions. In essence, a POS computer is a more
entitled cash register, and there is a very direct line of evolution from the
mechanical cash registers of days past (that made that pleasant cha-ching
sound) to the multi-iPad abominations of today.
I would like to talk a bit about how we got from there to here, and, in my
opinion, just how bad here is.
As with most things I discuss, IBM is an important component of POS history.
However, I will start by introducing yet another three-letter acronym into the
fray. We'll start with a company that is somewhat, but not radically, older
than IBM, and was focused on POS while IBM was busy tabulating census cards:
NCR.
NCR is one of those companies where the acronym officially no longer stands for
anything, but before they tried to modernize their branding it stood for
National Cash Register. As the name implies, NCR was an early giant in the
world of POS, and many of the old-timey mechanical cash registers you might
think of were actually manufactured by NCR or its predecessors, going back to
the late 19th century.
NCR entered the computing business around the same time as IBM, but with a
decidedly cash-register twist. Many of their computer products were systems
intended for banks and other large cash-handling institutions that totaled
transactions, since computers of the day were largely too expensive to
be placed in retail stores. They did, however, make general-purpose computers
as well, mostly as a result of an acquisition of a computer builder.
NCR's early machines like the Post-Tronic are actually interesting in that they
were basically overbuilt electromechanical calculators designed to allow a
banker to post transactions to accounts very quickly, by keying in transactions
and getting a report of the account's new state. This sped up the end-of-day
process for banks appreciably. I like these kinds of machines since they take
me back to my youthful education in double-entry accounting, but unfortunately
it's not that easy to find detailed information about them since their lifespan
was generally fairly short.
I hope to one day write about the posting machines used by hotels,
electromechanical calculators that at the late stage read punched cards
reflecting various charges to a person's account (room, restaurant, etc) and
updated the room folio so that, when the customer checked out, a report of all
of their charges was ready. This greatly accelerated the work of the hotel
clerks and the night auditor who checked their work; in fact, it accelerated
the work of the night auditor so much that I understand that in many hotels
today the title "night auditor" is almost purely historic, and that staff
member simply runs the front desk at night and has no particular accounting
duties. The problem is that these wondrous hospitality calculators were niche
and had a short lifespan so there's not really a lot out there about them.
Anyway, back to the point of this whole thing. NCR racked up a number of
interesting achievements and firsts in computing, including being a key
developer of OCR and barcode reading and inventing the ATM. More relevant
to my point though, NCR was also an early innovator in electronic cash
registers.
It is also difficult to find especially detailed information about the very
early generation of electronic cash registers, but they were essentially the
same as the late-model mechanical cash registers but intended to be cheaper
and more reliable. For an early electronic cash register, there was usually
no networking of any sort. If central reporting was desired (as it very much
was in larger businesses for accounting), it was common for the cash register
to output a punched paper or magnetic tape which was later taken to a mainframe
or midcomputer to be read. That computer would then totalize the numbers from
all of the cash registers to produce a day-end closing report. This was an
improvement on having to read and key in the totals from the cash registers by
hand, but was not quite a revolutionary change to computer technology yet.
The situation becomes quite a bit more interesting with the introduction of
networking. Now, what we tend to think of as computer networking today is quite
a bit different from computer networking in the '80s when electronic cash
registers really became predominant. In this era, "network" usually meant the
connection between a terminal and a mainframe. Cash registers were not a whole
lot different.
Reading patent 4068213, covering some very early plastic payment technology
from the mid-'70s, we get some details: an NCR 255 cash register connected to an NCR
726 controller. Even within the patent text, the term cash register is somewhat
flexibly interchanged with terminal. Indeed, the architecture of the system was
terminal-and-mainframe: to a large extent, the actual POS system at the
cashier's station was merely a thin terminal which had all of its functions
remotely operated by the NCR 726, a minicomputer, which would be placed in the
back office of the store. The POS terminals were connected to the minicomputer
via a daisy-chained serial bus, and because the cash registers didn't really do
any accounting locally, all of the store-wide totals were continuously
available at the minicomputer.
As time passed, this made it possible to add extensive inventory control,
lookup, and real-time accounting functions to the POS, which were really all
performed at the central computer. This included things like looking up item
prices based on barcodes, handling charge cards, and validating returns and
exchanges.
This basic architecture for POS has persisted almost to the present day,
although I would like to return somewhat to my comfort zone and transition from
discussing NCR to IBM. In the mid-'80s, at perhaps peak computer POS, IBM
introduced an operating system creatively named 4680. 4680 was a microcomputer
operating system (based on a DOS written for the 286) that was specialized to
run on a relatively "thick" POS computer, such that much of the computation and
control was done locally. However, 4680 POS systems were intended to be in
constant communication with a mini- or mid-computer which ran an application
like IBM Supermarket Application to perform data lookup, accounting, and all
of the POS functions which required access to external data and communications.
4680 was replaced by the even more creatively named 4690, and 4690 is perhaps
one of the most influential POS systems ever designed. 4690 and its newer
versions were massively successful, and it is probably still in use in some places
today. In a typical installation, a set of 4690 POS systems (running on
hardware also provided by IBM) would be connected to an AS/400 or similar IBM
midcomputer running in the back office. The AS/400 would often have a telephone
or internet connection which allowed it to regularly report data up another
level to a corporate mainframe, and retrieve updated stock information.
The architecture of 4690 systems is highly typical of POS in large retail
environments to this day. A 4690 POS would be connected by multidrop serial bus
(one of the various IBM pseudo-standard network protocols) to the store
controller midcomputer. It would be connected via RS-232 serial to a thermal
receipt printer. In an odd twist, this RS-232 bus was also essentially
multidrop, as the printer would have a passthrough connection to the pole
display and the pole display was basically controlled by special-case messages
to the printer. The printer also, incidentally, had a simple electrical
connection to the cash drawer and triggered it opening. Details vary, but the
credit card terminal was typically also connected to the 4690 by serial.
All of this is basically how conventional POS are cabled today, except Ethernet
is usually used for the back-office connection and sometimes also for the
credit card terminal (which might also be USB). Serial is still dominant for
the printer and pole display in conventional systems.
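That printer-centric wiring is why the receipt printer still does so much: in the near-universal ESC/POS command language, the same device that prints also cuts the paper and fires the cash drawer. A minimal sketch, where the drawer pulse timing and the serial port name are assumptions:

```python
ESC, GS = b'\x1b', b'\x1d'

def receipt_job(text):
    # Compose one ESC/POS job: initialize, print a line, cut, pop the drawer.
    job = bytearray()
    job += ESC + b'@'                    # initialize printer
    job += text.encode('ascii') + b'\n'  # print the line
    job += GS + b'V' + b'\x00'           # full paper cut
    job += ESC + b'p' + b'\x00\x19\xfa'  # pulse the drawer-kick connector
    return bytes(job)

# Sending it is just a serial write (port name is an assumption):
# import serial
# with serial.Serial('/dev/ttyUSB0', 19200) as port:
#     port.write(receipt_job('TOTAL  $4.20'))
```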
IBM sold off their retail division to Toshiba, and Toshiba continues to develop
derivatives of the 4690 platform, although the POS side has essentially been
rewritten as a Linux application. Whenever you go to WalMart or Kroger or
another major retailer, take a look at the system the cashier operates. Not
many IBM-branded devices are still out there, although you might catch one;
more likely you will see a Toshiba microcomputer (usually under the desk)
connected to an unusual peripheral that consists of a set of pleasantly
clicky mechanical keys and a two-line matrix LCD display (although full-on
LCD touchscreens are becoming increasingly common at WalMart, and universal
at, for example, Trader Joe's; note that these touchscreen systems retain the
physical keys for numeric entry and common functions).
This whole system is, essentially, a modern 4690, using direct descendants of
the original hardware. That said, many of these systems today either run more
modern software from Toshiba (I believe still SuSE based although I am far from
certain) or, in larger retailers, a custom application developed internally by
the retailer. In fact, for large retailers, it is very common for nearly the
entire POS stack to be running custom software, from the actual computer to the
credit card terminal and even the printer. The vendors of this kind of hardware
offer an SDK for developing custom applications, and this is the reason why
physically identical credit card terminals made by Verifone or Ingenico often
offer a frustratingly inconsistent user experience. It doesn't help that
some of the terminal vendors have decided that their products are nearly
iPad-ish enough and introduced touchscreens of a size once reserved for
televisions, that retailers clearly have no idea what to do with.
I am told that some major retailers continue to use either an AS/400, a more
modern System i, or an AS/400 emulator on x86 to run the back-office system.
That said, there are now UNIX-based options (I mean all-caps UNIX, we are
often talking HP-UX or similar) that are presumably taking over.
So we've talked a bit about the technical history, which is likely striking you
as painfully arcane... and it is. We live in the era of ubiquitous
microcomputers and flexible, fast network protocols. These enable all kinds of
simpler and yet more capable architectures for these devices. And yet... let's
focus on the usability.
One thing you will likely have noticed is that retail POS are typically either
very fast (in the case of an experienced cashier) or very slow (in the case of
a new one). Interaction with a 4690-style POS is primarily through a barcode
reading weighscale and a set of relegendable buttons. While large color screens
are becoming more common, lots of retailers still stick to only a two-line text
display. The learning curve to operate these systems, especially in less common
cases, is fairly substantial.
And yet, they are very fast. For barcoded items any kind of interaction with
the user interface is seldom necessary. For non-barcoded items like produce,
once a lookup table is memorized it is a matter of keying in an item number
and weighing. Often there are one-press functions provided for operations
like repeating an item. There are few distractors, as there are little to no
"system notifications" or other software to interfere with the POS operation.
The POS has the property of being built for purpose. It contains the features
necessary for efficient POS operations, and no other features. It uses a
physical keyboard for entry because these can be operated quickly and by feel.
It expects the user to learn how to operate it, but pays out the benefit of
seldom ever providing any kind of prompt the operator needs to read or context
the operator needs to determine, allowing operation to become muscle-memory.
These are traits which are, today, thought of as archaic, obsolete, and perhaps
worst of all, unfashionable.
Compare to the "modern" POS, which consists of an iPad in a chunky mount. If
you are lucky, there is a customer-facing iPad Mini or even iPod touch, but
more often it is necessary to physically rotate the iPad around to face the
customer for customer interactions.
This is a system which is not built-for-purpose. It is based on commodity
hardware not intended for POS or even business use. It has few or no physical
peripherals, making even functionality as core as producing a printed receipt
something that many businesses with "modern" technology are not able to do.
Interaction with the system is physically clunky, with the iPad being spun
around, finger-tip signatures, and a touchscreen which is not conducive to
operation by touch or high-speed operation in general due to lack of a positive
tactile response. The user interface is full of pop-ups, dialogs, and other
modal situations which are confusingly not always directly triggered by the
user, making it difficult to operate by rote sequence of interactions. Even
worse, some of these distractors and confusers come from the operating system,
outside the control of the POS software.
All of this because Square either cannot afford to or has made a strategic
decision not to develop any meaningful hardware. It does keep prices down.
In many regards these iPad-based POS are inferior to the computer POS
technology of the 1980s. At the same time, though, they are radically
less expensive and more accessible to small businesses. Buying an iPad and
using the largely free Square POS app is radically easier to reach than buying
an AS/400 and a 4690 and hiring an expensive integration consultant to get any
of it to work---not to mention the licensing on the back-office software.
I make a point of this whole thing because it is an example of this philosophy
I have been harping on: the advance of technology has led to computers becoming
highly commodified. This has the advantage of making computing less expensive,
more accessible, and more flexible. The downside is that, in general, it also
makes computers less fit for purpose, because more and more applications of
computers consist of commodity, consumer hardware (specifically iPads) running
applications on top of a general-purpose operating system.
The funny thing is that the user experience of these newer solutions is often
viewed as being better, because they are far more discoverable and easier to
learn. There is obviously some subjectivity here, but I would strongly argue
that any system which a person interacts with continuously as a part of their
job (e.g. POS) should be designed first for speed and efficiency, and second
for learnability. Or at least this is what I repeat to myself every time the
nice lady at the bakery struggles to enter a purchase of three items, and then
I have to sign the screen with my finger.
I'm not necessarily saying that any of this has gotten worse. No, it's always
been bad. But the landscape of business and special-purpose computing is slowly
transforming from highly optimized, purpose-built devices (that cost a fortune
and require training to use) to low-cost, consumer devices running rapidly
developed software (that is slower to operate and lacks features that were
formerly considered core).
This change is especially painful in the POS space, because key POS features
like a cash drawer, printer, barcode reader, weighscale, and customer-facing
display are difficult to implement by taping iPads to things and so are often
absent from "modern" POS configurations, which has a significant deleterious
impact on efficiency and customer assurance. Amusingly, many of these
peripherals are completely available for iPad-based systems, but seldom used, I
suspect in part due to uneven reliability considering the iPad's limited
peripheral interface options.
There is technical progress occurring in the conventional POS space, with far
more retailers adopting full-color LCD interfaces and taking advantage of the
programmability of peripherals to offer features like emailed receipts. But as
much as parts of Silicon Valley feel that they are disrupting the POS space...
4690's creaky descendants are likely to persist well into the future.
Postscript: I am trying out not eschewing all social media again, and I am
using the new Pleroma instance at my janky computer operation waffle.tech.
Topics will be diverse but usually obscure. If you're a fediperson, take a
look.
 It is also usually painfully clear which retailers have invested in
developing good UX for their payment terminals (McDonalds), vs made a
half-assed effort (Walgreens), vs thrown their hands in the air and just
added a lot of red circles to an otherwise "my first Qt app" experience.
 Receipt printers are only supported by Square on iOS for some reason, and
Square is cagey about whether receipt printers not on their special list will
work. It obviously does support the industry-standard ESC/POS protocol but I
think the core issue is the lack of flexible USB support in iOS. Bluetooth
devices frequently have reliability issues and are too often battery-based.
IP-based peripherals are excessively expensive and can be painful to configure.
Somehow, POS peripherals have gone from eccentric daisy-chained RS-232 to a
hodgepodge of USB, Bluetooth, and IP options that is somehow even less
dependable.
 This also reflects a related shift in the computing industry I hope to
focus on in the future, which is that modern UX practices often do not really
account for users ever becoming good at anything. Many modern user interfaces
prioritize discoverability, ease of learning, and white space to such a degree
that they are downright painful once you have more than two hours of experience
with the product. I'm not very old at all and I remember using text-based
interfaces that were extremely fast once you learned to use them... that were
later replaced with webapps that make you want to pull your hair out by guiding
you and hiding functions in menus. This is all part of the "consumerization" of
business computing and increasing expectations that all software feel like an
iOS app made for first-launch engagement, even if it's software that you will
spend eight hours a day operating for the foreseeable future. A lot of software
people really get this because they prefer the power and speed of
command-line interfaces, but then assume professional users of their product
to be idiots who cannot handle having more than four options at a time. But now
I'm just ranting, aren't I? I'll talk about examples later.
While I have more radio topics to talk about, I think it'd be good to take a
break from the airwaves and get back to basics with computer topics. I've
mentioned before that one of the things I really enjoy are pre-IP network
protocols, from the era when the design of computer networks was still a
competitive thing with a variety of different ideas. One of the most notable
of the pre-IP protocols, as I've mentioned before, is the Xerox Network System
(XNS).
It is an oversimplification, but not entirely wrong, to say that XNS was
created by Bob Metcalfe, the creator of Ethernet, so that he had something to
use Ethernet for. In fact, XNS is an evolution of an earlier protocol (called
PUP but more adorably written Pup) which was designed by Metcalfe and David
Boggs for use with Ethernet as a demonstration. For reasons that are difficult
to understand now but tied to the context of the time, Xerox was not
particularly enthusiastic about Ethernet as a technology and Metcalfe found
himself fighting to gain traction for the technology, including by developing
higher-level protocols which took advantage of its capability.
This bit of history tells us two important things:
The widespread belief that IP and Ethernet are somehow designed
for each other is quite incorrect---in fact, if Ethernet "naturally" goes with
any one protocol and vice versa, that stack is Ethernet and XNS.
As has been seen many times in computer history, XNS's lack of popularity
with its corporate sponsors was, ironically, a major factor in its success.
Xerox's roots in more academic research (Metcalfe worked at Xerox PARC) and
Xerox's lack of vigor in commercializing the technology essentially led to it
being openly published as a research paper and then Xerox not doing a whole lot
else with it (using it only for a couple of less important projects). XNS was
viewed as academic rather than commercial, and that's how it escaped.
Xerox's lack of motivation to pursue the project was not shared by the rest of
the industry. After XNS was published, a number of other software vendors, and
especially designers of Network Operating Systems, picked it up as the basis of
their work. The result is that XNS was used in a variety of different network
systems by different vendors (although not always by that name), and that it
became quite influential in the design of later protocols, serving as a
"common denominator" among the many network systems based on it.
IP and XNS are largely contemporaries, the two having been under active
development during the same span of a few years. Both appear to incorporate
ideas from the other, in part because IP originated out of ARPANET which was
one of the biggest network projects of the time and the designers of XNS were
no doubt keeping an eye on it. There were also a couple of personal
relationships between designers of XNS and designers of IP, so it's likely
there were some notes exchanged. This is a powerful part of how these early
standards formed: people working in parallel and adopting similar ideas.
So let's talk about XNS. Wikipedia starts its explanation of XNS by saying that
"In comparison to the OSI model's 7 layers, XNS is a five-layer system, like
the later Internet protocol suite." The "later" here is a little odd and
depends on where exactly you set the milestones, but I like this start to the
design explanation because it emphasizes that both XNS and IP have little to do
with the OSI model.
As I like to repeat to myself under my breath on a daily basis, the widespread
use of the OSI model as a teaching device in computer networking is a mistake.
It leads students and instructors of computing alike to act as if the IP stack
is somehow defined by or even correlates to the OSI stack. This is not true.
The OSI model defines the OSI network protocols, which are an independent
network architecture that ultimately failed to gain the traction that IP did.
IP is different from the OSI stack in a number of intentional and important
ways, which makes attempts to describe the IP stack in terms of the OSI model
intrinsically foolish, and worse, confusing and misleading to students.
Anyway, given that the XNS stack has five layers (and NOT seven like OSI
adherents feel the need to tell you), what are those layers?
Physical (not defined by XNS, generally Ethernet)
Internet transport (the Internet Datagram Protocol, IDP)
Interprocess communications (SPP, PEP, and the error protocol)
Resource control
Application (not defined by XNS)
Layer 1 of XNS is the internet datagram protocol, or IDP. If this sounds kind
of similar to IP, it is, and beyond just the naming. There are some important
differences though, which are illuminating when we look at the eccentricities
of IP.
To start with, IDP makes use of Ethernet addressing. Sparing the details of
bits and offsets, IDP network addresses consist of the Ethernet (MAC) address
of the interface, a network number (specified by the router), and a socket
number. While the MAC address serves as a globally unique identifier, the
network number is useful for routing (so that routers need not know the
addresses of every host in every network). The socket number identifies
services within a given host, replacing the ports that we use in the IP stack.
That difference is particularly interesting to highlight: IP chooses to
identify only the host, leaving identification of specific services or sockets
to higher-level network protocols like TCP. In contrast, XNS
identifies individual sockets within IDP. As usual it's hard to say
that either method is "better" or "worse," but the decision IP made certainly
leads to some odd situations with regards to protocols like ICMP that do not
provide socket-level addressing.
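Sparing the exact bit offsets once more, the IDP address layout as I understand it is a 32-bit network number, the 48-bit Ethernet address as the host, and a 16-bit socket, which makes it easy to sketch (the example values here are made up):

```python
import struct

# XNS/IDP address: 32-bit network number, 48-bit host (the Ethernet MAC),
# and a 16-bit socket number -- twelve bytes in total, big-endian.
def pack_xns_addr(network, mac, socket):
    assert len(mac) == 6
    return struct.pack('>I6sH', network, mac, socket)

def unpack_xns_addr(buf):
    return struct.unpack('>I6sH', buf)

addr = pack_xns_addr(1, bytes.fromhex('02608c010203'), 0x0451)
assert len(addr) == 12
```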
Another interesting difference is that, while IDP allows for checksums, it does
not require them. This is an allowance for the fact that Ethernet provides
checksums, making bit-errors on Ethernet networks exceedingly rare. In
contrast, IP requires a checksum (but curiously only over the header), which is
effectively wasted computation on media like Ethernet that already provide an
integrity check.
To bring my grousing about IP full circle, these differences reflect two
things: First, IP was designed with no awareness of the addressing scheme that
is now virtually always used at a lower layer. Second, IP has a redundant
integrity scheme. Both are simply results of IP having not been designed for a
lower layer that provides these, while XNS was.
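For reference, the IP header checksum being grumbled about here is the standard one's-complement sum over 16-bit words from RFC 791 (nothing XNS-specific about this sketch):

```python
def ip_checksum(header: bytes) -> int:
    # One's-complement sum of big-endian 16-bit words over the IP header,
    # with the checksum field itself zeroed when computing; per RFC 791.
    if len(header) % 2:
        header += b'\x00'
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xffff) + (total >> 16)  # fold the carry back in
    return ~total & 0xffff

# Verifying a received header (checksum field included) must yield zero.
```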
At the next layer, the interprocess communications layer, XNS provides us with
options that will once again look fairly familiar. Sequenced packet protocol
(SPP) provides reliable delivery, while packet exchange protocol (PEP) provides
unreliable delivery. The design of these protocols is largely similar to TCP
and UDP, respectively, but of course with the notable difference that there is
no concept of port numbers, since that differentiation is already provided by IDP.
As more of a special case, there is the XNS error protocol, which is used to
deliver certain low-level network information in a way analogous to (but
simpler than) ICMP. The error protocol enjoys the advantage, compared to ICMP,
of being easily correlated to and delivered to specific sockets, since it has
the socket number information from IDP. This means that, for example, an XNS
implementation of "ping" on Linux would not require root (or rather raw socket) privileges.
The resource control layer in XNS is somewhat ill-defined, but was implemented
for example by Novell as essentially a service-discovery scheme filling a
similar role to UPnP, mDNS, etc. today. Resource control was not necessary for
the operation of an XNS network, but was useful for autoconfiguration scenarios
and implemented that way by many vendors. We can thus question whether or not
resource control really counts as a "layer" since it was not, in practice,
generally used to encapsulate the next layer, but everyone who teaches with the
OSI model is guilty of far greater sins, so I will let that slide. Sometimes it
is useful to view a protocol as occupying a "lower layer" even if it does not
encapsulate traffic, if it fulfills a utility function used for connection
setup. I am basically making excuses for ARP, here.
Application protocols are largely out of scope, but it is worth noting that
Xerox did design application layer protocols over XNS, which consisted
primarily of remote procedure call. This makes sense, as RPC was a very popular
concept in networking at the time, likely because it was closely analogous to
how terminals interacted with mainframes. Nowadays, of course, RPC tends to
make everyone slightly nauseous. Instead we have REST, which is analogous to
how something, uh, er, nevermind.
XNS is now largely forgotten, as all of the systems that implemented it failed
to compete with IP's ARPANET-born takeover. That said, it does have one curious
legacy still with us today. Routing Information Protocol (RIP), commonly used
as a "lowest common denominator" interior gateway protocol, was apparently
originally designed as part of XNS and later ported to IP.
I promised that I would say a bit about mobile data terminals, and now here we
are. This is an interesting topic to me for two reasons: first, it involves
weird old digital radio and network protocols. Second, MDTs have a weird
intersection with both paging and cellular data, such that I would present them
as being a "middle step" in the evolution from early mobile telephony (e.g.
IMTS car phones) to our modern concept of cellular networks as being data-first
(particularly VoLTE where voice is "over the top" of the data network).
To start with, what is a mobile data terminal (MDT)? An MDT is a device
installed in a vehicle and used by a field worker to interact with central
information services. Perhaps the best-known users of MDTs are police agencies,
which typically use MDTs to allow officers in their vehicles to retrieve motor
vehicle and law enforcement records, and sometimes also to write citations and
reports in an online manner (meaning that they are filed in a computer system
immediately, rather than at the end of the shift).
MDTs are not restricted to law enforcement, though. MDTs are also commonly used
by utility companies such as gas and electric, where GIS features are
particularly important to allow service technicians to view system diagrams and
maps. They are also commonly used by public transit agencies, taxis, and other
transportation companies, although these tend to be somewhat more specialized
devices with more limited capabilities---for example, a common MDT in public
transit scenarios is a device which reports position to dispatch, displays the
route schedule to the driver, and allows the driver to send a small number of
preset messages (e.g. "off schedule") to dispatch and see the response.
I'm more interested in the more "general purpose" MDTs which may, but do not
necessarily, run a desktop operating system such as Windows. Today, MDT
typically refers to a Toughbook or similar laptop computer which is equipped
with an LTE modem (sometimes external) and can be locked into a dock which is
hard mounted to the vehicle. Since most modern MDTs are just laptops, they can
typically also be removed from the vehicle and used in a portable fashion, but
that's a fairly new development.
There is also some slight terminology confusion to address before I get into
the backstory: the term "mobile data computer" or MDC is essentially synonymous
with MDT, and you may see it used instead in some cases. Handheld devices, on
the other hand, are largely a Whole Different Thing.
MDTs were, for the most part, invented by Motorola. Early MDTs had vacuum
fluorescent character displays, although they fairly quickly progressed to
CRTs. The classic Motorola MDT has a full keyboard, but is also equipped with a
number of "preset" buttons which send a given message to dispatch with a single
press. Early MDTs ran special-purpose operating systems which were presumably
very simple, and applications for them were largely custom-developed by
Motorola or an integrator.
So how did these things actually communicate? MDTs were a fairly common tool of
various municipal and utility agencies by the end of the 1970s, well before any
kind of cellular data network. Indeed, they may be the first instance of a
wide-area radio data network with more flexible capabilities than paging
systems, and in many ways they worked with infrastructure that was ahead of its
time---and also excessively expensive.
Various MDT data protocols have come and gone, but perhaps the earliest to be
significantly capable and widespread is a Motorola system called MDC-4800
(Motorola tended to prefer the term MDC), introduced in 1980. The "4800" in the
name is for 4800 bits per second, and the protocol, at a low level, is fairly simple.
Typically, a Motorola MDT would be connected to a "Vehicular Radio Modem,"
although in early MDTs the VRM was not necessarily viewed as a separate
product but rather part of the system. The VRM is essentially a VHF or UHF
two-way radio which has the discriminator output connected to a packet modem.
True to this description, many Motorola VRMs were closely based on contemporary
VHF/UHF radio models.
MDC-4800 moved 256-byte packets and the protocol had support for packet
reassembly into larger messages, although the messages were still fairly
constrained in length. In many ways it is a clear ancestor to modern cellular
data systems, being a packet-based radio data system intended for general
purpose computer applications.
Where MDC-4800 gets particularly interesting, though, is in its applications.
MDC-4800 was directly used by proprietary, semi-custom systems developed for
various MDT users. Much of MDC-4800's ubiquity, though, came from a
collaboration of Motorola and IBM. At the time, being the late '70s into the
early '80s, IBM was in possession of a large fleet of service technicians who
worked out of trucks, a substantial budget, and a limitless lust for solving
problems with their computers. IBM began a partnership with Motorola to
develop a futuristic computerized dispatch and communications system for their
service technicians, which would be based on Motorola MDTs.
In the course of developing a solution for IBM, Motorola developed an
integrated network system called DataTAC. DataTAC expanded on MDC-4800 to build
a multi-subscriber data network operating in the 800MHz band, and Motorola
partnered with various other organizations (mostly telcos) to establish DataTAC
as a generally available service. In the US, the DataTAC service was known as
ARDIS. ARDIS was widely used by MDT users of all stripes including municipal
governments and businesses, but it could also support pagers and in a clear
bridge to the modern era, early BlackBerry devices actually used ARDIS for
messaging and email. ARDIS continued to operate as a commercial service into
the late '90s and was upgraded to subsequent protocols to improve its speed.
DataTAC is often recognized as a "1G" cellular technology, for example by the
Wikipedia categories. This is a bit confusing, though, as for the most part
"1G" is synonymous with AMPS which was an analog, voice-only system. I believe
that it is only from a modern perspective that DataTAC would be put in the same
category as AMPS---the perspective that mobile telephony and data would become
a unified service, which was not nearly so obvious in the '80s or even '90s
when these were separate technologies offered by separate vendors as separate
product lines, and generally seen as having completely separate applications.
Coming full circle to my last message, it was pagers that seemed to "bridge the
gap," as relatively sophisticated pagers can and did operate on the ARDIS
network while still feeling like a "phone-ish" item.
ARDIS was later transitioned to using a protocol called RD-LAP, which was also
developed by Motorola for MDT use. RD-LAP was similar to MDC-4800 in many ways
except being faster, and so represented an evolutionary step rather than
revolutionary. However, RD-LAP stands out for having had an impressively long
lifespan, and while largely obsolete it is still seen today in various
municipal agencies that have not found the budget to modernize. RD-LAP is
capable of 19.2 kbps, which doesn't sound like much today but was quite
impressive for the time.
ARDIS was not alone in being an odd data service in an era largely seen as
before mobile data. ARDIS had a contemporary in Mobitex, which was developed in
Europe but also seen in the US. Mobitex was a centralized network with very
similar capabilities to ARDIS, and was particularly popular for two-way pagers.
Mobitex was also used by BlackBerry, and the fact that BlackBerries used
Mobitex and ARDIS in various models, perhaps the first wide-area radio data
protocols to exist, is a reminder of just how revolutionary the product once
was, considering RIM's total lack of relevance today.
Mobitex also saw significant use for MDTs, although in the US it was less
popular than ARDIS for this purpose. Mobitex seems to have been particularly
popular for in-the-field credit card processing, although I would not be the
least bit surprised if ARDIS credit card terminals were also made.
ARDIS and Mobitex represent an important early stage of modern cellular
networks, but also show some significant differences from the cellular networks
of today. Both systems were available as commercial services with nationwide
networks but were also often deployed on a local scale, especially by municipal
governments and, in some areas, state governments, for public safety and
municipal utility use. This remains surprisingly common today in the case of
municipal LTE (significant spectrum reserved for government use makes it
surprisingly easy for municipalities to launch private LTE networks and many do
so), but for the most part isn't something we think about any more, at least in
the business world. A large part of the popularity of MDC-4800 and RD-LAP in
particular is the fact that they could be deployed on existing business or
municipal band VHF or UHF allocations, making them fairly easy to fit into an
existing public safety or land mobile radio infrastructure.
About those VRMs, by the way: when Motorola began to offer VRM data modems as
independent products, they sported a serial interface so that they could
essentially be used as typical data modems by any arbitrary computer. This was
the transformation from MDTs as dedicated special-purpose devices to MDTs as
general-purpose computers that happen to have a data radio capability. Motorola
themselves made a series of MDTs which were really just Windows computers in
ruggedized enclosures with a touchscreen and a VRM.
Architecturally, MDT systems strongly showed their origins in the early era of
computing and the involvement of IBM in their evolution. Most of the software
used on MDTs historically and in many cases to this day really just amounted to
a textmode terminal that exchanged messages with a mainframe application, and
often with an IBM ISPF-type user interface full of function keys and a "screen
as message" metaphor.
MDTs and the data networks they used are an important but largely forgotten
development in mobile networks... are there others? Of course, quite a number
of them. One that I find interesting and worth pointing out is a technology
that also took an approach of merging telephony together with land mobile
radio technology: iDEN. Also developed by Motorola, iDEN was a cellular
standard ("2G"-ish) that was directly derived from trunking radio technology.
First available in '93, iDEN could carry phone calls much like AMPS (or more
like digital AMPS) but also inherited many of the features of a trunking radio
system, meaning that a group of iDEN users could have an ongoing push-to-talk
connection to a "channel" much like a two-way radio. This was particularly
popular with small businesses, which gained the convenience of two-way radio
dispatch without the cost of the equipment---it was built into their phones.
iDEN is, of course, recognized by most under the name Nextel, the carrier
which deployed a wide-scale iDEN network in the US. Nextel heavily advertised
its PTT functionality not just to businesses but also to chatty consumers.
Nextel television commercials and the distinctive Motorola roger beep are
deeply burned into my brain from many hours of childhood cable television,
and even as a kid I was pretty amazed by the PTT capability.
Nextel was of course merged with Sprint, and while the iDEN service continued
to exist for some time it was not of great interest to Sprint and was
officially terminated in 2013. This is actually rather sad to me, because
modern cellular networks are surprisingly incapable of offering the quality of
PTT service achieved by Nextel---modern PTT services are generally IP based and
suffer from significant latency and reliability problems.
So let's try to come back to the idea that I brought up in the previous post,
that commodification of technology also tends to eliminate special-purpose
technologies which ultimately reduces the utility of technology to many
use-cases. iDEN seems to me actually an exceptionally clear case of this: iDEN
solved a specific problem in a very effective way, making something akin to
two-way radio significantly more accessible to three-person plumbing companies
and teenagers alike. Ultimately, though, iDEN was not able to compete with the
far larger market share and rapidly improving data services of the GSM and
CDMA/IS-2000 family of standards, and it seems that simple market forces
destroyed it. Because the problem that iDEN solved well is actually a rather
hard problem (reliable, real-time delivery of voice with minimal or no
connection setup required, guaranteed bandwidth via traffic engineering being
the big part IP fails to deliver), it's one that has in a large way gone from
solved to unsolved in the last ten years. We have witnessed a regression of
cellular technology, which is particularly prominent to me since I've developed
an odd fascination with "Android radios" that are really just Android phones in
a handheld radio format (with push-to-talk button) and that there aren't really
many good ways to actually use.
What about the decline of Mobitex and ARDIS, though? To my knowledge Mobitex
actually remains available in the US to this day, but I believe ARDIS fizzled
out after Motorola divested the operation. I have a hard time shedding too many
tears for these services, because since they were basically just
packet-switched radio networks, modern cellular networks can outdo them on
essentially all metrics. Mobitex and ARDIS were generally more reliable than
modern cellular data, part of the reason they survived so long, but a lot of
this realistically is just due to their low data rates and low subscriber
counts. A Mobitex network weighed down by a city's worth of Instagram users would
presumably collapse just as much as LTE does at the state fair.
It is, however, notable that many of these older technologies were quite
amenable to being stood up by a government or company as a private network.
That's something that isn't especially easy for a private company to achieve
today, even if they for some reason wanted to (I want to). Most radio data protocols
that are available on an unlicensed basis or that it's reasonably easy to
obtain licenses for are either low-speed or require directional antennas and
short ranges. This is even true in the amateur radio world, although it's
fairly clear there that progress has been held back by the FCC's archaic rules
regarding data rates. I think that this essentially comes down to the strong
competition between cellular carriers meaning that any bandwidth freed for
broadband data applications ends up going for a high value at auction, not to
Joe Schmuck who put in a license application. Perhaps we're better off anyway
as shared networks (the cellular carriers) are presumably always going to be
more economical... but given the reliability and customer relationship issues
that mobile carriers often face it's not clear to me that the business world
wouldn't have more interest in private networks if they were reasonably attainable.
Low-power unlicensed LTE does present one opportunity, and I'll let you know
when I one day give in and buy a low-power LTE base station for my roof.
 By "screen as message" I refer to the interface design common among IBM
and other mainframe applications, and formalized in various standards such as
ISPF, in which the interface is screen-oriented rather than
line-oriented---meaning that the mainframe paints an entire screen on the
terminal, the user "edits" the screen by filling in form fields and etc, and
the terminal sends the entire screen back and waits for a new one. I actually
find this UI paradigm to be remarkably intuitive even in textmode (it is a
direct parallel to "documents" and "forms") and regret that, largely due to
simple technical limitations, the microcomputers that took over in the '90s
mostly discarded this design in favor of a line-oriented command shell. Web
applications are (or at least were, before SPAs) based on a very similar model
which I think shows that it has some enduring value, but it's very hard to find
any significant implementation in textmode today outside of a few Newt-based
Linux tools which suffer from Newt's UX being, frankly, an absolute mess
compared to IBM's carefully planned and researched UX conventions. Besides,
most of us still have 12 function keys, might as well actually use them.
To start with, thanks to a brief and uncomfortable brush with fame I have
acquired a whole lot more readers. As a result, I pledge to make a sincere
effort to 1) pay a little more attention when I mash my way through aspell
check on each post, 2) post more regularly, and 3) modify my Enterprise
Content Management System so that it is slightly more intelligent than cating
text files in between other text files. Because it's a good idea to only make
promises you can keep, I have already done the third and hopefully text now
flows more coherently on mobile browsers and/or exceptionally narrow monitors.
I am also going to start the process of migrating the mailing list to a service
that doesn't, amusingly, repeatedly bungle its handling of plaintext messages
(HTML is somehow easier for MailChimp), but rest assured I will move over the
subscriber list when I do that. I am still recovering from years of trauma with
Gnu Mailman as a mail server admin, so mass email sending is not something I
can approach easily.
I will not, however, make the ASCII art heading represent properly at narrower
than 80 characters. Some things are simply going too far.
So after the topic of odd scanner regulation around cellular phones, I wanted
to talk a little more about cellular phones. The thing is, I have decided not
to try to explain the evolution of current cellphone standards because it is
not something I have any expertise in and frankly every time I try to read
up on it I get confused. The short version is that there are two general
lineages of cellular standards that are (mostly incorrectly) referred to as
CDMA and GSM, but as of LTE the two more or less merged, differentiating CDMA
and GSM phones by what they fall back to when LTE is not available. Then 5G
happened which somehow made the situation much more complicated, and is
probably a government conspiracy to control our minds anyway.
On a serious note, one of the legacy cold war communications systems I am very
interested in is GWEN, the Ground Wave Emergency Network. GWEN was canceled
for a variety of reasons, one of which was an upswell of public opposition founded
primarily in conspiracy theories. The result is that it's hard to do much
research on GWEN today because you keep finding blogs about how cell towers are
actually just disguised GWEN sites being used to beam radiation into our homes.
GWEN itself has a slightly interesting legacy: it was canceled before it
achieved full capability but a number of sites had already been built. The more
coastal of those sites ended up being transferred to the Coast Guard which
reused them as stations for their new Differential GPS network, so a large
portion of DGPS sites were just disused GWEN sites. DGPS was replaced with
NDGPS which itself has recently been decommissioned, in part because the FAA's
Wide Area Augmentation System (WAAS) is generally superior and can also be used
for maritime purposes. So the GWEN sites have now died two deaths.
So we can all hopefully agree to call that tangent a segue to the topic I
really want to discuss: less-well-known terrestrial mobile communications
systems. Basically, I want to take the family tree of pagers and cellular
phones and call out a few lesser known members of the order.
Let's talk first about pagers. Pagers have largely died out, but I had the
fortune of working in a niche industry for a bit such that I carried one
around with me. Serious '80s drug dealer vibes. The basic idea of the pager
is very simple: there is a radio transmitter somewhere, and a bunch of people
carry around belt-pack pagers that beep at them when the radio transmitter
sends a message intended for them. In its late forms, it was essentially
a dedicated text-messaging infrastructure, although early pagers delivered no
payload at all (only the fact that a message existed) and later pagers, many
into the '90s, only delivered the phone number of the caller.
By the '90s, though, not only had alphanumeric pagers come about with full text
message capability, but two-way alphanumeric pagers had also been introduced
where it was possible to respond. Two-way pagers basically represent a weird
mutation on the way to modern cellphones and so I won't discuss them too
much, I'm more interested in the "pager" as distinguished by being a strictly
one-way device. This is, for example, the reason that I sometimes jokingly
refer to my phone as my pager: I detest typing on it so much that I often use
it as a one-way device, responding to messages I receive later when I'm at a computer.
Pagers have gone through a number of different technical evolutions, but most
modern pagers run a protocol called POCSAG. One of the reasons for widespread
standardization on POCSAG is that it is not uncommon for institutions to
operate their own private paging transmitters, so standardization is more or
less required to make a sale to any of these users (which today probably
represent most of the paging market). Understanding this requires commenting
a bit on another huge way that pagers are differentiated from cellular phones.
Modern cellular phones (and really all cellular phones if you use the term
strictly) employ a "presence" or "registration" system. Essentially, as you
walk the mean streets of suburban Des Moines your cellular phone is involved
in an ongoing dialog with base stations, and central systems at your provider
continuously keep track of which base station your cellphone is in contact
with. This way, whenever you get a call or text message or push notification,
the system knows which base station it should use to transmit directly to
your phone---it knows where you are.
Pagers, excepting some weird late-model pager-ish things, don't have any such
concept. The pager itself has no transmitter to advertise its whereabouts
(this is a large portion of why pagers remain in use today). Instead, every
page destined for a pager must be sent via every transmitter that that pager
might be near.
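The architectural difference can be sketched in a few lines of Python; everything here (the class names, the single-tower registration model) is a simplification for illustration:

```python
class CellularNetwork:
    """Phones register; the network delivers via one known base station."""
    def __init__(self, towers):
        self.towers = set(towers)
        self.location = {}  # phone -> tower it last registered with

    def register(self, phone, tower):
        self.location[phone] = tower

    def deliver(self, phone, message):
        return [self.location[phone]]  # transmit from one tower only

class PagingNetwork:
    """Pagers never transmit, so every page floods the coverage area."""
    def __init__(self, towers):
        self.towers = set(towers)

    def deliver(self, pager, message):
        return sorted(self.towers)  # transmit from every tower

cell = CellularNetwork({"A", "B", "C"})
cell.register("phone1", "B")
assert cell.deliver("phone1", "hi") == ["B"]

page = PagingNetwork({"A", "B", "C"})
assert page.deliver("pager1", "hi") == ["A", "B", "C"]
```

The flood-everything approach is what makes pagers both simple and hard to scale: airtime cost grows with coverage area, not with where the subscriber actually is.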
You can imagine that this poses inherent scalability limitations on pagers.
When you purchase pager service from a commercial provider, you generally
have to specify if you want "city," "regional," or "nationwide" service. This
really just determines whether they will transmit your pages in just your
city, throughout a state or other regional area, or nationwide. Nationwide
service is surprisingly expensive considering cellphone competition. Even
then, paging transmitters tend to only be located in more urban areas and so
coverage is poor compared to cellphones.
This limitation of pagers, though, is also an advantage. The simplicity of
the total paging system (just encode messages and send them out a
transmitter, no huge technology stack involved) encourages private paging
systems. In my area, hospitals and universities operate private paging
systems, and government facilities contract them out but still to a local,
small-scale scheme that is effectively private. They're particularly
popular with hospitals because an already-installed paging system is fairly
cheap to maintain, it's guaranteed to work throughout your building if you
put the transmitter on the roof (not something cellphones can always offer
in large, byzantine hospitals), and as long as your staff live reasonably
nearby their pagers will work at home as well.
So that's "what a pager is" at a bit of a technical level. More interesting
to me are some pager-adjacent devices, such as the Motorola MINITOR. MINITORs
are so popular with volunteer fire departments that you can pretty reliably
identify volunteer firefighters by the MINITOR on their belts, although the
nineteen bumper stickers and embroidered hat tend to give it away first.
So what is a MINITOR and how does it relate to a pager... this requires
getting a little bit into radio systems and the concept of a coded squelch.
Let's say that you are, example out of nowhere, a fire department. You have
a VHF or UHF FM radio system that you use to communicate between dispatch and
units. When dispatch receives an event they want to notify the units that
should respond, but they don't want to wake up the entire department. One
common way of achieving this is some manner of coded squelch. This is not the
only application of coded squelches (they're often used just as a way to
minimize false-positive squelch opens), but it's one of the most complex and interesting.
The idea is this: instead of a given radio just opening squelch (enabling
the speaker basically) when it receives a carrier, the radio will only open
the squelch when it receives a specific tone, series of tones, data packet,
or other positive indication that that radio is supposed to open squelch. By
programming different tone sequences into different radios, the dispatcher
can now "selective call" by transmitting tone sequences that open squelch only
for the specific units they wish to contact.
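As a toy model, and hedged as such: every radio hears every transmission, but only the one whose programmed code matches unmutes. The "code" here stands in for whatever tones or data packets the real scheme uses:

```python
class SelectiveCallRadio:
    def __init__(self, call_code):
        self.call_code = call_code
        self.squelch_open = False

    def receive(self, code):
        # Open squelch only on a positive match for our programmed code
        self.squelch_open = (code == self.call_code)

fleet = {name: SelectiveCallRadio(code)
         for name, code in [("engine1", "E1"), ("engine2", "E2"),
                            ("rescue1", "R1")]}

# Dispatch keys up with engine2's code; every radio hears the call...
for radio in fleet.values():
    radio.receive("E2")

# ...but only engine2 unmutes.
assert [n for n, r in fleet.items() if r.squelch_open] == ["engine2"]
```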
There are two major coded squelch systems used in public safety (actually there
are a ton but these are the two most widely seen on analog, non-trunking FM
systems): two-tone, also called Selcall, and Motorola MDC. Two-tone is the
format supported by MINITORs and probably the more common of the two because
it has more cross-vendor support, but it's also much more primitive than MDC.
The concept of two-tone selective calling is very simple and you can probably
guess from the name: Before a voice transmission, essentially as a preamble,
the transmitter sends two tones in sequence, each for about a second. Yes, this
takes a while, especially if calling multiple units, enough that it's not done
on key-up like MDC or many other selective calling schemes. Instead, the
dispatcher's radio console usually has a dedicated button that starts sending
tones and they have to wait until it's good and ready before they talk. It's
not uncommon to hear the dispatcher say something like "wait for tones" or
"tones coming" to warn others that things will be tied up for a bit.
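A real two-tone decoder also has to tolerate small frequency errors, so matching is done within a tolerance rather than exactly. A hypothetical sketch (the tone frequencies and the 2% tolerance are illustrative, not any vendor's actual spec):

```python
def two_tone_match(programmed_hz, received_hz, tol_pct=2.0):
    """True if the received tone sequence matches the programmed one,
    each tone within tol_pct percent of its expected frequency."""
    if len(received_hz) < len(programmed_hz):
        return False
    return all(abs(r - p) / p * 100 <= tol_pct
               for p, r in zip(programmed_hz, received_hz))

# Slightly-off tones still match our programmed pair...
assert two_tone_match([330.5, 1092.4], [330.0, 1090.0])
# ...but a different unit's tone pair does not.
assert not two_tone_match([330.5, 1092.4], [433.7, 1090.0])
```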
So how does this all relate to paging... the MINITOR and other devices like
it are basically handheld radios with the entire transmit section removed.
Instead, they are only receivers, and they are equipped with a two-tone
decoder. So if you are, say, a volunteer firefighter, you can carry a MINITOR
which continuously monitors the dispatch frequency but only opens squelch if
it receives a two-tone sequence indicating that the dispatcher intends to
activate a given group of volunteers. This is basically a paging system, but
simply "built in" as a side feature of the two-way FM radio system.
I'll also mention MDC briefly. MDC is a more sophisticated system that uses a
short FSK data packet as the selective calling preamble. This transmits quickly
enough that the radios simply send it every time the PTT is pressed. This
allows some more advanced features, for example, every time someone in the
field transmits the dispatcher's console can tell them the ID of the radio that
just transmitted. Auxiliary information in addition to addressing can also be
sent in the MDC preambles. MDC is also very popular in public safety and if
you've spent much time with a scanner you'll probably recognize the sound of
the MDC preamble. It's actually very common to mix-and-match these systems,
for example, some fire departments use MDC but also send Selcall tones when
dispatching, often specifically to trigger MINITORs.
Selective calling systems in public safety are often also used to trigger
outdoor warning systems such as sirens, which are of course one of my favorite
things. A surprising number of outdoor sirens used in tornado-prone areas, for
college campus public safety, etc. are just equipped with a radio receiver
monitoring a dispatch frequency for a specific selective call. This can
interact in amusing ways with "mixed" selective calling. I used to work on an
Air Force base with a fairly modern Federal Signal outdoor warning system.
When it played Reveille and Retreat each day it sounded fine, but when they
tested the emergency sirens one day a week you actually heard MDC and then
Selcall tones over the speakers before the siren. My assumption is that
regularly scheduled events like Reveille were played via a Federal Signal
digital system while emergency alerts went out over some force protection
dispatch frequency, and the "siren" speakers opened squelch in reaction to some
Federal Signal-specific preamble that was sent before the preambles used for
mobile radios. As another anecdote, the US military has the charming habit of
referring to all outdoor warning systems as "Giant Voice," which was the brand
name of a long-discontinued Altec Lansing system that had been very popular
with DOD. Other siren systems are triggered using telephone leased-lines, and
of course on modern systems there are options for cellular or other more
advanced data radio protocols.
There are also a number of other selective calling systems in use. Another
example I am aware of is a proposal among amateur radio groups called "long
tone zero," which suggests that persons experiencing an emergency should tune
to a nearby repeater and transmit a DTMF zero for several seconds. The idea is
that other radio amateurs who wish to be helpful, but don't want to keep their
ears glued to their radios (or, more likely, be woken up at night), can set up
a software or hardware detector for the zero digit and essentially use it as a
selective-calling scheme, with their radio (presumably with the volume cranked
to eleven) only opening squelch upon receiving a long-tone-zero. It's a clever
idea but to my knowledge not one that is widely enough implemented to be
particularly useful. Of course selective calling is also widely used to open
the squelch on repeaters but I find that less interesting.
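As a sketch of how a long-tone-zero detector might work: DTMF digit 0 is the 941 Hz + 1336 Hz tone pair, and the classic cheap way to check for a specific tone in an audio stream is the Goertzel algorithm. The sample rate, block size, energy threshold, and minimum duration below are all my own illustrative choices, not any standard implementation:

```python
import math

# DTMF digit 0 is the 941 Hz (row) + 1336 Hz (column) tone pair.
DTMF_ZERO = (941.0, 1336.0)

def goertzel_power(samples, rate, freq):
    """Signal power at one frequency, via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_long_tone_zero(audio, rate=8000, block_s=0.05, min_s=2.0):
    """True if both DTMF-0 tones dominate for at least min_s seconds."""
    block = int(rate * block_s)
    needed = int(min_s / block_s)
    run = 0
    for i in range(0, len(audio) - block + 1, block):
        chunk = audio[i:i + block]
        energy = sum(x * x for x in chunk) + 1e-12
        # Goertzel power for a pure tone scales as (N*A/2)^2, so the
        # threshold is compared against energy * N, not raw energy.
        ok = all(
            goertzel_power(chunk, rate, f) > 0.1 * energy * block
            for f in DTMF_ZERO
        )
        run = run + 1 if ok else 0
        if run >= needed:
            return True
    return False
```

A real detector would have to tolerate fading, off-frequency audio, and twist between the two tones, but this is the basic shape of the thing: a zero held down for a couple of seconds is about the easiest signal there is to pick out of squelch noise.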
A similar scheme that is oddly well-known to the public ear is employed by the
Emergency Alert System and NOAA All-Hazards Radio. Those "emergency weather
radio" receivers you buy at the store from the likes of Midland monitor an
All-Hazards Radio frequency but only open squelch when they receive a preamble
indicating that there is an emergency notification. Historically this was based
on a simple dual-tone scheme (the tones that are now used as the emergency
alert ringtone on most cellphones), but nowadays a digital scheme is used that
allows the radio to know the type of alert and area it applies to. This is
actually how EAS messaging is triggered on many television and radio stations
as well. I will devote a whole post some time to the history of the Emergency
Alert System in its various outdated and modern versions, because it's really
pretty interesting---and frankly I am amazed that incidents of unauthorized
triggering of EAS are not more common, as the measures in place to prevent it
are not particularly sophisticated.
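That digital scheme is SAME (Specific Area Message Encoding), and once the AFSK burst is demodulated the header is just an ASCII string, which is part of why unauthorized triggering seems so feasible. A rough sketch of parsing one (my regex is a simplification; real headers allow up to 31 location codes and have some quirks around the sender field):

```python
import re

# A (simplified) SAME header, as used by EAS and NOAA Weather Radio:
#   ZCZC-ORG-EEE-PSSCCC(-PSSCCC...)+TTTT-JJJHHMM-LLLLLLLL-
# ORG: originator (WXR = National Weather Service), EEE: event code
# (TOR = tornado warning), PSSCCC: FIPS location codes, TTTT: purge
# time (HHMM), JJJHHMM: issue time (Julian day + UTC), LLLLLLLL: sender.
SAME_RE = re.compile(
    r"ZCZC-(?P<org>[A-Z]{3})-(?P<event>[A-Z]{3})"
    r"-(?P<areas>\d{6}(?:-\d{6})*)"
    r"\+(?P<purge>\d{4})-(?P<issued>\d{7})-(?P<sender>[\w/ ]{8})-"
)

def parse_same(header):
    """Split a decoded SAME header string into its fields."""
    m = SAME_RE.match(header)
    if m is None:
        raise ValueError("not a SAME header: %r" % header)
    fields = m.groupdict()
    fields["areas"] = fields["areas"].split("-")
    return fields

# e.g. a tornado warning for FIPS area 035001 (Bernalillo County, NM):
alert = parse_same("ZCZC-WXR-TOR-035001+0100-1231820-KABQ/NWS-")
```

Note what's absent: there is no signature, no authentication of any kind. Any station that can get this string (plus the attention tones) onto the air has "triggered EAS."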
So that's one pager-adjacent thing. Let's talk a bit about a different
pager-adjacent thing, one that I know less about because it's more proprietary
and less frequently heard in the modern world: taxi and freight dispatch.
Several manufacturers used to build taxi dispatch systems that allowed for
individually addressed text messages to specific receivers installed in cabs.
This is basically a pager system but using a larger display, and the systems
were almost always two-way and allowed the cab driver to at least send a
response that they were on the way. The system in use and its technical details
tended to vary by area and it's hard for me to say too much in general about
them, other than that they have been wholly replaced today by cellular systems.
A system similar to taxi dispatch systems is ubiquitous in the freight
trucking industry, but is far more standardized. Qualcomm Omnitracs is an
integrated hardware and service product that places a small computer in the cab
of a truck which both reports telemetry and exchanges text messages between the
driver and dispatcher. The system has been bought and sold (I don't think
Qualcomm even owns it any more) and has been moved from technology to
technology over the years, but for most of its lifespan it has relied on a
proprietary satellite network. This gives it the advantage of being more
reliable in between urban areas than cellular, although the fact that the
system is now available in a cellular variant shows that this advantage is
getting slimmer. It's also the reason that a great many semi tractors
feature a big goofy radome, usually mounted behind the roof fairing. You just
don't see that kind of antenna on vehicles very often. Like most satellite
communications networks, Omnitracs relies on the messages being small and
infrequent (very low duty cycle) to make the service affordable to operate.
What I particularly like about the Omnitracs system (which seems to be widely
referred to by truckers as Qualcomm regardless of who owns it now) is that the
long near-monopoly it enjoyed, and probably its relationship to a big
engineering operation in Qualcomm, led to some very high quality hardware
design compared to what we expect from communications devices today. The system
was always designed to be usable on the road, and featured a dash-mounted
remote control and speech synthesizer and recognition (to hear and reply to
messages) long before these became highly usable on cellphones. The system also
integrates secondary features like engine performance management and even
guided pre-trip checklists. It's an example of what can be achieved if you
really put hardware and software engineering expertise into solving a specific
problem, which has become uncommon now that the software industry has realized
it is cheaper (at the expense of user experience, productivity, etc.) to solve
all problems by taping iPads to things. And that's the direction that freight
dispatch is increasingly going today, "integrated products" that consist of a
low-end Android tablet in a dashboard mount running some barely stable app that
is mostly just a WebView. And taxi dispatch barely even exists now because
Silicon Valley replaced the entire taxi with an iPhone app, which if you
think about it is kind of amazing and also depressing.
These two topics get very close to the world of mobile data terminals, and
that's what I'll talk about next. MDTs are car-mounted computers often used by
first responders and utility crews, and while nowadays "MDT" usually just means
a Panasonic Toughbook with an LTE modem (maybe for a municipal LTE network),
historically it referred to much more interesting systems that paired a
Panasonic Toughbook with a VHF/UHF data modem that relied on some eccentric
protocols and software stacks. One thing has never changed: Panasonic Toughbooks
are way overpriced, even on the government surplus market, which is why I still
don't have one to take apart.
So I'll talk a bit about MDTs and the protocols they use next, since in many
ways they're more the ancestors of our modern smartphones than actual phones.
So, is there any big conclusion we can draw from looking at these largely
"pre-cellular" (but still present today) wireless systems? I don't know. On the
one hand, in some ways these confirm one of my theses that increasing
commodification of software and hardware tends to make technology solutions
less fit for purpose rather than more. That is, technology devices today are
better only in certain ways, and worse in others: increasing abstraction,
complexity, and unification of design tends to eliminate features specific to
a given application (everything is an iOS and/or Android app now, and half of
those are really just websites) and to increase complexity for users (what was
once a truck dispatch system is now an Android tablet, with all the ways that
can go wrong).
At the same time, these effects tend to drive the cost of these devices down.
So you might say that everything from semi-truck dispatch to restaurant POS (a
favorite example of mine) is now more available but less fit for purpose.
This is one of the big themes of my philosophy, and is basically what I mean
when I say "computers are bad," so I hope to explore it more in this blog
newsletter thing. So next time, let's try to look at mobile data terminals
and dispatch systems under that framework---how is it that they have become
cheaper and more available, but at the same time have gotten worse at the
purpose they're intended for? But mostly we'll talk about some old radio data
protocols, because those are what I love.
Postscript: now that I have 100+ email subscribers and probably as many as ten
people who somehow still use RSS (I don't know, I don't really have any
analytics because I'm both lazy and ethically concerned about delivering any
trackers): what do readers find interesting about my rambles? What do they
want to hear more about? You can
always email me at email@example.com, or hit me up on Matrix at
@jesse:waffle.tech. If you send me an email I like enough I'll throw it in here
sometime like an old-fashioned letter to the editor. Like The Economist, if you
do not begin it "SIR -" I will edit it so that people think you did.
 Differential GPS is an interesting technique where a site with a known
location (e.g. by conventional survey techniques) runs a GPS receiver and then
broadcasts the error between the GPS fix and the known good location. The
nature of GPS is that error tends to be somewhat consistent over a geographical
region, e.g. due to orbital perturbations, so other GPS users in the area can
apply the reverse of the error calculated by the DGPS site and cancel out a
good portion of the systematic error. The FAA WAAS system was designed to
enable RNAV GPS approaches, basically aircraft instrument operations by GPS.
The main innovation of WAAS over DGPS/NDGPS is that the correction messages
are actually sent back to orbit to be broadcast by satellites and so are
available throughout North America.
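The arithmetic at the heart of the DGPS idea is almost trivially simple. A toy sketch, with made-up coordinates in a local east/north frame; note that a real DGPS or WAAS receiver applies corrections per-satellite to pseudoranges, not to a finished position fix:

```python
# Toy DGPS arithmetic in a local east/north frame, in meters.
REF_TRUTH = (1000.0, 2000.0)  # surveyed location of the reference station

def dgps_correction(ref_gps_fix):
    """Error the reference station observes: GPS fix minus surveyed truth."""
    return (ref_gps_fix[0] - REF_TRUTH[0], ref_gps_fix[1] - REF_TRUTH[1])

def apply_correction(rover_gps_fix, corr):
    """A nearby GPS user subtracts the broadcast error from its own fix."""
    return (rover_gps_fix[0] - corr[0], rover_gps_fix[1] - corr[1])

corr = dgps_correction((1003.2, 1998.5))   # station reads 3.2 m east, 1.5 m south of truth
rover = apply_correction((5503.2, 7298.5), corr)
# rover is now approximately (5500.0, 7300.0): the shared error cancels
```

The whole trick is in the assumption that nearby receivers share most of the same error, which holds well enough over a region that a single reference station (or, for WAAS, a network of them) meaningfully improves everyone's fix.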
 A huge downside to this is that POCSAG lacks any kind of security scheme.
I have personally found more than one hospital campus transmitting patient
name, DOB, clinical information, and occasionally SSN over an unencrypted
POCSAG system. My understanding is that, from a legal and regulatory
perspective, this is basically an accepted practice right now. Maybe Congress
will pass legislation against POCSAG decoders.
 I am not Richard M. Stallman, although I did once spend much more time than
I am comfortable with in his presence. I'm more like Jaron Lanier, I guess. My
concern isn't so much the freedom of your personal computer or whatever, but
rather that it feels like the
gateway drug to pivoting to a social network and creating a Data Science
department. When it comes to modern webdev, "DARE To Say No to SPAs and
Websockets." Oh god I'm going to design a T-shirt.