COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.
I have an MS in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.
You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.
Last but not least, please consider supporting me on Ko-Fi. Monthly supporters receive EYES ONLY, a special bonus edition that is lower effort and higher sass, covering topics that don't quite make it to a full article.
--------------------------------------------------------------------------------
Sometimes I think I should pivot my career to home automation critic, because I
have many opinions on the state of the home automation industry---and they're
pretty much all critical. Virtually every time I bring up home automation,
someone says something about the superiority of the light switch. Controlling
lights is one of the most obvious applications of home automation, and there is
a roughly century-long history of developments in light control---yet,
paradoxically, it is an area where consumer home automation continues to
struggle.
An analysis of how and why billion-dollar tech companies fail to master the
simple toggling of lights in response to human input will have to wait for a
future article, because I will have a hard time writing one without descending
into incoherent sobbing about the principles of scene control and the interests
of capital. Instead, I want to just dip a toe into the troubled waters of
"smart lighting" by looking at one of its earliest precedents: low-voltage
lighting control.
A source I generally trust, the venerable "old internet" website
Inspectapedia, says that low-voltage lighting
control systems date back to about 1946. The earliest conclusive evidence I can
find of these systems is a newspaper ad from 1948, but let's be honest, it's a
holiday and I'm only making a half effort on the research. In any case, the
post-war timing is not a coincidence. The late 1940s were a period of both
rapid (sub)urban expansion and high copper prices, and the original impetus for
relay systems seems to have been the confluence of these two.
But let's step back and explain what a relay or low-voltage lighting control
system is. First, I am not referring to "low voltage lighting" meaning lights
that run on 12 or 24 volts DC or AC, as was common in landscape lighting and is
increasingly common today for integrated LED lighting. Low-voltage lighting
control systems are used for conventional 120VAC lights. In the most
traditional construction, e.g. in the 1940s, lights would be served by a "hot"
wire that passed through a wall box containing a switch. In many cases the
neutral (likely shared with other fixtures) went directly from the light back
to the panel, bypassing the switch... running both the hot and neutral through
the switch box did not become conventional until fairly recently, to the
chagrin of anyone installing switches that require a neutral for their own
power, like timers or "smart" switches.
The problem with this is that it lengthens the wiring runs. If you have a
ceiling fixture with two different switches in a three-way arrangement, say in
a hallway in a larger house, you could be adding nearly 100' in additional wire
to get the hot to the switches and the runner between them. The cost of that
wiring, in the mid-century, was quite substantial. Considering how difficult it
is to find an employee to unlock the Romex cage at Lowe's these days, I'm not
sure that's changed that much.
There are different ways of dealing with this. In the UK, the "ring main"
served in part to reduce the gauge (and thus cost) of outlet wiring, but we
never picked up that particular eccentricity in the US (for good reason). In
commercial buildings, it's not unusual for lighting to run on 240v for similar
reasons, but 240v is discouraged in US residential wiring. Besides, the
mid-century was an age of optimism and ambition in electrical technology, the
days of Total Electric Living. Perhaps the technology of the relay, refined by
so many innovations of WWII, could offer a solution.
Switch wiring also had to run through wall cavities, an irritating requirement
in single-floor houses where much of the lighting wiring could be contained to
the attic. The wiring of four-way and other multi-switch arrangements could
become complex and require a lot more wall runs, discouraging builders from
providing switches in the most convenient places. What if relays also made
multiple switches significantly easier to install and relocate?
You probably get the idea. In a typical low-voltage lighting control system, a
transformer provides a low voltage like 24VAC, much the same as used by
doorbells. The light switches simply toggle the 24VAC control power to the
coils of relays. Some (generally older) systems powered the relay continuously,
but most used latching relays. In this case, all light switches are momentary,
with an "on" side and an "off" side. This could be a paddle that you push up or
down (much like a conventional light switch), a bar that you push the left or
right sides of, or a pair of push buttons.
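If it helps to see that control logic spelled out, here's a toy model in code.
To be clear, this is purely illustrative (the real thing is 24VAC pulses
energizing "on" and "off" relay coils, not software), and all the names are
mine, but it shows why momentary switches plus a latching relay make any number
of switch locations trivial.

```swift
// Toy model of a latching-relay lighting circuit. Illustrative only; the names
// and structure here are mine, not any real product's.

enum SwitchPulse {
    case on   // momentary press on the "on" side of the switch
    case off  // momentary press on the "off" side
}

// A latching relay holds its last commanded state without continuous coil power.
struct LatchingRelay {
    private(set) var contactsClosed = false  // closed contacts = light is on

    // A momentary pulse on either coil sets the state; releasing the switch
    // changes nothing, which is why any number of switches can share one relay.
    mutating func pulse(_ p: SwitchPulse) {
        switch p {
        case .on:  contactsClosed = true
        case .off: contactsClosed = false
        }
    }
}

// Every switch in the house just pulses the same relay, so "n-way" switching
// needs nothing more than another run of cheap low-voltage cable.
var hallwayLight = LatchingRelay()
hallwayLight.pulse(.on)    // pressed "on" at the front door
hallwayLight.pulse(.off)   // pressed "off" at the other end of the hall
print(hallwayLight.contactsClosed ? "light on" : "light off")  // light off
```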
In most installations, all of the relays were installed together in a single
enclosure, usually in the attic where the high-voltage wiring to the actual
lights would be fairly short. The 24VAC cabling to the switches was much
smaller gauge, and depending on the jurisdiction might not require any sort of
license to install.
Many systems had enclosures with separate high voltage and low voltage
components, or mounted the relays on the outside of an enclosure such that the
high voltage wiring was inside and low voltage outside. Both arrangements
helped to meet code requirements for isolating high and low voltage systems and
provided a margin of safety in the low voltage wiring. That provided additional
cost savings as well; low voltage wiring was usually installed without any kind
of conduit or sheathed cable.
By 1950, relay lighting controls were making common appearances in real estate
listings. A feature piece on the "Melody House," a builder's model home, in the
Tacoma News Tribune reads thus:
Newest features in the house are the low voltage touch plate and relay system
lighting controls, with wide plates instead of snap buttons---operated like
the stops of a pipe organ, with the merest flick of a finger.
The comparison to a pipe organ is interesting, first in its assumption that
many readers were familiar with typical organ stops. Pipe organs were,
increasingly, one of the technological marvels of the era: while the concept of
the pipe organ is very old, this same era saw electrical control systems
(replete with relays!) significantly reduce the cost and complexity of organ
consoles. What's more, the tonewheel electric organ had become well-developed
and started to find its way into homes.
The comparison is also interesting because of its deficiencies. The Touch-Plate
system described used wide bars, which you pressed the left or right side
of---you could call them momentary SPDT rocker switches if you wanted. There
were organs with similar rocker stops but I do not think they were common in
1950. My experience is that such rocker switch stops usually indicate a fully
digital control system, where they make momentary action unobtrusive and avoid
state synchronization problems. I am far from an expert on organs, though,
which is why I haven't yet written about them. If you have a guess at which
type of pipe organ console our journalist was familiar with, do let me know.
Touch-Plate seems to have been one of the first manufacturers of these systems,
although I can't say for sure that they invented them. Interestingly,
Touch-Plate is still around today, but their badly broken WordPress site
("Welcome to the new touch-plate.com" despite it actually being touchplate.com)
suggests they may not do much business. After a few pageloads their WordPress
plugin WAF blocked me for "exceed[ing] the maximum number of page not found
errors per minute for humans." This might be related to my frustration that
none of the product images load. It seems that the Touch-Plate company has
mostly pivoted to reselling imported LED lighting (touchplateled.com), so I
suppose the controls business is withering on the vine.
The 1950s saw a proliferation of relay lighting control brands, with GE
introducing a particularly popular system with several generations of fixtures.
Kyle Switch Plates, who sell replacement switch plates (what else?), list
options for Remcon, Sierra, Bryant, Pyramid, Douglas, and Enercon systems in
addition to the two brands we have met so far. As someone who pays a little too
much attention to light switches, I have personally seen four of these brands,
three of them still in use and one apparently abandoned in place.
Now, you might be thinking that simply economizing wiring by relocating the
switches does not constitute "home automation," but there are other features to
consider. For one, low-voltage light control systems made it feasible to
install a lot more switches. Houses originally built with them often go a
little wild with the n-way switching, every room providing light switches at
every door. But there is also the possibility of relay logic. From the same
article:
The necessary switches are found in every room, but in the master bedroom
there is a master control panel above the bed, from where the house and yard
may be flooded with instant light in case of night emergency.
Such "master control panels" were a big attraction for relay lighting, and the
finest homes of the 1950s and 1960s often displayed either a grid of buttons
near the head of the master bed, or even better, a GE "Master Selector" with a
curious system of rotary switches. On later systems, timers often served as
auxiliary switches, so you could schedule exterior lights. With a creative
installer, "scenes" were even possible by wiring switches to arbitrary sets of
relays (this required DC or half-wave rectified control power and diodes to
isolate the switches from each other).
Many of these relay control systems are still in use today. While they are
quite outdated in a certain sense, the design is robust and the simple
components mean that it's usually not difficult to find replacement parts when
something does fail. The most popular system is the one offered by GE, using
their RR series relays (RR3, RR4, etc., to the modern RR9). That said, GE
suggests a modernization path to their LightSweep system, which is really a
0-10v analog dimming controller that has the add-on ability to operate relays.
The failure modes are mostly what you would expect: low voltage wiring can
chafe and short, or the switches can become stuck. This tends to cause the
lights to stick on or off, and the continuous current through the relay coil
often burns it out. The fix requires finding the stuck switch or short and
correcting it, and then replacing the relay.
One upside of these systems that persists today is density: the low voltage
switches are small, so with most systems you can fit 3 per gang. Another is
that they still make N-way switching easier. There is arguably a safety
benefit, considering the reduction in mains-voltage wire runs.
Yet we rarely see such a thing installed in homes newer than around the '80s.
I don't know that I can give a definitive explanation of the decline of relay
lighting control, but reduced prices for copper wiring were probably a main
factor. The relays added a failure point, which might lead to a perception of
unreliability, and the declining familiarity of electricians means that
installing a relay system could be expensive and frustrating today.
What really interests me about relay systems is that they weren't really
replaced... the idea just went away. It's not like modern homes are providing a
master control panel in the bedroom using some alternative technology. I mean,
some do, those with prices in the eight digits, but you'll hardly ever see it.
That gets us to the tension between residential lighting and architectural
lighting control systems. In higher-end commercial buildings, and in
environments like conference rooms and lecture halls, there's a well
established industry building digital lighting control systems. Today, DALI is
a common standard for the actual lighting control, but if you look at a range
of existing buildings you will find everything from completely proprietary
digital distributed dimming to 0-10v analog dimming to central dimmer racks
(similar to traditional theatrical lighting).
Relay lighting systems were, in a way, a nascent version of residential
architectural lighting control. And the architectural lighting control industry
continues to evolve. If there is a modern equivalent to relay lighting, it's
something like Lutron QSX. That's a proprietary digital lighting (and shade)
control system, marketed for both residential and commercial use. QSX offers a
wide range of attractive wall controls, tight integration with Lutron's HomeSense
home automation platform, and a price tag that'll make your eyes water. Lutron
has produced many generations of these systems, and you could make an argument
that they trace their heritage back to the relay systems of the 1940s. But
they're just priced way beyond the middle-class home.
And, well, I suppose that requires an argument based on economics. Prices have
gone up. Despite tract construction being a much older idea than people often
realize, it seems clear that today's new construction homes have been "value
engineered" to significantly lower feature and quality levels than those of the
mid-century---but they're a lot bigger. There is a sort of maxim that today's
home buyers don't care about anything but square footage, and if you've seen
what Pulte or D. R. Horton are putting up... well, I never knew that 3,000
sq ft could come so cheap, and look it too.
Modern new-construction homes just don't come with the gizmos that older ones
did, especially in the '60s and '70s. Looking at the sales brochure for a new
development in my own Albuquerque ("Estates at La Cuentista"), besides 21st
century suburbanization (Gated Community! "East Access to Paseo del Norte" as
if that's a good thing!) most of the advertised features are "big." I'm
serious! If you look at the "More Innovation Built In" section, the
"innovations" are a home office (more square footage), storage (more square
footage), indoor and outdoor gathering spaces (to be fair, only the indoor ones
are square footage), "dedicated learning areas" for kids (more square footage),
and a "basement or bigger garage" for a home gym (more square footage). The
only thing in the entire innovation section that I would call a "technical"
feature is water filtration. You can scroll down for more details, and you get
to things like "space for a movie room" and a finished basement described eight
different ways.
Things were different during the peak of relay lighting in the '60s. A house
might only be 1,600 sq ft, but the builder would deck it out with an intercom
(including multi-room audio of a primitive sort), burglar alarm, and yes, relay
lighting. All of these technologies were a lot newer and people were more
excited about them; I bring up Total Electric Living a lot because of an
aesthetic obsession but it was a large-scale advertising and partnership
campaign by the electrical industry (particularly Westinghouse) that gave
builders additional cross-promotion if they included all of these bells and
whistles.
Remember, that was when people were watching those old videos about the
"kitchen of the future." What would a 2025 "Kitchen of the Future" promotional
film emphasize? An island bigger than my living room and a nook for every meal,
I assume. Features like intercoms and even burglar alarms have become far less
common in new construction, and even if they were present I don't think most
buyers would use them.
But that might seem a little odd, right, given the push towards home
automation? Well, built-in home automation options have existed for longer
than any of today's consumer solutions, but "built in" is a liability for a
technology product. There are practical reasons, in that built-in equipment is
harder to replace, but there's also a lamer commercial reason. Consumer
technology companies want to sell their products like consumer technology, so
they've recontextualized lighting control as "IoT" and "smart" and "AI" rather
than something an electrician would hook up.
While I was looking into relay lighting control systems, I ran into an
interesting example. The Lutron Lu Master Lumi 5. What a name! Lutron loves
naming things like this. The Lumi 5 is a 1980s era product with essentially
the same features as a relay system, but architected in a much stranger way. It
is, essentially, five three way switches in a box with remote controls. That
means that each of the actual light switches in the house (which could also be
dimmers) needs mains-voltage wiring, including a runner, back to the Lumi 5
"interface."
Pressing a button on one of the Lutron wall panels toggles the state of the
relay in the "interface" cabinet, toggling the light. But, since it's all wired
as a three-way switch, toggling the physical switch at the light does the same
thing. As is typical when combining n-way switches and dimming, the Lumi 5 has
no control over dimmers. You can only dim a light up or down at the actual
local control; the Lumi 5 can only toggle the dimmer on and off using the 3-way
runner. The architecture also means that you have two fundamentally different
types of wall panels in your house: local switches or dimmers wired to each
light, and the Lu Master panels with their five buttons for the five circuits,
along with "all on" and "all off."
The Lumi 5 "interface" uses simple relay logic to implement a few more
features. Five mains-voltage-level inputs can be wired to time clocks, so that
you can schedule any combination(s) of the circuits to turn on and off. The
manual recommends models, including one with an astronomical clock for
sunrise/sunset. An additional input causes all five circuits to turn on; it's
suggested for connection to an auxiliary relay on a burglar alarm to turn all
of the lights on should the alarm be triggered.
The whole thing is strange and fascinating. It is basically a relay lighting
control system, like so many before it, but using a distinctly different wiring
convention. I think the main reason for the odd wiring was to accommodate
dimmers, an increasingly popular option in the 1980s that relay systems could
never really contend with. It doesn't have the cost advantages of relay systems
at all; it will definitely be more expensive! But it adds some features over
the fancy Lutron switches and dimmers you were going to install anyway.
The Lu Master is the transitional stage between relay lighting systems and
later architectural lighting controls, and it also marks the end of relay
light control in homes. It gives an idea of where relay light control in homes
would have evolved, had the whole technology not been doomed to the niche zone
of conference centers and universities.
If you think about it, the Lu Master fills the most fundamental roles of home
automation in lighting: control over multiple lights in a convenient place,
scheduling and triggers, and an emergency function. It only lacks scenes, which
I think we can excuse considering that the simple technology it uses does not
allow it to adjust dimmers. And all of that with no Node-RED in sight!
Maybe that conveys what most frustrates me about the "home automation"
industry: it is constantly reinventing the wheel, an oligopoly of tech
companies trying to drag people's homes into their "ecosystem." They do so
by leveraging the buzzword of the moment (IoT, then voice assistants, and now, I
guess, AI?) to solve a basic set of problems that were pretty well solved at
least as early as 1948.
That's not to deny that modern home automation platforms have features that old
ones don't. They are capable of incredibly sophisticated things! But
realistically, most of their users want only very basic functionality: control
in convenient places, basic automation, scenes. It wouldn't sting so much if
all these whiz-bang general purpose computers were good at those tasks, but
they aren't. For the very most basic tasks, things like turning on and off a
group of lights, major tech ecosystems like HomeKit provide a user experience
that is significantly worse than the model home of 1950.
You could install a Lutron system, and it would solve those fundamental tasks
much better... for a much higher price. But it's not like Lutron uses all that
money to be an absolute technical powerhouse, a center of innovation at the
cutting edge. No, even the latest Lutron products are really very simple,
technically. The technical leaders here, Google, Apple, are the companies that
can't figure out how to make a damn light switch.
The problem with modern home automation platforms is that they are too
ambitious. They are trying to apply enormously complex systems to very simple
tasks, and thus contaminating the simplest of electrical systems with all the
convenience and ease of a Smart TV.
Sometimes that's what it feels like this whole industry is doing: adding
complexity while the core decays. From automatic programming to AI coding
agents, video terminals to Electron, the scope of the possible expands while
the fundamentals become more and more irritating.
But back to the real point, I hope you learned about some cool light switches.
Check out the Kyle Switch Plates reference and you'll start seeing these in
buildings and homes, at least if you live in an area that built up during the
era when they were common (the 1950s to the 1970s).
--------------------------------------------------------------------------------
Air traffic control has been in the news lately, on account of my country's
declining ability to do it. Well, that's a long-term trend, resulting from
decades of under-investment, severe capture by our increasingly incompetent
defense-industrial complex, no small degree of management incompetence in the
FAA, and long-lasting effects of Reagan crushing the PATCO strike. But that's
just my opinion, you know, maybe airplanes got too woke. In any case, it's an
interesting time to consider how weird parts of air traffic control are. The
technical, administrative, and social aspects of ATC all seem two notches more
complicated than you would expect. ATC is heavily influenced by its peculiar
and often accidental development, a product of necessity that perpetually
trails behind the need, and a beneficiary of hand-me-down military practices
and technology.
Aviation Radio
In the early days of aviation, there was little need for ATC---there just
weren't many planes, and technology didn't allow ground-based controllers to do
much of value. There was some use of flags and signal lights to clear aircraft
to land, but for the most part ATC had to wait for the development of aviation
radio. The impetus for that work came mostly from the First World War.
Here we have to note that the history of aviation is very closely intertwined
with the history of warfare. Aviation technology has always rapidly advanced
during major conflicts, and as we will see, ATC is no exception.
By 1913, the US Army Signal Corps was experimenting with the use of radio to
communicate with aircraft. This was pretty early in radio technology, and the
aircraft radios were huge and awkward to operate, but it was also early in
aviation and "huge and awkward to operate" could be similarly applied to the
aircraft of the day. Even so, radio had obvious potential in aviation. The
first military application for aircraft was reconnaissance. Pilots could fly
past the front to find artillery positions and otherwise provide useful
information, and then return with maps. Well, even better than returning with a
map was providing the information in real-time, and by the end of the war
medium-frequency AM radios were well developed for aircraft.
Radios in aircraft led naturally to another wartime innovation: ground
control. Military personnel on the ground used radio to coordinate the
schedules and routes of reconnaissance planes, and later to inform on the
positions of fighters and other enemy assets. Without any real way to know
where the planes were, this was all pretty primitive, but it set the basic
pattern that people on the ground could keep track of aircraft and provide
useful information.
Post-war, civil aviation rapidly advanced. The early 1920s saw numerous
commercial airlines adopting radio, mostly for business purposes like schedule
coordination. Once you were in contact with someone on the ground, though, it
was only logical to ask about weather and conditions. Many of our modern
practices like weather briefings, flight plans, and route clearances originated
as more or less formal practices within individual airlines.
Air Mail
The government was not left out of the action. The Post Office operated what
may have been the largest commercial aviation operation in the world during the
early 1920s, in the form of Air Mail. The flying was done first by the Army Air
Service, then by the Post Office's own Air Mail Service pilots, and later
contracted out to a long list of regional airlines. Air Mail was considered
a high priority by the Post Office and proved very popular with the public.
When the transcontinental route began proper operation in 1920, it became
possible to get a letter from New York City to San Francisco in just 33 hours
by transferring it between airplanes in a nearly non-stop relay race.
The Post Office's largesse in contracting the service to private operators
provided not only the funding but the very motivation for much of our modern
aviation industry. Air travel was not very popular at the time, being loud and
uncomfortable, but the mail didn't complain. The many contract mail carriers of
the 1920s grew and consolidated into what are now some of the United States'
largest companies. For around a decade, the Post Office almost singlehandedly
bankrolled civil aviation, and passengers were a side hustle [1].
Air mail ambition was not only of economic benefit. Air mail routes were often
longer and more challenging than commercial passenger routes. Transcontinental
service required regular flights through sparsely populated parts of the
interior, challenging the navigation technology of the time and making rescue
of downed pilots a major concern. Notably, air mail operators did far more
nighttime flying than any other commercial aviation in the 1920s. The post
office became the government's de facto technical leader in civil aviation.
Besides the network of beacons and markers built to guide air mail between
cities, the post office built 17 Air Mail Radio Stations along the
transcontinental route.
The Air Mail Radio Stations were the company radio system for the entire air
mail enterprise, and the closest thing to a nationwide, public air traffic
control service to then exist. They did not, however, provide what we would now
call control. Their role was mainly to provide pilots with information
(including, critically, weather reports) and to keep loose tabs on air mail
flights so that a disappearance would be noticed in time to send search and
rescue.
In 1926, the Air Commerce Act created the Aeronautics Branch of the Department of
Commerce. The Aeronautics Branch assumed a number of responsibilities, but one
of them was the maintenance of the Air Mail routes. Similarly, the Air Mail
Radio Stations became Aeronautics Branch facilities, and took on the new name
of Flight Service Stations. No longer just for the contract mail carriers, the
Flight Service Stations made up a nationwide network of government-provided
services to aviators. They were the first edifices in what we now call the
National Airspace System (NAS): a complex combination of physical facilities,
technologies, and operating practices that enable safe aviation.
In 1935, the first en-route air traffic control center opened, a facility in
Newark owned by a group of airlines. The Aeronautics Branch, since renamed the
Bureau of Air Commerce, supported the airlines in developing this new concept
of en-route control that used radio communications and paperwork to track which
aircraft were in which airways. The rising number of commercial aircraft made
in-air collisions a bigger problem, so the Newark control center was quickly
followed by more facilities built on the same pattern. In 1936, the Bureau of
Air Commerce took ownership of these centers, and ATC became a government
function alongside the advisory and safety services provided by the flight
service stations.
En route center controllers worked off of position reports from pilots via
radio, but needed a way to visualize and track aircraft's positions and their
intended flight paths. Several techniques helped: first, airlines shared their
flight planning paperwork with the control centers, establishing "flight plans"
that corresponded to each aircraft in the sky. Controllers adopted a work aid
called a "flight strip," a small piece of paper with the key information about
an aircraft's identity and flight plan that could easily be handed between
stations. By arranging the flight strips on display boards full of slots,
controllers could visualize the ordering of aircraft in terms of altitude and
airway.
Second, each center was equipped with a large plotting table map where
controllers pushed markers around to correspond to the position reports from
aircraft. A small flag on each marker gave the flight number, so it could
easily be correlated to a flight strip on one of the boards mounted around the
plotting table. This basic concept of air traffic control, of a flight strip
and a position marker, is still in use today.
Radar
The Second World War changed aviation more than any other event of history.
Among the many advancements were two British inventions of particular
significance: first, the jet engine, which would make modern passenger
airliners practical. Second, the radar, and more specifically the magnetron.
This was a development of such significance that the British government
treated it as a secret akin to nuclear weapons; indeed, the UK effectively
traded radar technology to the US in exchange for participation in US
nuclear weapons research.
Radar created radical new possibilities for air defense, and complemented
previous air defense development in Britain. During WWI, the organization
tasked with defending London from aerial attack had developed a method called
"ground-controlled interception" or GCI. Under GCI, ground-based observers
identify possible targets and then direct attack aircraft towards them via
radio. The advent of radar made GCI tremendously more powerful, allowing a
relatively small number of radar-assisted air defense centers to monitor for
inbound attack and then direct defenders with real-time vectors.
In the first implementation, radar stations reported contacts via telephone to
"filter centers" that correlated tracks from separate radars to create a
unified view of the airspace---drawn in grease pencil on a preprinted map.
Filter center staff took radar and visual reports and updated the map by moving
the marks. This consolidated information was then provided to air defense
bases, once again by telephone.
Later technical developments in the UK made the process more automated. The
invention of the "plan position indicator" or PPI, the type of radar scope we
are all familiar with today, made the radar far easier to operate and
interpret. Radar sets that automatically swept over 360 degrees allowed each
radar station to see all activity in its area, rather than just aircraft
passing through a defensive line. These new capabilities eliminated the need
for much of the manual work: radar stations could see attacking aircraft and
defending aircraft on one PPI, and communicated directly with defenders by
radio.
It became routine for a radar operator to give a pilot navigation vectors by
radio, based on real-time observation of the pilot's position and heading. A
controller took strategic command of the airspace, effectively steering the
aircraft from a top-down view. The ease and efficiency of this workflow was a
significant factor in the outcome of the Battle of Britain, and its remarkable
efficacy was noticed in the US as well.
At the same time, changes were afoot in the US. WWII was tremendously
disruptive to civil aviation; while aviation technology rapidly advanced due to
wartime needs, those same pressing demands led to a slowdown in nonmilitary
activity. A heavy volume of military logistics flights and flight training, as
well as growing concerns about defending the US from an invasion, meant that
ATC was still a priority. A reorganization of the Bureau of Air Commerce
replaced it with the Civil Aeronautics Authority, or CAA. The CAA's role
greatly expanded as it assumed responsibility for airport control towers and
commissioned new en route centers.
As WWII came to a close, CAA en route control centers began to adopt GCI
techniques. By 1955, the name Air Route Traffic Control Center (ARTCC) had been
adopted for en route centers and the first air surveillance radars were
installed. In a radar-equipped ARTCC, the map where controllers pushed markers
around was replaced with a large tabletop PPI built to a Navy design. The
controllers still pushed markers around to track the identities of aircraft,
but they moved them based on their corresponding radar "blips" instead of radio
position reports.
Air Defense
After WWII, post-war prosperity and wartime technology like the jet engine led
to huge growth in commercial aviation. During the 1950s, radar was adopted by
more and more ATC facilities (both "terminal" at airports and "en route" at
ARTCCs), but there were few major changes in ATC procedure. With more and more
planes in the air, tracking flight plans and their corresponding positions
became labor intensive and error-prone. A particular problem was the increasing
range and speed of aircraft, and the correspondingly longer passenger flights,
which meant that many aircraft passed from the territory of one ARTCC into another.
This required that controllers "hand off" the aircraft, informing the "next"
ARTCC of the flight plan and position at which the aircraft would enter their
airspace.
In 1956, 128 people died in a mid-air collision of two commercial airliners
over the Grand Canyon. In 1958, 49 people died when a military fighter struck a
commercial airliner over Nevada. These were not the only such incidents in the
mid-1950s, and public trust in aviation started to decline. Something had to be
done. First, in 1958 the CAA gave way to the Federal Aviation Agency (later
renamed the Federal Aviation Administration).
This was more than just a name change: the FAA's authority was greatly
increased compared to the CAA, most notably by granting it authority over
military aviation.
This is a difficult topic to explain succinctly, so I will only give broad
strokes. Prior to 1958, military aviation was completely distinct from civil
aviation, with no coordination and often no communication at all between the
two. This was, of course, a factor in the 1958 collision. Further, the 1956
collision, while it did not involve the military, did result in part from
communications issues between separate distinct CAA facilities and the
airline's own control facilities. After 1958, ATC was completely unified into
one organization, the FAA, which assumed the work of the military controllers
of the time and some of the role of the airlines. The military continues to
have its own air controllers to this day, and military aircraft continue to
include privileges such as (practical but not legal) exemption from transponder
requirements, but military flights over the US are still beholden to the same
ATC as civil flights. Some exceptions apply, void where prohibited, etc.
The FAA's suddenly increased scope only made the practical challenges of ATC
more difficult, and commercial aviation numbers continued to rise. As soon as
the FAA was formed, it was understood that there needed to be major investments
in improving the National Airspace System. While the first couple of years were
dominated by the transition, the FAA's second administrator (Najeeb Halaby) prepared
two lengthy reports examining the situation and recommending improvements. One
of these, the Beacon report (also called Project Beacon), specifically
addressed ATC. The Beacon report's recommendations included massive expansion
of radar-based control (called "positive control" because of the controller's
access to real-time feedback on aircraft movements) and new control procedures
for airways and airports. Even better, for our purposes, it recommended the
adoption of general-purpose computers and software to automate ATC functions.
Meanwhile, the Cold War was heating up. US air defense, a minor concern in the
few short years after WWII, became a higher priority than ever before. The
Soviet Union had long-range aircraft capable of reaching the United States, and
nuclear weapons meant that only a few such aircraft had to make it to cause
massive destruction. The vast size of the United States (and, considering the
new unified air defense command between the United States and Canada, all of
North America) made this a formidable challenge.
During the 1950s, the newly minted Air Force worked closely with MIT's Lincoln
Laboratory (an important center of radar research) and IBM to design a
computerized, integrated, networked system for GCI. When the Air Force
committed to purchasing the system, it was christened the Semi-Automatic Ground
Environment, or SAGE. SAGE is a critical juncture in the history of the
computer and computer communications, the first system to demonstrate many
parts of modern computer technology and, moreover, perhaps the first
large-scale computer system of any kind.
SAGE is an expansive topic that I will not take on here; I'm sure it will be
the focus of a future article but it's a pretty well-known and well-covered
topic. I have not so far felt like I had much new to contribute, despite it
being the first item on my "list of topics" for the last five years. But one of
the things I want to tell you about SAGE, that is perhaps not so well known, is
that SAGE was not used for ATC. SAGE was a purely military system. It was
commissioned by the Air Force, and its numerous operating facilities (called
"direction centers") were located on Air Force bases along with the interceptor
forces they would direct.
However, there was obvious overlap between the functionality of SAGE and the
needs of ATC. SAGE direction centers continuously received tracks from remote
data sites using modems over leased telephone lines, and automatically
correlated multiple radar tracks to a single aircraft. Once an operator entered
information about an aircraft, SAGE stored that information for retrieval by
other radar operators. When an aircraft with associated data passed from the
territory of one direction center to another, the aircraft's position and
related information were automatically transmitted to the next direction center
by modem.
One of the key demands of air defense is the identification of aircraft---any
unknown track might be routine commercial activity, or it could be an inbound
attack. The air defense command received flight plan data on commercial flights
(and more broadly all flights entering North America) from the FAA and entered
them into SAGE, allowing radar operators to retrieve "flight strip" data on any
aircraft on their scope.
Recognizing this interconnection with ATC, as soon as SAGE direction centers
were being installed the Air Force started work on an upgrade called SAGE Air
Traffic Integration, or SATIN. SATIN would extend SAGE to serve the ATC
use-case as well, providing SAGE consoles directly in ARTCCs and enhancing SAGE
to perform non-military safety functions like conflict warning and forward
projection of flight plans for scheduling. Flight strips would be replaced by
teletype output, and in general made less necessary by the computer's ability
to filter the radar scope.
Experimental trial installations were made, and the FAA participated readily in
the research efforts. Enhancement of SAGE to meet ATC requirements seemed
likely to meet the Beacon report's recommendations and radically improve ARTCC
operations, sooner and cheaper than development of an FAA-specific system.
As it happened, well, it didn't happen. SATIN became interconnected with
another planned SAGE upgrade to the Super Combat Centers (SCC), deep
underground combat command centers with greatly enhanced SAGE computer
equipment. SATIN and SCC planners were so confident that the last three Air
Defense Sectors scheduled for SAGE installation, including my own Albuquerque,
were delayed under the assumption that the improved SATIN/SCC equipment should
be installed instead of the soon-obsolete original system. SCC cost estimates
ballooned, and the program's ambitions were reduced month by month until it was
canceled entirely in 1960. Albuquerque never got a SAGE installation, and the
Albuquerque air defense sector was eliminated by reorganization later in 1960
anyway.
Flight Service Stations
Remember those Flight Service Stations, the ones that were originally built by
the Post Office? One of the oddities of ATC is that they never went away. FSS
were transferred to the CAB, to the CAA, and then to the FAA. During the 1930s
and 1940s many more were built, expanding coverage across much of the country.
Throughout the development of ATC, the FSS remained responsible for non-control
functions like weather briefing and flight plan management. Because aircraft
operating under instrument flight rules must closely comply with ATC, the
involvement of FSS in IFR flights is very limited, and FSS mostly serve VFR
traffic.
As ATC became common, the FSS gained a new and somewhat odd role: playing
go-between for ATC. FSS were more numerous and often located in sparser areas
between cities (while ATC facilities tended to be in cities), so especially in
the mid-century, pilots were more likely to be able to reach an FSS than ATC.
It was, for a time, routine for FSS to relay instructions between pilots and
controllers. This is still done today, although improved communications have
made the need much less common.
As weather dissemination improved (another topic for a future post), FSS gained
access to extensive weather conditions and forecasting information from the
Weather Service. This connectivity is bidirectional; during the midcentury FSS
not only received weather forecasts by teletype but transmitted pilot reports
of weather conditions back to the Weather Service. Today these communications
have, of course, been computerized, although the legacy teletype format doggedly
persists.
There has always been an odd schism between the FSS and ATC: they are operated
by different departments, out of different facilities, with different functions
and operating practices. In 2005, the FAA cut costs by privatizing the FSS
function entirely. Flight service is now operated by Leidos, one of the largest
government contractors. All FSS operations have been centralized to one
facility that communicates via remote radio sites.
While flight service is still available, increasing automation has made the
stations far less important, and the general perception is that flight service
is in its last years. Last I looked, Leidos was not hiring for flight service
and the expectation was that they would never hire again, retiring the service
along with its staff.
Flight service does maintain one of my favorite internet phenomena, the phone
number domain name: 1800wxbrief.com. One of the odd manifestations of the
FSS/ATC schism and the FAA's very partial privatization is that Leidos
maintains an online aviation weather portal that is separate from, and competes
with, the Weather Service's aviationweather.gov. Since Flight Service
traditionally has the responsibility for weather briefings, it is honestly
unclear to what extent Leidos vs. the National Weather Service should be
investing in aviation weather information services. For its part, the FAA seems
to consider aviationweather.gov the official source, while it pays for
1800wxbrief.com. There's also weathercams.faa.gov, which duplicates a very
large portion (maybe all?) of the weather information on Leidos's portal and
some of the NWS's. It's just one of those things. Or three of those things,
rather. Speaking of duplication due to poor planning...
The National Airspace System
Left in the lurch by the Air Force, the FAA launched its own program for ATC
automation. While the Air Force was deploying SAGE, the FAA had mostly been
waiting, and various ARTCCs had adopted a hodgepodge of methods ranging from
one-off computer systems to completely paper-based tracking. By 1960 radar was
ubiquitous, but different radar systems were used at different facilities, and
correlation between radar contacts and flight plans was completely manual. The
FAA needed something better, and with growing congressional support for ATC
modernization, they had the money to fund what they called National Airspace
System En Route Stage A.
Further bolstering historical confusion between SAGE and ATC, the FAA decided
on a practical, if ironic, solution: buy their own SAGE.
In an upcoming article, we'll learn about the FAA's first fully integrated
computerized air traffic control system. While the failed detour through SATIN
delayed the development of this system, the nearly decade-long delay between
the design of SAGE and the FAA's contract allowed significant technical
improvements. This "New SAGE," while directly based on SAGE at a functional
level, used later off-the-shelf computer equipment including the IBM
System/360, giving it far more resemblance to our modern world of computing
than SAGE with its enormous, bespoke AN/FSQ-7.
And we're still dealing with the consequences today!
[1] It also laid the groundwork for the consolidation of the industry, with a
1930 decision that took air mail contracts away from most of the smaller
companies and awarded them instead to the precursors of United, TWA, and
American Airlines.
--------------------------------------------------------------------------------
You know sometimes a technology just sort of... comes and goes? Without leaving
much of an impression? And then gets lodged in your brain for the next decade?
Let's talk about one of those: the iBeacon.
I think the reason that iBeacons loom so large in my memory is that the
technology was announced at WWDC in 2013. Picture yourself in 2013: Steve Jobs
had only died a couple of years earlier, Apple was still widely viewed as a
visionary leader in consumer technology, and WWDC was still happening. Back
then, pretty much anything announced at an Apple event was a Big Deal that got
Big Coverage. Even, it turns out, if it was a minor development for a niche
application. That's the iBeacon, a specific solution to a specific problem.
It's not really that interesting, but the valence of its Apple origin makes
it seem cool?
iBeacon Technology
Let's start out with what iBeacon is, as it's so simple as to be
underwhelming. Way back in the '00s, a group of vendors developed a sort of
"Diet Bluetooth": a wireless protocol that was directly based on Bluetooth but
simplified and optimized for low-power, low-data-rate devices. This went
through an unfortunate series of names, including the delightful Wibree, but
eventually settled on Bluetooth Low Energy (BLE). BLE is not just lower-power,
but also easier to implement, so it shows up in all kinds of smart devices
today. Back in 2011, it was quite new, and Apple was one of the first vendors
to adopt it.
BLE is far less connection-oriented than regular Bluetooth; you may have
noticed that BLE devices are often used entirely without conventional
"pairing." A lot of typical BLE profiles involve just broadcasting some data
into the void for any device that cares (and is in short range) to receive,
which is pretty similar to
ANT+ and
unsurprisingly appears in ANT+-like applications of fitness monitors and other
sensors. Of course, despite the simpler association model, BLE applications
need some way to find devices, so BLE provides an advertising mechanism in
which devices transmit their identifying info at regular intervals.
And that's all iBeacon really is: a standard for very simple BLE devices that
do nothing but transmit advertisements with a unique ID as the payload. Add a
type field on the advertising packet to specify that the device is trying to
be an iBeacon and you're done. You interact with an iBeacon by receiving its
advertisements, so you know that you are near it. Any BLE device with
advertisements enabled could be used this way, but iBeacons are built only for
this purpose.
The applications for iBeacon are pretty much defined by its implementation in
iOS; there's not much of a standard, if only because there's not much to put in
a standard. It's all obvious. iOS provides two principal APIs for working with
iBeacons: the region monitoring API allows an app to
determine if it is near an iBeacon, including registering the region so that
the app will be started when the iBeacon enters range. This is useful for apps
that want to do something in response to the user being in a specific location.
The ranging API allows an app to get a list of all of the nearby iBeacons and a
rough range from the device to the iBeacon. iBeacons can actually operate at
substantial ranges---up to hundreds of meters for more powerful beacons with
external power, so ranging mode can potentially be used as sort of a
lightweight local positioning system to estimate the location of the user
within a larger space.
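To make that concrete, here's roughly what those two APIs look like from an
app's point of view. This is a minimal sketch assuming the iOS 13-era
CoreLocation signatures; the class name, UUID, and region identifier are
placeholders of mine, not anything from a real deployment.

```swift
import CoreLocation

// Minimal sketch of the two iBeacon APIs described above. Everything named
// here (BeaconWatcher, the UUID, the "store" identifier) is a placeholder.
final class BeaconWatcher: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Hypothetical deployment UUID; a real app would use its operator's UUID.
    private let deploymentUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()  // background region monitoring wants "Always"

        // 1. Region monitoring: iOS will launch or wake the app when a beacon
        //    with this UUID comes into range, even if the app isn't running.
        manager.startMonitoring(for: CLBeaconRegion(uuid: deploymentUUID, identifier: "store"))

        // 2. Ranging: while the app is in the foreground, deliver a periodic
        //    list of nearby beacons with rough distance estimates.
        manager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: deploymentUUID))
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("entered beacon region \(region.identifier)")  // e.g. show the coupon
    }

    func locationManager(_ manager: CLLocationManager, didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        for beacon in beacons {
            // major/minor identify the deployment and the individual beacon;
            // accuracy is a rough distance in meters derived from signal strength.
            print("beacon \(beacon.major)/\(beacon.minor): ~\(beacon.accuracy) m")
        }
    }
}
```

In the grocery-store scenario, region monitoring is what gets the app woken up
at the front door; ranging is what tells it you're standing in the cosmetics
aisle.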
iBeacon IDs are in the format of a UUID, followed by a "major" number and a
"minor" number. There are different ways that these get used, especially if you
are buying cheap iBeacons and not reconfiguring them, but the general idea is
roughly that the UUID identifies the operator, the major a deployment, and the
minor a beacon within the deployment. In practice this might be less common
than just every beacon having its own UUID due to how they're sourced. It would
be interesting to survey iBeacon applications to see which they do.
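For reference, that ID structure maps directly onto the bytes of the
advertisement. The layout is widely documented: inside the BLE
manufacturer-specific data field you get Apple's company ID, an iBeacon type
marker, then the UUID, major, minor, and a calibrated transmit-power byte. Here
is a sketch of a parser for that layout; the struct and field names are mine,
not from any SDK.

```swift
import Foundation

// Sketch of the widely documented 25-byte iBeacon payload, as carried in a BLE
// "manufacturer specific data" advertising field. Names are illustrative.
struct IBeaconAdvertisement {
    let uuid: UUID
    let major: UInt16
    let minor: UInt16
    let measuredPower: Int8  // expected RSSI at 1 m, used for the range estimate

    init?(manufacturerData: Data) {
        let d = Data(manufacturerData)  // copy to get zero-based indices
        guard d.count == 25,
              d[0] == 0x4C, d[1] == 0x00,  // Apple's company ID (little-endian)
              d[2] == 0x02, d[3] == 0x15   // iBeacon type, 21 bytes of payload
        else { return nil }

        // 16-byte proximity UUID, transmitted big-endian.
        var uuidBytes: uuid_t = (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
        withUnsafeMutableBytes(of: &uuidBytes) { $0.copyBytes(from: d[4..<20]) }
        uuid = UUID(uuid: uuidBytes)

        // Major and minor are big-endian 16-bit integers.
        major = UInt16(d[20]) << 8 | UInt16(d[21])
        minor = UInt16(d[22]) << 8 | UInt16(d[23])
        measuredPower = Int8(bitPattern: d[24])
    }
}
```

Matching that prefix and pulling out the three numbers is, more or less, all
the third-party Android scanning libraries mentioned later have to do, too.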
Promoted Applications
So where do you actually use these? Retail! Apple seems to have designed the
iBeacon pretty much exclusively for "proximity marketing" applications in the
retail environment. It goes something like this: when you're in a store and
open that store's app, the app will know what beacons you are nearby and
display relevant content. For example, in a grocery store, the grocer's app
might offer e-coupons for cosmetics when you are in the cosmetics section.
That's, uhh, kind of the whole thing? The imagined universe of applications
around the launch of iBeacon was pretty underwhelming to me, even at the time,
and it still seems that way. That's presumably why iBeacon had so little
success in consumer-facing applications. You might wonder, who actually used
iBeacons?
Well, Apple did, obviously. During 2013 and into 2014 iBeacons were installed
in all US Apple stores, and prompted the Apple Store app to send notifications
about upgrade offers and other in-store deals. Unsurprisingly, this Apple Store
implementation was considered the flagship deployment. It generated a fair
amount of press, including speculation as to whether or not it would prove the
concept for other buyers.
Around the same time, Apple penned a deal with Major League Baseball that would
see iBeacons installed in MLB stadiums. For the 2014 season, MLB Advanced
Media, a joint venture of team owners, had installed iBeacon technology in
20 stadiums.
Baseball fans will be able to utilize iBeacon technology within MLB.com At
The Ballpark when the award-winning app's 2014 update is released for Opening
Day. Complete details on new features being developed by MLBAM for At The
Ballpark, including iBeacon capabilities, will be available in March.
What's the point? The iBeacons "enable the At The Ballpark app to play specific
videos or offer coupons."
This exact story repeats for other retail companies that have picked the
technology up at various points, including giants like Target and Walmart. The
iBeacons are simply a way to target advertising based on location, with better
indoor precision and lower power consumption than GPS. Aiding these
applications along, Apple integrated iBeacon support into the iOS location
framework and further blurred the lines between iBeacon and other positioning
services by introducing location-based-advertising features that operated on
geofencing alone.
Some creative thinkers did develop more complex applications for the iBeacon.
One of the early adopters was a company called Exact Editions, which prepared
the Apple Newsstand version of a number of major magazines back when "readable
on iPad" was thought to be the future of print media. Exact Editions explored a
"read for free" feature where partner magazines would be freely accessible to
users at partnering locations like coffee shops and book stores. This does not
seem to have been a success, but using the proximity of an iBeacon to unlock
some paywalled media is at least a little creative, if probably ill-advised
given the security considerations we'll discuss later.
The world of applications raises interesting questions about the other half of
the mobile ecosystem: how did this all work on Android? iOS has built-in
support for iBeacons. An operating system service scans for iBeacons and
dispatches notifications to apps as appropriate. On Android, there has never
been this type of OS-level support, but Android apps have access to relatively
rich low-level Bluetooth functionality and can easily scan for iBeacons
themselves. Several popular libraries exist for this purpose, and it's not
unusual for them to be used to give ported cross-platform apps more or less
equivalent functionality. These apps do need to run in the background if
they're to notify the user proactively, but especially back in 2013 Android was
far more generous about background work than iOS.
iBeacons found expanded success through ShopKick, a retail loyalty platform
that installed iBeacons in locations of some major retailers like American
Eagle. These powered location-based advertising and offers in the ShopKick app
as well as retailer-specific apps, which is kind of the start of a larger, more
seamless network, but it doesn't seem to have caught on. Honestly, consumers
just don't seem to want location-based advertising that much. Maybe because,
when you're standing in an American Eagle, getting ads for products carried in
the American Eagle is inane and irritating. iBeacons sort of foresaw
cooler screens in this regard.
To be completely honest, I'm skeptical that anyone ever really believed in the
location-based advertising thing. I mean, I don't know, the advertising
industry is pretty good at self-deception, but I don't think there were ever
any real signs of hyper-local smartphone-based advertising taking off. I think
the play was always data collection, and advertising and special offers just
provided a convenient cover story.
Real Applications
iBeacons are one of those technologies that feels like a flop from a consumer
perspective but has, in actuality, enjoyed surprisingly widespread deployments.
The reason, of course, is data mining.
To Apple's credit, they took a set of precautions in the design of the iBeacon
iOS features that probably felt sufficient in 2013. Despite the fact that a lot
of journalists described iBeacons as being used to "notify a user to install an
app," that was never actually a capability (a very similar-seeming iOS feature
attached to Siri actually used conventional geofencing rather than iBeacons).
iBeacons only did anything if the user already had an app installed that
either scanned for iBeacons when in the foreground or registered for region
notifications.
In theory, this limited iBeacons to companies with which consumers already had
some kind of relationship. What Apple may not have foreseen, or perhaps simply
accepted, is the incredible willingness of your typical consumer brand to sell
that relationship to anyone who would pay.
iBeacons became, in practice, just another major advancement in pervasive consumer
surveillance. The New York Times reported in
2019
that popular applications were including SDKs that reported iBeacon contacts to
third-party consumer data brokers. This data became one of several streams that
was used to sell consumer location history to advertisers.
It's a little difficult to assign blame and credit, here. Apple, to their
credit, kept iBeacon features in iOS relatively locked down. This suggests that
they weren't trying to facilitate massive location surveillance. That said,
Apple always marketed iBeacon to developers based on exactly this kind of
consumer tracking and micro-targeting; they just intended for it to be done
under the auspices of a single brand. That the industry would form data
exchanges and recruit random apps into reporting everything in your proximity
obviously isn't surprising, but maybe Apple failed to foresee it.
They certainly weren't the worst offender. Apple's promotion of iBeacon opened
the floodgates for everyone else to do the same thing. During 2014 and 2015,
Facebook started offering bluetooth beacons to businesses that were ostensibly
supposed to facilitate in-app special offers (though I'm not sure that those
ever really materialized) but were pretty transparently just a location data
collection play.
Google jumped into the fray in their signature Google style, with an offering
that was confusing, semi-secret, incoherently marketed, and short-lived. Google's
Project Beacon, or Google My Business, also shipped free Bluetooth beacons out
to businesses to give Android location services a boost. Google My Business
seems to have been the source of a fair amount of confusion even at the time,
and we can virtually guarantee that (as reporters speculated at the time)
Google was intentionally vague and evasive about the system to avoid negative
attention from privacy advocates.
In the case of Facebook, well, they don't have the level of opsec that Google
does so things are a little better documented:
Leaked documents show that Facebook worried that users would 'freak out' and
spread 'negative memes' about the program. The company recently removed the
Facebook Bluetooth beacons section from their website.
The real deployment of iBeacons and closely related third-party iBeacon-like
products [1] occurred at massive scale but largely in secret. It became yet another
dark project of the advertising-industrial complex, perhaps the most successful
yet of a long-running series of retail consumer surveillance systems.
Payments
One interesting thing about iBeacon is how it was compared to NFC. The two
really aren't that similar, especially considering the vast difference in
usable ranges, but NFC was the first radio technology to be adopted for
"location marketing" applications. "Tap your phone to see our menu," kinds of
things. Back in 2013, Apple had rather notably not implemented NFC in its
products, despite its increasing adoption on Android.
But, there is much more to this story than learning about new iPads and
getting a surprise notification that you are eligible for a subsidized iPhone
upgrade. What we're seeing is Apple pioneering the way mobile devices can be
utilized to make shopping a better experience for consumers. What we're
seeing is Apple putting its money where its mouth is when it decided not to
support NFC. (MacObserver)
Some commentators viewed iBeacon as Apple's response to NFC, and I think
there's more to that than you might expect. In early marketing, Apple kept
positioning iBeacon for payments. That's a little weird, right, because
iBeacons are a purely one-way broadcast system.
Still, part of Apple's flagship iBeacon implementation was a payment system:
Here's how he describes the purchase he made there, using his iPhone and the
EasyPay system: "We started by using the iPhone to scan the product barcode
and then we had to enter our Apple ID, pretty much the way we would for any
online Apple purchase [using the credit card data on file with one's Apple
account]. The one key difference was that this transaction ended with a
digital receipt, one that we could show to a clerk if anyone stopped us on
the way out."
Apple Wallet only kinda-sorta existed at the time, although Apple was clearly
already midway into a project to expand into consumer payments. It says a lot
about this point in time in phone-based payments that several reporters talk
about iBeacon payments as a feature of iTunes, since Apple was mostly
implementing general-purpose billing by bolting it onto iTunes accounts.
It seems like what happened is that Apple committed to developing a
pay-by-phone solution, but decided against NFC. To be competitive with
other entrants in the pay-by-phone market, they had to come up with some
kind of technical solution to interact with retail POS, and iBeacon was
their choice. From a modern perspective this seems outright insane; like,
Bluetooth broadcasts are obviously not the right way to initiate a payment
flow, and besides, there's a whole industry-standard stack dedicated to
that purpose... built on NFC.
But remember, this was 2013! EMV was not yet in meaningful use in the US;
several major banks and payment networks had just committed to rolling it out
in 2012 and every American can tell you that the process was long and
torturous. Because of the stringent security standards around EMV, Android
devices did not implement EMV until ARM secure enclaves became widely
available. EMVCo, the industry body behind EMV, did not have a certification
program for smartphones until 2016.
Android phones offered several "tap-to-pay" solutions, from Google's frequently
rebranded Google Wallet^w^wAndroid Pay^w^wGoogle Wallet to Verizon's
embarrassingly rebranded ISIS^wSoftcard and Samsung Pay. All of these initially
relied on proprietary NFC protocols with bespoke payment terminal
implementations. This was sketchy enough, and few enough phones actually had
NFC, that the most successful US pay-by-phone implementations like Walmart's
and Starbucks' used barcodes for communication. It would take almost a decade
before things really settled down and smartphones all just implemented EMV.
So, in that context, Apple's decision isn't so odd. They must have figured
that iBeacon could solve the same "initial handshake" problem as Walmart's
QR codes, but more conveniently and using radio hardware that they already
included in their phones. iBeacon-based payment flows used the iBeacon only
to inform the phone of what payment devices were nearby, everything else
happened via interaction with a cloud service or whatever mechanism the
payment vendor chose to implement. Apple used their proprietary payments
system through what would become your Apple Account, PayPal slapped together
an iBeacon-based fast path to PayPal transfers, etc.
I don't think that Apple's iBeacon-based payments solution ever really
shipped. It did get some use, most notably by Apple, but these all seem to have
been early-stage implementations, and the complete end-to-end SDK that a lot of
developers expected never landed.
You might remember that this was a very chaotic time in phone-based payments;
solutions were coming and going. When Apple Pay was properly announced a year
after iBeacons, there was little mention of Bluetooth. By the time in-store
Apple Pay became common, Apple had given up and adopted NFC.
Limitations
One of the great weaknesses of iBeacon was the security design, or lack
thereof. iBeacon advertisements were sent in plaintext with no authentication
of any type. This did, of course, radically simplify implementation, but it
also made iBeacon untrustworthy for any important purpose. It is quite trivial,
with a device like an Android phone, to "clone" any iBeacon and transmit its
identifiers wherever you want. This problem might have killed off the whole
location-based-paywall-unlocking concept had market forces not already done so.
It also opens the door to a lot of nuisance attacks on iBeacon-based location
marketing, which may have limited the depth of iBeacon features in major apps.
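To make the point concrete, here is roughly everything a beacon actually
broadcasts. The identifiers below are made up for illustration; assembling
these bytes and handing them to any BLE advertising API is all an impersonator
has to do:

    import struct
    import uuid

    def ibeacon_advertisement(proximity_uuid: str, major: int, minor: int,
                              measured_power: int = -59) -> bytes:
        # The complete Apple manufacturer-specific data for an iBeacon frame.
        # Note that there is no signature, nonce, or secret anywhere in it.
        return (
            struct.pack("<H", 0x004C)            # Apple's company identifier
            + bytes([0x02, 0x15])                # iBeacon type and data length
            + uuid.UUID(proximity_uuid).bytes    # 16-byte proximity UUID
            + struct.pack(">HHb", major, minor, measured_power)
        )

    # Identifiers sniffed off the air can simply be replayed:
    print(ibeacon_advertisement("11111111-2222-3333-4444-555555555555", 1, 7).hex())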
iBeacon was also positioned as a sort of local positioning system, but it
really wasn't. iBeacon offers no actual time-of-flight measurements, only
RSSI-based estimation of range. Even with correct on-site calibration (which
can be aided by adjusting a fixed RSSI-range bias value included in some
iBeacon advertisements) this type of estimation is very inaccurate, and in my
little experiments with a Bluetooth beacon location library I can see swings
from 30m to 70m estimated range based only on how I hold my phone. iBeacon
positioning has never been accurate enough to do more than assert whether or
not a phone is "near" the beacon, and "near" can take on different values
depending on the beacon's transmit power.
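The estimation itself is nothing fancier than the standard log-distance path
loss model. A sketch, with a path loss exponent you'd have to calibrate on
site (the defaults below are my assumptions, not anything the iBeacon spec
defines):

    def estimate_range_m(rssi_dbm: float, measured_power_dbm: float = -59.0,
                         path_loss_exponent: float = 2.5) -> float:
        """Rough log-distance estimate of beacon range in meters.

        measured_power_dbm is the calibration byte many beacons broadcast
        (expected RSSI at 1 m); the path loss exponent is environment
        dependent, roughly 2 in free space and 2.5-4 indoors.
        """
        return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    # A few dB of RSSI swing moves the estimate dramatically:
    for rssi in (-70, -75, -80):
        print(f"RSSI {rssi} dBm -> ~{estimate_range_m(rssi):.1f} m")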
Developers have long looked towards Bluetooth as a potential local positioning
solution, and it's never quite delivered. The industry is now turning towards
Ultra-Wideband or UWB technology, which combines a high-rate, high-bandwidth
radio signal with a time-of-flight radio ranging protocol to provide very
accurate distance measurements. Apple is, once again, a technical leader in
this field and UWB radios have been integrated into the iPhone 11 and later.
Senescence
iBeacon arrived to some fanfare, quietly proliferated in the shadows of the
advertising industry, and then faded away. The Wikipedia article on iBeacons
hasn't really been updated since support on Windows Phone was relevant. Apple
doesn't much talk about iBeacons any more, and their compatriots Facebook and
Google both sunset their beacon programs years ago.
Part of the problem is, well, the pervasive surveillance thing. The idea of
Bluetooth beacons cooperating with your phone to track your every move proved
unpopular with the public, and so progressively tighter privacy restrictions in
mobile operating systems and app stores have clamped down on every grocery
store app selling location data to whatever broker bids the most. I mean, they
still do, but it's gotten harder to use Bluetooth as an aid. Even Android, the
platform of "do whatever you want in the background, battery be damned,"
strongly discourages Bluetooth scanning by non-foreground apps.
Still, the basic technology remains in widespread use. BLE beacons have
absolutely proliferated, there are plenty of apps you can use to list nearby
beacons and there almost certainly are nearby beacons. One of my cars has,
like, four separate BLE beacons going on all the time, related to a
phone-based keyless entry system that I don't think the automaker even supports
any more. Bluetooth beacons, as a basic primitive, are so useful that they get
thrown into all kinds of applications. My earbuds are a BLE beacon, which the
(terrible, miserable, no-good) Bose app uses to detect their proximity when
they're paired to another device. A lot of smart home devices like light bulbs
are beacons. The irony, perhaps, of iBeacon-based location tracking is that
it's a victim of its own success. There is so much "background" BLE beacon
activity that you scarcely need to add purpose-built beacons to track users,
and only privacy measures in mobile operating systems and the beacons
themselves (some of which rotate IDs) save us.
Apple is no exception to the widespread use of Bluetooth beacons: iBeacon lives
on in virtually every Apple device. If you do try out a Bluetooth beacon
scanning app, you'll discover pretty much every Apple product in a 30 meter
radius. From MacBooks Pro to AirPods, almost all Apple products transmit
iBeacon advertisements to their surroundings. These are used for the initial
handshake process of peer-to-peer features like AirDrop, and Find My/AirTag
technology seems to be derived from the iBeacon protocol (in the sense that
anything can be derived from such a straightforward design). Of course, pretty
much all of these applications now randomize identifiers to prevent passive use
of device advertisements for long-term tracking.
Here's some good news: iBeacons are readily available in a variety of form
factors, and they are very cheap. Lots of libraries exist for working with
them. If you've ever wanted some sort of location-based behavior for something
like home automation, iBeacons might offer a good solution. They're neat, in
an old technology way. Retrotech from the different world of 2013.
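As a sketch of what that can look like, here is a small presence detector
built on the cross-platform bleak library (my choice of library, nothing
iBeacon-specific) that treats sighting a particular beacon UUID as "someone is
home":

    import asyncio
    import uuid

    from bleak import BleakScanner  # cross-platform BLE scanning library

    # Hypothetical UUID for a beacon you own; substitute your own.
    HOME_BEACON_UUID = uuid.UUID("11111111-2222-3333-4444-555555555555")
    APPLE_COMPANY_ID = 0x004C

    def on_advertisement(device, adv):
        # iBeacon frames ride in Apple's manufacturer-specific data field.
        data = adv.manufacturer_data.get(APPLE_COMPANY_ID)
        if not data or len(data) < 23 or data[:2] != b"\x02\x15":
            return
        if uuid.UUID(bytes=bytes(data[2:18])) == HOME_BEACON_UUID:
            # adv.rssi: advertisement RSSI in recent bleak versions
            print(f"home beacon in range, RSSI {adv.rssi} dBm")
            # ...trigger your home automation here (MQTT publish, webhook, etc.)

    async def main():
        scanner = BleakScanner(on_advertisement)
        await scanner.start()
        await asyncio.sleep(30)   # listen for 30 seconds
        await scanner.stop()

    if __name__ == "__main__":
        asyncio.run(main())

In practice you'd want some smoothing, since beacon advertisements are bursty
and RSSI is noisy, but that's the whole trick.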
It's retro in more ways than one. It's funny, and a bit quaint, to read the
contemporary privacy concerns around iBeacon. If only they had known how bad
things would get! Bluetooth beacons were the least of our concerns.
[1] Things can be a little confusing here because the iBeacon is such a
straightforward concept, and Apple's implementation is so simple. We could
define "iBeacon" as including only officially endorsed products from Apple
affiliates, or as including any device that behaves the same as official
products (e.g. by using the iBeacon BLE advertisement type codes), or as any
device that is performing substantially the same function (but using a
different advertising format). I usually mean the last of these three, as
there isn't really much difference between an iBeacon and ten million other BLE
beacons that are doing the same thing with a slightly different identifier
format. Facebook and Google's efforts fall into this camp.
When we last talked about Troposcatter, it was Pole
Vault. Pole Vault was the
first troposcatter communications network, on the east coast of Canada. It
would not be alone for long. By the time the first Pole Vault stations were
complete, work was already underway on a similar network for Alaska: the
White Alice Communication System, WACS.
Alaska has long posed a challenge for communications. In the 1860s, Western
Union wanted to extend their telegraph network from the United States into
Europe. Although the technology would be demonstrated shortly after, undersea
telegraph cables were still notional and it seemed that a route that minimized
the ocean crossing would be preferable---of course, that route maximized the
length on land, stretching through present-day Alaska and Siberia on each side
of the Bering Strait. This task proved more formidable than Western Union had
imagined, and the first transatlantic telegraph cable (on a much more
southerly crossing) was completed before the arctic segments of the overland
route. The "Western Union Telegraph Expedition" abandoned its work, leaving a
telegraph line well into British Columbia that would serve as one of the
principal communications assets in the region for decades after.
This ill-fated telegraph line failed to link San Francisco to Moscow, but its
aftermath included a much larger impact on Russian interests in North America:
the purchase of Alaska in 1867. Shortly after, the US military began its
expansion into the new frontier. The Army Signal Corps, mostly to fulfill its
function in observing the weather, built and staffed small installations that
stretched further and further west. Later, in the 1890s, a gold rush brought a
sudden influx of American settlers to Alaska's rugged terrain. The new
economic importance of the Klondike, and the rather colorful personalities of
the prospectors looking to exploit it, created a much larger need for a
military presence. Fortuitously, many of the forts present had been built by
the Signal Corps, which had already started on lines of communication.
Construction was difficult, though, and without Alaskan communications as a
major priority there was only minimal coverage.
Things changed in 1900, when Congress appropriated a substantial budget to the
Washington-Alaska Military Cable and Telegraph System. The Signal Corps set on
Alaska like, well, like an army, and extensive telegraph and later telephone
lines were built to link the various military outposts. Later renamed the
Alaska Communications System, these cables brought the first telecommunication
to much of Alaska. The arrival of the telegraph was quite revolutionary for
remote towns, which could now receive news in real time that had previously been
delayed by as much as a year [1]. Telegraphy was important to civilians as
well, something that Congress had anticipated: The original act authorizing the
Alaska Communications System dictated that it would carry commercial traffic as
well. The military had an unusual role in Alaska, and one aspect of it was
telecommunications provider.
In 1925, an outbreak of diphtheria began to claim the lives of children in
Nome, a town in far western Alaska on the Seward Peninsula. The daring winter
delivery of antidiphtheria serum by dog sled is widely remembered due to its
tangential connection to the Iditarod, but there were two sides to the "serum
run." The message from Nome's sole doctor requesting the urgent shipment was
transmitted from Nome to the Public Health Service in DC over the Alaska
Communications System. It gives us some perspective on the importance of the
telegraph in Alaska that the 600 mile route to Nome took five days and many
feats of heroism---but at the same time could be crossed instantaneously by
telegrams.
The Alaska Communications System included some use of radio from the beginning.
A pair of HF radio stations specifically handled traffic for Nome, covering a
100-mile stretch too difficult for even the intrepid Signal Corps. While not a
totally new technology to the military, radio was quite new to the telegraph
business, and the ACS to Nome was probably the first commercial radiotelegraph
system on the continent. By the 1930s, the condition of the Alaskan telegraph
cables had decayed while demand for telephony had increased. Much of ACS was
upgraded and modernized to medium-frequency radiotelephone links. In towns
small and large, even in Anchorage itself, the sole telephone connection to the
contiguous United States was an ACS telephone installed in the general store.
Alaskan communications became an even greater focus of the military with the
onset of the Second World War. In June 1942, months after Pearl Harbor, the
Japanese attacked Fort Mears in the Aleutian Islands. Fort Mears had no
telecommunications connections, so despite the proximity of other airbases
support was slow to come. The lack of a telegraph or telephone line contributed
to 43 deaths and focused attention on the ACS. By 1944, the Army Signal Corps
had a workforce of 2,000 dedicated to Alaska.
WWII brought more than one kind of attention to Alaska. Several Japanese
assaults on the Aleutian Islands represented the largest threats to American
soil outside of Pearl Harbor, showing both Alaska's vulnerability and the
strategic importance given to it by its relative proximity to Eurasia. WWII
ended but, in 1949, the USSR demonstrated an atomic weapon. A combination of
Soviet expansionism and the new specter of nuclear war turned military planners
towards air defense. Like the Canadian Maritimes in the East, Alaska covered a
huge swath of the airspace through which Soviet bombers might approach the US.
Alaska was, once again, a battleground.
The early Cold War military buildup of Alaska was particularly heavy on air
defense. During the late '40s and early '50s, more than a dozen new radar and
control sites were built. The doctrine of ground-controlled interception
requires real-time communication between radar centers, which stressed the
limited number of voice channels available on the ACS. As early as 1948, the Signal
Corps had begun experiments to choose an upgrade path. Canadian early-warning
radar networks, including the Distant Early Warning Line, were on the drawing
board and would require many communications channels in particularly remote
parts of Alaska.
Initially, point-to-point microwave was used in relatively favorable terrain
(where the construction of relay stations about every 50 miles was practical).
For the more difficult segments, the Signal Corps found that VHF radio could
provide useful communications at ranges over 100 miles. VHF radiotelephones
were installed at air defense radar stations, but there was a big problem: the
airspace surveillance radar of the 1950s also operated in the VHF band, and
caused so much interference with the radiotelephones that they were difficult
to use. The radar stations were probably the most important users of the
network, so VHF would have to be abandoned.
In 1954, a military study group was formed to evaluate options for the ACS.
That group, in turn, requested a proposal from AT&T. Bell Laboratories had
been involved in the design and evaluation of Pole Vault, the first sites of
which had been completed two years before, so they naturally positioned
troposcatter as the best option.
It is worth mentioning the unusual relationship AT&T had with Alaska, or
rather, the lack of one. While the Bell System enjoyed a monopoly on telephony
in most of the United States [2], they had never expanded into Alaska. Alaska
was only a territory, after all, and a very sparsely populated one at that.
The paucity of long-distance leads to or from Alaska (only one connected to
Anchorage, for example) limited the potential for integration of Alaska into
the broader Bell System anyway. Long-distance telecommunications in Alaska were
a military project, and AT&T was involved only as a vendor.
Because of the high cost of troposcatter stations, proven during Pole Vault
construction, a hybrid was proposed: microwave stations could be spaced every
50 miles along the road network, while troposcatter would cover the long
stretches without roads.
In 1955, the Signal Corps awarded Western Electric a contract for the White
Alice Communications System. The Corps of Engineers surveyed the locations of
31 sites, verifying each by constructing a temporary antenna tower. The Corps
of Engineers led construction of the first 11 sites, and the final 20 were
built on contract by Western Electric itself. All sites used radio equipment
furnished by Western Electric and were built to Western Electric designs.
Construction was far from straightforward. Difficult conditions delayed
completion of the original network until 1959, two years later than intended.
A much larger issue, though, was the budget. The original WACS was expected to
cost $38 million. By the time the first 31 sites were complete, the bill
totaled $113 million---equivalent to over a billion dollars today. Western
Electric had underestimated not only the complexity of the sites but the
difficulty of their construction. A WECo report read:
On numerous occasions, the men were forced to surrender before the onslaught
of cold, wind and snow and were immobilized for days, even weeks. This
ordeal of waiting was of times made doubly galling by the knowledge that
supplies and parts needed for the job were only a few miles distant but
inaccessible because the white wall of winter had become impenetrable.
The initial WACS capability included 31 stations, of which 22 were troposcatter and
the remainder only microwave (using Western Electric's TD-2). A few stations
were equipped with both troposcatter and microwave, serving as relays between
the two carriers.
In 1958, construction started on the Ballistic Missile Early Warning System or
BMEWS. BMEWS was a long-range radar system intended to provide early
warning of a Soviet missile attack. It would provide as little as 15 minutes of warning,
requiring that alerts reach NORAD in Colorado as quickly as possible. One BMEWS
set was installed in Greenland, where the Pole Vault system was expanded to
provide communications. Similarly, the BMEWS set at Clear Missile Early Warning
Station in central Alaska relied on White Alice. Planners were concerned about
the ability of the Soviet Union to suppress an alert by destroying infrastructure,
so two redundant chains of microwave sites were added to White Alice. One stretched
from Clear to Ketchikan where it connected to an undersea cable to Seattle. The
other went east, towards Canada, where it met existing telephone cables on the
Alaska Highway.
A further expansion of White Alice started the next year, in 1959. Troposcatter
sites were extended through the Aleutian Islands in "Project Stretchout" to
serve new DEW Line stations. During the 1960s, existing WACS sites were expanded
and new antennas were installed at Air Force installations. These were generally
microwave links connecting the airbases to existing troposcatter stations.
In total, WACS reached 71 sites. Four large sites served as key switching
points with multiple radio links and telephone exchanges. Pedro Dome, for
example, had a 15,000 square foot communications building with dormitories, a
power plant, and extensive equipment rooms. Support facilities included a
vehicle maintenance building, storage warehouse, and extensive fuel tanks. A
few WACS sites even had tramways for access between the "lower camp" (where
equipment and personnel were housed) and the "upper camp" (where the antennas
were located)... although they apparently did not fare well in the Alaskan
conditions.
While Western Electric had initially planned for six people and 25 kW of power
at each station, the final requirements were typically 20 people and 120-180 kW
of generator capacity. Some sites stored over half a million gallons of
fuel---conditions often meant that resupply was only possible during the
summer.
Besides troposcatter and microwave radios, the equipment included tandem
telephone exchanges. These are described in a couple of documents as "ATSS-4A,"
ATSS standing for Alaska Telephone Switching System. Based on the naming and
some circumstantial evidence, I believe these were Western Electric 4A crossbar
exchanges. They were later incorporated into AUTOVON, but also handled
commercial long-distance traffic between Alaskan towns.
With troposcatter comes large antennas, and depending on connection lengths,
WACS troposcatter antennas ranged from 30' dishes to 120' "billboard" antennas
similar to those seen at Pole Vault sites. The larger antennas handled up to
50 kW of transmit power. Some 60' and 120' antennas included their own fuel
tanks and steam plants that heated the antennas through winter to minimize snow
accumulation.
Nearly all of the equipment used by WACS was manufactured by Western Electric,
with a lot of reuse of standard telephone equipment. For example, muxing on the
troposcatter links used standard K-carrier (originally for telephone cables)
and L-carrier (originally for coaxial cables). Troposcatter links operated at
about 900 MHz with a wide bandwidth, and carried two L-carrier supergroups (60
channels each) and one K-carrier (12 channels) for a nominal capacity of 132
channels, although most stations did not have fully-populated L-carrier groups
so actual capacity varied based on need. This was standard telephone carrier
equipment in widespread use on the long-distance network, but some output
modifications were made to suit the radio application.
The exception to the Western Electric rule was the radio sets themselves. They
were manufactured by Radio Engineering Laboratories, the same company that
built the radios for Pole Vault. REL pulled out all of the tricks they had
developed for Pole Vault, and the longer WACS links used two antennas at
different positions for space diversity. Each antenna had two feed horns, of
orthogonal polarization, matching similar dual transmitters for further
diversity. REL equipment selected the best signal of the four available
receiver options.
WACS represented an enormous improvement in Alaskan communications. The entire
system was multi-channel with redundancy in many key parts of the network.
Outside of the larger cities, WACS often brought the first usable long-distance
telephone service. Even in Anchorage, WACS provided the only multi-channel
connection. Despite these achievements, WACS was set for much the same fate as
other troposcatter systems: obsolescence after the invention of communications
satellites.
The experimental satellites Telstar 1 and 2 launched in the early 1960s, and the
military began a shift towards satellite communications shortly after. Besides,
the formidable cost of WACS had become a political issue. Maintenance of the
system overran estimates by just as much as construction, and placing this cost
on taxpayers was controversial since much of the traffic carried by the system
consisted of regular commercial telephone calls. Meanwhile, a general reluctance to
allocate money to WACS had led to a gradual decay of the system. WACS capacity
was insufficient for the rapidly increasing long-distance telephone traffic of
the '60s, and due to decreased maintenance funding reliability was beginning to
decline.
The retirement of a Cold War communications system is not unusual, but the
particular fate of WACS is. It entered a long second life.
After acting as the sole long-distance provider for 60 years, the military
began its retreat. In 1969, Congress passed the Alaska Communications Disposal
Act. It called for complete divestment of the Alaska Communications System and
WACS, to a private owner determined by a bidding process. Several large
independent communications companies bid, but the winner was RCA. Committing to
a $28.5 million purchase price followed by $30 million in upgrades, RCA
reorganized the Alaska Communications System as RCA Alascom.
Transfer of the many ACS assets from the military to RCA took 13 years,
involving both outright transfer of property and complex lease agreements on
sites colocated with military installations. RCA's interest in Alaskan
communications was closely connected to the coming satellite revolution: RCA
had just built the Bartlett Earth Station, the first satellite ground station
in Alaska. While Bartlett was originally an ACS asset owned by the Signal
Corps, it became just the first of multiple ground stations that RCA would
build for Alascom. Several of the new ground stations were colocated with WACS
sites, establishing satellite as an alternative to the troposcatter links.
Alascom appears to have been the first domestic satellite voice network in
commercial use, initially relying on a Canadian communications satellite [3]. In
1974, SATCOM 1 and 2 launched. These were not the first commercial
communications satellites, but they represented a significant increase in
capacity over previous commercial designs and are sometimes thought of as the
true beginning of the satellite communications era. Both were built and owned
by RCA, and Alascom took advantage of the new transponders.
At the same time, Alascom launched a modernization effort. 22 of the former
WACS stations were converted to satellite ground stations, a project that took
much of the '70s as Alascom struggled with the same conditions that had made
WACS so challenging to begin with. Modernization also included the installation
of DMS-10 telephone switches and conversion of some connections to digital.
A series of regulatory and business changes in the 1970s led RCA to step away
from the domestic communications industry. In 1979, Alascom was sold to Pacific
Power and Light, this time for $200 million and $90 million in debt. PP&L continued
on much the same trajectory, expanding the Alascom system to over 200 ground
stations and launching the satellite Aurora I---the first of a small series of
satellites that gave Alaska the distinction of being the only state with its
own satellite communications network. For much of the '70s to the '00s, large
parts of Alaska relied on satellite relay for calls between towns.
In a slight twist of irony considering its long lack of interest in the state,
AT&T purchased parts of Alascom from PP&L in 1995, forming AT&T Alascom which
has faded away as an independent brand. Other parts of the former ACS network,
generally non-toll (or non-long-distance) operations, were split off into then
PP&L subsidiary CenturyTel. While CenturyTel has since merged into CenturyLink,
the Alaskan assets were first sold to Alaska Communications. Alaska
Communications considers itself the successor of the ACS heritage, giving them
a claim to over 100 years of communications history.
As electronics technology has continued to improve, penetration of microwave
relays into inland Alaska has increased. Fewer towns rely on satellite today
than in the 1970s, and the half-second latency to geosynchronous orbit is
probably not missed. Alaska communications have also become more competitive,
with long-distance connectivity available from General Communications (GCI) as
well as AT&T and Alaska Communications.
Still, the legacy of Alaska's complex and expensive long-distance
infrastructure echoes in our telephone bills. State and federal
regulators have allowed for extra fees on telephone service in Alaska and calls
into Alaska, both intended to offset the high cost of infrastructure. Alaska is
generally the most expensive long-distance calling destination in the United
States, even when considering the territories.
But what of White Alice?
The history of the Alaska Communications System's transition to private
ownership is complex and not especially well documented. While RCA's winning
bid following the Alaska Communications Disposal Act set the big picture, the
actual details of the transition were established by many individual
negotiations spanning over a decade. Depending on the station, WACS
troposcatter sites generally conveyed to RCA in 1973 or 1974. Some, colocated
with active military installations, were leased rather than included in the
sale. RCA generally decommissioned each WACS site once a satellite ground
station was ready to replace it, either on-site or nearby.
For some WACS sites, this meant the troposcatter equipment was shut down in
1973. Others remained in use later. The Boswell Bay troposcatter station seems
to have been the last turned down, in 1985. The 1980s were decidedly the end of
WACS. Alascom's sale to PP&L cemented the plan to shut down all troposcatter
operations, and the 1980 Comprehensive Environmental Response, Compensation,
and Liability Act led to the establishment of the Formerly Used Defense Sites
(FUDS) program within DoD. Under FUDS, the Corps of Engineers surveyed the
disused WACS sites and found nearly all had significant contamination by
asbestos (used in seemingly every building material in the '50s and '60s) and
leaked fuel oil.
As a result, most White Alice sites were demolished between 1986 and 1999. The
cost of demolition and remediation in such remote locations was sometimes
greater than the original construction. No WACS sites remain intact today.
Postscript:
A 1988 Corps of Engineers historical inventory of WACS, prepared due to the
demolition of many of the stations, mentions that meteor burst communications
might replace troposcatter. Meteor burst is a fascinating communications mode,
similar in many ways to troposcatter but with the twist that the reflecting
surface is not the troposphere but the ionized trail of meteors entering the
atmosphere. Meteor burst connections only work when there is a meteor actively
vaporizing in the upper atmosphere, but atmospheric entry of small meteors is
common enough that meteor burst communications are practical for low-rate
packetized communications. For example, meteor burst has been used for large
weather and agricultural telemetry systems.
The Alaska Meteor Burst Communications System was implemented in 1977 by
several federal agencies, and was used primarily for automated environmental
telemetry. Unlike most meteor burst systems, though, it seems to have been used
for real-time communications by the BLM and FAA. I can't find much information,
but they seem to have built portable teleprinter terminals for this use.
Even more interesting, the Air Force's Alaskan Air Command built its own meteor
burst network around the same time. This network was entirely for real-time
use, and demonstrated the successful transmission of radar track data from
radar stations across the state to Elmendorf Air Force base. Even better, the
Air Force experimented with the use of meteor burst for intercept control by
fitting aircraft with a small speech synthesizer that translated coded messages
into short phrases. The Air Force experimented with several meteor burst
systems during the Cold War, anticipating that it might be a survivable
communications system in wartime. More details on these will have to fill a
future article.
[1] Crews of the Western Union Telegraph Expedition reportedly continued work
for a full year after the completion of the transatlantic telegraph cable,
because news of it hadn't reached them yet.
[2] Eliding here some complexities like GTE and their relationship to the Bell
System.
[3] Perhaps owing to the large size of the country and many geographical
challenges to cable laying, Canada has often led North America in satellite
communications technology.
Note: I have edited this post to add more information, a couple of hours after
originally publishing it. I forgot about a source I had open in a tab. Sorry.
We've talked before about
carphones,
and certainly one of the only ways to make phones even more interesting is to
put them in modes of transportation. Installing telephones in cars made a lot
of sense when radiotelephones were big and required a lot of power, and they
faded away as cellphones became small enough to give you a carphone even
outside of your car.
There is one mode of transportation where the personal cellphone is pretty
useless, though: air travel. Most readers are probably well aware that the use
of cellular networks while aboard an airliner is prohibited by FCC regulations.
There are a lot of urban legends and popular misconceptions about this rule,
and fully explaining it would probably require its own article. The short
version is that it has to do with the way cellular devices are certified and
cellular networks are planned. The technical problems are not impossible to
overcome, but honestly, there hasn't been a lot of pressure to make changes.
One line of argument that used to make an appearance in cellphones-on-airplanes
discourse is the idea that airlines or the telecom industry supported the
cellphone ban because it created a captive market for in-flight telephone
services.
Wait, in-flight telephone services?
That theory has never had much to back it up, but with the benefit of hindsight
we can soundly rule it out: not only has the rule persisted well past the
decline and disappearance of in-flight telephones, in-flight telephones were
never commercially successful to begin with.
Let's start with John Goeken. A 1984 Washington Post article tells us that
"Goeken is what is called, predictably enough, an 'idea man.'" Being the "idea
person" must not have had quite the same connotations back then, it was a good
time for Goeken. In the 1960s, conversations with customers at his two-way
radio shop near Chicago gave him an idea for a repeater network to allow
truckers to reach their company offices via CB radio. This was the first
falling domino in a series that led to the founding of MCI and the end of
AT&T's long-distance monopoly. Goeken seems to have been the type who grew
bored with success, and he left MCI to take on a series of new ventures. These
included an emergency medicine messaging service, electrically illuminated
high-viz clothing, and a system called the Mercury Network that built much of
the inertia behind the surprisingly advanced computerization of florists [1].
"Goeken's ideas have a way of turning into dollars, millions of them," the
Washington Post continued. That was certainly true of MCI, but every ideas guy
had their misses. One of the impressive things about Goeken was his ability to
execute with speed and determination, though, so even his failures left their
mark. This was especially true of one of his ideas that, in the abstract,
seemed so solid: what if there were payphones on commercial flights?
Goeken's experience with MCI and two-way radios proved valuable, and starting
in the mid-1970s he developed prototype air-ground radiotelephones. In its
first iteration, "Airfone" consisted of a base unit installed on an aircraft
bulkhead that accepted a credit card and released a cordless phone. When the
phone was returned to the base station, the credit card was returned to the
customer. This equipment was simple enough, but it would require an extensive
ground network to connect callers to the telephone system. The infrastructure
part of the scheme fell into place when long-distance communications giant
Western Union signed on with Goeken Communications to launch a 50/50 joint
venture under the name Airfone, Inc.
Airfone was not the first to attempt air-ground telephony---AT&T had pursued
the same concept in the 1970s, but abandoned it after resistance from the FCC
(unconvinced the need was great enough to justify frequency allocations) and
the airline industry (which had formed a pact, blessed by the government, that
prohibited the installation of telephones on aircraft until such time as a
mature technology was available to all airlines). Goeken's hard-headed attitude,
exemplified in the six-year legal battle he fought against AT&T to create MCI,
must have helped to defeat this resistance.
Goeken brought technical advances, as well. By 1980, there actually was an
air-ground radiotelephone service in general use. The "General Aviation
Air-Ground Radiotelephone Service" allocated 12 channels (of duplex pairs) for
radiotelephony from general aviation aircraft to the ground, and a company
called Wulfsberg had found great success selling equipment for this service
under the FliteFone name. Wulfsberg FliteFones were common equipment on
business aircraft, where they let executives shout "buy" and "sell" from the
air. Goeken referred to this service as evidence of the concept's appeal,
but it was inherently limited by the 12 allocated channels.
General Aviation Air-Ground Radiotelephone Service, which I will call AGRAS
(this is confusing in a way I will discuss shortly), operated at about 450MHz.
This UHF band is decidedly line-of-sight, but airplanes are very high up and
thus can see a very long ways. The reception radius of an AGRAS transmission,
used by the FCC for planning purposes, was 220 miles. This required assigning
specific channels to specific cities, and there the limits became quite severe.
Albuquerque had exactly one AGRAS channel available. New York City got three.
Miami, a busy aviation area but no doubt benefiting from its relative
geographical isolation, scored a record-setting four AGRAS channels. That meant
AGRAS could only handle four simultaneous calls within a large region... if
you were lucky enough for that to be the Miami region; otherwise capacity was
even more limited.
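Those enormous footprints are just the radio horizon at work. As a
back-of-the-envelope check, using the common 4/3-earth approximation (my
assumption here, not the FCC's actual planning model):

    from math import sqrt

    def radio_horizon_miles(altitude_ft: float) -> float:
        # Rule-of-thumb radio horizon with standard refraction:
        # d [km] ~= 4.12 * sqrt(h [m]); converted to statute miles.
        return 4.12 * sqrt(altitude_ft * 0.3048) / 1.609

    for alt_ft in (10_000, 25_000, 35_000):
        print(f"{alt_ft:>6} ft -> ~{radio_horizon_miles(alt_ft):.0f} miles")

At jet cruise altitudes this works out to roughly 220-270 miles, the same
order as the FCC's 220-mile planning radius, which is why a single channel
had to be locked up across an entire metro region.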
Back in the 1970s, AT&T had figured that in-flight telephones would be very
popular. In a somewhat hand-wavy economic analysis, they figured that about a
million people flew on a given day, and about a third of them would
want to make telephone calls. That's over 300,000 calls a day, clearly more
than the limited AGRAS channels could handle... leading to the FCC's objection
that a great deal of spectrum would have to be allocated to make in-flight
telephony work.
Goeken had a better idea: single-sideband. SSB is a radio modulation technique
that allows a radio transmission to fit within a very narrow bandwidth
(basically by suppressing the carrier and one of the two redundant sidebands), at the cost of a
somewhat more fiddly tuning process for reception. SSB was mostly used down in
the HF bands, where the low frequencies meant that bandwidth was acutely limited.
Up in the UHF world, bandwidth seemed so plentiful that there was little need for
careful modulation techniques... until Goeken found himself asking the FCC for 10
blocks of 29 channels each, a lavish request that wouldn't really fit anywhere in
the popular UHF spectrum. The use of UHF SSB, pioneered by Airfone, allowed far
more efficient use of the allocation.
In 1983, the FCC held hearings on Airfone's request for an experimental license
to operate their SSB air-ground radiotelephone system in two allocations
(separate air-ground and ground-air ranges) around 850MHz and 895MHz. The total
spectrum allocated was about 1.5MHz in each of the two directions. The FCC
assented and issued the experimental license in 1984, and Airfone was in
business.
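It's worth doing the arithmetic on that request to see why SSB was the
enabling trick (my arithmetic, not Airfone's published channel plan):

    # 10 blocks of 29 channels squeezed into ~1.5 MHz each direction:
    channels = 10 * 29
    bandwidth_hz = 1.5e6
    print(f"{channels} channels -> ~{bandwidth_hz / channels / 1e3:.1f} kHz per channel")

    # Conventional 25 kHz FM channel spacing would have needed several times
    # the allocation for the same channel count:
    print(f"25 kHz FM spacing would need ~{channels * 25e3 / 1e6:.2f} MHz")

Around 5 kHz is enough for an SSB voice channel, while an FM scheme of the era
would have needed a far larger slice of spectrum for the same capacity.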
Airfone initially planned 52 ground stations for the system, although I'm not
sure how many were ultimately built---certainly 37 were in progress in 1984, at
a cost of about $50 million. By 1987, the network had reportedly grown to 68.
Airfone launched on six national airlines (a true sign of how much airline
consolidation has happened in recent decades---there were six national
airlines?), typically with four cordless payphones on a 727 or similar
aircraft. The airlines received a commission on the calling rates, and Airfone
installed the equipment at their own expense. Still, it was expected to be
profitable... Airfone projected that 20-30% of passengers would have calls to
make.
I wish I could share more detail on these ground stations, in part because I
assume there was at least some reuse of existing Western Union facilities (WU
operated a microwave network at the time and had even dabbled in cellular
service in the 1980s). I can't find much info, though. The antennas for the
800MHz band would have been quite small, but the 1980s multiplexing and
control equipment probably took a fair share of floorspace.
Airfone was off to a strong start, at least in terms of installation base and
press coverage. I can't say now how many users it actually had, but things looked
good enough that in 1986 Western Union sold their share of the company to GTE.
Within a couple of years, Goeken sold his share to GTE as well, reportedly as a
result of disagreements with GTE's business strategy.
Airfone's SSB innovation was actually quite significant. At the same time, in
the 1980s, a competitor called Skytel was trying to get a similar idea off the
ground with the existing AGRAS allocation. It doesn't seem to have gone
anywhere, I don't think the FCC ever approved it. Despite an obvious concept,
Airfone pretty much launched as a monopoly, operating under an experimental
license that named them alone. Unsurprisingly there was some upset over this
apparent show of favoritism by the FCC, including from AT&T, which vigorously
opposed the experimental license.
As it happened, the situation would be resolved by going the other way: in
1990, the FCC established the "commercial aviation air-ground service" which
normalized the 800 MHz spectrum and made licenses available to other operators.
That was six years after Airfone started their build-out, though, giving them a
head start that severely limited competition.
Still, AT&T was back. AT&T introduced a competing service called AirOne. AirOne
was never as widely installed as Airfone but did score some customers including
Southwest Airlines, which only briefly installed AirOne handsets on their fleet.
"Only briefly" describes most aspects of AirOne, but we'll get to that in a moment.
The suddenly competitive market probably gave GTE Airfone reason to innovate,
and besides, a lot had changed in communications technology since Airfone was
designed. One of Airfone's biggest limitations was its lack of true roaming: an
Airfone call could only last as long as the aircraft was within range of the
same ground station. Airfone called this "30 minutes," but you can imagine that
people sometimes started their call near the end of that window, and in
practice the problem was reportedly much worse. Dropped calls were common, adding insult to
the injury that Airfone was decidedly expensive. GTE moved towards digital
technology and automation.
1991 saw the launch of Airfone GenStar, which used QAM digital modulation to
achieve better call quality and tighter utilization within the existing
bandwidth. Further, a new computerized network allowed calls to be handed off
from one ground station to another. Capitalizing on the new capacity and
reliability, the aircraft equipment was upgraded as well. The payphone-like
cordless stations were gone, replaced by handsets installed in seatbacks. First
class cabins often got a dedicated handset for every seat, while economy might
have one handset on each side of a row. The new handsets offered RJ11 jacks,
allowing the use of laptop modems while in-flight. Truly, it was the future.
During the 1990s, satellites were added to the Airfone network as well,
improving coverage generally and making telephone calls possible on overseas
flights. Of course, the rise of satellite communications also sowed the seeds
of Airfone's demise. A company called Aircell, which started out using the
cellular network to connect calls to aircraft, rebranded to Gogo and pivoted
to satellite-based telephone services. By the late '90s, they were taking
market share from Airfone, a trend that would only continue.
Besides, for all of its fanfare, Airfone was not exactly a smash hit. Rates
were very high, $5 a minute in the late '90s, giving Airfone a reputation as a
ripoff that must have cut a great deal into that "20-30% of fliers" they hoped
to serve. With the rise of cellphones, many preferred to wait until the
aircraft was on the ground to use their own cellphone at a much lower rate.
GTE does not seem to have released much in the way of numbers for Airfone, but
it probably wasn't making them rich.
Goeken, returning to the industry, inadvertently proved this point. He
aggressively lobbied the FCC to issue competitive licenses, and ultimately
succeeded. His second company in the space, In-Flight Phone Inc., became one of
the new competitors to his old company. In-Flight Phone did not last for long.
Neither did AT&T AirOne. A 2005 FCC ruling paints a grim picture:
Current 800 MHz Air-Ground Radiotelephone Service rules contemplate six
competing licensees providing voice and low-speed data services. Six entities
were originally licensed under these rules, which required all systems to
conform to detailed technical specifications to enable shared use of the
air-ground channels. Only three of the six licensees built systems and
provided service, and two of those failed for business reasons.
In 2002, AT&T pulled out, and Airfone was the only in-flight phone left. By
then, GTE had become Verizon, and GTE Airfone was Verizon Airfone. Far from a
third of passengers, the CEO of Airfone admitted in an interview that a typical
flight only saw 2-3 phone calls. Considering the minimum five-figure capital
investment in each aircraft, it's hard to imagine that Airfone was
profitable---even at $5 a minute.
Airfone more or less faded into obscurity, but not without a detour into the
press via the events of 9/11. Flight 93, which crashed in Pennsylvania, was
equipped with Airfone and passengers made numerous calls. Many of the events on
board this aircraft were reconstructed with the assistance of Airfone records,
and Claircom (the name of the operator of AT&T AirOne, which never seems to
have been well marketed) also produced records related to other aircraft
involved in the attacks. Most notably, Flight 93 passenger Todd Beamer had a
series of lengthy calls with Airfone operator Lisa Jefferson, through which he
relayed many of the events taking place on the plane in real time. During these
calls, Beamer seems to have coordinated the effort by passengers to retake
control of the plane. The significance of Airfone and Claircom records to 9/11
investigations is such that 9/11 conspiracy theories may be one of the most
enduring legacies of Claircom especially.
In an odd acknowledgment of their aggressive pricing, Airfone decided not to
bill for any calls made on 9/11, and temporarily introduced steep discounts (to
$0.99 a minute) in the weeks after. This rather meager show of generosity did
little to reverse the company's fortunes, though, and it was already well into
a backslide.
In 2006, the FCC auctioned the majority of Airfone's spectrum to new users. The
poor utilization of Airfone was a factor in the decision, as well as Airfone's
relative lack of innovation compared to newer cellular and satellite systems.
In fact, a large portion of the bandwidth was purchased by Gogo, who years
later would use it to deliver in-flight WiFi. Another portion went to a
subsidiary of JetBlue that provided in-flight television. Verizon announced
the end of Airfone in 2006, pending an acquisition by JetBlue, and while the
acquisition did complete, JetBlue does not seem to have continued Airfone's
passenger airline service. A few years later, Gogo bought out JetBlue's
communications branch, making them the new monopoly in 800MHz air ground
radiotelephony. Gogo only offered telephone service for general aviation
aircraft; passenger aircraft telephones had gone the way of the carphone.
It's interesting to contrast the fate of Airfone with its sibling, AGRAS.
Depending on who you ask, AGRAS refers to the radio service or to the Air
Ground Radiotelephone Automated Service operated by Mid-America Computer
Corporation. What an incredible set of names. This was a situation a bit like
ARINC, the semi-private company that for some time held a monopoly on aviation
radio services. MACC had a practical monopoly on general aviation telephone
service throughout the US, by operating the billing system for calls. MACC
still exists today as a vendor of telecom billing software and this always
seems to have been their focus---while I'm not sure, I don't believe that MACC
ever operated ground stations, instead distributing rate payments to private
companies that operated a handful of ground stations each. Unfortunately the
history of this service is quite obscure and I'm not sure how MACC came to
operate the system, but I couldn't resist the urge to mention the Mid-America
Computer Corporation.
AGRAS probably didn't make anyone rich, but it seems to have been generally
successful. Wulfsberg FliteFones operating on the AGRAS network gave way to
Gogo's business aviation phone service, itself a direct descendant of Airfone
technology.
The former AGRAS allocation at 450MHz somehow came under the control of a
company called AURA Network Systems, which for some years has used a temporary
FCC waiver of AGRAS rules to operate data services. This year, the FCC began
rulemaking to formally reallocate the 450MHz air ground allocation to data
services for Advanced Air Mobility, a catch-all term for UAS and air taxi
services that everyone expects to radically change the airspace system in
coming years. New uses of the band will include command and control for
long-range UAS, clearance and collision avoidance for air taxis, and ground and
air-based "see and avoid" communications for UAS. This pattern, of issuing a
temporary authority to one company and later performing rulemaking to allow
other companies to enter, is not unusual for the FCC but does make an
interesting recurring theme in aviation radio. It's typical for no real
competition to occur, the incumbent provider having been given such a big
advantage.
When reading about these legacy services, it's always interesting to look at
the licenses. ULS has only nine licenses on record for the original 800 MHz air
ground service, all expired and originally issued to Airfone (under both GTE
and Verizon names), Claircom (operating company for AT&T AirOne), and Skyway
Aircraft---this one an oddity, a Florida-based company that seems to have
planned to introduce in-flight WiFi but not gotten all the way there.
Later rulemaking to open up the 800MHz allocation to more users created a
technically separate radio service with two active licenses, both held by AC
BidCo. This is an intriguing mystery until you discover that AC BidCo is
obviously a front company for Gogo, something they make no effort to hide---the
legalities of FCC bidding processes are such that it's very common to use shell
companies to hold FCC licenses, and we could speculate that AC BidCo is the
Aircraft Communications Bidding Company, created by Gogo for the purpose of the
2006-2008 auctions. These two licenses are active for the former Airfone band,
and Gogo reportedly continues to use some of the original Airfone ground
stations.
Gogo's air-ground network, which operates at 800MHz as well as in a 3GHz band
allocated specifically to Gogo, was originally based on CDMA cellular
technology. The ground stations were essentially cellular stations pointed
upwards. It's not clear to me if this CDMA-derived system is still in use, but
Gogo relies much more heavily on their Ku-band satellite network today.
The 450MHz licenses are fascinating. AURA is the only company to hold current
licenses, but the 246 license records on file reveal the scale of the AGRAS
business. Airground of Idaho, Inc., until 1999, held a license for an AGRAS
ground station on Brundage Mountain near McCall, Idaho. The Arlington Telephone Company, until a 2004
cancellation, held a license for an AGRAS ground station atop their small
telephone exchange in Arlington, Nebraska. AGRAS ground stations seem to have
been a cottage industry, with multiple licenses to small rural telephone
companies and even sole proprietorships. Some of the ground stations appear to
have been the roofs of strip mall two-way radio installers. In another life,
maybe I would be putting a 450MHz antenna on my roof to make a few dollars.
Still, there were incumbents: numerous licenses belonged to SkyTel, which after
the decline of AGRAS seems to have refocused on paging and, then, gone the same
direction as most paging companies: an eternal twilight as American Messaging
("The Dependable Choice"), promoting innovation in the form of longer-range
restaurant coaster pagers. In another life, I'd probably be doing that too.
[1] This is probably a topic for a future article, but the Mercury Network was
a computerized system that Goeken built for a company called Florists'
Telegraph Delivery (FTD). It was an evolution of FTD's telegraph system that
allowed a florist in one city to place an order to be delivered by a florist
in another city, thus enabling the long-distance gifting of flowers. There were
multiple such networks and they had an enduring influence on the florist industry
and broader business telecommunications.