USB, the Universal Serial Bus, was first released in 1996. It did not achieve
widespread adoption until some years later; for most of the '90s RS-232-ish
serial and its
awkward sibling the parallel port
were the norm for external peripherals. It's sort of surprising that USB didn't
take off faster, considering the significant advantages it had over
conventional serial. Most significantly, USB was self-configuring: when you
plugged a device into a host, a negotiation was performed to detect a
configuration supported by both ends. No more decoding labels like 9600 8N1
and then trying both flow control modes!
There are some significant architectural differences between USB and
conventional serial that come out of autoconfiguration. Serial ports had no
real sense of which end was which. Terms like DTE and DCE were sometimes used,
but they were a holdover from the far more prescriptive genuine RS-232
standard (which PCs and most peripherals did not follow) and often
inconsistently applied by manufacturers. All that really mattered to a serial
connection is that one device's TX pin went to the other device's RX pin, and
vice versa. The real differentiation between DCE and DTE was the placement of
these pins: in principle, a computer would have them one way around, and a
peripheral the other way around. This meant that a straight-through cable would
result in a crossed-over configuration, as expected.
In practice, plenty of peripherals used the same DE-9 wiring convention as PCs,
and sometimes you wanted to connect two PCs to each other. Some peripherals
used 8p8c modular jacks, some peripherals used real RS-232 connectors, and some
peripherals used monstrosities that could only have emerged from the nightmares
of their creators. The TX pin often ended up connected to the TX pin and vice
versa. This did not work. The solution, as we so often see in networking, was
a special cable that crossed over the TX and RX wires within the cable (or
adapter). For historical reasons this was referred to as a null modem cable.
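Just to make the crossover concrete, here's a rough sketch of what the data lines look like through a null modem cable on the PC-style DE-9 pinout. Handshake lines are omitted; real null modem cables vary quite a bit in how they handle those.

```python
# DE-9 "DTE" convention: pin 3 is TX, pin 2 is RX, pin 5 is ground.
# A null modem cable crosses the data lines between the two ends;
# handshake wiring (RTS/CTS, DTR/DSR) varies by cable and is left out here.
NULL_MODEM_DATA_LINES = {
    ("end A", 3, "TX"): ("end B", 2, "RX"),
    ("end A", 2, "RX"): ("end B", 3, "TX"),
    ("end A", 5, "GND"): ("end B", 5, "GND"),
}
```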
One of the other things that was not well standardized with serial connections
was the gender of the connectors. Even when both ends featured the PC-standard
DE-9, there was some inconsistency over the gender of the connectors on the
devices and on the cable. Most people who interact with serial with any
regularity probably have a small assortment of "gender changers" and null-modem
shims in their junk drawer. Sometimes you can figure out the correct
configuration from device manuals (the best manuals provide a full pinout),
but often you end up guessing, stringing together adapters until the genders
fit and then trying with and without a null modem adapter.
You will notice that we rarely go through this exercise today. For that we can
thank USB's very prescriptive standards for connectors on devices and cables.
The USB standard specifies three basic connectors, A, B, and C. There are
variants of some connectors, mostly for size (mini-B, micro-B, even a less
commonly used mini-A and micro-A). For the moment, we will ignore C, which came
along later and massively complicated the situation. Until 2014, there were only
A and B. Hosts had A, and devices had B.
Yes, USB fundamentally employs a host-device architecture. When you connect two
things with USB, one is the host, and the other is the device. This
differentiation is important, not just for the cable, but for the protocol
itself. USB prior to 3, for example, does not feature interrupts. The host must
poll the device for new data. The host also has responsibility for enumeration
of devices to facilitate autoconfiguration, and for flow control throughout a
tree of USB devices.
This architecture makes perfect sense for USB's original 1990s use-case of
connecting peripherals (like mice) to hosts (like PCs). In fact, it worked so
well that once USB1.1 addressed some key limitations it became completely
ubiquitous. Microsoft used the term "legacy-free PC" to identify a new
generation of PCs at the very end of the '90s and early '00s. While there
were multiple criteria for the designation, the most visible to users was
the elimination of multiple traditional ports (like the game port! remember
those!) in favor of USB.
Times change, and so do interconnects. The consumer electronics industry made
leaps and bounds during the '00s and "peripheral" devices became increasingly
sophisticated. The introduction of portables running sophisticated operating
systems pushed the host-device model to a breaking point. It is, of course,
tempting to talk about this revolution in the context of the iPhone. I never
had an iPhone though, so the history of the iDevice doesn't have quite the
romance to me that it has to so many in this space [1]. Instead, let's talk
about Nokia. If there is a Windows XP to Apple's iPhone, it's probably Nokia.
They tried so hard, and got so far, but [...].
The Nokia 770 Internet Tablet was not by any means the first tablet computer,
but it was definitely a notable early example. Introduced in 2005, it premiered
the Linux-based Maemo operating system beloved by Nokia fans until iOS and
Android killed it off in the 2010s. The N770 was one of the first devices to
fall into a new niche: with a 4" touchscreen and OMAP/ARM SoC, it wasn't
exactly a "computer" in the consumer sense. It was more like a peripheral,
something that you would connect to your computer in order to load it up with
your favorite MP3s. But it also ran a complete general-purpose operating
system. The software was perfectly capable of using peripherals itself, and
MP3s were big when you were storing them on MMC. Shouldn't you be able to
connect your N770 to a USB storage device and nominate even more MP3s as
favorites?
Obviously Linux had mainline USB mass storage support in 2005, and by extension
Maemo did. The problem was USB itself. The most common use case for USB on the
N770 was as a peripheral, and so it featured a type-B device connector. It was
not permitted to act as a host. In fact, every PDA/tablet/smartphone type
device with sophisticated enough software to support USB peripherals would
encounter the exact same problem. Fortunately, it was addressed by a supplement
to the USB 2.0 specification released in 2001.
The N770 did not follow the supplement. That makes it fun to talk about, both
because it is weird and because it is an illustrative example of the problems
that need to be solved.
The N770 featured an unusual USB transceiver on its SoC, seemingly unique to
Nokia and called "Tahvo." The Tahvo controller exposed an interface (via
sysfs in the Linux driver) that allowed the system to toggle it between
device mode (its normal configuration) and host mode. This worked well enough
with Maemo's user interface, but host mode had a major limitation. The N770
wouldn't provide power on the USB port; it didn't have the necessary
electrical components. Instead, a special adapter cable was needed to provide
5v power from an alternate source.
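I don't have an N770 in front of me, so take this as a loose sketch of the kind of toggle involved rather than the actual Maemo interface; the sysfs path and the accepted mode strings here are made up for illustration.

```python
# Illustrative sketch only: flipping a Tahvo-style OTG transceiver between
# host and peripheral mode via a sysfs attribute. The path and values are
# assumptions, not the real N770/Maemo interface.
MODE_ATTR = "/sys/devices/platform/tahvo-usb/otg_mode"  # hypothetical path

def set_usb_mode(mode: str) -> None:
    """Write 'host' or 'peripheral' to the controller's mode attribute."""
    if mode not in ("host", "peripheral"):
        raise ValueError("mode must be 'host' or 'peripheral'")
    with open(MODE_ATTR, "w") as attr:
        attr.write(mode)

# e.g. set_usb_mode("host") before plugging in a flash drive via the special
# powered adapter cable, then set_usb_mode("peripheral") when you're done.
```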
So there are several challenges for a USB device to operate as host or device:
The USB controller needs a way to determine if it should behave in host or
device mode. Ideally, the user shouldn't have to think about this.
The USB controller needs to be able to supply power when in host mode, and
in most practical situations also needs to accept power (e.g. for charging)
when in device mode.
Note that "special cable" involved in host mode for the N770. You might think
this was the ugliest part of the situation. You're not wrong, but it's also
not really the hack. For many years to follow, the proper solution to this
problem would also involve a special cable.
As I mentioned, since 2001 there has been a supplement to the USB specification called
USB On-The-Go, commonly referred to as USB OTG, perhaps because On-The-Go is an
insufferably early '00s name. It reminds me of, okay, here goes a full-on
anecdote.
Anecdote
I attended an alternative middle school in Portland that is today known as the
Sunnyside Environmental School. I could tell any number of stories about the
bizarre goings-on at this school that you would scarcely believe, but it also
had its merits. One of them, which I think actually came from the broader
school district, was a program in which eighth graders were encouraged to "job
shadow" someone in a profession they were interested in pursuing. By good
fortune, a friend's father was an electrical engineer employed at Intel's Jones
Farm campus, and agreed to be my host. I had actually been to Jones Farm a
number of times on account of various extracurricular programs (in that era,
essentially every STEM program in the Pacific Northwest operated on the
largess of either Intel or Boeing, if not both). This was different, though:
this guy had a row of engraved brass patent awards lining his cubicle wall and
showed me through labs where technicians tinkered with prototype hardware.
Foreshadowing a concerning later trend in my career, though, the part that
stuck with me most was the meetings. I attended meetings, including one where
this engineering team was reporting to leadership on the status of a few of
their projects.
I am no doubt primed to make this comparison by the mediocre movie I watched
last night, but I have to describe the experience as Wonka-esque. These EEs
demonstrated a series of magical hardware prototypes to some partners from
another company. Each was more impressive than the last. It felt like I was
seeing the future in the making.
My host demonstrated his pet project, a bar that contained an array of
microphones and used DSP methods to compare the audio from each and
directionalize the source of sounds. This could be used for a sophisticated
form of noise canceling in which sound coming from an off-axis direction could
be subtracted, leaving only the voice of the speaker. If this sounds sort of
unremarkable, that is perhaps a reflection of its success, as the same basic
concept is now implemented in just about every laptop on the market. Back then,
when the N770 was a new release, it was challenging to make work and my host
explained that the software behind it usually crashed before he finished the
demo, and sometimes it turned the output into a high pitched whine and he
hadn't quite figured out why yet. I suppose that meeting was lucky.
But that's an aside. A long presentation, and then a debate with skeptical execs,
revolved around a new generation of ultramobile devices that Intel envisioned.
One, which I got to handle a prototype of, would eventually become the Intel
Core Medical Tablet. It featured a chunky, colorful design that is clearly of the
same vintage as the OLPC. It was durable enough to stand on, which a lab
technician demonstrated with delight (my host, I suspect tired of this feat,
picked up some sort of lab interface and dryly remarked that he could probably
stand on it too). The Core Medical Tablet shared another trait with the OLPC:
the kind of failure that leaves no impact on the world but a big footprint at
recyclers. Years later, as an intern at Free Geek, I would come across at least
a dozen.
Another facet of this program, though, was the Mobile Metro. The Metro was a
new category of subnotebook, not just small but thin. A period article compares
its 18mm profile to the somewhat thinner Motorola Razr, another product with an
outsize representation in the Free Geek Thrift Store. Intel staff were
confident that it would appeal to a new mobile workforce, road warriors working
from cars and coffee shops. The Mobile Metro featured SideShow, a small e-ink
display (in fact, I believe, a full Windows Mobile system) on the outside of a
case that could show notifications and media controls.
The Mobile Metro was developed around the same time as the Classmate PC, but
seems to have been even less successful. It was still in the conceptual stages
when I heard of it. It was announced, to great fanfare, in 2007. I don't think
it ever went into production. It had WiMax. It had inductive charging. It only
had one USB port. It was, in retrospect, prescient in many ways both good and
bad.
The point of this anecdote, besides digging up middle school memories while
attempting to keep others well suppressed, is that the mid-2000s were an
unsettled time in mobile computing. The technology was starting to enable
practical compact devices, but manufacturers weren't really sure how people
would use them. Some innovations were hits (thin form factors). Some were
absolute misses (SideShow). Some we got stuck with (not enough USB ports).
End of anecdote
As far as I can tell, USB OTG wasn't common on devices until it started to
appear on Android smartphones in the early 2010s. Android gained OTG support
in 3.1 (codenamed Honeycomb, 2011), and it quickly appeared in higher-end
devices. Now OTG support seems nearly universal for Android devices; I'm
sure there are lower-end products where it doesn't work but I haven't yet
encountered one. Android OTG support is even admirably complete. If you have
an Android phone, amuse yourself sometime by plugging a hub into it, and then
a keyboard and mouse. Android support for desktop input peripherals is actually
very good and operating mobile apps with an MX Pro mouse is an entertaining
and somewhat surreal experience. On the second smartphone I owned, which I hazily
recall was a Samsung from 2012-2013, I used to take notes with a USB keyboard.
iOS doesn't seem to have sprouted user-exposed OTG support until the iPhone 12,
although it seems like earlier versions probably had hardware support that
wasn't exposed by the OS. I could be wrong about this; I can't find a
straightforward answer in Apple documentation. The Apple Community Forums seem
to be... I'll just say "below average." iPads seem to have gotten OTG support a
lot earlier than the iPhone despite using the same connector, making the
situation rather confusing. This comports with my general understanding of iOS,
though, from working with bluetooth devices: Apple is very conservative about
hardware peripheral support in iOS, and so it's typical for iOS to be well
behind Android in this regard for purely software reasons. Ask me about
how this has impacted the Point of Sale market. It's not positive.
But how does OTG work? Remember, USB specifies that hosts must have an A
connector, and devices a B connector. Most smartphones, besides Apple
products and before USB-C, sported a micro-B connector as expected. How
OTG?
The OTG specification decouples, to some extent, the roles of A/B connector,
power supply, and host/device role. A device with USB OTG support should
feature a type AB socket that accommodates either an A or a B plug. Type AB is
only defined for the mini and micro sizes, typically used on portable devices.
The A or B connectors are differentiated not only by the shape of their shells
(preventing a type-A plug being inserted into a B-only socket), but also
electrically. The observant among you may have noticed that mini and micro B
sockets and plugs feature five pins, while USB2.0 only uses four. This is the
purpose of the fifth pin: differentiation of type A and B plugs.
In a mini or micro type B plug, the fifth pin is floating (disconnected). In a
mini or micro type A plug, it is connected to the ground pin. When you insert
a plug into a type AB socket, the controller checks for connectivity between
the fifth pin (called the ID pin) and the ground. If connectivity is present,
the controller knows that it must act as an OTG A-device---it is on the "A"
end of the connection. If there is no continuity, the more common case, the
controller will act as an OTG B-device, a plain old USB device [2].
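The decision the controller makes at plug-in is simple enough to fit in a few lines. This is only a sketch of the logic, not any particular chip's firmware:

```python
# A sketch of the ID-pin decision an OTG controller makes when a plug is
# inserted. Real controllers do this in silicon; the names are illustrative.
def detect_otg_role(id_pin_grounded: bool) -> str:
    """Return the default role implied by the ID pin of the inserted plug."""
    if id_pin_grounded:
        # Mini/micro-A plug: ID pin tied to ground. We are the A-device,
        # so we must supply VBUS and (initially) act as the host.
        return "A-device: supply 5V, act as host"
    # Mini/micro-B plug (or nothing plugged in): ID pin floating. We are
    # the B-device and behave as an ordinary USB peripheral.
    return "B-device: draw power, act as device"
```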
The OTG A-device is always responsible for supplying 5v power (see exception
in [2]). By default, the A-device also acts as the host. This provides a
basically complete solution for the most common OTG use-case: connecting a
peripheral like a flash drive to your phone. The connector you plug into your
phone identifies itself as an A connector via the ID pin, and your phone thus
knows that it must supply power and act as host. The flash drive doesn't need
to know anything about this, it has a B connection and acts as a device as
usual. This simple case only becomes confusing when you consider the few flash
drives sold specifically for use with phones that had a micro-A connector
right on them. These were weird and I don't like them.
In the more common situation, though, you would use a dongle: a special cable.
A typical OTG cable, which was actually included in the package with enough
Android phones of the era that I have a couple in a drawer without having
ever purchased one, provides a micro-A connector on one end and a full-size
A socket on the other. With this adapter, you can plug any USB device into
your phone with a standard USB cable.
Here's an odd case, though. What if you plug two OTG devices into each other?
USB has always had this sort of odd edge-case. Some of you may remember "USB
link cables," which don't really have a technical name but tend to get called
Laplink cables after a popular vendor. Best Buy and Circuit City used to be
lousy with these things, mostly marketed to people who had bought a new
computer and wanted to transfer their files. A special USB cable had two A
connectors, which might create the appearance that it connected two hosts,
but in fact the cable (usually a chunky bit in the middle) acted as two
devices to connect to two different hosts. The details of how these actually
worked varied from product to product, but the short version is "it was
proprietary." Most of them didn't work unless you found the software that
came with them, but there are some pseudo-standard controllers supported
out of the box by Windows or Linux. I would strongly suggest that you protect
your mental state by not trying to use one.
OTG set out to address this problem more completely. First, it's important to
understand that this in no way poses an exception to the rule that a USB
connection has an A end and a B end. A USB cable you use to connect two phones
together might, at first glance, appear to be B-B. But if you inspect more
closely, you will find that one end is mini or micro A, and the other is mini
or micro B. You may have to look closely; the micro connectors in particular
have very similar shells!
If you are anything like me, you are most likely to have encountered such a
cable in the box with a TI-84+. These calculators had a type AB connector and
came with a mini A->B cable to link two units. You might think, by extension,
that the TI-84+ used USB OTG. The answer is kind of! The USB implementation on
the TI-84+ and TI-84+SE was very weird, and the OS didn't support anything
other than TIConnect. Eventually the TI-84+CE introduced a much more standard
USB controller, although I think support for any OTG peripheral still has to be
hacked on to the OS. TI has always been at the forefront of calculator
networking, and it has always been very weird and rarely used.
This solves part of the problem: it is clear, when you connect two phones,
which should supply power and which should handle enumeration. The A-device is,
by default, in charge. There are problems where this interacts with common USB
device types, though. One of the most common uses of USB with phones is mass
storage (and its evil twin MTP). USB mass storage has a very strong sense of
host and device at a logical level; the host can browse the device's files.
When connecting two smartphones, though, you might want to browse from either
end. Another common problem case here is that of the printer, or at least it
would be if printer USB host support was ever usable. If you plug a printer
into a phone, you might want to browse the phone as mass storage on the
printer. Or you might want to use conventional USB printing to print a document
from the phone's interface. In fact you almost certainly want to do the latter,
because even with Android's extremely half-assed print spooler it's probably a
lot more usable than the file browser your printer vendor managed to offer on
its 2" resistive touchscreen.
OTG adds Host Negotiation Protocol, or HNP, to help in this situation. HNP
allows the devices on a USB OTG connection to swap roles. While the A-device
will always be the host when first connected, HNP can reverse the logical
roles on demand.
This all sounds great, so where does it fall apart? Well, the usual places.
Android devices often went a little off the script with their OTG
implementations. First, the specification did not require devices to be
capable of powering the bus, and many phones couldn't. Fortunately that seems
to have been a pretty short-lived problem, only common in the first couple of
generations of OTG devices. This wasn't the only limitation of OTG
implementations; I don't have a good sense of scale, but I've seen multiple
reports that many OTG devices in the wild didn't actually support HNP: they
just determined a role when connected, based on the ID pin, and could not
change it after that point.
Finally, and more insidiously, the whole thing about OTG devices having an AB
connector didn't go over as well as intended. We actually must admire TI for
their rare dedication to standards compliance. A lot of Android phones with
OTG support had a micro-B connector only, and as a result a lot of OTG
adapters use a micro-B connector.
There's a reason this was common; since A and B plugs are electrically
differentiable regardless of the shape of the shell, the shell shape arguably
doesn't matter. You could be a heavy OTG user with such a noncompliant phone
and adapter and never notice. The problem only emerges when you get a (rare)
standards-compliant OTG adapter or, probably more common, OTG A-B cable.
Despite being electrically compatible, the connector won't fit into your phone.
Of course this behavior feeds itself; as soon as devices with an improper B
port were common, manufacturers of cables were greatly discouraged from using
the correct A connector.
The downside, conceptually, is that you could plug an OTG A connector (with a
B-shaped shell) into a device with no OTG support. In theory this could cause
problems, in practice the problems don't seem to have been common since both
devices would think they were B devices and (if standards compliant) not
provide power. Essentially these improper OTG adapters create a B-B cable. It's
a similar problem to an A-A cable but, in practice, less severe. Like an
extension cord with two female ends. Home Depot might even help you make one
of those.
While trying to figure out which iPhones had OTG support, I ran across an Apple
Community thread where someone helpfully replied "I haven't heard of OTG in
over a decade." Well, it's not a very helpful reply, but it's not exactly wrong
either. No doubt the dearth of information on iOS OTG is in part because no
one ever really cared. Much like the HDMI-over-USB support that a generation
of Android phones included, OTG was an obscure feature. I'm not sure I have
ever, even once, seen a human being other than myself make use of OTG.
Besides, it was completely buried by USB-C.
The thing is that OTG is not gone at all; in fact, it's probably more popular
than ever before. There seems to be some confusion about how OTG has evolved
with USB specifications. I came across more than one article saying that USB
3.1 Dual Role replaced OTG. This assertion is... confusing. It's not incorrect,
but there's a good chance of it leading you in the wrong direction.
Much of the confusion comes from the fact that Dual-Role doesn't mean anything
that specific. The term Dual-Role and various resulting acronyms like DRD and
DRP have been applied to multiple concepts over the life of USB. Some vendors
say "static dual role" to refer to devices that can be configured as either
host or device (like the N770). Some vendors use dual role to identify chipsets
that detect role based on the ID pin but are not actually capable of OTG
protocols like HNP. Some articles use dual role to identify chipsets with OTG
support. Subjectively, I think the intent of the changes in USB 3.1 was mostly
to formally adopt the "dual role" term that was already the norm in informal
use---and hopefully standardize the meaning.
For USB-C connectors, it's more complicated. USB-C cables are symmetric; they
do not identify a host or device end in any way. Instead, the USB-C ports
use resistance values to indicate their type. When either end indicates that
it is only capable of the device role, the situation is simple and behaves
basically the same way that OTG did: the other end detects that it is
connected to a device and acts as the host.
When both ends support the host role, things work differently: the Dual Role
feature of USB-C comes into play. The actual implementation is reasonably
simple; a dual-role USB-C controller will attempt to set up a connection both
ways and go with whichever succeeds. There are some minor complications on top
of this, for example, the controller can be configured with a "preference" for
host or device role. This means that when you plug your phone into your
computer via USB-C, the computer will assume the host role, because although
it's capable of either role, the phone is configured with a preference for the device
role. That matches consumer expectations. When both devices are capable of dual
roles and neither specifies a preference, the outcome is random. This scenario
is interesting but not all that common in practice.
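If you wanted to sketch the dual-role behavior in software, it would look something like this. The real mechanism lives in the Type-C port controller and works through CC terminations and spec-defined state machines (with names like Try.SRC); the callbacks, the preference handling, and the toggle timing here are simplifications for illustration.

```python
import random
import time

# Illustrative sketch of a USB-C dual-role port (DRP) resolving its role.
# The callbacks stand in for sensing the far end's CC termination; they and
# the toggle interval are assumptions, not the Type-C specification.
def drp_resolve_role(far_end_presents_rd, far_end_presents_rp, prefer="none"):
    """Return 'host' or 'device' once the far end's termination is seen.

    far_end_presents_rd(): True if, while acting as a source, we see a
        device-style pull-down on a CC pin.
    far_end_presents_rp(): True if, while acting as a sink, we see a
        host-style pull-up on a CC pin.
    """
    if prefer == "host":
        role = "source"
    elif prefer == "device":
        role = "sink"
    else:
        role = random.choice(["source", "sink"])  # no preference: effectively random

    while True:
        if role == "source" and far_end_presents_rd():
            return "host"    # we supply VBUS and enumerate the other end
        if role == "sink" and far_end_presents_rp():
            return "device"  # the other end supplies VBUS and enumerates us
        role = "sink" if role == "source" else "source"  # toggle and retry
        time.sleep(0.05)     # illustrative toggle interval
```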
The detection of host or device role by USB-C is based on the CC pins,
basically a more flexible version of OTG's ID pin. There's another important
difference between the behavior of USB-C and A/B: USB-C interfaces provide no
power until they detect, via the CC pins, that the other device expects it.
This is an important ingredient in mitigating the problem seen with A-A cables,
where both devices attempt to power the same bus.
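The "no power until asked" rule is easy to picture: a source-capable port watches the CC pins for the sink's pull-down before it ever turns VBUS on. Here's a toy version using the nominal 5.1 kOhm Rd value from the Type-C spec; everything else is simplified.

```python
# Sketch of the VBUS gating rule: a source-capable USB-C port leaves VBUS off
# until it sees a sink's pull-down (Rd, nominally 5.1 kOhm) on one CC pin.
# The tolerance and interface here are illustrative, not from the spec.
RD_NOMINAL_OHMS = 5100

def should_enable_vbus(cc1_ohms, cc2_ohms, tolerance=0.2):
    """Enable VBUS only if one CC pin shows a plausible Rd to ground."""
    def looks_like_rd(r):
        return r is not None and abs(r - RD_NOMINAL_OHMS) <= tolerance * RD_NOMINAL_OHMS
    return looks_like_rd(cc1_ohms) or looks_like_rd(cc2_ohms)
```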
The USB-C approach of using CC pins and having dual role controllers attempt
one or the other at their preference is, for the most part, a much more elegant
approach. There are a couple of oddities. First, in practice cables from C to A
or B connectors are extremely common. These cables must provide the appropriate
values on the CC pins to allow the USB-C controller to correctly determine its
role, both for data and power delivery.
Second, what about role reversal? For type A and B connectors, this is achieved
via HNP, but HNP is not supported on USB-C. Application notes from several USB
controller vendors explain that, oddly enough, the only way to perform role
reversal with USB-C is to implement USB Power Delivery (PD) and use the PD
negotiation protocol to change the source of power. In other words, while OTG
allows reversing host and device roles independently of the bus power source,
USB-C does not. The end supplying power is always the host end. This apparent
limitation probably isn't that big of a deal, considering that the role
reversal feature of OTG was reportedly seldom implemented.
That's a bit of a look into what happens when you plug two USB hosts into each
other. Are you confused? Yeah, I'm a little confused too. The details vary, and
they vary far more with the capabilities of the individual devices than with
the USB version in use. This has been the malaise of USB for a solid decade
now, at least: the specification has become so expansive, with so many
non-mandatory features, that it's a crapshoot what capabilities any given USB
port actually has. The fact that USB-C supports a bevy of alternate modes
like Thunderbolt and HDMI only adds further confusion.
I sort of miss when the problem was just inappropriate micro-B connectors.
Nonetheless, USB-C dual role support seems ubiquitous in modern smartphones,
and that's the only place any of this ever really mattered. Most embedded
devices still seem to prefer to just provide two USB ports: a host port and a
device port. And no one ever uses the USB host support on their printer. It's
absurd; no one ever would. Have you seen what HP thinks is a decent file
browser? Good lord.
[1] My first smartphone was the HTC Thunderbolt. No one, not even me, will
speak of that thing with nostalgia. It was pretty cool owning one of the
first LTE devices on the market, though. There was no contention at all in
areas with LTE service and I was getting 75+Mbps mobile tethering in 2011.
Then everyone else had LTE too and the good times ended.
[2] There are actually several additional states defined by fixed resistances
that tell the controller that it is the A-device but power will be supplied by
the bus. These states were intended for Y-cables that allowed you to charge
your phone from an external charger while using OTG. In this case neither
device supplies power, the external charger does. The details of how this works
are quite straightforward but would be confusing to keep adding as an exception,
so I'm going to pretend the whole feature doesn't exist.
Programming note/shameless plug: I am finally on Mastodon.
The history of the telephone industry is a bit of an odd one. For the greatest
part of the 20th century, telephony in the United States was largely a monopoly
of AT&T and its many affiliates. This wasn't always the case, though. AT&T held
patents on their telephone implementation, but Bell's invention was not the
only way to construct a practical telephone. During the late 19th century,
telephone companies proliferated, most using variations on the design they felt
would fall outside of Ma Bell's patent portfolio. AT&T was aggressive in
challenging these operations but not always successful. During this period,
it was not at all unusual for a city to have multiple competing telephone
companies that were not interconnected.
Shortly after the turn of the 20th century, AT&T moved more decisively towards
monopoly. Theodore Newton Vail, president of AT&T during this period, adopted
the term "Universal Service" to describe the targeted monopoly state: there
would be one universal telephone system. One operated under the policies and,
by implication, the ownership of AT&T. AT&T's path to monopoly involved many
political and business maneuvers, the details of which have filled more than a
few dissertations in history and economics. By the 1920s the deal was done:
there would be virtually no (and in a legal sense literally no) long-distance
telephone infrastructure in the United States outside of The Bell System.
But what of the era's many telephone entrepreneurs? For several American
telephone companies struggling to stand up to AT&T, the best opportunities were
overseas. A number of countries, especially elsewhere in the Americas, had
telephone systems built by AT&T's domestic competitors. Perhaps the most neatly
named was ITT, the International Telephone and Telegraph company. ITT was
formed from the combination of Puerto Rican and Cuban telephone companies, and
through a series of acquisitions expanded into Europe.
Telefónica, for example, is a descendant of an early ITT acquisition. Other
European acquisitions led to wartime complications, like the C. Lorenz company,
which under ITT ownership functioned as a defense contractor to the Nazis
during WWII. Domestically, ITT also expanded into a number of businesses
outside of the monopolized telephone industry, including telegraphy and
international cables.
ITT had been bolstered as well by an effect of AT&T's first round of antitrust
cases during the 1910s and 1920s. As part of one of several settlements, AT&T
agreed to divest several overseas operations to focus instead on the domestic
market. They found a perfect buyer: ITT, a company which already seemed like a
sibling of AT&T and through acquisitions came to function as one.
ITT grew rapidly during the mid-century, and in the pattern of many industrial
conglomerates of the time ITT diversified. Brands like Sheraton Hotels and Avis
Rent-a-Car joined the ITT portfolio (incidentally, Avis would be spun off,
conglomerated with others, and then purchased by previous CAB subject Beatrice
Foods). ITT was a multi-billion-dollar American giant.
Elsewhere in the early technology industry, salesman Howard W. Sams worked for
the P. R. Mallory Company in Indianapolis during the 1930s and 1940s. Mallory
made batteries and electronic components, especially for the expanding radio
industry, and as Sams sold radio components to Mallory customers he saw a
common problem and a sales opportunity: radio technicians often needed
replacement components, but had a hard time identifying them and finding a
manufacturer. Under the auspices of the Mallory company Sams produced and
published several books on radio repair and electronic components, but Mallory
didn't see the potential that Sams did in these technical manuals.
Sams, driven by the same electronics industry fervor as so many telephone
entrepreneurs, struck out on his own. Incorporated in 1946, the Howard W. Sams
Company found quick success with its Photofact series. Sort of the radio
equivalent of Haynes and Chilton in the auto industry, Photofact provided
schematics, parts lists, and repair instructions for popular radio receivers.
They were often found on the shelves of both technicians and hobbyists, and
propelled the Sams Company to million-dollar revenues by the early 1950s.
Sams would expand along with the electronics industry, publishing manuals on
all types of consumer electronics and, by the 1960s, books on the use of
computers. Sams, as a technical press, eventually made its way into the
ownership of Pearson. Through Pearson's InformIT, the Sams Teach Yourself
series remains in bookstores today. I am not quite sure, but I think one of the
first technical books I ever picked up was an earlier edition of Sams HTML in
24 Hours.
The 1960s were an ambitious era, and Sams was not content with just books.
Sams had taught thousands of electronics technicians through their books. Many
radio technicians had demonstrated their qualifications and kept up to date by
maintaining a membership in the Howard Sams Radio Institute, a sort of
correspondence program. It was a natural extension to teach electronics skills
in person. In 1963, Sams opened the Sams Technical Institute in Indianapolis.
Shortly after, they purchased the Acme Institute of Technology (Dayton, Ohio)
and the charmingly named Teletronic Technical Institute (Evansville, Indiana),
rebranding both as Sams campuses.
In 1965, the Sams Technical Institute had 2,300 students across five locations.
Sams added the Bramwell Business College to its training division, signaling a
move into the broader world of higher education. It was a fast growing
business; it must have looked like a great opportunity to a telephone company
looking for more ways to diversify. In 1968, ITT purchased the entire training
division from Sams, renaming it ITT Educational Services [1].
ITT approached education with the same zeal it had brought to overseas telephone service.
ITT Educational Services spent the late '60s and early '70s on a shopping
spree, adding campus after campus to the ITT system. Two newly constructed
campuses expanded ITT's business programs, and during the '70s ITT introduced
formal curriculum standardization programs and a bureaucratic structure to
support its many locations. Along with expansion came a punchier name: the ITT
Technical Institute.
"Tri-State Businessmen Look to ITT Business Institute, Inc. for Graduates,"
reads one corner of a 1970 full-page newspaper ad. "ITT adds motorcycle repair
course to program," 1973. "THE ELECTRONICS AGE IS HERE. If your eyes are on the
future, ITT Technical institute can prepare you for a HIGH PAYING, EXCITING
career in... ELECTRONICS," 1971. ITT Tech has always known the value of
advertising, and ran everything from full-page "advertorials" to succinct
classified ads throughout their growing region.
During this period, ITT Tech clearly operated as a vocational school rather
than a higher education institution. Many of its programs ran as short as two
months, and they were consistently advertised as direct preparation for a
career. These sorts of job-oriented programs were very attractive to veterans
returning from Vietnam, and ITT widely advertised to veterans on the basis of
its approval (clearly by 1972 based on newspaper advertisements, although some
sources say 1974) for payment under the GI Bill. Around the same time ITT Tech
was approved for the fairly new federal student loan program. Many of ITT's
students attended on government money, with or without the expectation of
repayment.
ITT Tech flourished. By the mid-'70s the locations were difficult to count, and
ITT had over 1,000 students in several states. ITT Tech was the "coding boot
camp" of its day, advertising computer programming courses that were sure to
lead to employment in just about six months. Like the coding boot camps of
our day, these claims were suspect.
In 1975, ITT Tech was the subject of investigations in at least two states. In
Indiana, three students complained to the Evansville municipal government after
ITT recruiters promised them financial aid and federally subsidized employment
during their program. ITT and federal work study, they were told, would take
care of all their living expenses. Instead, they ended up living in a YWCA off
of food stamps. The Indiana board overseeing private schools allowed ITT to
keep its accreditation only after ITT promised to rework its entire recruiting
policy---and pointed out that the recruiters involved had left the company. ITT
refunded the tuition of a dozen students who joined the complaint, which no
doubt helped their case with the state.
Meanwhile, in Massachusetts, the Boston Globe ran a ten-part investigative
series on the growing for-profit vocational education industry. ITT Tech, they
alleged, promised recruits to its medical assistant program guaranteed
post-graduation employment. The Globe claimed that almost no students of the
program successfully found jobs, and the Massachusetts Attorney General agreed.
In fact, the AG found, the program's placement rate didn't quite reach 5%. For
a settlement, ITT Tech agreed to change its recruiting practices and refund
nearly half a million dollars in tuition and fees.
ITT continued to expand at a brisk pace, adding more than a dozen locations in
the early '80s and beginning to offer associates degrees. Newspapers from
Florida to California ran ads exhorting readers to "Make the right connections!
Call ITT Technical Institute." As the 1990s dawned, ITT Tech enjoyed the same
energy as the computer industry, and aspired to the same scale. In 1992, ITT
Tech announced their "Vision 2000" master plan, calling for bachelor's programs,
80 locations, and 45,000 students by the beginning of the new millennium. ITT Tech
was the largest provider of vocational training in the country.
In 1993, ITT Tech was one of few schools accepted into the first year of the
Direct Student Loan program. The availability of these new loans gave enrollment
another boost, as ITT Tech reached 54 locations and 20,000 students. In 1994,
ITT Tech started to gain independence from its former parent: an IPO sold 17%
ownership to the open market, with ITT retaining the remaining 83%. The next
year, ITT itself went through a reorganization and split, with its majority
share of ITT Tech landing in the new ITT Corporation.
As was the case with so many diversified conglomerates of the '90s (see
Beatrice Foods again), ITT's reorganization was a bad portent. ITT Hartford,
the spun-out financial services division, survives today as The Hartford. ITT
Industries, the spun-out defense contracting division, survives today as well,
confusingly renamed to ITT Corporation. But the third part of the 1995 breakup,
the ITT Corporation itself, merged with Starwood Hotels and Resorts. The real
estate and hospitality side-business of a telephone and telegraph company saw
the end of its parent.
Starwood had little interest in vocational education, and over the remainder
of the '90s sold off its entire share of ITT Tech. Divestment was a good idea:
the end of the '90s hit hard for ITT Tech. Besides the general decline of the
tech industry as the dot com bubble burst, ITT Tech's suspect recruiting
practices were back. This time, they had attracted federal attention.
In 1999, two ITT Tech employees filed a federal whistleblower suit alleging
that ITT Tech trained recruiters to use high-pressure sales tactics and
outright deception to obtain students eligible for federal aid. Recruiters were
paid a commission for each student they brought in, and ITT Tech obtained 70%
of its revenue from federal aid programs. A federal investigation moved slowly,
apparently protracted by the Department of Education's nervous approach
following the criticism it received for shutting down similar operation
Computer Learning Centers. In 2004, federal agents raided ITT Tech campuses
across ten states, collecting records on recruitment and federal funding.
During the early 2000s ITT Tech students defaulted on $400 million in federal
student loans. The result, a large portion of ITT Tech revenue coming from
defaulted federal loans, attracted ongoing attention. ITT Tech was deft in its
legal defense, though, and through a series of legal victories and, more often,
settlements, ITT Tech stayed in business.
ITT Tech aggressively advertised throughout its history. In the late '90s and
early '00s, ITT Tech's constant television spots filled a corner of my brain.
"How Much You Know Measures How Far You Can Go," a TV spot proclaims, before
ITT's distinctive block letter logo fades on screen in metallic silver. By the
year 2000, International Telephone and Telegraph, or rather its scattered
remains, no longer had any relationship with ITT Tech. Starwood agreed to
license the name and logo to the independent public ITT Technical Institutes
corporation, though, and with the decline of ITT's original business the ITT
name and logo became associated far more with the for-profit college than the
electronics manufacturer.
For-profit universities attracted a lot of press in the '00s---the wrong kind
of press. ITT Tech was far from unique in suspicious advertising and
recruiting, high tuition rates, and frequent defaults on the federal loans that
covered that tuition. For-profit education, it seemed, was more of a scam on
the taxpayer dollar than a way to secure a promising new career. Publicly traded
colleges like DeVry and the University of Phoenix had repeated scandals over
their use, or abuse, of federal aid, and a 2004 criminal investigation into
ITT Tech for fraud on federal student aid made its future murky.
ITT Tech was a survivor. The criminal case fell apart, the whistleblower
lawsuit led to nothing, and ITT Tech continued to grow. In 2009, ITT Tech
acquired the formerly nonprofit Daniel Webster University, part of a wave of
for-profit conversions of small colleges. ITT Tech explained the purchase as a
way to expand their aeronautics offerings, but observers suspected other
motives, ones that had more to do with the perceived legitimacy of what was
once a nonprofit, regionally accredited institution. Today, regional
accreditors re-investigate institutions that are purchased. There was a series
of suspect expansions of small colleges to encompass large for-profit
organizations during the '00s that led to the tightening of these rules.
ITT Tech, numerically, achieved an incredible high. In 2014, ITT Tech reported
a total cost of attendance of up to $85,000. I didn't spend that much on my BS
and MS combined. Of course, I attended college in impoverished New Mexico, but
we can make a comparison locally. ITT Tech operated here as well, and
curiously, New Mexico tuition is specially listed in an ITT Tech cost estimate
report because it is higher. At its location in Albuquerque's Journal Center
office development, ITT Tech charged more than $51,000 in tuition alone for an
Associate's in Criminal Justice. The same program at Central New Mexico
Community College would have cost under $4,000 over the two years [2].
That isn't the most remarkable part, though. A Bachelor's in Criminal Justice would
run over $100,000---more than the cost of a JD at UNM School of Law, for an
out-of-state student, today.
In 2014, more than 80% of ITT Tech's revenue came from federal student aid.
Their loan default rate was the highest of even for-profit programs. With their
extreme tuition costs and notoriously poor job placement rates, ITT Tech
increasingly had the appearance of an outright fraud.
Death came swiftly for ITT Tech. In 2016, they were a giant with more than 130
campuses and 40,000 students. The Consumer Financial Protection Bureau sued.
State Attorneys General followed, with New Mexico's Hector Balderas one of the
first two. The killing blow, though, came from the Department of Education,
which revoked ITT Tech's eligibility for federal student aid. Weeks later, ITT
Tech stopped accepting applications. The next month, they filed for bankruptcy,
chapter 7, liquidation.
Over the following years, the ITT Tech scandal would continue to echo. After a
series of lawsuits, the Department of Education agreed to forgive the federal
debt of ITT Tech attendees, although a decision by Betsy DeVos to end the ITT
Tech forgiveness program produced a new round of lawsuits over the matter in
2018. Private lenders faced similar lawsuits, and made similar settlements.
Between federal and private lenders, I estimate almost $4.5 billion in loans to
pay ITT Tech tuition were written off.
The Department of Education decision to end federal aid to ITT Tech was based,
in part, on ITT Tech's fraying relationship with its accreditor. The
Accrediting Council for Independent Colleges and Schools (ACICS), a favorite of
for-profit colleges, had its own problems. That same summer in 2016, the
Department of Education ended federal recognition of ACICS. ACICS accreditation
reviews had been cursory, and it routinely continued to accredit colleges
despite their failure to meet even ACICS's lax standards. ITT Tech was not the
only large ACICS-accredited institution to collapse in scandal.
Two years later, Betsy DeVos reinstated ACICS to federal recognition. Only 85
institutions still relied on ACICS, among them such august names as the Professional
Golfers Career College and certain campuses of the Art Institutes that were
suspect even by the norms of the Art Institutes (the Art Institutes folded just
a few months ago following a similar federal loan fraud scandal). ACICS lost
federal recognition again in 2022. Only time will tell what the next
presidential administration holds for the for-profit college industry.
ITT endured a long fall from grace. A leading electronics manufacturer in 1929,
a diversified conglomerate in 1960, scandals through the 1970s. You might say
that ITT is distinctly American in all the best and worst ways. They grew to
billions in revenue through an aggressive program of acquisitions. They were
implicated in the CIA coup in Chile. They made telephones and radios and radars
and all the things that formed the backbone of the mid-century American
electronics industry.
The modern ITT Corporation, descended from spinoff company ITT Industries,
continues on as an industrial automation company. They have abandoned the
former ITT logo, distancing themselves from their origin. The former defense
division became Exelis, later part of Harris, now part of L3, doomed to slowly
sink into the monopolized, lethargic American defense industry. German tool
and appliance company Kärcher apparently holds a license to the former ITT
logo, although I struggle to find any use of it.
To most Americans, ITT is ITT Tech, a so-called college that was actually a
scam, an infamous scandal, a sink of billions of dollars in federal money.
Dozens of telephone companies around the world, tracing their history back to
ITT, are probably better off distancing themselves from what was once a
promising international telephone operator, a meaningful technical competitor
to Western Electric. The conglomeration of the second half of the 20th century
put companies together and then tore them apart; they seldom made it out in
as good of condition as they went in. ITT went through the same cycle as so
many other large American corporations. They went into hotels, car rentals,
then into colleges. They left thousands of students in the lurch on the way
out. When ITT Tech went bankrupt, everyone else had already started the
semester. They weren't accepting applicants. They wouldn't accept transfer
credit from ITT anyway; ITT's accreditation was suspect.
"What you don't know can hurt you," a 1990s ITT Tech advertisement declares.
In Reddit threads, ITT Tech alums debate if they're better off telling
prospective employers they never went to college at all.
[1] Sources actually vary on when ITT purchased Sams Training Institute, with
some 1970s newspaper articles putting it as early as 1966, but 1968 is the year
that ITT's involvement in Sams was advertised in the papers. Further confusing
things, the former Sams locations continued to operate under the Sams Technical
Institute name until around 1970, with verbiage like "part of ITT Educational
Services" inconsistently appearing. ITT may have been weighing the value of its
brand recognition against Sams but apparently made a solid decision during
1970, after which ads virtually always use the ITT name and logo above any
other.
[2] Today, undergraduate education across all of New Mexico's public
universities and community colleges is free for state residents. Unfortunately
2014 was not such an enlightened time. I must take every opportunity to brag
about this remarkable and unusual achievement in our state politics.
The term "VHF omnidirectional range" can at first be confusing, because it
includes "range"---a measurement that the technology does not provide. The
answer to this conundrum is, as is so often the case, history. The "range"
refers not to the radio equipment but to the space around it, the area in which
the signal can be received. VOR is an inherently spatial technology; the signal
is useless except as it relates to the physical world around it.
This use of the word "range" is about as old as instrument flying, dating back
to the first radionavigation devices in the 1930s. We still use it today, in
the somewhat abstract sense of an acronym that is rarely expanded: VOR.
This is Truth or Consequences VOR. Or, perhaps more accurately, the transmitter
that defines the center of the Truth or Consequences VOR, which extends perhaps
two hundred miles around this point. The range can be observed only by
instruments, but it's there, a phase shift that varies like terrain.
The basic concept of VOR is reasonably simple: a signal is transmitted with two
components, a 30Hz tone in amplitude modulation and a 30Hz tone in frequency
modulation. The two tones are out of phase, by an amount that is determined by
your position in the range, and more specifically by the radial from the VOR
transmitter to your position. This apparent feat of magic, a radio signal that
is different in different locations, is often described as "space modulation."
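If you wanted to play with the idea, the receiver-side math is just a phase comparison between two 30Hz tones. This toy version glosses over the actual demodulation (the reference tone really rides on a 9960Hz FM subcarrier) and the sign convention, but it shows the principle:

```python
import numpy as np

# Toy illustration of how a VOR receiver recovers its radial: compare the
# phase of the 30 Hz variable (AM) tone against the 30 Hz reference (FM)
# tone. Real receivers demodulate the 9960 Hz subcarrier first; the sample
# rate and sign convention here are illustrative.
def radial_from_tones(reference_30hz, variable_30hz, sample_rate=9000):
    t = np.arange(len(reference_30hz)) / sample_rate

    def phase(tone):
        # Correlate the tone against quadrature 30 Hz references.
        i = np.sum(tone * np.cos(2 * np.pi * 30 * t))
        q = np.sum(tone * np.sin(2 * np.pi * 30 * t))
        return np.arctan2(q, i)

    # The phase difference, expressed in degrees 0-360, is the radial.
    return np.degrees(phase(variable_30hz) - phase(reference_30hz)) % 360
```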
The first VOR transmitters achieved this effect the obvious way, by rapidly
spinning a directional antenna in time with the electronically generated phase
shift. Spinning anything quickly becomes a maintenance headache, and so VOR was
quickly transitioned to solid-state techniques. Modern VOR transmitters are
rotated electronically, by one of two techniques. They rotate in the same
sense that images rotate on a screen: a set of discrete changes in a
solid-state system that produces the effect of rotation.
The Truth or Consequences VOR operates on 112.7 MHz, near the middle of the
band assigned for this use. Patterned after the nearby Truth or Consequences
Airport, KTCS, it identifies itself by transmitting "TCS" in Morse code. Modern
charts give this identifier in dots and dashes, an affordance to the poor level
of Morse literacy among contemporary pilots.
In the airspace, it defines the intersection of several airways. They all go
generally north-south, unsurprising considering that the restricted airspace
of White Sands Missile Range prevents nearly all flight to the east. Flights
following the Rio Grande, most north-south traffic in this area, will pass
directly overhead on their way to VOR transmitters at Socorro or Deming or El
Paso, where complicated airspace leads to two such sites very nearby.
This is the function that VORs serve: for the most part, you fly to or from
them. Because the radial from the VOR to you remains constant, they provide a
reliable and easy to use indication that you are still on the right track. A
warning sign, verbose by tradition, articulates the significance:
This facility is used in FAA air traffic control. Loss of human life may
result from service interruption. Any person who interferes with air traffic
control or damages or trespasses on this property will be prosecuted under
federal law.
The sign is backed up by a rustic wooden fence. Like most VOR transmitters,
this one was built in the late 1950s or 1960s. The structure has seen only
minimal changes since then, although the radio equipment has been improved and
simplified.
The central, omnidirectional antenna of a VOR transmitter makes for a
distinctive silhouette. You have likely noticed one before. I must admit that
I have somewhat simplified; most of the volume of the central antenna housing
is actually occupied by the TACAN antenna. Most VOR sites in the US are really
VORTAC sites, combining the civilian VOR and military TACAN systems into one
facility. TACAN has several minor advantages over VOR for military use, but one
big advantage: it provides not only a radial but a distance. The same system
used by TACAN for distance information, based on an unusual radio modulation
technique called "squitter," can be used by civilian aircraft as well in the
form of DME. VORTAC sites thus provide VOR, DME, and TACAN service.
True VOR sites, rare in the US but plentiful across the rest of the world, have
smaller central antennas. If you are not used to observing the ring of
radial antennas, you might not recognize them as the same system.
The radial antennas are placed in a circle some distance away, to leave open space
between them. This reduces, but does not eliminate, the effect of each
antenna's radiated power being absorbed by its neighbors. They are often on the
roof of the equipment building, and may be surrounded by a metallic ground
plane that extends still further. Most US VORTAC sites, originally built before
modern RF technology, rely on careful positioning on suitable terrain rather
than a ground plane.
Intriguingly, the radial antennas are not directional designs. In a modern VOR
site, the radial antennas all transmit an in-phase signal. The phase shift used
for space modulation is created by rapidly switching which of the radial
antennas is in use: the space modulation is created not by rotating a
directional antenna, but by moving the transmitting antenna through a circular
path and allowing the Doppler effect to vary the apparent phase of the
received signal.
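The numbers work out neatly if you do the arithmetic. Here's a back-of-the-envelope version; the ring radius is a typical figure for Doppler VOR installations, not a measurement of this site.

```python
import math

# Back-of-the-envelope sketch of the Doppler VOR trick described above.
# The ring radius is an assumed, typical value; the carrier frequency is TCS's.
C = 3.0e8            # speed of light, m/s
F_CARRIER = 112.7e6  # TCS VOR carrier, Hz
RING_RADIUS = 6.7    # approximate antenna ring radius in meters (assumed)
ROTATION_HZ = 30     # apparent rotation rate of the transmitting point

# Tangential speed of the (electronically) moving transmit point:
v = 2 * math.pi * RING_RADIUS * ROTATION_HZ   # on the order of 1 km/s
# Peak Doppler shift seen by a distant receiver:
peak_shift = F_CARRIER * v / C                # a few hundred Hz of deviation
print(round(v), "m/s,", round(peak_shift), "Hz peak deviation")
```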
The lower part of the central antenna, the more cone shaped part, is mostly
empty. It encloses the structure that supports the cylindrical radome that
houses the actual antenna elements. In newer installations it is often an
exposed frame, but the original midcentury sites all provide a conical
enclosure. I suspect the circular metallic sheathing simplified calculation of
the effective radiation pattern at the time.
An access door can be used to reach the interior to service the antennas; the
rope holding this one closed is not standard equipment but is perhaps also not
very unusual. These are old facilities. When this cone was installed, adjacent
Interstate 25 wasn't an interstate yet.
Aviation engineers leave little to chance, and almost never leave a system
without a spare. Ground-based infrastructure is no exception. Each VOR
transmitter is continuously tested by a monitoring system. A pair of antennas
mounted on a post near the fence line feed redundant monitoring systems that
ensure that these fixed antennas receive the correct radial. If a failure or a
bad fix is detected, the monitor switches the transmit antennas over to a second, redundant set
of radio equipment. The problem is reported to the FAA, and Tech Ops staff are
dispatched to investigate the problem.
Occasionally, the telephone lines VOR stations use to report problems are,
themselves, unreliable. When Tech Ops is unable to remotely monitor a VOR
station, they issue a NOTAM that it should not be relied upon.
The rear of the building better shows its age. The wall is scarred where old
electrical service equipment has been removed; the weather-tight light fixture
is a piece of incandescent history. It has probably been broken for longer than
I have been alive.
A 1,000-gallon propane tank to one side will supply the generator in the
enclosure in case of a power failure. Records of the Petroleum Storage Bureau
of the
New Mexico Environment Department show that an underground fuel tank was
present at this site but has been removed. Propane is often selected for newer
standby generator installations where an underground tank, no longer up to
environmental safety standards, had to be removed.
It is indeed in its twilight years. The FAA has shut down about half of the VOR
transmitters. TCS was spared this round, along with all but one of the VOR
transmitters in sparsely covered New Mexico; it is part of the "minimum
operational network." It remains to be seen how long VOR's skeleton crew will
carry on. A number of countries have now announced the end of VOR service.
Another casualty of satellite PNT, joining LORAN wherever dead radio systems
go.
The vastness and sparse population of southern New Mexico pose many challenges.
One the FAA has long had to contend with is communications. Very near the Truth
or Consequences VOR transmitter is an FAA microwave relay site. This tower is
part of a chain that relays radar data from southern New Mexico to the air
route traffic control center in Albuquerque.
When it was first built, the design of microwave communications equipment was
much less advanced than it is today. Practical antennas were bulky and often
pressurized for water tightness. Waveguides were expensive and cables were
inefficient. To ease maintenance, shorten feedlines, and reduce tower loading,
the actual antennas were installed on shelves near the bottom of the tower,
pointing straight upwards. At the top of the tower, two passive reflectors
acted like mirrors to redirect the signal into the distance. This "periscope"
design was widely used by Western Union in the early days of microwave data
networking.
Today, this system is partially retired, replaced by commercial fiber networks.
This tower survives, maintained under contract by L3Harris. As the compound
name suggests, half of this company used to be Harris, a pioneer in microwave
technology. The other half used to be L3, which split off from Lockheed Martin,
which bought it when it was called Loral. Loral was a broad defense contractor,
but had its history and focus in radar, another application of microwave RF
engineering.
Two old radio sites, the remains of ambitious nationwide systems that helped
create today's ubiquitous aviation. A town named after an old radio show. Some
of the great achievements of radio history are out there in Sierra County.
I'm heading to Las Vegas for re:invent soon, perhaps the most boring type of
industry extravaganza there could be. In that spirit, I thought I would write
something quick and oddly professional: I'm going to complain about Docker.
Packaging software is one of those fundamental problems in system
administration. It's so important, so influential on the way a system is used,
that package managers are often the main identity of operating systems.
Consider Windows: the operating system's most alarming defect in the eyes of
many "Linux people" is its lack of package management, despite Microsoft's
numerous attempts to introduce the concept. Well, perhaps more likely,
because of the number of those attempts. And still, in the Linux world,
distributions are differentiated primarily by their approach to managing
software repositories. I don't just mean the difference between dpkg and
rpm, but rather more fundamental decisions, like opinionated vs. upstream
configuration and stable repositories vs. a rolling release. RHEL and Arch
share the vast majority of their implementation and yet have very different
vibes.
Linux distributions have, for the most part, consolidated on a certain
philosophy of how software ought to be packaged, if not how often. One of the
basic concepts shared by most Linux systems is centralization of dependencies.
Libraries should be declared as dependencies, and the packages depended on
should be installed in a common location for use by the linker. This can create
a challenge: different pieces of software might depend on different versions of
a library, which may not be compatible. This is the central challenge of
maintaining a Linux distribution, in the classical sense: providing repositories
of software versions that will all work correctly together. One of the
advantages of stable distributions like RHEL is that they are very reliable in
doing this; one of the disadvantages is that they achieve that goal by
packaging new versions very infrequently.
Because of the need to provide mutually compatible versions of a huge range of
software, and to ensure compliance with all kinds of other norms established by
distributions (which may range from philosophical policies like free software
to rules on the layout of configuration files), putting new software into Linux
distributions can be... painful. For software maintainers, it means dealing
with a bunch of distributions using a bunch of old versions with various
specific build and configuration quirks. For distribution and package
maintainers, it means bending all kinds of upstream software into compliance
with distribution policy and figuring out version and dependency problems. It's
all a lot of work, and while there are some norms, in practice it's sort of a
wild scramble to do the work to make all this happen. Software developers that
want their software to be widely used have to put up with distros. Distros that
want software have to put up with software developers. Everyone gets mad.
Naturally there have been various attempts to ease these problems. Naturally
they are indeed various and the community has not really consolidated on any
one approach. In the desktop environment, Flatpak, Snap, and AppImage are all
distressingly common ways of distributing software. The images or applications
for these systems package the software complete with its dependencies,
providing a complete self-contained environment that should work correctly on
any distribution. The fact that I have multiple times had to unpack flatpaks
and modify them to fix dependencies reveals that this concept doesn't always
work entirely as advertised, but to be fair that kind of situation usually
crops up when the software has to interact with elements of the system that
the runtime can't properly isolate it from. The video stack is a classic
example: errant OpenGL libraries in a package might have to be removed or
replaced before it will work with your particular graphics driver.
Still, these systems work reasonably well, well enough that they continue to
proliferate. They are greatly aided by the nature of the desktop applications
for which they're used (Snapcraft's system ambitions notwithstanding). Desktop
applications tend to interact mostly with the user and receive their
configuration via their own interface. Limiting the interaction surface mostly
to a GUI window is actually tremendously helpful in making sandboxing feasible,
although it continues to show rough edges when interacting with the file
system.
I will note that I'm barely mentioning sandboxing here because I'm just not
discussing it at the moment. Sandboxing is useful for security and even
stability purposes, but I'm looking at these tools primarily as a way of
packaging software for distribution. Sandboxed software can be distributed
by more conventional means as well, and a few crusty old packages show that
it's not as modern a concept as it's often made out to be.
Anyway, what I really wanted to complain a bit about is the realm of software
intended to be run on servers. Here, there is a clear champion: Docker, and to
a lesser degree the ecosystem of compatible tools like Podman. The release of
Docker led to a surprisingly rapid change in what are widely considered best
practices for server operations. While Docker images as a means of distributing
software first seemed to appeal mostly to large scalable environments with
container orchestration, the idea sort of merged together with ideas from
Vagrant and others to become a common means of distributing software for
developer and single-node use as well.
Today, Docker is the most widespread way that server-side software is
distributed for Linux. I hate it.
This is not a criticism of containers in general. Containerization is a
wonderful thing with many advantages, even if the advantages over lightweight
VMs are perhaps not as great as commonly claimed. I'm not sure that Docker has
saved me more hours than it's cost, but to be fair I work as a DevOps
consultant and, as a general rule, people don't get me involved unless the
current situation isn't working properly. Docker images that run correctly with
minimal effort don't make for many billable hours.
What really irritates me these days is not really the use of Docker images in
DevOps environments that are, to some extent, centrally planned and managed.
The problem is the use of Docker as a lowest common denominator, or perhaps
more accurately lowest common effort, approach to distributing software to end
users. When I see open-source, server-side software offered to me as a Docker
image or---even worse---a Docker Compose stack, my gut reaction is irritation.
These sorts of things usually take longer to get working than equivalent
software distributed as a conventional Linux package or built from source.
But wait, how does that happen? Isn't Docker supposed to make everything
completely self-contained? Let's consider the common problems, something that
I will call my Taxonomy of Docker Gone Bad.
Configuration
One of the biggest problems with Docker-as-distribution is the lack of
consistent conventions for configuration. The vast majority of server-side
Linux software accepts its configuration through an ages-old technique of
reading a text file. This certainly isn't perfect! But, it is pretty consistent
in its general contours. Docker images, on the other hand...
If you subscribe to the principles of the 12-factor app, the best way for a
Docker image to take configuration is probably via environment variables. This
has the upside that it's quite straightforward to provide them on the command
line when starting the container. It has the downside that environment
variables aren't great for conveying structured data, and you usually interact
with them via shell scripts that have clumsy handling of long or complicated
values. A lot of Docker images used in DevOps environments take their
configuration from environment variables, but they tend to make it a lot more
feasible by avoiding complex configuration (by assuming TLS will be terminated
by "someone else" for example) or getting a lot of their configuration from a
database or service on the network.
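To illustrate the pattern (all of the variable names here are hypothetical,
not taken from any particular image), software written for this style of
configuration ends up reading something like the sketch below. Scalars are
easy; anything structured gets awkward fast.

    import json
    import os

    # Scalar settings map cleanly onto environment variables.
    listen_port = int(os.environ.get("APP_PORT", "8080"))
    log_level = os.environ.get("APP_LOG_LEVEL", "info")

    # Structured settings do not: they get crammed into a single variable
    # as JSON or a delimited list, which is awkward to write on a shell
    # command line and easy to quote incorrectly.
    upstreams = json.loads(os.environ.get("APP_UPSTREAMS", "[]"))
    extra_headers = dict(
        item.split("=", 1)
        for item in os.environ.get("APP_EXTRA_HEADERS", "").split(",")
        if item
    )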
For most end-user software though, configuration is too complex or verbose to
be comfortable in environment variables. So, often, they fall back to
configuration files. You have to get the configuration file into the
container's file system somehow, and Docker provides numerous ways of doing
so. Documentation for different packages varies in which method it recommends.
There are frequently caveats around ownership and permissions.
Making things worse, a lot of Docker images try to make configuration less
painful by providing some sort of entry-point shell script that generates the
full configuration from some simpler document provided to the container. Of
course this level of abstraction, often poorly documented or entirely
undocumented in practice, serves mostly to make troubleshooting a lot more
difficult. How many times have we all experienced the joy of software failing
to start, referencing some configuration key that isn't in what we provided,
leading us to track down the Docker image build materials and read the
entrypoint script to figure out how it generates that value?
The situation with configuration entrypoint scripts becomes particularly acute
when those scripts are opinionated, and opinionated is often a nice way of
saying "unsuitable for any configuration other than the developer's." Probably
at least a dozen times I have had to build my own version of a Docker image to
replace or augment an entrypoint script that doesn't expose parameters that
the underlying software accepts.
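As a sketch of the genre (everything here is hypothetical; no real image works
exactly this way), a generator-style entrypoint usually boils down to
something like this, and you can see how the mapping from what you provided to
what the software actually reads gets obscured:

    #!/usr/bin/env python3
    # Hypothetical entrypoint: render the real configuration file from a few
    # environment variables, then hand off to the packaged server process.
    import os

    TEMPLATE = """\
    listen_port = {port}
    external_url = https://{domain}/
    # Values the underlying software would let you change, but the entrypoint
    # hard-codes; this is where "opinionated" starts to hurt.
    tls = off
    workers = 4
    """

    def main():
        rendered = TEMPLATE.format(
            port=os.environ.get("APP_PORT", "8080"),
            domain=os.environ.get("APP_DOMAIN", "localhost"),
        )
        with open("/etc/app/app.conf", "w") as f:
            f.write(rendered)
        # Replace this process with the real server, as entrypoints usually do.
        os.execv("/usr/bin/app-server",
                 ["app-server", "--config", "/etc/app/app.conf"])

    if __name__ == "__main__":
        main()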
In the worst case, some Docker images provide no documentation at all, and
you have to shell into them and poke around to figure out where the actual
configuration file used by the running software is even located. At the very
least, a Docker image should ship with some basic README information on how
the packaged software is configured.
Filesystems
One of the advantages of Docker is sandboxing or isolation, which of course
means that Docker runs into the same problem that all sandboxes do. Sandbox
isolation concepts do not interact well with Linux file systems. You don't even
have to get into UID behavior to have problems here; just a Docker Compose
stack that uses named volumes can be enough to drive you to drink. Everyday
operations tasks like backups, to say nothing of troubleshooting, can get a lot
more frustrating when you have to use a dummy container to interact with files
in a named volume. The porcelain around named volumes has improved over time,
but seemingly simple operations can still be weirdly inconsistent between
Docker versions and, worse, other implementations like Podman.
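For what it's worth, the usual workaround looks something like the sketch
below: a throwaway container whose only job is to copy files out of the named
volume. The volume name and backup destination here are hypothetical.

    # Back up a named Docker volume by way of a throwaway container.
    import subprocess

    VOLUME = "myapp_data"   # hypothetical named volume to archive
    DEST = "/srv/backups"   # hypothetical host directory for the tarball

    subprocess.run(
        [
            "docker", "run", "--rm",
            # Mount the named volume read-only inside the dummy container...
            "-v", f"{VOLUME}:/source:ro",
            # ...and a host directory to write the archive into.
            "-v", f"{DEST}:/backup",
            "alpine",
            "tar", "czf", f"/backup/{VOLUME}.tar.gz", "-C", "/source", ".",
        ],
        check=True,
    )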
But then, of course, there's the UID thing. One of the great sins of Docker is
having normalized running software as root. Yes, Docker provides a degree of
isolation, but from a perspective of defense in depth running anything with
user exposure as root continues to be a poor practice. Of course this is one
thing that often leads me to have to rebuild containers provided by software
projects, and a number of common Docker practices don't make it easy. It all
gets much more complicated if you use host mounts, because of UID mapping, and
slightly complex Docker environments can turn into NFS-style puzzles around
UID allocation. Mitigating this mess is one of the advantages of named
volumes, of course, along with the pain points they bring.
Non-portable Containers
The irony of Docker for distribution, though, and especially Docker Compose, is
that there are a lot of common practices that negatively impact
portability---ostensibly the main benefit of this approach. Doing anything
non-default with networks in Docker Compose will often create stacks that don't
work correctly on machines with complex network setups. Too many Docker Compose
stacks like to assume that default, well-known ports are available for
listeners. They enable features of the underlying software without giving you a
way to disable them, and assume common values that might not work in your
environment.
One of the most common frustrations, for me personally, is TLS. As I have
already alluded to, I preach a general principle that Docker containers should
not terminate TLS. Accepting TLS connections means having access to the private
key material. Even if 90-day ephemeral TLS certificates and a general
atmosphere of laziness have eroded our discipline in this regard, private
key material should be closely guarded. It should be stored in only one place
and accessible to only one principal. You don't even have to get into these
types of lofty security concerns, though. TLS is also sort of complicated to
configure.
A lot of people who self-host software will have some type of SNI or virtual
hosting situation. There may be wildcard certificates for multiple subdomains
involved. All of this is best handled at a single point or a small number of
dedicated points. It is absolutely maddening to encounter Docker images built
with the assumption that they will individually handle TLS. Even with TLS
completely aside, I would probably never expose a Docker container with some
application directly to the internet. There are too many advantages to having a
reverse proxy in front of it. And yet there are Docker Compose stacks out there
for end-user software that want to use ACME to issue their own certificate!
Now you have to dig through documentation to figure out how to disable that
behavior.
The Single-Purpose Computer
All of these complaints are most common with what I would call hobby-tier
software. Two examples that pop into my mind are HomeAssistant and Nextcloud.
I don't call these hobby-tier to impugn the software, but rather to describe
the average user.
Unfortunately, the kind of hobbyist that deploys software has had their mind
addled by the cheap high of the Raspberry Pi. I'm being hyperbolic here, but
this really is a problem. It's absurd the number of "self-hosted" software
packages that assume they will run on dedicated hardware. Having "pi" in the
name of a software product is a big red flag in my mind; it immediately makes
me think "they will not have documented how to run this on a shared device."
Call me old-fashioned, but I like my computers to perform more than one task,
especially the ones that are running up my power bill 24/7.
HomeAssistant is probably the biggest offender here, because I run it in Docker
on a machine with several other applications. It actively resists this, popping
up an "unsupported software detected" maintenance notification after every
update. Can you imagine if Postfix whined in its logs if it detected that it
had neighbors?
Recently I decided to give NextCloud a try. This was long enough ago that the
details elude me, but I think I burned around two hours trying to get the
all-in-one Docker image to work in my environment. Finally I decided to give
up and install it manually, to discover it was a plain old PHP application of
the type I was regularly setting up in 2007. Is this a problem with kids these
days? Do they not know how to fill in the config.php?
Hiding Sins
Of course, you will say, none of these problems would be widespread if people
just made good Docker images. And yes, that is completely true! Perhaps one of
the problems with Docker is that it's too easy to use. Creating an RPM or
Debian package involves a certain barrier to entry, and it takes a whole lot of
activation energy for even me to want to get rpmbuild going (advice: just use
copr and rpkg). At the core of my complaints is the fact that distributing an
application only as a Docker image is often evidence of a relatively immature
project, or at least one without anyone who specializes in distribution. You
have to expect a certain amount of friction in getting these sorts of things
to work in a nonstandard environment.
It is a palpable irony, though, that Docker was once heralded as the ultimate
solution to "works for me" and yet seems to just lead to the same situation
existing at a higher level of configuration.
Last Thoughts
This is of course mostly my opinion and I'm sure you'll disagree on something,
like my strong conviction that Docker Compose was one of the bigger mistakes of
our era. Fifteen years ago I might have written a nearly identical article
about all the problems I run into with RPMs created by small projects, but what
surprises me about Docker is that it seems like projects can get to a large
size, with substantial corporate backing, and still distribute in the form of a
decidedly amateurish Docker Compose stack. Some of it is probably the lack of
distribution engineering personnel on a lot of these projects, since Docker is
"simple." Some of it is just the changing landscape of this class of software,
with cheap single-board computers making Docker stacks (just a little less
specialized than a VM appliance image) more palatable than they used to be. But
some of it is also that I'm getting older and thus more cantankerous.
I have always been fascinated by the PABX - the private automatic branch
exchange, often shortened to "PBX" in today's world where the "automatic" is
implied. (Relatively) modern small and medium business PABXs of the type I like
to collect are largely solid-state devices that mount on the wall. Picture a
cabinet that's maybe two feet wide, a foot and a half tall, and five inches
deep.
That's a pretty accurate depiction of my Comdial hybrid key/PABX system,
recovered from the offices of a bankrupt publisher of Christian home schooling
materials.
These types of PABX, now often associated with Panasonic on the small end, are
affordable and don't require much maintenance or space. They have their
limitations, though, particularly in terms of extension count. Besides, the
fact that these compact PABX are available at all is the result of decades of
development in electronics.
Not that long ago, PABX were far more complex. Early PBX systems were manual,
and hotels were a common example of a business that would have a telephone
operator on staff. The first PABX were based on the same basic technology as
their contemporary phone switches, using step-by-step switches or even crossbar
mechanisms. They no longer required an operator to connect every call, but were
still mostly designed with the assumption that an attendant would handle some
situations. Moreover, these early PABX were large, expensive, and required
regular maintenance. They were often leased from the telephone company, and
the rates weren't cheap.
PABX had another key limitation as well: they were specific to a location.
Each extension had to be home-run wired to the PABX, easy in a single building
but costly at the level of a campus and, especially, with buildings spread
around a city. For organizations with distributed buildings like school
districts, connecting extensions back to a central PABX could be significantly
more expensive than connecting them to the public telephone exchange.
This problem must have been especially common in a city the size of New York,
so it's no surprise that New York Telephone was the first to commercialize
an alternative approach: Centrex.
Every technology writer must struggle with the temptation to call every managed
service in history a precursor to "the Cloud." I am going to do my very best to
resist that nagging desire, but it's difficult not to note the similarity
between Centrex service and modern cloud PABX solutions. Indeed, Centrex relied
on capabilities of telephone exchange equipment that are recognizably similar
to mainframe computer concepts like LPARs and virtualization today. But we'll
get there in a bit. First, we need to talk about what Centrex is.
I've had it in my mind to write something about Centrex for years, but I've
always had a hard time knowing where to start. The facts about Centrex are
often rather dry, and the details varied over years of development, making it
hard to sum up the capabilities in short. So I hope that you will forgive this
somewhat dry post. It covers something that I think is a very important part of
telephone history, particularly from the perspective of the computer industry
today. It also lists off a lot of boring details. I will try to illustrate with
interesting examples everywhere I can. I am indebted, for many things but here
especially, to many members of the Central Office mailing list. They filled in
a lot of details that solidified my understanding of Centrex and its variants.
The basic promise of Centrex was this: instead of installing your own PABX, let
the telephone company configure their own equipment to provide the features you
want to your business phones. A Centrex line is a bit like a normal telephone
line, but with all the added capabilities of a business phone system: intercom
calling, transfers, attendants, routing and long distance policies, and so on.
All of these features were provided by central telephone exchanges, but your
lines were partitioned to be interconnected within your business.
Centrex was a huge success. By 1990, a huge range of large institutions had
either started their telephone journey with Centrex or transitioned away from a
conventional PABX and onto Centrex. It's very likely that you have interacted
with a Centrex system before and perhaps not realized. And now, Centrex's days
are numbered. Let's look at the details.
Centrex is often explained as a reuse of the existing central office equipment
to serve PABX requirements. This isn't entirely incorrect, but it can be
misleading. It was not all that unusual for Centrex to rely on equipment
installed at the customer site, but operated by the telco. For this reason,
it's better to think of Centrex as a managed service than as a "cloud" service,
or a Service-as-a-Service, or whatever modern term you might be tempted to
apply.
Centrex existed in two major variants: Centrex-CO and Centrex-CU. The CO case,
for Central Office, entailed this well-known design of each business telephone
line connecting to an existing telco central office, where a switch was
configured to provide Centrex features on that line group. CU, for Customer
Unit, looks more like a very large PABX. These systems were usually limited to
very large customers, who would provide space for the telco to build a new
central office on the customer's site. The exchange was located with the
customer, but operated by the telco.
These two different categories of service led to two different categories of
customers, with different needs and usage patterns. Centrex-CO appealed to
smaller organizations with fewer extensions, but also to larger organizations
with extensions spread across a large area. In that case, wiring every
extension back to the CO using telco infrastructure was less expensive than
installing new wiring to a CU exchange. A prototypical example might be a
municipal school district.
Centrex-CU appealed to customers with a large number of extensions grouped in a
large building or a campus. In this case it was much less costly to wire
extensions to the new CU site than to connect them all over the longer distance
to an existing CO. A prototypical Centrex-CU customer might be a university.
Exactly how these systems worked varied greatly from exchange to exchange, but
the basic concept is a form of partitioning. Telephone exchanges with support
for Centrex service could be configured such that certain lines were grouped
together and enabled for Centrex features. The individual lines needed to have
access to Centrex-specific capabilities like service codes, but also needed to
be properly associated with each other so that internal calling would indeed be
internal to the customer. This concept of partitioning telephone switches had
several different applications, and Western Electric and other manufacturers
continued to enhance it until it reached a very high level of sophistication in
digital switches.
Let's look at an example of a Centrex-CO. The State of New Mexico began a
contract with Mountain States Telephone and Telegraph [1] for Centrex service
in 1964. The new Centrex service replaced 11 manual switchboards distributed
around Santa Fe, and included Wide-Area Telephone Service (WATS), a discount
arrangement for long-distance calls placed from state offices to exchanges
throughout New Mexico. On November 9th, 1964, technicians sent to Santa Fe
by Western Electric completed the cutover at the state capitol complex.
Incidentally, the capitol phones of the day were being installed in what is
now the Bataan Memorial Building: construction of the Roundhouse, today New
Mexico's distinctive state capitol, had just begun that same year.
The Centrex service was estimated to save $12,000 per month in the rental and
operation of multiple state exchanges, and the combination of WATS and
conference calling service was expected to produce further savings by reducing
the need for state employees to travel for meetings. The new system was
evidently a success, and led to a series of minor improvements including a
scheme later in 1964 to ensure that the designated official phone number of
each state agency would be answered during the state lunch break (noon to
1:15). In 1965, Burns Reinier resigned her job as Chief Operator of the state
Centrex to launch a campaign for Secretary of State. Many state employees would
probably recognize her voice, but that apparently did not translate to
recognition on the ballot, as she lost the Democratic party nomination to the
Governor's former secretary.
The late 1960s saw a flurry of newspaper advertisements giving new phone
numbers for state and municipal agencies, Albuquerque Public Schools, and
universities, as they all consolidated onto the state-run Centrex system. Here
we must consider the geographical nature of Centrex: Centrex service operates
within a single telephone exchange. To span the gap between the capitol in
Santa Fe, state offices and UNM in Albuquerque, NMSU in Las Cruces, and even
the State Hospital in Las Vegas (NM), a system of tie lines was installed
between Centrex facilities in each city. These tie lines were essentially
dedicated long distance trunks leased by the state to connect calls between
Centrex exchanges at lower cost than even WATS long-distance service.
This system was not entirely CO-based: in Albuquerque, a Centrex exchange was
installed in state-leased space at what was then known as the National
Building, 505 Marquette. In the late '60s, 505 Marquette also hosted Telepak,
an early private network service from AT&T. It is perhaps a result of this
legacy that 505 Marquette houses one of New Mexico's most important network
facilities, a large carrier hotel now operated by H5 Data Centers. The
installation of the Centrex exchange at 505 Marquette saved a lot of expense on
new local loops, since a series of 1960s political and bureaucratic events led
to a concentration of state offices in the new building.
Having made this leap to customer unit systems, let's jump almost 30 years
forward to an example of a Centrex-CU installation... one with a number of
interesting details. In late 1989, Sandia National Laboratories ended its
dependence on the Air Force for telephony services by contracting with AT&T for
the installation of a 5ESS telephone exchange. The 5ESS, a digital switch and a
rather new one at the time, brought with it not just advanced calling features
but something even more compelling to an R&D institution at the time: data
networking.
The Sandia installation went nearly all-in on ISDN, the integrated digital
telephony and data standard that largely failed to achieve adoption for
telephone applications. Besides the digital telephone sets, though, Sandia made
full use of the data capabilities of the exchange. Computers connected to the
data ports on the ISDN user terminals (the conventional term for the telephone
instrument itself in an ISDN network) could make "data calls" over the
telephone system to access IBM mainframes and other corporate computing
resources... all at a blistering 64 kbps, the speed of an ISDN basic rate
interface bearer channel. The ISDN network could even transport video calls,
by combining multiple BRIs for 384 kbps aggregate capacity: six 64 kbps bearer
channels, or three BRIs, bonded together.
The 5ESS was installed in a building on Air Force property near Tech Area 1,
and the 5ESS's robust support for remote switch modules was fully leveraged to
place an RSM in each Tech Area. The new system required renumbering, always a
hassle, but allowed for better matching of Sandia's phone numbers on the public
network to phone numbers on the Federal Telecommunications System or FTS... a
CCSA operated for the Federal Government. But we'll talk about that later. The
5ESS was also equipped with ISDN PRI tie lines to a sibling 5ESS at Sandia
California in Livermore, providing inexpensive calling and ISDN features
between the two sites.
This is a good time to discuss digital Centrex. Traditional telephony, even
today in residential settings, uses analog telephones. Business systems,
though, made a transition from analog to digital during the '80s and '90s.
Digital telephone sets used with business systems provided far easier access to
features of the key system, PABX, or Centrex, and with fewer wires. A digital
telephone set on one or two telephone pairs could offer multiple voice lines,
caller ID, central directory service, busy status indication for other phones,
soft keys for pickup groups and other features, even text messaging in some
later systems (like my Comdial!). Analog systems often required as many as a
half dozen pairs just for a simple configuration like two lines and busy lamp
fields; analog "attendant" sets with access to many lines could require a
25-pair Amphenol connector... sometimes even more than one.
Many of these digital systems used proprietary protocols between the switch and
telephones. A notable example would be the TCM protocol used by the Nortel
Meridian, an extremely popular PABX that can still be found in service in many
businesses. Digital telephone sets made the leap to Centrex as well: first by
Nortel themselves, who offered a "Meridian Digital Centrex" capability on their
DMS-100 exchange switch that supported telephone sets similar to (but not the
same as!) ordinary Meridian digital systems. AT&T followed several years later
by offering 5ESS-based digital Centrex over ISDN: the same basic capability
that could be used for computer applications as well, but with the advantage
of full compatibility with AT&T's broader ISDN initiative.
The ISDN user terminals manufactured by Western Electric and, later, Lucent,
are distinctive and a good indication that digital Centrex is in use.
They are also lovely examples of the digital telephones of the era, with LCD
matrix displays, a bevy of programmable buttons, and pleasing Bellcore
distinctive ringing. It is frustrating that the evolution of telephone
technology has seemingly made ringtones far worse. We will have to forgive the
oddities of the ISDN electrical standard that required an "NT1" network
termination device screwed to the bottom of your desk or, more often, underfoot
on the floor.
Thinking about these digital phones, let's consider the user experience of
Centrex. Centrex was very flexible; there were a large number of options
available based on customer preference, and the details varied between the
Centrex host switches used in the United States: Western Electric's line from
the 5XB to the 5ESS, Nortel's DMS-100 and DMS-10, and occasionally the Siemens
EWSD. This all makes it hard to describe Centrex usage succinctly, but I will
focus on some particular common features of Centrex.
Like PABXs, most Centrex systems required that a dialing prefix (conventionally
nine) be used for an outside line. This was not universal, "assumed nine" could
often be enabled at customer request, but it created a number of complications
in the dialplan and was best avoided. Centrex systems, because they mostly
belonged to larger customers, were more likely than PABXs to offer tie lines or
other private routing arrangements, which were often used by dialing calls with
a prefix of 8. Like conventional telephone systems, you could dial 0 for the
operator, but on traditional large Centrex systems the operator would be an
attendant within the Centrex customer organization.
Centrex systems enabled internal calling by extension, much like PABXs. Because
of the large size of some Centrex-CU installations in particular you are
probably much more likely to encounter five-digit extensions with Centrex than
with a PABX. These types of extensions were usually designed by taking several
exchange prefixes in a sequence, and using the last digit of the exchange code
as the first digit of the extension. For that reason the extensions are often
written in a format like 1-2345. A somewhat charming example of this
arrangement was the 5ESS-based Centrex-CU at Los Alamos National Laboratories,
which spans exchange prefixes 662-667 in the 505 NPA. Since that includes the
less desirable exchange prefix 666, it was skipped. Of course, that didn't stop
Telnyx from starting to use it more recently. Because of the history of Los
Alamos's development, telephones in the town use these same prefixes, generally
the lower ones.
With digital telephones, Centrex features are comparatively easy to access,
since they can be assigned to buttons on the telephones. With analog systems
there are no such convenient buttons, so Centrex features had to be awkwardly
bolted on much like advanced features on non-Centrex lines. Many features are
activated using vertical service codes starting with *, although in some
systems (especially older systems for pulse compatibility) they might be mapped
to codes that look more like extensions. Operations that involve interrupting
an active call, like transfer or hold, involve flashing the hookswitch... a
somewhat antiquated operation now more often achieved with a "flash" button on
the telephone, when it's done at all.
Still, some analog Centrex systems used electrical tricks on the pair (similar
to many PABX) to provide a message waiting light and even an extra button for
common operations.
While Centrex initially appealed mainly to larger customers, improvements in
host switch technology and telephone company practices made it an accessible
option for small organizations as well. Verizon's "CustoPAK" was an affordable
offering that provided Centrex features on up to 30 extensions. These
small-scale services were also made more accessible by computerization.
Configuration changes to the first crossbar Centrex service required exchange
technicians climbing ladders to resolder jumpers. With the genesis of digital
switches, telco employees in translation centers read customer requirements and
built switch configuration plans. By the '90s, carriers offered modem services
that allowed customers to reconfigure their Centrex themselves, and later
web-based self-service systems emerged.
So what became of Centrex? Like most aspects of the conventional copper phone
network, it is on the way out. Major telephone carriers have mostly removed
Centrex service from their tariffs, meaning they are no longer required to
offer it. Even in areas where it is present on the tariff it is reportedly
hard to obtain. A report from the state of Washington notes that, as a result
particularly of CenturyLink removing copper service from its tariffs entirely,
CenturyLink has informed the state that it may discontinue Centrex service at
any time, subject to six months' notice. Six months may seem like a long time
but it is a very short period for a state government to replace a statewide
telephone system... so we can anticipate some hurried acquisitions in the next
couple of years.
Centrex had always interacted with tariffs in curious ways, anyway. Centrex
was the impetus behind multiple lawsuits against AT&T on grounds varying from
anti-competitive behavior to violations of the finer points of tariff
regulation. For the most part AT&T prevailed, but some of these did lead to
changes in the way Centrex service was charged. Taxation was a particularly
difficult matter. There were excise taxes imposed on telephone service in most
cases, but AT&T held that "internal" calls within Centrex customers should not
be subject to these taxes due to their similarity to untaxed PABX and key
systems. The finer points of this debate varied from state to state, and it
made it to the Supreme Court at least once.
Centrex could also have a complex relationship with the financial policies of
many institutional customers. Centrex was often paired with services like WATS
or tie lines to make long-distance calling more affordable, but this also
encouraged employees to make their personal long-distance calls in the office.
The struggle of long-distance charge accounting led not only to lengthy
employee "acceptable use" policies that often survive to this day, but also
schemes of accounting and authorization codes to track long distance users.
Long-distance phone charges by state employees were a perennial minor scandal
in New Mexico politics, leading to some sort of audit or investigation every
few years. Long-distance calling was often disabled except for extensions that
required it, but you will find stories of public courtesy phones accidentally
left with long-distance enabled becoming suddenly popular parts of university
buildings.
Today, Centrex is generally being replaced with VoIP solutions. Some of these
are fully managed, cloud-based services, analogous to Centrex-CO before them.
IP phones bring a rich featureset that leaves eccentric dialplans and feature
codes mostly forgotten, and federal regulations around the accessibility of 911
have broadly discouraged prefix schemes for outside calls. On the flip side,
these types of phone systems make it very difficult to configure dialplan
schemes on endpoints, leading office workers to learn a new type of phone
oddity: dialing pound after a number to skip the end-of-dialing timeout. This
worked on some Centrex systems as well; some things never change.
[1] Later called US West, later called Qwest, now part of CenturyLink, which is
now part of Lumen.