_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2023-12-23 ITT Technical Institute

Programming note/shameless plug: I am finally on Mastodon.

The history of the telephone industry is a bit of an odd one. For the greatest part of the 20th century, telephony in the United States was largely a monopoly of AT&T and its many affiliates. This wasn't always the case, though. AT&T held patents on their telephone implementation, but Bell's invention was not the only way to construct a practical telephone. During the late 19th century, telephone companies proliferated, most using variations on the design they felt would fall outside of Ma Bell's patent portfolio. AT&T was aggressive in challenging these operations but not always successful. During this period, it was not at all unusual for a city to have multiple competing telephone companies that were not interconnected.

Shortly after the turn of the 20th century, AT&T moved more decisively towards monopoly. Theodore Newton Vail, president of AT&T during this period, adopted the term "Universal Service" to describe the targeted monopoly state: there would be one universal telephone system. One operated under the policies and, by implication, the ownership of AT&T. AT&T's path to monopoly involved many political and business maneuvers, the details of which have filled more than a few dissertations in history and economics. By the 1920s the deal was done: there would be virtually no (and in a legal sense literally no) long-distance telephone infrastructure in the United States outside of The Bell System.

But what of the era's many telephone entrepreneurs? For several American telephone companies struggling to stand up to AT&T, the best opportunities were overseas. A number of countries, especially elsewhere in the Americas, had telephone systems built by AT&T's domestic competitors. Perhaps the most neatly named was ITT, the International Telephone and Telegraph company. ITT was formed from the combination of Puerto Rican and Cuban telephone companies, and through a series of acquisitions expanded into Europe.

Telefónica, for example, is a descendant of an early ITT acquisition. Other European acquisitions led to wartime complications, like the C. Lorenz company, which under ITT ownership functioned as a defense contractor to the Nazis during WWII. Domestically, ITT also expanded into a number of businesses outside of the monopolized telephone industry, including telegraphy and international cables.

ITT had been bolstered as well by an effect of AT&T's first round of antitrust cases during the 1910s and 1920s. As part of one of several settlements, AT&T agreed to divest several overseas operations to focus instead on the domestic market. They found a perfect buyer: ITT, a company which already seemed like a sibling of AT&T and through acquisitions came to function as one.

ITT grew rapidly during the mid-century, and in the pattern of many industrial conglomerates of the time ITT diversified. Brands like Sheraton Hotels and Avis Rent-a-Car joined the ITT portfolio (incidentally, Avis would be spun off, conglomerated with others, and then purchased by previous CAB subject Beatrice Foods). ITT was a multi-billion-dollar American giant.

Elsewhere in the early technology industry, salesman Howard W. Sams worked for the P. R. Mallory Company in Indianapolis during the 1930s and 1940s. Mallory made batteries and electronic components, especially for the expanding radio industry, and as Sams sold radio components to Mallory customers he saw a common problem and a sales opportunity: radio technicians often needed replacement components, but had a hard time identifying them and finding a manufacturer. Under the auspices of the Mallory company Sams produced and published several books on radio repair and electronic components, but Mallory didn't see the potential that Sams did in these technical manuals.

Sams, driven by the same electronics industry fervor as so many telephone entrepreneurs, struck out on his own. Incorporated in 1946, the Howard W. Sams Company found quick success with its Photofact series. Sort of the radio equivalent of Haynes and Chilton in the auto industry, Photofact provided schematics, parts lists, and repair instructions for popular radio receivers. They were often found on the shelves of both technicians and hobbyists, and propelled the Sams Company to million-dollar revenues by the early 1950s.

Sams would expand along with the electronics industry, publishing manuals on all types of consumer electronics and, by the 1960s, books on the use of computers. Sams, as a technical press, eventually made its way into the ownership of Pearson. Through Pearson's InformIT, the Sams Teach Yourself series remains in bookstores today. I am not quite sure, but I think one of the first technical books I ever picked up was an earlier edition of Sams HTML in 24 Hours.

The 1960s were an ambitious era, and Sams was not content with just books. Sams had taught thousands of electronics technicians through their books. Many radio technicians had demonstrated their qualifications and kept up to date by maintaining a membership in the Howard Sams Radio Institute, a sort of correspondence program. It was a natural extension to teach electronics skills in person. In 1963, Sams opened the Sams Technical Institute in Indianapolis. Shortly after, they purchased the Acme Institute of Technology (Dayton, Ohio) and the charmingly named Teletronic Technical Institute (Evansville, Indiana), rebranding both as Sams campuses.

In 1965, the Sams Technical Institute had 2,300 students across five locations. Sams added the Bramwell Business College to its training division, signaling a move into the broader world of higher education. It was a fast-growing business; it must have looked like a great opportunity to a telephone company looking for more ways to diversify. In 1968, ITT purchased the entire training division from Sams, renaming it ITT Educational Services [1].

ITT approached education with the same zeal it had applied to overseas telephone service. ITT Educational Services spent the late '60s and early '70s on a shopping spree, adding campus after campus to the ITT system. Two newly constructed campuses expanded ITT's business programs, and during the '70s ITT introduced formal curriculum standardization programs and a bureaucratic structure to support its many locations. Along with expansion came a punchier name: the ITT Technical Institute.

"Tri-State Businessmen Look to ITT Business Institute, Inc. for Graduates," reads one corner of a 1970 full-page newspaper ad. "ITT adds motorcycle repair course to program," 1973. "THE ELECTRONICS AGE IS HERE. If your eyes are on the future, ITT Technical institute can prepare you for a HIGH PAYING, EXCITING career in... ELECTRONICS," 1971. ITT Tech has always known the value of advertising, and ran everything from full-page "advertorials" to succinct classified ads throughout their growing region.

During this period, ITT Tech clearly operated as a vocational school rather than a higher education institution. Many of its programs ran as short as two months, and they were consistently advertised as direct preparation for a career. These sorts of job-oriented programs were very attractive to veterans returning from Vietnam, and ITT widely advertised to veterans on the basis of its approval (clearly by 1972 based on newspaper advertisements, although some sources say 1974) for payment under the GI Bill. Around the same time ITT Tech was approved for the fairly new federal student loan program. Many of ITT's students attended on government money, with or without the expectation of repayment.

ITT Tech flourished. By the mid-'70s the locations were difficult to count, and ITT had over 1,000 students in several states. ITT Tech was the "coding boot camp" of its day, advertising computer programming courses that were sure to lead to employment in just about six months. Like the coding boot camps of our day, these claims were suspect.

In 1975, ITT Tech was the subject of investigations in at least two states. In Indiana, three students complained to the Evansville municipal government after ITT recruiters promised them financial aid and federally subsidized employment during their program. ITT and federal work study, they were told, would take care of all their living expenses. Instead, they ended up living in a YWCA off of food stamps. The Indiana board overseeing private schools allowed ITT to keep its accreditation only after ITT promised to rework its entire recruiting policy---and pointed out that the recruiters involved had left the company. ITT refunded the tuition of a dozen students who joined the complaint, which no doubt helped their case with the state.

Meanwhile, in Massachusetts, the Boston Globe ran a ten-part investigative series on the growing for-profit vocational education industry. ITT Tech, they alleged, promised recruits to its medical assistant program guaranteed post-graduation employment. The Globe claimed that almost no students of the program successfully found jobs, and the Massachusetts Attorney General agreed. In fact, the AG found, the program's placement rate didn't quite reach 5%. For a settlement, ITT Tech agreed to change its recruiting practices and refund nearly half a million dollars in tuition and fees.

ITT continued to expand at a brisk pace, adding more than a dozen locations in the early '80s and beginning to offer associate's degrees. Newspapers from Florida to California ran ads exhorting readers to "Make the right connections! Call ITT Technical Institute." As the 1990s dawned, ITT Tech enjoyed the same energy as the computer industry, and aspired to the same scale. In 1992, ITT Tech announced their "Vision 2000" master plan, calling for bachelor's programs, 80 locations, and 45,000 students by the beginning of the new millennium. ITT Tech was the largest provider of vocational training in the country.

In 1993, ITT Tech was one of few schools accepted into the first year of the Direct Student Loan program. The availability of these new loans gave enrollment another boost, as ITT Tech reached 54 locations and 20,000 students. In 1994, ITT Tech started to gain independence from its former parent: an IPO sold 17% ownership to the open market, with ITT retaining the remaining 83%. The next year, ITT itself went through a reorganization and split, with its majority share of ITT Tech landing in the new ITT Corporation.

As was the case with so many diversified conglomerates of the '90s (see Beatrice Foods again), ITT's reorganization was a bad portent. ITT Hartford, the spun-out financial services division, survives today as The Hartford. ITT Industries, the spun-out defense contracting division, survives today as well, confusingly renamed to ITT Corporation. But the third part of the 1995 breakup, the ITT Corporation itself, merged with Starwood Hotels and Resorts. The real estate and hospitality side-business of a telephone and telegraph company saw the end of its parent.

Starwood had little interest in vocational education, and over the remainder of the '90s sold off its entire share of ITT Tech. Divestment was a good idea: the end of the '90s hit hard for ITT Tech. Besides the general decline of the tech industry as the dot com bubble burst, ITT Tech's suspect recruiting practices were back. This time, they had attracted federal attention.

In 1999, two ITT Tech employees filed a federal whistleblower suit alleging that ITT Tech trained recruiters to use high-pressure sales tactics and outright deception to obtain students eligible for federal aid. Recruiters were paid a commission for each student they brought in, and ITT Tech obtained 70% of its revenue from federal aid programs. A federal investigation moved slowly, apparently protracted by the Department of Education's nervous approach following the criticism it received for shutting down the similar operation Computer Learning Centers. In 2004, federal agents raided ITT Tech campuses across ten states, collecting records on recruitment and federal funding.

During the early 2000s ITT Tech students defaulted on $400 million in federal student loans. The result, that a large portion of ITT Tech's revenue effectively came from federal loans that would never be repaid, attracted ongoing attention. ITT Tech was deft in its legal defense, though, and through a series of legal victories and, more often, settlements, ITT Tech stayed in business.

ITT Tech aggressively advertised throughout its history. In the late '90s and early '00s, ITT Tech's constant television spots filled a corner of my brain. "How Much You Know Measures How Far You Can Go," a TV spot proclaimed, before ITT's distinctive block letter logo faded onto the screen in metallic silver. By the year 2000, International Telephone and Telegraph, or rather its scattered remains, no longer had any relationship with ITT Tech. Starwood agreed to license the name and logo to the independent public ITT Technical Institutes corporation, though, and with the decline of ITT's original business the ITT name and logo became associated far more with the for-profit college than the electronics manufacturer.

For-profit universities attracted a lot of press in the '00s---the wrong kind of press. ITT Tech was far from unique in suspicious advertising and recruiting, high tuition rates, and frequent defaults on the federal loans that covered that tuition. For-profit education, it seemed, was more of a scam on the taxpayer dollar than a way to secure a promising new career. Publicly traded colleges like DeVry and the University of Phoenix had repeated scandals over their use, or abuse, of federal aid, and a 2004 criminal investigation into ITT Tech for fraud on federal student aid made its future murky.

ITT Tech was a survivor. The criminal case fell apart, the whistleblower lawsuit led to nothing, and ITT Tech continued to grow. In 2009, ITT Tech acquired the formerly nonprofit Daniel Webster College, part of a wave of for-profit conversions of small colleges. ITT Tech explained the purchase as a way to expand their aeronautics offerings, but observers suspected other motives, ones that had more to do with the perceived legitimacy of what was once a nonprofit, regionally accredited institution. Today, regional accreditors re-investigate institutions that are purchased. A series of suspect expansions of small colleges to encompass large for-profit organizations during the '00s led to the tightening of these rules.

ITT Tech, numerically, achieved an incredible high. In 2014, ITT Tech reported a total cost of attendance of up to $85,000. I didn't spend that much on my BS and MS combined. Of course, I attended college in impoverished New Mexico, but we can make a comparison locally. ITT Tech operated here as well, and curiously, New Mexico tuition is specially listed in an ITT Tech cost estimate report because it is higher. At its location in Albuquerque's Journal Center office development, ITT Tech charged more than $51,000 in tuition alone for an Associate's in Criminal Justice. The same program at Central New Mexico Community College would have cost under $4,000 over the two years [2].

That isn't the most remarkable part, though. A Bachelor's in Criminal Justice would run over $100,000---more than the cost of a JD at UNM School of Law, for an out-of-state student, today.

In 2014, more than 80% of ITT Tech's revenue came from federal student aid. Their loan default rate was the highest even among for-profit programs. With their extreme tuition costs and notoriously poor job placement rates, ITT Tech increasingly had the appearance of an outright fraud.

Death came swiftly for ITT Tech. In 2016, they were a giant with more than 130 campuses and 40,000 students. The Consumer Financial Protection Bureau sued. State Attorneys General followed, with New Mexico's Hector Balderas one of the first two. The killing blow, though, came from the Department of Education, which revoked ITT Tech's eligibility for federal student aid. Weeks later, ITT Tech stopped accepting applications. The next month, they filed for bankruptcy, chapter 7, liquidation.

Over the following years, the ITT Tech scandal would continue to echo. After a series of lawsuits, the Department of Education agreed to forgive the federal debt of ITT Tech attendees, although a decision by Betsy DeVos to end the ITT Tech forgiveness program produced a new round of lawsuits over the matter in 2018. Private lenders faced similar lawsuits, and made similar settlements. Between federal and private lenders, I estimate almost $4.5 billion in loans to pay ITT Tech tuition were written off.

The Department of Education decision to end federal aid to ITT Tech was based, in part, on ITT Tech's fraying relationship with its accreditor. The Accrediting Council for Independent Colleges and Schools (ACICS), a favorite of for-profit colleges, had its own problems. That same summer in 2016, the Department of Education ended federal recognition of ACICS. ACICS accreditation reviews had been cursory, and it routinely continued to accredit colleges despite their failure to meet even ACICS's lax standards. ITT Tech was not the only large ACICS-accredited institution to collapse in scandal.

Two years later, Betsy DeVos reinstated ACICS to federal recognition. Only 85 institutions still relied on ACICS, among them such august names as the Professional Golfers Career College and certain campuses of the Art Institutes that were suspect even by the norms of the Art Institutes (the Art Institutes folded just a few months ago following a similar federal loan fraud scandal). ACICS lost federal recognition again in 2022. Only time will tell what the next presidential administration holds for the for-profit college industry.

ITT endured a long fall from grace. A leading electronics manufacturer in 1929, a diversified conglomerate in 1960, scandals through the 1970s. You might say that ITT is distinctly American in all the best and worst ways. They grew to billions in revenue through an aggressive program of acquisitions. They were implicated in the CIA coup in Chile. They made telephones and radios and radars and all the things that formed the backbone of the mid-century American electronics industry.

The modern ITT Corporation, descended from spinoff company ITT Industries, continues on as an industrial automation company. They have abandoned the former ITT logo, distancing themselves from their origin. The former defense division became Exelis, later part of Harris, now part of L3Harris, doomed to slowly sink into the monopolized, lethargic American defense industry. German tool and appliance company Kärcher apparently holds a license to the former ITT logo, although I struggle to find any use of it.

To most Americans, ITT is ITT Tech, a so-called college that was actually a scam, an infamous scandal, a sink of billions of dollars in federal money. Dozens of telephone companies around the world, tracing their history back to ITT, are probably better off distancing themselves from what was once a promising international telephone operator, a meaningful technical competitor to Western Electric. The conglomeration of the second half of the 20th century put companies together and then tore them apart; they seldom made it out in as good a condition as they went in. ITT went through the same cycle as so many other large American corporations. They went into hotels, car rentals, then into colleges. They left thousands of students in the lurch on the way out. When ITT Tech went bankrupt, everyone else had already started the semester. They weren't accepting applicants. They wouldn't accept transfer credit from ITT anyway; ITT's accreditation was suspect.

"What you don't know can hurt you," a 1990s ITT Tech advertisement declares. In Reddit threads, ITT Tech alums debate if they're better off telling prospective employers they never went to college at all.

[1] Sources actually vary on when ITT purchased Sams Training Institute, with some 1970s newspaper articles putting it as early as 1966, but 1968 is the year that ITT's involvement in Sams was advertised in the papers. Further confusing things, the former Sams locations continued to operate under the Sams Technical Institute name until around 1970, with verbiage like "part of ITT Educational Services" inconsistently appearing. ITT may have been weighing the value of its brand recognition against Sams but apparently made a solid decision during 1970, after which ads virtually always use the ITT name and logo above any other.

[2] Today, undergraduate education across all of New Mexico's public universities and community colleges is free for state residents. Unfortunately 2014 was not such an enlightened time. I must take every opportunity to brag about this remarkable and unusual achievement in our state politics.

--------------------------------------------------------------------------------

>>> 2023-12-05 vhf omnidirectional range

VORTAC site

The term "VHF omnidirectional range" can at first be confusing, because it includes "range"---a measurement that the technology does not provide. The answer to this conundrum is, as is so often the case, history. The "range" refers not to the radio equipment but to the space around it, the area in which the signal can be received. VOR is an inherently spatial technology; the signal is useless except as it relates to the physical world around it.

This use of the word "range" is about as old as instrument flying, dating back to the first radionavigation devices in the 1930s. We still use it today, in the somewhat abstract sense of an acronym that is rarely expanded: VOR.

This is Truth or Consequences VOR. Or, perhaps more accurately, the transmitter that defines the center of the Truth or Consequences VOR, which extends perhaps two hundred miles around this point. The range can be observed only by instruments, but it's there, a phase shift that varies like terrain.

The basic concept of VOR is reasonably simple: a signal is transmitted with two components, a 30Hz tone in amplitude modulation and a 30Hz tone in frequency modulation. The two tones are out of phase, by an amount that is determined by your position in the range, and more specifically by the radial from the VOR transmitter to your position. This apparent feat of magic, a radio signal that is different in different locations, is often described as "space modulation."
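
To make the relationship concrete, here is a minimal sketch in Python of the phase comparison a receiver performs. It is an illustration of the principle only, not a model of any real avionics; the radial, sample rate, and integration window are arbitrary assumptions.

    import numpy as np

    # Two 30Hz tones, identical except that the "variable" tone lags the
    # "reference" tone by the receiver's radial from the station. The
    # receiver's job is just to measure that lag. All values here are
    # illustrative assumptions.
    F_TONE = 30.0
    SAMPLE_RATE = 10_000
    radial_deg = 147.0

    t = np.arange(0, 0.5, 1 / SAMPLE_RATE)   # half a second, 15 full cycles
    reference = np.cos(2 * np.pi * F_TONE * t)
    variable = np.cos(2 * np.pi * F_TONE * t - np.deg2rad(radial_deg))

    def phase_at(signal, freq, t):
        # Estimate the phase of a sinusoidal component by correlating
        # against quadrature references at the same frequency.
        i = np.sum(signal * np.cos(2 * np.pi * freq * t))
        q = np.sum(signal * np.sin(2 * np.pi * freq * t))
        return np.arctan2(q, i)

    measured = phase_at(variable, F_TONE, t) - phase_at(reference, F_TONE, t)
    print(f"measured radial: {np.rad2deg(measured) % 360:.1f} degrees")   # ~147.0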

The first VOR transmitters achieved this effect the obvious way, by rapidly spinning a directional antenna in time with the electronically generated phase shift. Spinning anything quickly becomes a maintenance headache, and so VOR quickly transitioned to solid-state techniques. Modern VOR transmitters are electronically rotated, by one of two techniques. They rotate in the same sense that an image moves on a screen: a series of discrete changes in a solid-state system produces the effect of rotation.

Warning sign

The Truth or Consequences VOR operates on 112.7 MHz, near the middle of the band assigned for this use. Its identifier matches the nearby Truth or Consequences Airport, KTCS: it identifies itself by transmitting "TCS" in Morse code. Modern charts give this identifier in dots and dashes, a concession to the poor level of Morse literacy among contemporary pilots.

In the airspace, it defines the intersection of several airways. They all go generally north-south, unsurprising considering that the restricted airspace of White Sands Missile Range prevents nearly all flight to the east. Flights following the Rio Grande, most north-south traffic in this area, will pass directly overhead on their way to VOR transmitters at Socorro or Deming or El Paso, where complicated airspace leads to two such sites very nearby.

This is the function that VORs serve: for the most part, you fly to or from them. As long as you fly directly toward or away from the station, the radial from the VOR to you remains constant, so it provides a reliable and easy to use indication that you are still on the right track. A warning sign, verbose by tradition, articulates the significance:

This facility is used in FAA air traffic control. Loss of human life may result from service interruption. Any person who interferes with air traffic control or damages or trespasses on this property will be prosecuted under federal law.

The sign is backed up by a rustic wooden fence. Like most VOR transmitters, this one was built in the late 1950s or 1960s. The structure has seen only minimal changes since then, although the radio equipment has been improved and simplified.

Antennas

The central, omnidirectional antenna of a VOR transmitter makes for a distinctive silhouette. You have likely noticed one before. I must admit that I have somewhat simplified; most of the volume of the central antenna housing is actually occupied by the TACAN antenna. Most VOR sites in the US are really VORTAC sites, combining the civilian VOR and military TACAN systems into one facility. TACAN has several minor advantages over VOR for military use, but one big advantage: it provides not only a radial but a distance. The same system used by TACAN for distance information, based on an unusual radio modulation technique called "squitter," can be used by civilian aircraft as well in the form of DME. VORTAC sites thus provide VOR, DME, and TACAN service.
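
The distance half of that service is a timing exercise: the airborne DME unit transmits interrogation pulse pairs, the ground transponder answers after a fixed delay of nominally 50 microseconds, and whatever round-trip time is left over is propagation. A rough sketch of the arithmetic in Python, with a made-up measurement:

    # Convert a measured interrogation-to-reply time into DME slant range.
    # The 50 microsecond reply delay is the nominal figure; the elapsed
    # time below is invented for illustration.
    C_KM_PER_US = 0.299792458   # speed of light, kilometers per microsecond
    REPLY_DELAY_US = 50.0

    def slant_range_nm(elapsed_us: float) -> float:
        one_way_us = (elapsed_us - REPLY_DELAY_US) / 2
        return one_way_us * C_KM_PER_US / 1.852   # km to nautical miles

    print(round(slant_range_nm(297.2), 1))   # about 20 nautical miles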

True VOR sites, rare in the US but plentiful across the rest of the world, have smaller central antennas. If you are not used to observing the ring of radial antennas, you might not recognize them as the same system.

The radial antennas are placed in a circle some distance away, leaving open space between them. This reduces, but does not eliminate, the effect of each antenna's radiated power being absorbed by its neighbors. They are often on the roof of the equipment building, and may be surrounded by a metallic ground plane that extends still further. Most US VORTAC sites, originally built before modern RF technology, rely on careful positioning on suitable terrain rather than a ground plane.

Intriguingly, the radial antennas are not directional designs; each is a simple omnidirectional element. In a modern VOR site, the space modulation is created not by rotating a directional antenna, but by switching the transmitter rapidly from one antenna in the ring to the next, simulating a single antenna moving along a circular path. That apparent motion lets the Doppler effect vary the phase of the signal as received at any given bearing, producing the radial-dependent phase shift.
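
Here is a small Python sketch of the Doppler version of the trick, under simplifying assumptions (an invented ring radius, a distant receiver, no noise, no subcarrier): the apparent motion of the transmitting element around the ring modulates the path length to the receiver, and the phase of the resulting 30Hz component is the receiver's bearing.

    import numpy as np

    # Illustrative values only; not a model of any real installation.
    C = 3e8              # speed of light, m/s
    F_CARRIER = 112.7e6  # the TCS frequency, for flavor
    R = 6.7              # assumed radius of the antenna ring, meters
    F_ROT = 30.0         # apparent rotations per second
    bearing_deg = 147.0  # where the receiver sits

    t = np.arange(0, 0.5, 1e-5)
    theta = np.deg2rad(bearing_deg)

    # Path-length variation seen by a distant receiver as the element
    # circles the ring, and the carrier phase modulation it produces.
    path_delta = R * np.cos(2 * np.pi * F_ROT * t - theta)
    phase_mod = 2 * np.pi * F_CARRIER * path_delta / C

    # The phase of the recovered 30Hz component is the bearing itself.
    i = np.sum(phase_mod * np.cos(2 * np.pi * F_ROT * t))
    q = np.sum(phase_mod * np.sin(2 * np.pi * F_ROT * t))
    print(f"recovered bearing: {np.rad2deg(np.arctan2(q, i)) % 360:.1f} degrees")   # ~147.0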

Central Antenna

The lower part of the central antenna, the more cone-shaped part, is mostly empty. It encloses the structure that supports the cylindrical radome that houses the actual antenna elements. In newer installations it is often an exposed frame, but the original midcentury sites all provide a conical enclosure. I suspect the circular metallic sheathing simplified calculation of the effective radiation pattern at the time.

An access door can be used to reach the interior to service the antennas; the rope holding this one closed is not standard equipment but is perhaps also not very unusual. These are old facilities. When this cone was installed, adjacent Interstate 25 wasn't an interstate yet.

Monitor antennas

Aviation engineers leave little to chance, and almost never leave a system without a spare. Ground-based infrastructure is no exception. Each VOR transmitter is continuously tested by a monitoring system. A pair of antennas mounted on a post near the fence line feed redundant monitoring systems, which check that the signal received at that fixed, known point indicates the correct radial. If a failure or a bad fix is detected, the monitor switches the transmit antennas over to a second, redundant set of radio equipment. The problem is reported to the FAA, and Tech Ops staff are dispatched to investigate the problem.

Occasionally, the telephone lines VOR stations use to report problems are, themselves, unreliable. When Tech Ops is unable to remotely monitor a VOR station, they issue a NOTAM that it should not be relied upon.

Rear of building

The rear of the building better shows its age. The wall is scarred where old electrical service equipment has been removed; the weather-tight light fixture is a piece of incandescent history. It has probably been broken for longer than I have been alive.

A 1000 gallon propane tank to one side will supply the generator in the enclosure in case of a failure. Records of the Petroleum Storage Bureau of the New Mexico Environment Department show that an underground fuel tank was present at this site but has been removed. Propane is often selected for newer standby generator installations where an underground tank, no longer up to environmental safety standards, had to be removed.

It is indeed in its twilight years. The FAA has shut down about half of the VOR transmitters. TCS was spared this round, as were all but one of the VOR transmitters in sparsely covered New Mexico; it is part of the "minimum operational network." It remains to be seen how long VOR's skeleton crew will carry on. A number of countries have now announced the end of VOR service. Another casualty of satellite PNT, joining LORAN wherever dead radio systems go.

Communications tower

The vastness and sparse population of southern New Mexico pose many challenges. One the FAA has long had to contend with is communications. Very near the Truth or Consequences VOR transmitter is an FAA microwave relay site. This tower is part of a chain that relays radar data from southern New Mexico to the air route traffic control center in Albuquerque.

When it was first built, the design of microwave communications equipment was much less advanced than it is today. Practical antennas were bulky and often pressurized for water tightness. Waveguides were expensive and cables were inefficient. To ease maintenance, shorten feedlines, and reduce tower loading, the actual antennas were installed on shelves near the bottom of the tower, pointing straight upwards. At the top of the tower, two passive reflectors acted like mirrors to redirect the signal into the distance. This "periscope" design was widely used by Western Union in the early days of microwave data networking.

Today, this system is partially retired, replaced by commercial fiber networks. This tower survives, maintained under contract by L3Harris. As the compound name suggests, half of this company used to be Harris, a pioneer in microwave technology. The other half used to be L3, which split off from Lockheed Martin, which bought it when it was called Loral. Loral was a broad defense contractor, but had its history and focus in radar, another application of microwave RF engineering.

Two old radio sites, the remains of ambitious nationwide systems that helped create today's ubiquitous aviation. A town named after an old radio show. Some of the great achievements of radio history are out there in Sierra County.

--------------------------------------------------------------------------------

>>> 2023-11-25 the curse of docker

I'm heading to Las Vegas for re:invent soon, perhaps the most boring type of industry extravaganza there could be. In that spirit, I thought I would write something quick and oddly professional: I'm going to complain about Docker.

Packaging software is one of those fundamental problems in system administration. It's so important, so influential on the way a system is used, that package managers are often the main identity of operating systems. Consider Windows: the operating system's most alarming defect in the eyes of many "Linux people" is its lack of package management, despite Microsoft's numerous attempts to introduce the concept. Well, perhaps more likely, because of the number of those attempts. And still, in the Linux world, distributions are differentiated primarily by their approach to managing software repositories. I don't just mean the difference between dpkg and rpm, but rather more fundamental decisions, like opinionated vs. upstream configuration and stable repositories vs. a rolling release. RHEL and Arch share the vast majority of their implementation and yet have very different vibes.

Linux distributions have, for the most part, consolidated on a certain philosophy of how software ought to be packaged, if not how often. One of the basic concepts shared by most Linux systems is centralization of dependencies. Libraries should be declared as dependencies, and the packages depended on should be installed in a common location for use of the linker. This can create a challenge: different pieces of software might depend on different versions of a library, which may not be compatible. This is the central challenge of maintaining a Linux distribution, in the classical sense: providing repositories of software versions that will all work correctly together. One of the advantages of stable distributions like RHEL is that they are very reliable in doing this; one of the disadvantages is that they achieve that goal by packaging new versions very infrequently.

Because of the need to provide mutually compatible versions of a huge range of software, and to ensure compliance with all kinds of other norms established by distributions (which may range from philosophical policies like free software to rules on the layout of configuration files), putting new software into Linux distributions can be... painful. For software maintainers, it means dealing with a bunch of distributions using a bunch of old versions with various specific build and configuration quirks. For distribution and package maintainers, it means bending all kinds of upstream software into compliance with distribution policy and figuring out version and dependency problems. It's all a lot of work, and while there are some norms, in practice it's sort of a wild scramble to do the work to make all this happen. Software developers that want their software to be widely used have to put up with distros. Distros that want software have to put up with software developers. Everyone gets mad.

Naturally there have been various attempts to ease these problems. Naturally they are indeed various and the community has not really consolidated on any one approach. In the desktop environment, Flatpak, Snap, and AppImage are all distressingly common ways of distributing software. The images or applications for these systems package the software complete with its dependencies, providing a complete self-contained environment that should work correctly on any distribution. The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals that this concept doesn't always work entirely as advertised, but to be fair that kind of situation usually crops up when the software has to interact with elements of the system that the runtime can't properly isolate it from. The video stack is a classic example, where errant OpenGL libraries in packages might have to be removed or replaced for the application to function with your particular graphics driver.

Still, these systems work reasonably well, well enough that they continue to proliferate. They are greatly aided by the nature of the desktop applications for which they're used (Snapcraft's system ambitions notwithstanding). Desktop applications tend to interact mostly with the user and receive their configuration via their own interface. Limiting the interaction surface mostly to a GUI window is actually tremendously helpful in making sandboxing feasible, although it continues to show rough edges when interacting with the file system.

I will note that I'm barely mentioning sandboxing here because I'm just not discussing it at the moment. Sandboxing is useful for security and even stability purposes, but I'm looking at these tools primarily as a way of packaging software for distribution. Sandboxed software can be distributed by more conventional means as well, and a few crusty old packages show that it's not as modern of a concept as it's often made out to be.

Anyway, what I really wanted to complain a bit about is the realm of software intended to be run on servers. Here, there is a clear champion: Docker, and to a lesser degree the ecosystem of compatible tools like Podman. The release of Docker led to a surprisingly rapid change in what are widely considered best practices for server operations. While Docker images as a means of distributing software first seemed to appeal mostly to large scalable environments with container orchestration, it sort of merged together with ideas from Vagrant and others to become a common means of distributing software for developer and single-node use as well.

Today, Docker is the most widespread way that server-side software is distributed for Linux. I hate it.

This is not a criticism of containers in general. Containerization is a wonderful thing with many advantages, even if the advantages over lightweight VMs are perhaps not as great as commonly claimed. I'm not sure that Docker has saved me more hours than it's cost, but to be fair I work as a DevOps consultant and, as a general rule, people don't get me involved unless the current situation isn't working properly. Docker images that run correctly with minimal effort don't make for many billable hours.

What really irritates me these days is not really the use of Docker images in DevOps environments that are, to some extent, centrally planned and managed. The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users. When I see open-source, server-side software offered to me as a Docker image or---even worse---a Docker Compose stack, my gut reaction is irritation. These sorts of things usually take longer to get working than equivalent software distributed as a conventional Linux package or as source to build yourself.

But wait, how does that happen? Isn't Docker supposed to make everything completely self-contained? Let's consider the common problems, something that I will call my Taxonomy of Docker Gone Bad.

Configuration

One of the biggest problems with Docker-as-distribution is the lack of consistent conventions for configuration. The vast majority of server-side Linux software accepts its configuration through an ages-old technique of reading a text file. This certainly isn't perfect! But, it is pretty consistent in its general contours. Docker images, on the other hand...

If you subscribe to the principles of the 12-factor app, the best way for a Docker image to take configuration is probably via environment variables. This has the upside that it's quite straightforward to provide them on the command line when starting the container. It has the downside that environment variables aren't great for conveying structured data, and you usually interact with them via shell scripts that have clumsy handling of long or complicated values. A lot of Docker images used in DevOps environments take their configuration from environment variables, but they tend to make it a lot more feasible by avoiding complex configuration (by assuming TLS will be terminated by "someone else" for example) or getting a lot of their configuration from a database or service on the network.
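
As a concrete illustration, here is roughly what 12-factor-style configuration looks like from inside the container. This is a Python sketch with entirely hypothetical variable names, not any particular image's convention; the values would typically be passed with docker run -e or a Compose environment: block.

    import os

    # Everything arrives as environment variables. Simple values are easy;
    # anything with structure gets squeezed into ad hoc formats like
    # comma-separated lists.
    LISTEN_PORT = int(os.environ.get("APP_LISTEN_PORT", "8080"))
    LOG_LEVEL = os.environ.get("APP_LOG_LEVEL", "info")
    DATABASE_URL = os.environ["APP_DATABASE_URL"]   # no sane default: fail fast if unset
    TRUSTED_PROXIES = [
        p.strip()
        for p in os.environ.get("APP_TRUSTED_PROXIES", "").split(",")
        if p.strip()
    ]

    if __name__ == "__main__":
        print(LISTEN_PORT, LOG_LEVEL, DATABASE_URL, TRUSTED_PROXIES)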

For most end-user software though, configuration is too complex or verbose to be comfortable in environment variables. So, often, they fall back to configuration files. You have to get the configuration file into the container's file system somehow, and Docker provides numerous ways of doing so. Documentation for different packages varies in which method it recommends. There are frequently caveats around ownership and permissions.

Making things worse, a lot of Docker images try to make configuration less painful by providing some sort of entry-point shell script that generates the full configuration from some simpler document provided to the container. Of course this level of abstraction, often poorly documented or entirely undocumented in practice, serves mostly to make troubleshooting a lot more difficult. How many times have we all experienced the joy of software failing to start, referencing some configuration key that isn't in what we provided, leading us to have to find the Docker image build materials and read the entrypoint script to figure out how it generates that value?

The situation with configuration entrypoint scripts becomes particularly acute when those scripts are opinionated, and opinionated is often a nice way of saying "unsuitable for any configuration other than the developer's." Probably at least a dozen times I have had to build my own version of a Docker image to replace or augment an entrypoint script that doesn't expose parameters that the underlying software accepts.
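
To make the pattern concrete, here is a minimal sketch of such an entrypoint, written in Python with hypothetical names (real images more often use shell): it expands a handful of environment variables into the real configuration file and then execs the daemon. Anything the template does not expose simply cannot be configured, which is exactly the failure mode described above.

    import os

    # Hypothetical daemon ("exampled") and config path, for illustration only.
    CONFIG_PATH = "/etc/exampled/exampled.conf"
    CONFIG_TEMPLATE = """\
    listen_port = {port}
    log_level = {log_level}
    """

    def main() -> None:
        config = CONFIG_TEMPLATE.format(
            port=os.environ.get("APP_PORT", "8080"),
            log_level=os.environ.get("APP_LOG_LEVEL", "info"),
        )
        with open(CONFIG_PATH, "w") as f:
            f.write(config)
        # Replace this process with the daemon, as a real entrypoint would.
        os.execvp("exampled", ["exampled", "--config", CONFIG_PATH])

    if __name__ == "__main__":
        main()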

In the worst case, some Docker images provide no documentation at all, and you have to shell into them and poke around to figure out where the actual configuration file used by the running software is even located. Docker images must always provide at least some basic README information on how the packaged software is configured.

Filesystems

One of the advantages of Docker is sandboxing or isolation, which of course means that Docker runs into the same problem that all sandboxes do. Sandbox isolation concepts do not interact well with Linux file systems. You don't even have to get into UID behavior to have problems here, just a Docker Compose stack that uses named volumes can be enough to drive you to drink. Everyday operations tasks like backups, to say nothing of troubleshooting, can get a lot more frustrating when you have to use a dummy container to interact with files in a named volume. The porcelain around named volumes has improved over time, but seemingly simple operations can still be weirdly inconsistent between Docker versions and, worse, other implementations like Podman.

But then, of course, there's the UID thing. One of the great sins of Docker is having normalized running software as root. Yes, Docker provides a degree of isolation, but from a perspective of defense in depth running anything with user exposure as root continues to be a poor practice. Of course this is one thing that often leads me to have to rebuild containers provided by software projects, and a number of common Docker practices don't make it easy. It all gets much more complicated if you use host mounts because of UID mapping, and slightly complex environments with Docker can turn into NFS-style puzzles around UID allocation. Mitigating this mess is one of the advantages of named volumes, of course, with the pain points they bring.

Non-portable Containers

The irony of Docker for distribution, though, and especially Docker Compose, is that there are a lot of common practices that negatively impact portability---ostensibly the main benefit of this approach. Doing anything non-default with networks in Docker Compose will often create stacks that don't work correctly on machines with complex network setups. Too many Docker Compose stacks like to assume that default, well-known ports are available for listeners. They enable features of the underlying software without giving you a way to disable them, and assume common values that might not work in your environment.

One of the most common frustrations, for me personally, is TLS. As I have already alluded to, I preach a general principle that Docker containers should not terminate TLS. Accepting TLS connections means having access to the private key material. Even if 90-day ephemeral TLS certificates and a general atmosphere of laziness have eroded our discipline in this regard, private key material should be closely guarded. It should be stored in only one place and accessible to only one principal. You don't even have to get into these types of lofty security concerns, though. TLS is also sort of complicated to configure.

A lot of people who self-host software will have some type of SNI or virtual hosting situation. There may be wildcard certificates for multiple subdomains involved. All of this is best handled at a single point or a small number of dedicated points. It is absolutely maddening to encounter Docker images built with the assumption that they will individually handle TLS. Even with TLS completely aside, I would probably never expose a Docker container with some application directly to the internet. There are too many advantages to having a reverse proxy in front of it. And yet there are Docker Compose stacks out there for end-user software that want to use ACME to issue their own certificate! Now you have to dig through documentation to figure out how to disable that behavior.

The Single-Purpose Computer

All of these complaints are most common with what I would call hobby-tier software. Two examples that pop into my mind are HomeAssistant and Nextcloud. I don't call these hobby-tier to impugn the software, but rather to describe the average user.

Unfortunately, the kind of hobbyist that deploys software has had their mind addled by the cheap high of the Raspberry Pi. I'm being hyperbolic here, but this really is a problem. It's absurd the number of "self-hosted" software packages that assume they will run on dedicated hardware. Having "pi" in the name of a software product is a big red flag in my mind; it immediately makes me think "they will not have documented how to run this on a shared device." Call me old-fashioned, but I like my computers to perform more than one task, especially the ones that are running up my power bill 24/7.

HomeAssistant is probably the biggest offender here, because I run it in Docker on a machine with several other applications. It actively resists this, popping up an "unsupported software detected" maintenance notification after every update. Can you imagine if Postfix whined in its logs if it detected that it had neighbors?

Recently I decided to give Nextcloud a try. This was long enough ago that the details elude me, but I think I burned around two hours trying to get the all-in-one Docker image to work in my environment. Finally I decided to give up and install it manually, to discover it was a plain old PHP application of the type I was regularly setting up in 2007. Is this a problem with kids these days? Do they not know how to fill in the config.php?

Hiding Sins

Of course, you will say, none of these problems would be widespread if people just made good Docker images. And yes, that is completely true! Perhaps one of the problems with Docker is that it's too easy to use. Creating an RPM or Debian package involves a certain barrier to entry, and it takes a whole lot of activation energy for even me to want to get rpmbuild going (advice: just use copr and rpkg). At the core of my complaints is the fact that distributing an application only as a Docker image is often evidence of a relatively immature project, or at least one without anyone who specializes in distribution. You have to expect a certain amount of friction in getting these sorts of things to work in a nonstandard environment.

It is a palpable irony, though, that Docker was once heralded as the ultimate solution to "works for me" and yet seems to just lead to the same situation existing at a higher level of configuration.

Last Thoughts

This is of course mostly my opinion and I'm sure you'll disagree on something, like my strong conviction that Docker Compose was one of the bigger mistakes of our era. Fifteen years ago I might have written a nearly identical article about all the problems I run into with RPMs created by small projects, but what surprises me about Docker is that it seems like projects can get to a large size, with substantial corporate backing, and still distribute in the form of a decidedly amateurish Docker Compose stack. Some of it is probably the lack of distribution engineering personnel on a lot of these projects, since Docker is "simple." Some of it is just the changing landscape of this class of software, with cheap single-board computers making Docker stacks (just a little less specialized than a VM appliance image) more palatable than they used to be. But some of it is also that I'm getting older and thus more cantankerous.

--------------------------------------------------------------------------------

>>> 2023-11-19 Centrex

I have always been fascinated by the PABX - the private automatic branch exchange, often shortened to "PBX" in today's world where the "automatic" is implied. (Relatively) modern small and medium business PABXs of the type I like to collect are largely solid-state devices that mount on the wall. Picture a cabinet that's maybe two feet wide, a foot and a half tall, and five inches deep. That's a pretty accurate depiction of my Comdial hybrid key/PABX system, recovered from the offices of a bankrupt publisher of Christian home schooling materials.

These types of PABX, now often associated with Panasonic on the small end, are affordable and don't require much maintenance or space. They have their limitations, though, particularly in terms of extension count. Besides, the fact that these compact PABX are available at all is the result of decades of development in electronics.

Not that long ago, PABX were far more complex. Early PBX systems were manual, and hotels were a common example of a business that would have a telephone operator on staff. The first PABX were based on the same basic technology as their contemporary phone switches, using step-by-step switches or even crossbar mechanisms. They no longer required an operator to connect every call, but were still mostly designed with the assumption that an attendant would handle some situations. Moreover, these early PABX were large, expensive, and required regular maintenance. They were often leased from the telephone company, and the rates weren't cheap.

PABX had another key limitation as well: they were specific to a location. Each extension had to be home-run wired to the PABX, easy in a single building but costly at the level of a campus and, especially, with buildings spread around a city. For organizations with distributed buildings like school districts, connecting extensions back to a central PABX could be significantly more expensive than connecting them to the public telephone exchange.

This problem must have been especially common in a city the size of New York, so it's no surprise that New York Telephone was the first to commercialize an alternative approach: Centrex.

Every technology writer must struggle with the temptation to call every managed service in history a precursor to "the Cloud." I am going to do my very best to resist that nagging desire, but it's difficult not to note the similarity between Centrex service and modern cloud PABX solutions. Indeed, Centrex relied on capabilities of telephone exchange equipment that are recognizably similar to mainframe computer concepts like LPARs and virtualization today. But we'll get there in a bit. First, we need to talk about what Centrex is.

I've had it in my mind to write something about Centrex for years, but I've always had a hard time knowing where to start. The facts about Centrex are often rather dry, and the details varied over years of development, making it hard to sum up the capabilities in short. So I hope that you will forgive this somewhat dry post. It covers something that I think is a very important part of telephone history, particularly from the perspective of the computer industry today. It also lists off a lot of boring details. I will try to illustrate with interesting examples everywhere I can. I am indebted, for many things but here especially, to many members of the Central Office mailing list. They filled in a lot of details that solidified my understanding of Centrex and its variants.

The basic promise of Centrex was this: instead of installing your own PABX, let the telephone company configure their own equipment to provide the features you want to your business phones. A Centrex line is a bit like a normal telephone line, but with all the added capabilities of a business phone system: intercom calling, transfers, attendants, routing and long distance policies, and so on. All of these features were provided by central telephone exchanges, but your lines were partitioned to be interconnected within your business.

Centrex was a huge success. By 1990, a huge range of large institutions had either started their telephone journey with Centrex or transitioned away from a conventional PABX and onto Centrex. It's very likely that you have interacted with a Centrex system before and perhaps not realized. And now, Centrex's days are numbered. Let's look at the details.

Centrex is often explained as a reuse of the existing central office equipment to serve PABX requirements. This isn't entirely incorrect, but it can be misleading. It was not all that unusual for Centrex to rely on equipment installed at the customer site, but operated by the telco. For this reason, it's better to think of Centrex as a managed service than as a "cloud" service, or a Service-as-a-Service, or whatever modern term you might be tempted to apply.

Centrex existed in two major variants: Centrex-CO and Centrex-CU. The CO case, for Central Office, entailed this well-known design of each business telephone line connecting to an existing telco central office, where a switch was configured to provide Centrex features on that line group. CU, for Customer Unit, looks more like a very large PABX. These systems were usually limited to very large customers, who would provide space for the telco to build a new central office on the customer's site. The exchange was located with the customer, but operated by the telco.

These two different categories of service led to two different categories of customers, with different needs and usage patterns. Centrex-CO appealed to smaller organizations with fewer extensions, but also to larger organizations with extensions spread across a large area. In that case, wiring every extension back to the CO using telco infrastructure was less expensive than installing new wiring to a CU exchange. A prototypical example might be a municipal school district.

Centrex-CU appealed to customers with a large number of extensions grouped in a large building or a campus. In this case it was much less costly to wire extensions to the new CU site than to connect them all over the longer distance to an existing CO. A prototypical Centrex-CU customer might be a university.

Exactly how these systems worked varied greatly from exchange to exchange, but the basic concept is a form of partitioning. Telephone exchanges with support for Centrex service could be configured such that certain lines were grouped together and enabled for Centrex features. The individual lines needed to have access to Centrex-specific capabilities like service codes, but also needed to be properly associated with each other so that internal calling would indeed be internal to the customer. This concept of partitioning telephone switches had several different applications, and Western Electric and other manufacturers continued to enhance it until it reached a very high level of sophistication in digital switches.

Let's look at an example of a Centrex-CO. The State of New Mexico began a contract with Mountain States Telephone and Telegraph [1] for Centrex service in 1964. The new Centrex service replaced 11 manual switchboards distributed around Santa Fe, and included Wide-Area Telephone Service (WATS), a discount arrangement for long-distance calls placed from state offices to exchanges throughout New Mexico. On November 9th, 1964, technicians sent to Santa Fe by Western Electric completed the cutover at the state capitol complex. Incidentally, the capitol phones of the day were being installed in what is now the Bataan Memorial Building: construction of the Roundhouse, today New Mexico's distinctive state capitol, had just begun that same year.

The Centrex service was estimated to save $12,000 per month in the rental and operation of multiple state exchanges, and the combination of WATS and conference calling service was expected to produce further savings by reducing the need for state employees to travel for meetings. The new system was evidently a success, and led to a series of minor improvements including a scheme later in 1964 to ensure that the designated official phone number of each state agency would be answered during the state lunch break (noon to 1:15). In 1965, Burns Reinier resigned her job as Chief Operator of the state Centrex to launch a campaign for Secretary of State. Many state employees would probably recognize her voice, but that apparently did not translate to recognition on the ballot, as she lost the Democratic party nomination to the Governor's former secretary.

The late 1960s saw a flurry of newspaper advertisements giving new phone numbers for state and municipal agencies, Albuquerque Public Schools, and universities, as they all consolidated onto the state-run Centrex system. Here we must consider the geographical nature of Centrex: Centrex service operates within a single telephone exchange. To span the gap between the capitol in Santa Fe, state offices and UNM in Albuquerque, NMSU in Las Cruces, and even the State Hospital in Las Vegas (NM), a system of tie lines was installed between Centrex facilities in each city. These tie lines were essentially dedicated long-distance trunks leased by the state to connect calls between Centrex exchanges at lower cost than even WATS long-distance service.

This system was not entirely CO-based: in Albuquerque, a Centrex exchange was installed in state-leased space at what was then known as the National Building, 505 Marquette. In the late '60s, 505 Marquette also hosted Telpak, an early private network service from AT&T. It is perhaps a result of this legacy that 505 Marquette houses one of New Mexico's most important network facilities, a large carrier hotel now operated by H5 Data Centers. The installation of the Centrex exchange at 505 Marquette saved a lot of expense on new local loops, since a series of 1960s political and bureaucratic events led to a concentration of state offices in the new building.

Having made this leap to customer unit systems, let's jump almost 30 years forward to an example of a Centrex-CU installation... one with a number of interesting details. In late 1989, Sandia National Laboratories ended its dependence on the Air Force for telephony services by contracting with AT&T for the installation of a 5ESS telephone exchange. The 5ESS, a digital switch and a rather new one at the time, brought with it not just advanced calling features but something even more compelling to an R&D institution at the time: data networking.

The Sandia installation went nearly all-in on ISDN, the integrated digital telephony and data standard that largely failed to achieve adoption for telephone applications. Besides the digital telephone sets, though, Sandia made full use of the data capabilities of the exchange. Computers connected to the data ports on the ISDN user terminals (the conventional term for the telephone instrument itself in an ISDN network) could make "data calls" over the telephone system to access IBM mainframes and other corporate computing resources... all at a blistering 64 kbps, the speed of a single ISDN basic rate interface bearer channel. The ISDN network could even transport video calls by combining multiple BRIs, six 64 kbps bearer channels in all, for 384 kbps of aggregate capacity.
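As a sanity check on those numbers (this is just standard ISDN channel arithmetic, nothing specific to Sandia's installation):

    # A basic rate interface carries two 64 kbps bearer (B) channels,
    # so bandwidth aggregates in 64 kbps steps across bonded BRIs.
    B_CHANNEL_KBPS = 64
    B_PER_BRI = 2

    target_kbps = 384
    b_channels = target_kbps // B_CHANNEL_KBPS   # 6 bearer channels
    bris = b_channels // B_PER_BRI               # 3 BRIs

    print(target_kbps, "kbps =", b_channels, "B channels =", bris, "BRIs")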

The 5ESS was installed in a building on Air Force property near Tech Area 1, and the 5ESS's robust support for remote switch modules was fully leveraged to place an RSM in each Tech Area. The new system required renumbering, always a hassle, but allowed for better matching of Sandia's phone numbers on the public network to phone numbers on the Federal Telecommunications System or FTS... a CCSA operated for the Federal Government. But we'll talk about that later. The 5ESS was also equipped with ISDN PRI tie lines to a sibling 5ESS at Sandia California in Livermore, providing inexpensive calling and ISDN features between the two sites.

This is a good time to discuss digital Centrex. Traditional telephony, even today in residential settings, uses analog telephones. Business systems, though, made a transition from analog to digital during the '80s and '90s. Digital telephone sets used with business systems provided far easier access to features of the key system, PABX, or Centrex, and with fewer wires. A digital telephone set on one or two telephone pairs could offer multiple voice lines, caller ID, central directory service, busy status indication for other phones, soft keys for pickup groups and other features, even text messaging in some later systems (like my Comdial!). Analog systems often required as many as a half dozen pairs just for a simple configuration like two lines and busy lamp fields; analog "attendant" sets with access to many lines could require a 25-pair Amphenol connector... sometimes even more than one.

Many of these digital systems used proprietary protocols between the switch and telephones. A notable example would be the TCM protocol used by the Nortel Meridian, an extremely popular PABX that can still be found in service in many businesses. Digital telephone sets made the leap to Centrex as well: first by Nortel themselves, who offered a "Meridian Digital Centrex" capability on their DMS-100 exchange switch that supported telephone sets similar to (but not the same as!) ordinary Meridian digital systems. AT&T followed several years later by offering 5ESS-based digital Centrex over ISDN: the same basic capability that could be used for computer applications as well, but with the advantage of full compatibility with AT&T's broader ISDN initiative.

The ISDN user terminals manufactured by Western Electric and, later, Lucent, are distinctive and a good indication that digital Centrex is in use. They are also lovely examples of the digital telephones of the era, with LCD matrix displays, a bevy of programmable buttons, and pleasing Bellcore distinctive ringing. It is frustrating that the evolution of telephone technology has seemingly made ringtones far worse. We will have to forgive the oddities of the ISDN electrical standard that required an "NT1" network termination device screwed to the bottom of your desk or, more often, underfoot on the floor.

Thinking about these digital phones, let's consider the user experience of Centrex. Centrex was very flexible; there were a large number of options available based on customer preference, and the details varied between the Centrex host switches used in the United States: Western Electric's line from the 5XB to the 5ESS, Nortel's DMS-100 and DMS-10, and occasionally the Siemens EWSD. This all makes it hard to describe Centrex usage succinctly, but I will focus on some particular common features of Centrex.

Like PABXs, most Centrex systems required that a dialing prefix (conventionally nine) be used for an outside line. This was not universal; "assumed nine" could often be enabled at customer request, but it created a number of complications in the dialplan and was best avoided. Centrex systems, because they mostly belonged to larger customers, were more likely than PABXs to offer tie lines or other private routing arrangements, which were often reached by dialing a prefix of 8. Like conventional telephone systems, you could dial 0 for the operator, but on traditional large Centrex systems the operator would be an attendant within the Centrex customer organization.
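In very rough terms, and ignoring everything switch-specific, the dialplan behaved something like the sketch below. This is illustrative Python, not the translation language of any real exchange, and the prefixes shown are just the conventional ones described above.

    # Conventional Centrex-style dialing: 9 for an outside line, 8 for
    # tie lines or other private routing, 0 for the customer's attendant,
    # anything else treated as an internal extension. Illustrative only.
    def classify_dialed(digits):
        if digits == "0":
            return "attendant within the customer organization"
        if digits.startswith("9"):
            return "outside call to " + digits[1:]
        if digits.startswith("8"):
            return "tie line / private network call to " + digits[1:]
        return "internal call to extension " + digits

    for dialed in ("0", "95055550199", "812345", "12345"):
        print(dialed, "->", classify_dialed(dialed))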

Centrex systems enabled internal calling by extension, much like PABXs. Because of the large size of some Centrex-CU installations in particular, you are much more likely to encounter five-digit extensions with Centrex than with a PABX. These extensions were usually created by taking several exchange prefixes in a sequence and using the last digit of the exchange code as the first digit of the extension. For that reason the extensions are often written in a format like 1-2345. A somewhat charming example of this arrangement is the 5ESS-based Centrex-CU at Los Alamos National Laboratories, which spans exchange prefixes 662-667 in the 505 NPA. Since that range includes the less desirable prefix 666, it was skipped. Of course, that didn't stop Telnyx from starting to use it more recently. Because of the history of Los Alamos's development, telephones in the town use these same prefixes, generally the lower ones.
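For illustration, here's a sketch of that numbering trick using the 662-667 block described above. The specific line numbers are made up, and real assignments obviously involved more bookkeeping than a string slice.

    # Convert between a 7-digit public number in a block of consecutive
    # exchange prefixes (662-667) and the 5-digit extension written 2-3456.
    def to_extension(public_number):
        prefix, line = public_number[:3], public_number[3:]
        return prefix[-1] + "-" + line

    def to_public(extension, prefix_base="66"):
        first_digit, line = extension.split("-")
        return prefix_base + first_digit + line

    print(to_extension("6623456"))   # 2-3456
    print(to_public("2-3456"))       # 6623456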

With digital telephones, Centrex features are comparatively easy to access, since they can be assigned to buttons on the telephones. With analog systems there are no such convenient buttons, so Centrex features had to be awkwardly bolted on much like advanced features on non-Centrex lines. Many features are activated using vertical service codes starting with *, although in some systems (especially older systems for pulse compatibility) they might be mapped to codes that look more like extensions. Operations that involve interrupting an active call, like transfer or hold, involve flashing the hookswitch... a somewhat antiquated operation now more often achieved with a "flash" button on the telephone, when it's done at all.

Still, some analog Centrex systems used electrical tricks on the pair (much as many PABXs did) to provide a message waiting light and even an extra button for common operations.

While Centrex initially appealed mainly to larger customers, improvements in host switch technology and telephone company practices made it an accessible option for small organizations as well. Verizon's "CustoPAK" was an affordable offering that provided Centrex features on up to 30 extensions. These small-scale services were also made more accessible by computerization. Configuration changes to the first crossbar Centrex service required exchange technicians to climb ladders and resolder jumpers. With the advent of digital switches, telco employees in translation centers read customer requirements and built switch configuration plans. By the '90s, carriers offered modem services that allowed customers to reconfigure their Centrex themselves, and later web-based self-service systems emerged.

So what became of Centrex? Like most aspects of the conventional copper phone network, it is on the way out. Major telephone carriers have mostly removed Centrex service from their tariffs, meaning they are no longer required to offer it. Even in areas where it remains on the tariff, it is reportedly hard to obtain. A report from the state of Washington notes that, particularly as a result of CenturyLink removing copper service from its tariffs entirely, CenturyLink has informed the state that it may discontinue Centrex service at any time, subject to six months' notice. Six months may seem like a long time, but it is a very short period for a state government to replace a statewide telephone system... so we can anticipate some hurried acquisitions in the next couple of years.

Centrex had always interacted with tariffs in curious ways, anyway. Centrex was the impetus behind multiple lawsuits against AT&T on grounds varying from anti-competitive behavior to violations of the finer points of tariff regulation. For the most part AT&T prevailed, but some of these did lead to changes in the way Centrex service was charged. Taxation was a particularly difficult matter. There were excise taxes imposed on telephone service in most cases, but AT&T held that "internal" calls within Centrex customers should not be subject to these taxes due to their similarity to untaxed PABX and key systems. The finer points of this debate varied from state to state, and it made it to the Supreme Court at least once.

Centrex could also have a complex relationship with the financial policies of many institutional customers. Centrex was often paired with services like WATS or tie lines to make long-distance calling more affordable, but this also encouraged employees to make their personal long-distance calls in the office. The struggle of long-distance charge accounting led not only to lengthy employee "acceptable use" policies that often survive to this day, but also to schemes of accounting and authorization codes to track long-distance users. Long-distance phone charges by state employees were a perennial minor scandal in New Mexico politics, leading to some sort of audit or investigation every few years. Long-distance calling was often disabled except for extensions that required it, but you will find stories of public courtesy phones accidentally left with long-distance enabled becoming suddenly popular fixtures of university buildings.

Today, Centrex is generally being replaced with VoIP solutions. Some of these are fully managed, cloud-based services, analogous to Centrex-CO before them. IP phones bring a rich feature set that leaves eccentric dialplans and feature codes mostly forgotten, and federal regulations around the accessibility of 911 have broadly discouraged prefix schemes for outside calls. On the flip side, these types of phone systems make it very difficult to configure dialplan schemes on endpoints, leading office workers to learn a new type of phone oddity: dialing pound after a number to skip the end-of-dialing timeout. This worked on some Centrex systems as well; some things never change.

[1] Later called US West, later called Qwest, now part of CenturyLink, which is now part of Lumen.

--------------------------------------------------------------------------------

>>> 2023-11-04 nuclear safety

Nuclear weapons are complex in many ways. The basic problem of achieving criticality is difficult on its own, but deploying nuclear weapons as operational military assets involves yet more challenges. Nuclear weapons must be safe and reliable, even with the rough handling and the potential for tampering and theft that are intrinsic to their military use.

Early weapon designs somewhat sidestepped the problem by being stored in an inoperable condition. During the early phase of the Cold War, most weapons were "open pit" designs. Under normal conditions, the pit was stored separately from the weapon in a criticality-safe canister called a birdcage. The original three nuclear weapons stockpile sites (Manzano Base, Albuquerque NM; Killeen Base, Fort Hood TX; Clarksville Base, Fort Campbell KY) included special vaults to store the pits and assembly buildings where the pits would be installed into weapons. The pit vaults were designed not only for explosive safety but also to resist intrusion; the ability to unlock the vaults was reserved to a strictly limited number of Atomic Energy Commission personnel.

This method posed a substantial problem for nuclear deterrence, though. The process of installing the pits in the weapons was time consuming, required specially trained personnel, and wasn't particularly safe. Particularly after the dawn of ICBMs, a Soviet nuclear attack would require a rapid response, likely faster than weapons could be assembled. The problem was particularly evident when nuclear weapons were stockpiled at Strategic Air Command (SAC) bases for faster loading onto bombers. Each SAC base required a large stockpile area complete with hardened pit vaults and assembly buildings. Far more personnel had to be trained to complete the assembly process, and faster. Opportunities for mistakes that made weapons unusable, killed assembly staff, or contaminated the environment abounded.

As nuclear weapons proliferated, storing them disassembled became distinctly unsafe. It required personnel to perform sensitive operations with high explosives and radioactive materials, all under stressful conditions. It required that nuclear weapons be practical to assemble and disassemble in the field, which prevented strong anti-tampering measures.

The W-25 nuclear warhead, an approximately 220 pound, 1.7 kT weapon introduced in 1957, was the first to employ a fully sealed design. A relatively small warhead built for the Genie air-to-air missile, several thousand units would be stored fully assembled at Air Force sites. The first version of the W-25 was, by the AEC's own admission, unsafe to transport and store. It could detonate by accident, or it could be stolen.

The transition to sealed weapons changed the basic model of nuclear weapons security. Open weapons relied primarily on the pit vault, a hardened building with a bank-vault door, as the authentication mechanism. Few people had access to this vault, and two-man policies were in place and enforced by mechanical locks. Weapons stored assembled lacked this degree of protection. The advent of sealed weapons, though, presented a new possibility: the security measures could be installed inside the weapon itself.

Safety elements of nuclear weapons protect against both unintentional and intentional attacks on the weapon. For example, from early on in the development of sealed implosion-type weapons "one-point safety" became common (it is now universal). One-point safe weapons have their high explosive implosion charge designed so that a detonation at any one point in the shell will never result in a nuclear yield. Instead, the imbalanced forces in the implosion assembly will tear it apart. This improper detonation produces a "fizzle yield" that will kill bystanders and scatter nuclear material, but produces orders of magnitude less explosive force and radiation dispersal than a complete nuclear detonation.

The basic concept of one-point safety is a useful example to explain the technical concepts that followed later. One-point safety is in some ways an accidental consequence of the complexity of implosion weapons: achieving a full yield requires an extremely precisely timed detonation of the entire HE shell. Weapons relied on complex (at the time) electronic firing mechanisms to achieve the required synchronization. Any failure of the firing system to produce a simultaneous detonation results in a partial yield because of the failure to achieve even implosion. One-point safety is essentially just a product of analysis (today computer modeling) to ensure that detonation of a single module of the HE shell will never result in a nuclear yield.

This one-point scenario could occur because of outside forces. For example, one-point safety is often described in terms of enemy fire. Imagine that, in combat conditions, anti-air weapons or even rifle fire strike a nuclear weapon. The shock forces will reach one side of the HE shell first. If they are sufficient to detonate it (not an easy task as very insensitive explosives are used), the one-point detonation will destroy the weapon with a fizzle yield.

We can also examine one-point safety in terms of the electrical function of the weapon. A malfunction or tampering with a weapon might cause one of the detonators to fire. The resulting one-point detonation will destroy the weapon. Achieving a nuclear yield requires that the shell be detonated in synchronization, which naturally functions as a measure of the correct operation of the firing system. Correctly firing a nuclear weapon is complex and difficult, requiring that multiple components are armed and correctly functioning. This itself serves as a safety mechanism since correct operation, difficult to achieve by intention, is unlikely to happen by accident.

Like most nuclear weapons, the W-25 received a series of modifications or "mods." The second, mod 1 (they start at 0), introduced a new safety mechanism: an environmental sensing device. The environmental sensing device allowed the weapon to fire only if certain conditions were satisfied, conditions that were indicative of the scenario the weapon was intended to fire in. The details of the ESD varied by weapon and probably even by application within a set of weapons, but the ESD generally required things like moving a certain distance at a certain speed (determined by inertial measurements) or a certain change in altitude in order to arm the weapon. These measurements ensured that the weapon had actually been fired on a missile or dropped as a bomb before it could arm.

The environmental sensing device provides one of two basic channels of information that weapons require to arm: indication that the weapon is operating under normal conditions, like flying towards a target or falling onto one. This significantly reduces the risk of unintentional detonation.

There is a second possibility to consider, though, that of intentional detonation by an unauthorized user. A weapon could be stolen, or tampered with in place as an act of terrorism. To address this possibility, a second basic channel of input was developed: intent. For a weapon to detonate, it must be proven that an authorized user has the intent to detonate the weapon.

The implementation of these concepts has varied over time and by weapon type, but from unclassified materials a general understanding of the architecture of these safety systems can be developed. I decided to write about this topic not only because it is interesting (it certainly is), but also because many of the concepts used in the safety design of nuclear weapons are also applicable to other systems. Similar concepts are used, for example, in life-safety systems and robotics, fields where unintentional operation or tampering can cause significant harm to life and property. Some of the principles are unsurprisingly analogous to cryptographic methods used in computer security, as well.

The basic principle of weapons safety is called the strong link, weak link principle, and it is paired with the related idea of an exclusion zone. To understand this, it's helpful to remember the W-25's sealed design. For open weapons, a vault was used to store the pit. In a sealed weapon, the vault is, in a sense, built into the weapon. It's called the exclusion zone, and it can be thought of as a tamper-protected, electrically isolated chamber that contains the vital components of the weapon, including the electronic firing system.

In order to fire the weapon, the exclusion zone must be accessed, in that an electrical signal needs to be delivered to the firing system. Like the bank vaults used for pits, there is only one way into the exclusion zone, and it is tightly locked. An electrical signal must penetrate the energy barrier that surrounds the exclusion zone, and the only way to do so is by passing through a series of strong links.

The chain of events required to fire a nuclear weapon can be thought of like a physical chain used to support a load. Strong links are specifically reinforced so that they should never fail. We can also look at the design through the framework of information security, as an authentication and authorization system. Strong links are strict credential checks that will deny access under all conditions except the one in which the weapon is intended to fire: when the weapon is in suitable environmental conditions, has received an authorized intent signal, and the fuzing system calls for detonation.

One of the most important functions of the strong link is to confirm that correct environmental and intent authorization has occurred. The environmental sensing device, installed in the body of the weapon, sends its authorizing signal when its conditions are satisfied. There is some complexity here, though. One of the key concerns in weapons safety was the possibility of stray electrical signals, perhaps from static or lightning or contact with an aircraft electrical system, causing firing. The strong link needs to ensure that the authorization signal received really is from the environmental sensing device, and not a result of some electrical transient.

This verification is performed by requiring a unique signal. The unique signal is a digital message consisting of multiple bits, even when only a single bit of information (that environmental conditions are correct) needs to be conveyed. The extra bits serve only to make the message complex and unique. This way, any transient or unintentional electrical signal is extremely unlikely to match the correct pattern. We can think of this type of unique signal as an error detection mechanism, padding the message with extra bits just to verify the correctness of the important one.
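The payoff of those extra bits is easy to put numbers on: a random transient would have to reproduce the whole pattern exactly, and the odds of that fall off exponentially with the pattern length. The lengths below are arbitrary, picked only to show the scaling; real unique signal formats are more involved than a flat bit string.

    # Chance that a random n-bit sequence matches a fixed n-bit unique
    # signal: 1 in 2**n. Illustrative scaling only.
    for n_bits in (1, 8, 24, 48):
        print(n_bits, "bits: 1 in", 2 ** n_bits)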

Intent is a little trickier, though. It involves human input. The intent signal comes from the permissive action link, or PAL. Here, too, the concept of a unique signal is used to enable the weapon, but this time the unique signal isn't only a matter of error detection. The correct unique signal is a secret, and must be provided by a person who knows it.

Permissive action links are fascinating devices from a security perspective. The strong link is like a combination lock, and the permissive action link is the key or, more commonly, a device through which the key is entered. There have been many generations of PALs, and we are fortunate that a number of older, out-of-use PALs are on public display at the National Museum of Nuclear Science and History here in Albuquerque.

Here we should talk a bit about the implementation of strong links and PALs. While newer designs are likely more electronic, older designs were quite literally combination locks: electromechanical devices where a stepper motor or solenoid had to advance a clockwork mechanism in the correct pattern. It was a lot like operating a safe lock by remote. The design of PALs reflected this. Several earlier PALs are briefcases that, when opened, reveal a series of dials. An operator has to connect the PAL to the weapon, turn all the dials to the correct combination, and then press a button to send the unique signal to the weapon.

Later PALs became very similar to the key loading devices used for military cryptography. The unique signal is programmed into volatile memory in the PAL. To arm a weapon, the PAL is connected, an operator authenticates themselves to the PAL, and then the PAL sends the stored unique signal. Like a key loader, the PAL itself incorporates measures against tampering or theft. A zeroize function, activated by tamper sensors or manually, clears the stored unique key. Too many failed attempts by an operator to authenticate themselves also result in the stored unique signal being cleared.
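That behavior, a volatile key store that zeroizes itself on tampering or repeated authentication failures, is easy to model in the abstract. The sketch below is only an analogy to the publicly described behavior of key loaders and later PALs, not a description of any real device.

    # Abstract model of a limited-try, zeroize-on-tamper key store.
    class VolatileKeyStore:
        def __init__(self, unique_signal, max_attempts=3):
            self._unique_signal = unique_signal
            self._attempts_left = max_attempts

        def zeroize(self):
            # Triggered by tamper sensors or manually: the key is simply gone.
            self._unique_signal = None

        def release(self, operator_authenticated):
            # The stored unique signal is released only to an authenticated
            # operator; repeated failures clear it entirely.
            if self._unique_signal is None:
                return None
            if operator_authenticated:
                return self._unique_signal
            self._attempts_left -= 1
            if self._attempts_left <= 0:
                self.zeroize()
            return None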

Much like key loaders, PALs developed into more sophisticated devices over time with the ability to store and manage multiple unique signals, rekey weapons with new unique signals, and to authenticate the operator by more complex means. A late PAL-adjacent device on public display is the UC1583, a Compaq laptop docked to an electronic interface. This was actually a "PAL controller," meaning that it was built primarily for rekeying weapons and managing sets of keys. By this later era of nuclear weapons design, the PAL itself was typically integrated into communications systems on the delivery vehicle and provided a key to the weapon based on authorization messages received directly from military command authorities.

The next component to understand is the weak link. A strong link is intended to never fail open. A weak link is intended to easily fail closed. A very basic type of weak link would be a thermal fuse that burns out in response to high temperatures, disconnecting the firing system if the weapon is exposed to fire. In practice there can be many weak links and they serve as a protection against both accidental firing of a damaged weapon and intentional tampering. The exclusion zone design incorporates weak links such that any attempt to open the exclusion zone by force will result in weak links failing.

A special case of a weak link, or at least something that functions like a weak link, is the command disable feature on most weapons. Command disable is essentially a self-destruct capability. Details vary but, on the B61 for example, the command disable is triggered by pulling a handle that sticks out of the control panel on the side of the weapon. The command disable triggers multiple weak links, disabling various components of the weapon in hard-to-repair ways. An unauthorized user, without the expertise and resources of the weapons assembly technicians at Pantex, would find it very difficult to restore a weapon to working condition after the command disable was activated. Some weapons apparently had an explosive command disable that destroyed the firing system, but from publicly available material it seems that a more common design involved the command disable interrupting the power supply to volatile storage for unique codes and configuration information.

There are various ways to sum up these design features. First, let's revisit the overall architecture. Critical components of nuclear weapons, including both the pit itself and the electronic firing system, are contained within the exclusion zone. The exclusion zone is protected by an energy barrier that isolates it from mechanical and electrical influence. For the weapon to fire, firing signals must pass through strong links and weak links. Strong links are designed to never open without a correct unique signal, and to fail open only in extreme conditions that would have already triggered weak links. Weak links are designed to easily fail closed in abnormal situations like accidents or tampering. Both strong links and weak links can receive human input, strong links to provide intent authorization, and weak links to manually disable the weapon in a situation where custody may be lost.
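Reduced to pure logic, the architecture amounts to a conjunction of independent conditions, any one of which is enough to keep the weapon safe. The sketch below throws away everything interesting about the actual hardware, but it shows the gating structure described in the last few paragraphs.

    # Simplified logical model of the strong link / weak link gating
    # described above. Real systems are electromechanical and far subtler.
    def firing_path_enabled(environment_ok, intent_signal_ok,
                            fuzing_calls_for_detonation, weak_links_intact):
        strong_links_satisfied = environment_ok and intent_signal_ok
        return strong_links_satisfied and weak_links_intact and fuzing_calls_for_detonation

    print(firing_path_enabled(True, True, True, True))    # the only enabling case
    print(firing_path_enabled(True, False, True, True))   # no intent signal: safe
    print(firing_path_enabled(True, True, True, False))   # weak link failed: safe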

The physical design of nuclear weapons is intricate and incorporates many anti-tamper and mechanical protection features, and high explosives and toxic and radioactive materials lead to hazardous working conditions. This makes the disassembly of modern nuclear weapons infamously difficult; a major challenge in the reduction of the nuclear stockpile is the backlog of weapons waiting for qualified technicians to take them apart. Command disable provides a convenience feature for this purpose, since it allows weapons to be written off the books before they can be carefully dismantled at one of very few facilities (often just one) capable of doing so. As an upside, these same properties make it difficult for an unauthorized user to circumvent the safety mechanisms in a nuclear weapon, or repair one in which weak links have failed.

Accidental arming and detonation of a nuclear weapon should not occur because the weapon will only arm on receipt of complex unique signals, including an intent signal that is secret and available only to a limited number of users (today, often only to the national command authority). Detonation of a weapon under extreme conditions like fire or mechanical shock is prevented by the denial of the strong links, the failure of the weak links, and the inherent difficulty of correctly firing a nuclear weapon. Compromise of a nuclear weapon, or detonation by an unauthorized user, is prevented by the authentication checks performed by the strong links and the tamper resistance provided by the weak links. Cryptographic features of modern PALs enhance custodial control of weapons by enabling rotation and separation of credentials.

Modern PALs particularly protect custodial control by requiring keys unknown to the personnel handling the weapons before they can be armed. These keys must be received from the national command authority as part of the order to attack, making communications infrastructure a critical part of the nuclear deterrent. It is for this reason that the United States has so many redundant, independent mechanisms of delivering attack orders, ranging from secure data networks to radio equipment on Air Force One capable of direct communication with nuclear assets.

None of this is to say that the safety and security of nuclear weapons is perfect. In fact, historical incidents suggest that nuclear weapons are sometimes surprisingly poorly protected, considering the technical measures in place. The widely reported story that the enable code for the Minuteman warhead's PAL was 00000000 is unlikely to be true as it was originally reported [1], but that's not to say that there are no questions about the efficacy of PAL key management. US weapons staged in other NATO countries, for example, have raised perennial concerns about effective custody of nuclear weapons and the information required to use them.

General military security incidents endanger weapons as well. Widely reported disclosures of nuclear weapon security procedures by online flash card services and even Strava do not directly compromise these on-weapon security measures but nonetheless weaken the overall, multi-layered custodial security of these weapons, making other layers more critical and more vulnerable.

Ultimately, concerns still exist about the design of the weapons themselves. Most of the US nuclear fleet is very old. Many weapons are still in service that do not incorporate the latest security precautions, and efforts to upgrade these weapons are slow and endangered by many programmatic problems. Only in 1987 was the entire arsenal equipped with PALs, and in 2004 all weapons were equipped with cryptographic rekeying capability.

PALs, or something like them, are becoming the international norm. The Soviet Union developed similar security systems for their weapons, and allies of the United States often use US-designed PALs or similar under technology sharing agreements. Pakistan, though, remains a notable exception. There are still weapons in service in various parts of the world without this type of protection. Efforts to improve that situation are politically complex and run into many of the same challenges as counterproliferation in general.

Nuclear weapons are perhaps safer than you think, but that's certainly not to say that they are safe.

[1] This "popular fact" comes from an account by a single former missileer. Based on statements by other missile officers and from the Air Force itself, the reality seems to be complex. The 00000000 code may have been used before the locking mechanism was officially placed in service, during a transitional stage when technical safeguards had just been installed but missile crews were still operating on procedures developed before their introduction. Once the locking mechanism was placed in service and missile crews were permitted to deviate from the former strict two-man policy, "real" randomized secret codes were used.

--------------------------------------------------------------------------------