_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford
COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.

I have an MS in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.

You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.

--------------------------------------------------------------------------------

>>> 2023-11-25 the curse of docker

I'm heading to Las Vegas for re:invent soon, perhaps the most boring type of industry extravaganza there could be. In that spirit, I thought I would write something quick and oddly professional: I'm going to complain about Docker.

Packaging software is one of those fundamental problems in system administration. It's so important, so influential on the way a system is used, that package managers are often the main identity of operating systems. Consider Windows: the operating system's most alarming defect in the eyes of many "Linux people" is its lack of package management, despite Microsoft's numerous attempts to introduce the concept. Well, perhaps more likely, because of the number of those attempts. And still, in the Linux world, distributions are differentiated primarily by their approach to managing software repositories. I don't just mean the difference between dpkg and rpm, but rather more fundamental decisions, like opinionated vs. upstream configuration and stable repositories vs. a rolling release. RHEL and Arch share the vast majority of their implementation and yet have very different vibes.

Linux distributions have, for the most part, consolidated on a certain philosophy of how software ought to be packaged, if not how often. One of the basic concepts shared by most Linux systems is centralization of dependencies. Libraries should be declared as dependencies, and the packages depended on should be installed in a common location for use of the linker. This can create a challenge: different pieces of software might depend on different versions of a library, which may not be compatible. This is the central challenge of maintaining a Linux distribution, in the classical sense: providing repositories of software versions that will all work correctly together. One of the advantages of stable distributions like RHEL is that they are very reliable in doing this; one of the disadvantages is that they achieve that goal by packaging new versions very infrequently.

Because of the need to provide mutually compatible versions of a huge range of software, and to ensure compliance with all kinds of other norms established by distributions (which may range from philosophical policies like free software to rules on the layout of configuration files), putting new software into Linux distributions can be... painful. For software maintainers, it means dealing with a bunch of distributions using a bunch of old versions with various specific build and configuration quirks. For distribution and package maintainers, it means bending all kinds of upstream software into compliance with distribution policy and figuring out version and dependency problems. It's all a lot of work, and while there are some norms, in practice it's sort of a wild scramble to do the work to make all this happen. Software developers that want their software to be widely used have to put up with distros. Distros that want software have to put up with software developers. Everyone gets mad.

Naturally there have been various attempts to ease these problems. Naturally they are indeed various and the community has not really consolidated on any one approach. In the desktop environment, Flatpak, Snap, and AppImage are all distressingly common ways of distributing software. The images or applications for these systems package the software complete with its dependencies, providing a complete self-contained environment that should work correctly on any distribution. The fact that I have multiple times had to unpack flatpaks and modify them to fix dependencies reveals that this concept doesn't always work entirely as advertised, but to be fair that kind of situation usually crops up when the software has to interact with elements of the system that the runtime can't properly isolate it from. The video stack is a classic example, where errant OpenGL libraries in packages might have to be removed or replaced for them to function with your particular graphics driver.

Still, these systems work reasonably well, well enough that they continue to proliferate. They are greatly aided by the nature of the desktop applications for which they're used (Snapcraft's system ambitions notwithstanding). Desktop applications tend to interact mostly with the user and receive their configuration via their own interface. Limiting the interaction surface mostly to a GUI window is actually tremendously helpful in making sandboxing feasible, although it continues to show rough edges when interacting with the file system.

I will note that I'm barely mentioning sandboxing here because I'm just not discussing it at the moment. Sandboxing is useful for security and even stability purposes, but I'm looking at these tools primarily as a way of packaging software for distribution. Sandboxed software can be distributed by more conventional means as well, and a few crusty old packages show that it's not as modern of a concept as it's often made out to be.

Anyway, what I really wanted to complain a bit about is the realm of software intended to be run on servers. Here, there is a clear champion: Docker, and to a lesser degree the ecosystem of compatible tools like Podman. The release of Docker led to a surprisingly rapid change in what are widely considered best practices for server operations. While Docker images as a means of distributing software first seemed to appeal mostly to large, scalable environments with container orchestration, the idea sort of merged together with ideas from Vagrant and others to become a common means of distributing software for developer and single-node use as well.

Today, Docker is the most widespread way that server-side software is distributed for Linux. I hate it.

This is not a criticism of containers in general. Containerization is a wonderful thing with many advantages, even if the advantages over lightweight VMs are perhaps not as great as commonly claimed. I'm not sure that Docker has saved me more hours than it's cost, but to be fair I work as a DevOps consultant and, as a general rule, people don't get me involved unless the current situation isn't working properly. Docker images that run correctly with minimal effort don't make for many billable hours.

What really irritates me these days is not really the use of Docker images in DevOps environments that are, to some extent, centrally planned and managed. The problem is the use of Docker as a lowest common denominator, or perhaps more accurately lowest common effort, approach to distributing software to end users. When I see open-source, server-side software offered to me as a Docker image or, even worse, a Docker Compose stack, my gut reaction is irritation. These sorts of things usually take longer to get working than equivalent software distributed as a conventional Linux package or built from source.

But wait, how does that happen? Isn't Docker supposed to make everything completely self-contained? Let's consider the common problems, something that I will call my Taxonomy of Docker Gone Bad.

Configuration

One of the biggest problems with Docker-as-distribution is the lack of consistent conventions for configuration. The vast majority of server-side Linux software accepts its configuration through an ages-old technique of reading a text file. This certainly isn't perfect! But, it is pretty consistent in its general contours. Docker images, on the other hand...

If you subscribe to the principles of the 12-factor-app, the best way for a Docker image to take configuration is probably via environment variables. This has the upside that it's quite straightforward to provide them on the command line when starting the container. It has the downside that environment variables aren't great for conveying structured data, and you usually interact with them via shell scripts that have clumsy handling of long or complicated values. A lot of Docker images used in DevOps environments take their configuration from environment variables, but they tend to make it a lot more feasible by avoiding complex configuration (by assuming TLS will be terminated by "someone else" for example) or getting a lot of their configuration from a database or service on the network.
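
To illustrate, here's roughly what environment-variable configuration looks like in practice; the image name and variables here are hypothetical, not any particular project's:

    docker run -d \
      -e APP_DB_HOST=db.internal \
      -e APP_DB_PASSWORD=hunter2 \
      -e APP_LOG_LEVEL=debug \
      example/webapp:latest

This is tolerable for a handful of flat values, but quoting anything with spaces, newlines, or nested structure on the shell gets awkward fast.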

For most end-user software though, configuration is too complex or verbose to be comfortable in environment variables. So, often, they fall back to configuration files. You have to get the configuration file into the container's file system somehow, and Docker provides numerous ways of doing so. Documentation on different packages will vary on which way it recommends. There are frequently caveats around ownership and permissions.
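
The most common pattern is a bind mount over the path the image expects, something like this sketch (paths and image name hypothetical):

    # bind-mount a host config file over the location the image reads
    docker run -d \
      -v /etc/myapp/config.yml:/app/config.yml:ro \
      example/myapp:latest

Compose users do the same thing with a volumes: entry, and whether it works on the first try usually comes down to which UID inside the container needs to read the file.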

Making things worse, a lot of Docker images try to make configuration less painful by providing some sort of entry-point shell script that generates the full configuration from some simpler document provided to the container. Of course this level of abstraction, often poorly documented or entirely undocumented in practice, serves mostly to make troubleshooting a lot more difficult. How many times have we all experienced the joy of software failing to start, referencing some configuration key that isn't in what we provided, leading us to have to track down the Docker image build materials and read the entrypoint script to figure out how it generates that value?

The situation with configuration entrypoint scripts becomes particularly acute when those scripts are opinionated, and opinionated is often a nice way of saying "unsuitable for any configuration other than the developer's." Probably at least a dozen times I have had to build my own version of a Docker image to replace or augment an entrypoint script that doesn't expose parameters that the underlying software accepts.
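
The workaround usually ends up looking like this sketch, with a hypothetical upstream image: extend the published image and swap in an entrypoint that renders the configuration you actually need:

    FROM example/upstream-app:1.2.3
    # replace the opinionated config-generating entrypoint with our own
    COPY my-entrypoint.sh /usr/local/bin/my-entrypoint.sh
    RUN chmod +x /usr/local/bin/my-entrypoint.sh
    ENTRYPOINT ["/usr/local/bin/my-entrypoint.sh"]

It works, but now you own a fork of the image and get to rebuild it every time upstream changes.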

In the worst case, some Docker images provide no documentation at all. At a bare minimum, an image should ship some README information on how the packaged software is configured; without it, you have to shell into the container and poke around to figure out where the actual configuration file used by the running software is even located.
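
That poking around usually involves something like the following, with hypothetical image and container names:

    # see what entrypoint and command the image actually runs
    docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' example/mystery-app:latest

    # then go spelunking in the running container for the real config file
    docker exec -it mystery-app sh

None of this is hard, exactly, but it's reverse engineering that a paragraph of documentation would have made unnecessary.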

Filesystems

One of the advantages of Docker is sandboxing or isolation, which of course means that Docker runs into the same problem that all sandboxes do. Sandbox isolation concepts do not interact well with Linux file systems. You don't even have to get into UID behavior to have problems here, just a Docker Compose stack that uses named volumes can be enough to drive you to drink. Everyday operations tasks like backups, to say nothing of troubleshooting, can get a lot more frustrating when you have to use a dummy container to interact with files in a named volume. The porcelain around named volumes has improved over time, but seemingly simple operations can still be weirdly inconsistent between Docker versions and, worse, other implementations like Podman.
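
For the record, the dummy-container dance for something as mundane as a backup looks roughly like this (volume name hypothetical):

    # copy the contents of a named volume out to a tarball on the host
    docker run --rm \
      -v myapp_data:/data:ro \
      -v "$(pwd)":/backup \
      alpine tar czf /backup/myapp_data.tar.gz -C /data .

That's a lot of ceremony for what is, underneath, just a directory somewhere in /var/lib/docker that you're discouraged from touching directly.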

But then, of course, there's the UID thing. One of the great sins of Docker is having normalized running software as root. Yes, Docker provides a degree of isolation, but from a perspective of defense in depth running anything with user exposure as root continues to be a poor practice. Of course this is one thing that often leads me to have to rebuild containers provided by software projects, and a number of common Docker practices don't make it easy. It all gets much more complicated if you use hostmounts because of UID mapping, and slightly complex environments with Docker can turn into NFS-style puzzles around UID allocation. Mitigating this mess is one of the advantages to named volumes, of course, with the pain points they bring.
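
When the image cooperates at all, the usual mitigation is some variation on pinning the container to a known UID and making the hostmount's ownership match; the values here are hypothetical:

    docker run -d \
      --user 1500:1500 \
      -v /srv/myapp/data:/var/lib/myapp \
      example/myapp:latest

    # and on the host:
    chown -R 1500:1500 /srv/myapp/data

That only works if the image doesn't insist on writing somewhere as root during startup, which is exactly the assumption that so often forces a rebuild.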

Non-portable Containers

The irony of Docker for distribution, though, and especially Docker Compose, is that there are a lot of common practices that negatively impact portability, ostensibly the main benefit of this approach. Doing anything non-default with networks in Docker Compose will often create stacks that don't work correctly on machines with complex network setups. Too many Docker Compose stacks like to assume that default, well-known ports are available for listeners. They enable features of the underlying software without giving you a way to disable them, and assume common values that might not work in your environment.
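
Even something as simple as the port mapping is frequently shipped in its least portable form. Compare the fragment many projects publish with the version you end up writing (service and image names hypothetical):

    services:
      webapp:
        image: example/webapp:latest
        ports:
          - "80:80"                  # as shipped: assumes port 80 is free on the host
          # - "127.0.0.1:8085:80"    # what you actually want behind a reverse proxy

It's a one-line change, but only if the software inside doesn't also assume it's reachable on port 80 when it generates links.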

One of the most common frustrations, for me personally, is TLS. As I have already alluded to, I preach a general principle that Docker containers should not terminate TLS. Accepting TLS connections means having access to the private key material. Even if 90-day ephemeral TLS certificates and a general atmosphere of laziness have eroded our discipline in this regard, private key material should be closely guarded. It should be stored in only one place and accessible to only one principal. You don't even have to get into these types of lofty security concerns, though. TLS is also sort of complicated to configure.

A lot of people who self-host software will have some type of SNI or virtual hosting situation. There may be wildcard certificates for multiple subdomains involved. All of this is best handled at a single point or a small number of dedicated points. It is absolutely maddening to encounter Docker images built with the assumption that they will individually handle TLS. Even with TLS completely aside, I would probably never expose a Docker container with some application directly to the internet. There are too many advantages to having a reverse proxy in front of it. And yet there are Docker Compose stacks out there for end-user software that want to use ACME to issue their own certificate! Now you have to dig through documentation to figure out how to disable that behavior.
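
The arrangement I'd rather see is a sketch like this: the container listens only on the loopback interface, and an existing reverse proxy (nginx here, purely as an example) owns TLS and the public port:

    # container side: publish only to localhost
    docker run -d -p 127.0.0.1:8085:8080 example/webapp:latest

    # nginx side: terminate TLS once, proxy to the container
    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/ssl/example.com.crt;
        ssl_certificate_key /etc/ssl/example.com.key;
        location / {
            proxy_pass http://127.0.0.1:8085;
        }
    }

Every image that insists on running its own ACME client makes this perfectly ordinary setup harder than it needs to be.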

The Single-Purpose Computer

All of these complaints are most common with what I would call hobby-tier software. Two examples that pop into my mind are HomeAssistant and Nextcloud. I don't call these hobby-tier to impugn the software, but rather to describe the average user.

Unfortunately, the kind of hobbyist that deploys software has had their mind addled by the cheap high of the Raspberry Pi. I'm being hyperbolic here, but this really is a problem. It's absurd the number of "self-hosted" software packages that assume they will run on dedicated hardware. Having "pi" in the name of a software product is a big red flag in my mind; it immediately makes me think "they will not have documented how to run this on a shared device." Call me old-fashioned, but I like my computers to perform more than one task, especially the ones that are running up my power bill 24/7.

HomeAssistant is probably the biggest offender here, because I run it in Docker on a machine with several other applications. It actively resists this, popping up an "unsupported software detected" maintenance notification after every update. Can you imagine if Postfix whined in its logs if it detected that it had neighbors?

Recently I decided to give NextCloud a try. This was long enough ago that the details elude me, but I think I burned around two hours trying to get the all-in-one Docker image to work in my environment. Finally I decided to give up and install it manually, to discover it was a plain old PHP application of the type I was regularly setting up in 2007. Is this a problem with kids these days? Do they not know how to fill in the config.php?

Hiding Sins

Of course, you will say, none of these problems would be widespread if people just made good Docker images. And yes, that is completely true! Perhaps one of the problems with Docker is that it's too easy to use. Creating an RPM or Debian package involves a certain barrier to entry, and it takes a whole lot of activation energy for even me to want to get rpmbuild going (advice: just use copr and rpkg). At the core of my complaints is the fact that distributing an application only as a Docker image is often evidence of a relatively immature project, or at least one without anyone who specializes in distribution. You have to expect a certain amount of friction in getting these sorts of things to work in a nonstandard environment.

It is a palpable irony, though, that Docker was once heralded as the ultimate solution to "works for me" and yet seems to just lead to the same situation existing at a higher level of configuration.

Last Thoughts

This is of course mostly my opinion and I'm sure you'll disagree on something, like my strong conviction that Docker Compose was one of the bigger mistakes of our era. Fifteen years ago I might have written a nearly identical article about all the problems I run into with RPMs created by small projects, but what surprises me about Docker is that it seems like projects can get to a large size, with substantial corporate backing, and still distribute in the form of a decidedly amateurish Docker Compose stack. Some of it is probably the lack of distribution engineering personnel on a lot of these projects, since Docker is "simple." Some of it is just the changing landscape of this class of software, with cheap single-board computers making Docker stacks, only a little less specialized than a VM appliance image, more palatable than they used to be. But some of it is also that I'm getting older and thus more cantankerous.

--------------------------------------------------------------------------------

>>> 2023-11-19 Centrex

I have always been fascinated by the PABX - the private automatic branch exchange, often shortened to "PBX" in today's world where the "automatic" is implied. (Relatively) modern small and medium business PABXs of the type I like to collect are largely solid-state devices that mount on the wall. Picture a cabinet that's maybe two feet wide, a foot and a half tall, and five inches deep. That's a pretty accurate depiction of my Comdial hybrid key/PABX system, recovered from the offices of a bankrupt publisher of Christian home schooling materials.

These types of PABX, now often associated with Panasonic on the small end, are affordable and don't require much maintenance or space. They have their limitations, though, particularly in terms of extension count. Besides, the fact that these compact PABX are available at all is the result of decades of development in electronics.

Not that long ago, PABX were far more complex. Early PBX systems were manual, and hotels were a common example of a business that would have a telephone operator on staff. The first PABX were based on the same basic technology as their contemporary phone switches, using step-by-step switches or even crossbar mechanisms. They no longer required an operator to connect every call, but were still mostly designed with the assumption that an attendant would handle some situations. Moreover, these early PABX were large, expensive, and required regular maintenance. They were often leased from the telephone company, and the rates weren't cheap.

PABX had another key limitation as well: they were specific to a location. Each extension had to be home-run wired to the PABX, easy in a single building but costly at the level of a campus and, especially, with buildings spread around a city. For organizations with distributed buildings like school districts, connecting extensions back to a central PABX could be significantly more expensive than connecting them to the public telephone exchange.

This problem must have been especially common in a city the size of New York, so it's no surprise that New York Telephone was the first to commercialize an alternative approach: Centrex.

Every technology writer must struggle with the temptation to call every managed service in history a precursor to "the Cloud." I am going to do my very best to resist that nagging desire, but it's difficult not to note the similarity between Centrex service and modern cloud PABX solutions. Indeed, Centrex relied on capabilities of telephone exchange equipment that are recognizably similar to mainframe computer concepts like LPARs and virtualization today. But we'll get there in a bit. First, we need to talk about what Centrex is.

I've had it in my mind to write something about Centrex for years, but I've always had a hard time knowing where to start. The facts about Centrex are often rather dry, and the details varied over years of development, making it hard to sum up the capabilities in short. So I hope that you will forgive this somewhat dry post. It covers something that I think is a very important part of telephone history, particularly from the perspective of the computer industry today. It also lists off a lot of boring details. I will try to illustrate with interesting examples everywhere I can. I am indebted, for many things but here especially, to many members of the Central Office mailing list. They filled in a lot of details that solidified my understanding of Centrex and its variants.

The basic promise of Centrex was this: instead of installing your own PABX, let the telephone company configure their own equipment to provide the features you want to your business phones. A Centrex line is a bit like a normal telephone line, but with all the added capabilities of a business phone system: intercom calling, transfers, attendants, routing and long distance policies, and so on. All of these features were provided by central telephone exchanges, but your lines were partitioned to be interconnected within your business.

Centrex was a huge success. By 1990, a huge range of large institutions had either started their telephone journey with Centrex or transitioned away from a conventional PABX and onto Centrex. It's very likely that you have interacted with a Centrex system before and perhaps not realized. And now, Centrex's days are numbered. Let's look at the details.

Centrex is often explained as a reuse of the existing central office equipment to serve PABX requirements. This isn't entirely incorrect, but it can be misleading. It was not all that unusual for Centrex to rely on equipment installed at the customer site, but operated by the telco. For this reason, it's better to think of Centrex as a managed service than as a "cloud" service, or a Service-as-a-Service, or whatever modern term you might be tempted to apply.

Centrex existed in two major variants: Centrex-CO and Centrex-CU. The CO case, for Central Office, entailed this well-known design of each business telephone line connecting to an existing telco central office, where a switch was configured to provide Centrex features on that line group. CU, for Customer Unit, looks more like a very large PABX. These systems were usually limited to very large customers, who would provide space for the telco to build a new central office on the customer's site. The exchange was located with the customer, but operated by the telco.

These two different categories of service led to two different categories of customers, with different needs and usage patterns. Centrex-CO appealed to smaller organizations with fewer extensions, but also to larger organizations with extensions spread across a large area. In that case, wiring every extension back to the CO using telco infrastructure was less expensive than installing new wiring to a CU exchange. A prototypical example might be a municipal school district.

Centrex-CU appealed to customers with a large number of extensions grouped in a large building or a campus. In this case it was much less costly to wire extensions to the new CU site than to connect them all over the longer distance to an existing CO. A prototypical Centrex-CU customer might be a university.

Exactly how these systems worked varied greatly from exchange to exchange, but the basic concept is a form of partitioning. Telephone exchanges with support for Centrex service could be configured such that certain lines were grouped together and enabled for Centrex features. The individual lines needed to have access to Centrex-specific capabilities like service codes, but also needed to be properly associated with each other so that internal calling would indeed be internal to the customer. This concept of partitioning telephone switches had several different applications, and Western Electric and other manufacturers continued to enhance it until it reached a very high level of sophistication in digital switches.

Let's look at an example of a Centrex-CO. The State of New Mexico began a contract with Mountain States Telephone and Telegraph [1] for Centrex service in 1964. The new Centrex service replaced 11 manual switchboards distributed around Santa Fe, and included Wide-Area Telephone Service (WATS), a discount arrangement for long-distance calls placed from state offices to exchanges throughout New Mexico. On November 9th, 1964, technicians sent to Santa Fe by Western Electric completed the cutover at the state capitol complex. Incidentally, the capitol phones of the day were being installed in what is now the Bataan Memorial Building: construction of the Roundhouse, today New Mexico's distinctive state capitol, had just begun that same year.

The Centrex service was estimated to save $12,000 per month in the rental and operation of multiple state exchanges, and the combination of WATS and conference calling service was expected to produce further savings by reducing the need for state employees to travel for meetings. The new system was evidently a success, and led to a series of minor improvements including a scheme later in 1964 to ensure that the designated official phone number of each state agency would be answered during the state lunch break (noon to 1:15). In 1965, Burns Reinier resigned her job as Chief Operator of the state Centrex to launch a campaign for Secretary of State. Many state employees would probably recognize her voice, but that apparently did not translate to recognition on the ballot, as she lost the Democratic party nomination to the Governor's former secretary.

The late 1960s saw a flurry of newspaper advertisements giving new phone numbers for state and municipal agencies, Albuquerque Public Schools, and universities, as they all consolidated onto the state-run Centrex system. Here we must consider the geographical nature of Centrex: Centrex service operates within a single telephone exchange. To span the gap between the capitol in Santa Fe, state offices and UNM in Albuquerque, NMSU in Las Cruces, and even the State Hospital in Las Vegas (NM), a system of tie lines was installed between Centrex facilities in each city. These tie lines were essentially dedicated long distance trunks leased by the state to connect calls between Centrex exchanges at lower cost than even WATS long-distance service.

This system was not entirely CO-based: in Albuquerque, a Centrex exchange was installed in state-leased space at what was then known as the National Building, 505 Marquette. In the late '60s, 505 Marquette also hosted Telepak, an early private network service from AT&T. It is perhaps a result of this legacy that 505 Marquette houses one of New Mexico's most important network facilities, a large carrier hotel now operated by H5 Data Centers. The installation of the Centrex exchange at 505 Marquette saved a lot of expense on new local loops, since a series of 1960s political and bureaucratic events led to a concentration of state offices in the new building.

Having made this leap to customer unit systems, let's jump almost 30 years forward to an example of a Centrex-CU installation... one with a number of interesting details. In late 1989, Sandia National Laboratories ended its dependence on the Air Force for telephony services by contracting with AT&T for the installation of a 5ESS telephone exchange. The 5ESS, a digital switch and a rather new one at the time, brought with it not just advanced calling features but something even more compelling to an R&D institution at the time: data networking.

The Sandia installation went nearly all-in on ISDN, the integrated digital telephony and data standard that largely failed to achieve adoption for telephone applications. Besides the digital telephone sets, though, Sandia made full use of the data capabilities of the exchange. Computers connected to the data ports on the ISDN user terminals (the conventional term for the telephone instrument itself in an ISDN network) could make "data calls" over the telephone system to access IBM mainframes and other corporate computing resources... all at a blistering 64 kbps, the speed of an ISDN basic rate interface bearer channel. The ISDN network could even transport video calls, by combining multiple BRIs for 384 kbps aggregate capacity.

The 5ESS was installed in a building on Air Force property near Tech Area 1, and the 5ESS's robust support for remote switch modules was fully leveraged to place an RSM in each Tech Area. The new system required renumbering, always a hassle, but allowed for better matching of Sandia's phone numbers on the public network to phone numbers on the Federal Telecommunications System or FTS... a CCSA operated for the Federal Government. But we'll talk about that later. The 5ESS was also equipped with ISDN PRI tie lines to a sibling 5ESS at Sandia California in Livermore, providing inexpensive calling and ISDN features between the two sites.

This is a good time to discuss digital Centrex. Traditional telephony, even today in residential settings, uses analog telephones. Business systems, though, made a transition from analog to digital during the '80s and '90s. Digital telephone sets used with business systems provided far easier access to features of the key system, PABX, or Centrex, and with fewer wires. A digital telephone set on one or two telephone pairs could offer multiple voice lines, caller ID, central directory service, busy status indication for other phones, soft keys for pickup groups and other features, even text messaging in some later systems (like my Comdial!). Analog systems often required as many as a half dozen pairs just for a simple configuration like two lines and busy lamp fields; analog "attendant" sets with access to many lines could require a 25-pair Amphenol connector... sometimes even more than one.

Many of these digital systems used proprietary protocols between the switch and telephones. A notable example would be the TCM protocol used by the Nortel Meridian, an extremely popular PABX that can still be found in service in many businesses. Digital telephone sets made the leap to Centrex as well: first by Nortel themselves, who offered a "Meridian Digital Centrex" capability on their DMS-100 exchange switch that supported telephone sets similar to (but not the same as!) ordinary Meridian digital systems. AT&T followed several years later by offering 5ESS-based digital Centrex over ISDN: the same basic capability that could be used for computer applications as well, but with the advantage of full compatibility with AT&T's broader ISDN initiative.

The ISDN user terminals manufactured by Western Electric and, later, Lucent, are distinctive and a good indication that digital Centrex is in use. They are also lovely examples of the digital telephones of the era, with LCD matrix displays, a bevy of programmable buttons, and pleasing Bellcore distinctive ringing. It is frustrating that the evolution of telephone technology has seemingly made ringtones far worse. We will have to forgive the oddities of the ISDN electrical standard that required an "NT1" network termination device screwed to the bottom of your desk or, more often, underfoot on the floor.

Thinking about these digital phones, let's consider the user experience of Centrex. Centrex was very flexible; there were a large number of options available based on customer preference, and the details varied between the Centrex host switches used in the United States: Western Electric's line from the 5XB to the 5ESS, Nortel's DMS-100 and DMS-10, and occasionally the Siemens EWSD. This all makes it hard to describe Centrex usage succinctly, but I will focus on some particular common features of Centrex.

Like PABXs, most Centrex systems required that a dialing prefix (conventionally nine) be used for an outside line. This was not universal; "assumed nine" could often be enabled at customer request, but it created a number of complications in the dialplan and was best avoided. Centrex systems, because they mostly belonged to larger customers, were more likely than PABXs to offer tie lines or other private routing arrangements, which were often used by dialing calls with a prefix of 8. Like conventional telephone systems, you could dial 0 for the operator, but on traditional large Centrex systems the operator would be an attendant within the Centrex customer organization.

Centrex systems enabled internal calling by extension, much like PABXs. Because of the large size of some Centrex-CU installations in particular, you are probably much more likely to encounter five-digit extensions with Centrex than with a PABX. These types of extensions were usually created by taking several exchange prefixes in sequence and using the last digit of the exchange code as the first digit of the extension. For that reason the extensions are often written in a format like 1-2345. A somewhat charming example of this arrangement was the 5ESS-based Centrex-CU at Los Alamos National Laboratory, which spans exchange prefixes 662-667 in the 505 NPA. Since that includes the less desirable exchange prefix 666, it was skipped. Of course, that didn't stop Telnyx from starting to use it more recently. Because of the history of Los Alamos's development, telephones in the town use these same prefixes, generally the lower ones.

With digital telephones, Centrex features are comparatively easy to access, since they can be assigned to buttons on the telephones. With analog systems there are no such convenient buttons, so Centrex features had to be awkwardly bolted on much like advanced features on non-Centrex lines. Many features are activated using vertical service codes starting with *, although in some systems (especially older systems for pulse compatibility) they might be mapped to codes that look more like extensions. Operations that involve interrupting an active call, like transfer or hold, involve flashing the hookswitch... a somewhat antiquated operation now more often achieved with a "flash" button on the telephone, when it's done at all.

Still, some analog Centrex systems used electrical tricks on the pair (similar to many PABX) to provide a message waiting light and even an extra button for common operations.

While Centrex initially appealed mainly to larger customers, improvements in host switch technology and telephone company practices made it an accessible option for small organizations as well. Verizon's "CustoPAK" was an affordable offering that provided Centrex features on up to 30 extensions. These small-scale services were also made more accessible by computerization. Configuration changes to the first crossbar Centrex service required exchange technicians climbing ladders to resolder jumpers. With the genesis of digital switches, telco employees in translation centers read customer requirements and built switch configuration plans. By the '90s, carriers offered modem services that allowed customers to reconfigure their Centrex themselves, and later web-based self-service systems emerged.

So what became of Centrex? Like most aspects of the conventional copper phone network, it is on the way out. Major telephone carriers have mostly removed Centrex service from their tariffs, meaning they are no longer required to offer it. Even in areas where it is present on the tariff it is reportedly hard to obtain. A report from the state of Washington notes that, as a result particularly of CenturyLink removing copper service from its tariffs entirely, CenturyLink has informed the state that it may discontinue Centrex service at any time, subject to six months notice. Six months may seem like a long time but it is a very short period for a state government to replace a statewide telephone system... so we can anticipate some hurried acquisitions in the next couple of years.

Centrex had always interacted with tariffs in curious ways, anyway. Centrex was the impetus behind multiple lawsuits against AT&T on grounds varying from anti-competitive behavior to violations of the finer points of tariff regulation. For the most part AT&T prevailed, but some of these did lead to changes in the way Centrex service was charged. Taxation was a particularly difficult matter. There were excise taxes imposed on telephone service in most cases, but AT&T held that "internal" calls within Centrex customers should not be subject to these taxes due to their similarity to untaxed PABX and key systems. The finer points of this debate varied from state to state, and it made it to the Supreme Court at least once.

Centrex could also have a complex relationship with the financial policies of many institutional customers. Centrex was often paired with services like WATS or tie lines to make long-distance calling more affordable, but this also encouraged employees to make their personal long-distance calls in the office. The struggle of long-distance charge accounting led not only to lengthy employee "acceptable use" policies that often survive to this day, but also to schemes of accounting and authorization codes to track long distance users. Long-distance phone charges by state employees were a perennial minor scandal in New Mexico politics, leading to some sort of audit or investigation every few years. Long-distance calling was often disabled except for extensions that required it, but you will find stories of public courtesy phones accidentally left with long-distance enabled becoming suddenly popular parts of university buildings.

Today, Centrex is generally being replaced with VoIP solutions. Some of these are fully managed, cloud-based services, analogous to Centrex-CO before them. IP phones bring a rich featureset that leave eccentric dialplans and feature codes mostly forgotten, and federal regulations around the accessibility of 911 have broadly discouraged prefix schemes for outside calls. On the flip side, these types of phone systems make it very difficult to configure dialplan schemes on endpoints, leading office workers to learn a new type of phone oddity: dialing pound after a number to skip the end-of-dialing timeout. This worked on some Centrex systems as well; some things never change.

[1] Later called US West, later called Qwest, now part of CenturyLink, which is now part of Lumen.

--------------------------------------------------------------------------------

>>> 2023-11-04 nuclear safety

Nuclear weapons are complex in many ways. The basic problem of achieving criticality is difficult on its own, but deploying nuclear weapons as operational military assets involves yet more challenges. Nuclear weapons must be safe and reliable, even with the rough handling and the potential for tampering and theft that are intrinsic to their military use.

Early weapon designs somewhat sidestepped the problem: the weapons were stored in an inoperable condition. During the early phase of the Cold War, most weapons were "open pit" designs. Under normal conditions, the pit was stored separately from the weapon in a criticality-safe canister called a birdcage. The original three nuclear weapons stockpile sites (Manzano Base, Albuquerque NM; Killeen Base, Fort Hood TX; Clarksville Base, Fort Campbell KY) included special vaults to store the pit and assembly buildings where the pits would be installed into weapons. The pit vaults were designed not only for explosive safety but also to resist intrusion; the ability to unlock the vaults was reserved to a strictly limited number of Atomic Energy Commission personnel.

This method posed a substantial problem for nuclear deterrence, though. The process of installing the pits in the weapons was time consuming, required specially trained personnel, and wasn't particularly safe. Particularly after the dawn of ICBMs, a Soviet nuclear attack would require a rapid response, likely faster than weapons could be assembled. The problem was particularly evident when nuclear weapons were stockpiled at Strategic Air Command (SAC) bases for faster loading onto bombers. Each SAC base required a large stockpile area complete with hardened pit vaults and assembly buildings. Far more personnel had to be trained to complete the assembly process, and faster. Opportunities for mistakes that made weapons unusable, killed assembly staff, or contaminated the environment abounded.

As nuclear weapons proliferated, storing them disassembled became distinctly unsafe. It required personnel to perform sensitive operations with high explosives and radioactive materials, all under stressful conditions. It required that nuclear weapons be practical to assemble and disassemble in the field, which prevented strong anti-tampering measures.

The W-25 nuclear warhead, an approximately 220 pound, 1.7 kT weapon introduced in 1957, was the first to employ a fully sealed design. A relatively small warhead built for the Genie air-to-air missile, several thousand units would be stored fully assembled at Air Force sites. The first version of the W-25 was, by the AEC's own admission, unsafe to transport and store. It could detonate by accident, or it could be stolen.

The transition to sealed weapons changed the basic model of nuclear weapons security. Open weapons relied primarily on the pit vault, a hardened building with a bank-vault door, as the authentication mechanism. Few people had access to this vault, and two-man policies were in place and enforced by mechanical locks. Weapons stored assembled, though, lacked this degree of protection. The advent of sealed weapons presented a new possibility: the security measures could be installed inside the weapon itself.

Safety elements of nuclear weapons protect against both unintentional and intentional attacks on the weapon. For example, from early on in the development of sealed implosion-type weapons "one-point safety" became common (it is now universal). One-point safe weapons have their high explosive implosion charge designed so that a detonation at any one point in the shell will never result in a nuclear yield. Instead, the imbalanced forces in the implosion assembly will tear it apart. This improper detonation produces a "fizzle yield" that will kill bystanders and scatter nuclear material, but produces orders of magnitude less explosive force and radiation dispersal than a complete nuclear detonation.

The basic concept of one-point safety is a useful example to explain the technical concepts that followed later. One-point safety is in some ways an accidental consequence of the complexity of implosion weapons: achieving a full yield requires an extremely precisely timed detonation of the entire HE shell. Weapons relied on complex (at the time) electronic firing mechanisms to achieve the required synchronization. Any failure of the firing system to produce a simultaneous detonation results in a partial yield because of the failure to achieve even implosion. One-point safety is essentially just a product of analysis (today computer modeling) to ensure that detonation of a single module of the HE shell will never result in a nuclear yield.

This one-point scenario could occur because of outside forces. For example, one-point safety is often described in terms of enemy fire. Imagine that, in combat conditions, anti-air weapons or even rifle fire strike a nuclear weapon. The shock forces will reach one side of the HE shell first. If they are sufficient to detonate it (not an easy task as very insensitive explosives are used), the one-point detonation will destroy the weapon with a fizzle yield.

We can also examine one-point safety in terms of the electrical function of the weapon. A malfunction or tampering with a weapon might cause one of the detonators to fire. The resulting one-point detonation will destroy the weapon. Achieving a nuclear yield requires that the shell be detonated in synchronization, which naturally functions as a measure of the correct operation of the firing system. Correctly firing a nuclear weapon is complex and difficult, requiring that multiple components are armed and correctly functioning. This itself serves as a safety mechanism since correct operation, difficult to achieve by intention, is unlikely to happen by accident.

Like most nuclear weapons, the W-25 received a series of modifications or "mods." The second, mod 1 (they start at 0), introduced a new safety mechanism: an environmental sensing device. The environmental sensing device allowed the weapon to fire only if certain conditions were satisfied, conditions that were indicative of the scenario the weapon was intended to fire in. The details of the ESD varied by weapon and probably even by application within a set of weapons, but the ESD generally required things like moving a certain distance at a certain speed (determined by inertial measurements) or a certain change in altitude in order to arm the weapon. These measurements ensured that the weapon had actually been fired on a missile or dropped as a bomb before it could arm.

The environmental sensing device provides one of two basic channels of information that weapons require to arm: indication that the weapon is operating under normal conditions, like flying towards a target or falling onto one. This significantly reduces the risk of unintentional detonation.

There is a second possibility to consider, though, that of intentional detonation by an unauthorized user. A weapon could be stolen, or tampered with in place as an act of terrorism. To address this possibility, a second basic channel of input was developed: intent. For a weapon to detonate, it must be proven that an authorized user has the intent to detonate the weapon.

The implementation of these concepts has varied over time and by weapon type, but from unclassified materials a general understanding of the architecture of these safety systems can be developed. I decided to write about this topic not only because it is interesting (it certainly is), but also because many of the concepts used in the safety design of nuclear weapons are also applicable to other systems. Similar concepts are used, for example, in life-safety systems and robotics, fields where unintentional operation or tampering can cause significant harm to life and property. Some of the principles are unsurprisingly analogous to cryptographic methods used in computer security, as well.

The basic principle of weapons safety is called the strong link, weak link principle, and it is paired with the related idea of an exclusion zone. To understand this, it's helpful to remember the W-25's sealed design. For open weapons, a vault was used to store the pit. In a sealed weapon, the vault is, in a sense, built into the weapon. It's called the exclusion zone, and it can be thought of as a tamper-protected, electrically isolated chamber that contains the vital components of the weapon, including the electronic firing system.

In order to fire the weapon, the exclusion zone must be accessed, in that an electrical signal needs to be delivered to the firing system. Like the bank vaults used for pits, there is only one way into the exclusion zone, and it is tightly locked. An electrical signal must penetrate the energy barrier that surrounds the exclusion zone, and the only way to do so is by passing through a series of strong links.

The chain of events required to fire a nuclear weapon can be thought of like a physical chain used to support a load. Strong links are specifically reinforced so that they should never fail. We can also look at the design through the framework of information security, as an authentication and authorization system. Strong links are strict credential checks that will deny access under all conditions except the one in which the weapon is intended to fire: when the weapon is in suitable environmental conditions, has received an authorized intent signal, and the fuzing system calls for detonation.

One of the most important functions of the strong link is to confirm that correct environmental and intent authorization has occurred. The environmental sensing device, installed in the body of the weapon, sends its authorizing signal when its conditions are satisfied. There is some complexity here, though. One of the key concerns in weapons safety was the possibility of stray electrical signals, perhaps from static or lightning or contact with an aircraft electrical system, causing firing. The strong link needs to ensure that the authorization signal received really is from the environmental sensing device, and not a result of some electrical transient.

This verification is performed by requiring a unique signal. The unique signal is a digital message consisting of multiple bits, even when only a single bit of information (that environmental conditions are correct) needs to be conveyed. The extra bits serve only to make the message complex and unique. This way, any transient or unintentional electrical signal is extremely unlikely to match the correct pattern. We can think of this type of unique signal as an error detection mechanism, padding the message with extra bits just to verify the correctness of the important one.

Intent is a little trickier, though. It involves human input. The intent signal comes from the permissive action link, or PAL. Here, too, the concept of a unique signal is used to enable the weapon, but this time the unique signal isn't only a matter of error detection. The correct unique signal is a secret, and must be provided by a person who knows it.

Permissive action links are fascinating devices from a security perspective. The strong link is like a combination lock, and the permissive action link is the key or, more commonly, a device through which the key is entered. There have been many generations of PALs, and we are fortunate that a number of older, out of use PALs are on public display at the National Museum of Nuclear Science and History here in Albuquerque.

Here we should talk a bit about the implementation of strong links and PALs. While newer designs are likely more electronic, older designs were quite literally combination locks: electromechanical devices where a stepper motor or solenoid had to advance a clockwork mechanism in the correct pattern. It was a lot like operating a safe lock by remote. The design of PALs reflected this. Several earlier PALs are briefcases that, when opened, reveal a series of dials. An operator has to connect the PAL to the weapon, turn all the dials to the correct combination, and then press a button to send the unique signal to the weapon.

Later PALs became very similar to the key loading devices used for military cryptography. The unique signal is programmed into volatile memory in the PAL. To arm a weapon, the PAL is connected, an operator authenticates themselves to the PAL, and then the PAL sends the stored unique signal. Like a key loader, the PAL itself incorporates measures against tampering or theft. A zeroize function is activated by tamper sensors or manually and clears the stored unique key. Too many failures by an operator to authenticate themselves also results in the stored unique signal being cleared.

Much like key loaders, PALs developed into more sophisticated devices over time with the ability to store and manage multiple unique signals, rekey weapons with new unique signals, and to authenticate the operator by more complex means. A late PAL-adjacent device on public display is the UC1583, a Compaq laptop docked to an electronic interface. This was actually a "PAL controller," meaning that it was built primarily for rekeying weapons and managing sets of keys. By this later era of nuclear weapons design, the PAL itself was typically integrated into communications systems on the delivery vehicle and provided a key to the weapon based on authorization messages received directly from military command authorities.

The next component to understand is the weak link. A strong link is intended to never fail open. A weak link is intended to easily fail closed. A very basic type of weak link would be a thermal fuse that burns out in response to high temperatures, disconnecting the firing system if the weapon is exposed to fire. In practice there can be many weak links and they serve as a protection against both accidental firing of a damaged weapon and intentional tampering. The exclusion zone design incorporates weak links such that any attempt to open the exclusion zone by force will result in weak links failing.

A special case of a weak link, or at least something that functions like a weak link, is the command disable feature on most weapons. Command disable is essentially a self-destruct capability. Details vary but, on the B61 for example, the command disable is triggered by pulling a handle that sticks out of the control panel on the side of the weapon. The command disable triggers multiple weak links, disabling various components of the weapon in hard-to-repair ways. An unauthorized user, without the expertise and resources of the weapons assembly technicians at Pantex, would find it very difficult to restore a weapon to working condition after the command disable was activated. Some weapons apparently had an explosive command disable that destroyed the firing system, but from publicly available material it seems that a more common design involved the command disable interrupting the power supply to volatile storage for unique codes and configuration information.

There are various ways to sum up these design features. First, let's revisit the overall architecture. Critical components of nuclear weapons, including both the pit itself and the electronic firing system, are contained within the exclusion zone. The exclusion zone is protected by an energy barrier that isolates it from mechanical and electrical influence. For the weapon to fire, firing signals must pass through strong links and weak links. Strong links are designed to never open without a correct unique signal, and to fail open only in extreme conditions that would have already triggered weak links. Weak links are designed to easily fail closed in abnormal situations like accidents or tampering. Both strong links and weak links can receive human input, strong links to provide intent authorization, and weak links to manually disable the weapon in a situation where custody may be lost.

The physical design of nuclear weapons is intricate and incorporates many anti-tamper and mechanical protection features, and high explosives and toxic and radioactive materials lead to hazardous working conditions. This makes the disassembly of modern nuclear weapons infamously difficult; a major challenge in the reduction of the nuclear stockpile is the backlog of weapons waiting for qualified technicians to take them apart. Command disable provides a convenience feature for this purpose, since it allows weapons to be written off the books before they can be carefully dismantled at one of very few facilities (often just one) capable of doing so. As an upside, these same properties make it difficult for an unauthorized user to circumvent the safety mechanisms in a nuclear weapon, or repair one in which weak links have failed.

Accidental arming and detonation of a nuclear weapon should not occur because the weapon will only arm on receipt of complex unique signals, including an intent signal that is secret and available only to a limited number of users (today, often only to the national command authority). Detonation of a weapon under extreme conditions like fire or mechanical shock is prevented by the denial of the strong links, the failure of the weak links, and the inherent difficulty of correctly firing a nuclear weapon. Compromise of a nuclear weapon, or detonation by an unauthorized user, is prevented by the authentication checks performed by the strong links and the tamper resistance provided by the weak links. Cryptographic features of modern PALs enhance custodial control of weapons by enabling rotation and separation of credentials.

Modern PALs particularly protect custodial control by requiring keys unknown to the personnel handling the weapons before they can be armed. These keys must be received from the national command authority as part of the order to attack, making communications infrastructure a critical part of the nuclear deterrent. It is for this reason that the United States has so many redundant, independent mechanisms of delivering attack orders, ranging from secure data networks to radio equipment on Air Force One capable of direct communication with nuclear assets.

None of this is to say that the safety and security of nuclear weapons is perfect. In fact, historical incidents suggest that nuclear weapons are sometimes surprisingly poorly protected, considering the technical measures in place. The widely reported story that the enable code for the Minuteman warhead's PAL was 00000000 is unlikely to be true as it was originally reported [1], but that's not to say that there are no questions about the efficacy of PAL key management. US weapons staged in other NATO countries, for example, have raised perennial concerns about effective custody of nuclear weapons and the information required to use them.

General military security incidents endanger weapons as well. Widely reported disclosures of nuclear weapon security procedures by online flash card services and even Strava do not directly compromise these on-weapon security measures but nonetheless weaken the overall, multi-layered custodial security of these weapons, making other layers more critical and more vulnerable.

Ultimately, concerns still exist about the design of the weapons themselves. Most of the US nuclear fleet is very old. Many weapons are still in service that do not incorporate the latest security precautions, and efforts to upgrade these weapons are slow and endangered by many programmatic problems. Only in 1987 was the entire arsenal equipped with PALs, and only in 2004 were all weapons equipped with cryptographic rekeying capability.

PALs, or something like them, are becoming the international norm. The Soviet Union developed similar security systems for their weapons, and allies of the United States often use US-designed PALs or similar under technology sharing agreements. Pakistan, though, remains a notable exception. There are still weapons in service in various parts of the world without this type of protection. Efforts to improve that situation are politically complex and run into many of the same challenges as counterproliferation in general.

Nuclear weapons are perhaps safer than you think, but that's certainly not to say that they are safe.

[1] This "popular fact" comes from an account by a single former missileer. Based on statements by other missile officers and from the Air Force itself, the reality seems to be complex. The 00000000 code may have been used before the locking mechanism was officially placed in service, during a transitional stage when technical safeguards had just been installed but missile crews were still operating on procedures developed before their introduction. Once the locking mechanism was placed in service and missile crews were permitted to deviate from the former strict two-man policy, "real" randomized secret codes were used.

--------------------------------------------------------------------------------

>>> 2023-10-22 cooler screens

Audible even over the squeal of an HVAC blower with a suffering belt, the whine of small, high velocity fans pervades the grocery side of this Walgreens. Were they always this loud? I'm not sure; some of the fans sound distinctly unhealthy. Still, it's a familiar kind of noise to anyone who regularly works around kilowatt quantities of commercial IT equipment. Usually, though, it's a racket set aside for equipment rooms and IDF closets---not the refrigerator aisle.

The cooler screens came quickly and quietly. Walgreens didn't seem interested in promoting them. There was no in-store signage, no press announcements that I heard of. But they were apparently committed. I think I first encountered them in Santa Fe, and I laughed at this comical, ridiculous-on-its-face "innovation" in retailing, falsely confident that it would not cross the desert to Albuquerque's lower income levels. "What a ridiculous idea," I said, walking up to a blank cooler. The screens turn on in response to proximity, showing you an image of what is (supposedly) inside of the cooler, but not quickly enough to keep you from getting annoyed that you can't just see inside.

I would later find that these were the good days, the first phase of the Cooler Screen's invasion, when they were both limited in number and merely mediocre. Things would become much worse. Today, the Cooler Screens have expanded their territory and tightened their grip. The coolers of Walgreens have gone dark, the opaque, black doors some sort of Kubrickian monolith channeling our basic primate understanding of Arizona Iced Tea. Like the monolith, they are faceless representatives of a power beyond our own, here to shape our actions, but not to explain themselves.

Like the monolith, they are always accompanied by an eerie sort of screeching.

Despite my leftist tendencies I am hesitant to refer to "late-stage capitalism." To attribute our modern situation to such a "late stage" is to suggest that capitalism is in its death throes, that Marx's contradiction has indeed heightened and that some popular revolution is sure to follow. Who is to say that things can't get worse? To wave away WeWork as an artifact of "late-stage capitalism" is an escape to unfounded optimism, to a version of reality in which things will not spiral further downward.

Still, I find myself looking at a Walgreens cooler that just two years ago was covered in clear glass, admitting direct inspection of which tall-boy teas were in stock. Today, it's an impenetrable black void. Some Walgreens employee has printed a sheet of paper, "TEA" in 96-point Cambria, and taped it to the wall above the door. Taking in this spectacle, of a multi-million dollar effort that took our coolers and made them more difficult to use, of a retail employee's haphazard effort to mitigate the hostility of their employer's merchandising, it is hard not to indulge in that escape. Surely, things can't get much worse. Surely, these must be the latter days.


Gregory Wasson is the sort of All-American success story that you expect from a neighborhood brand like Walgreens Also Known As Duane Reade In New York City. Born in 1958, he honed his business sense working the family campground near Delphi, Indiana. A first-generation college student, he aimed for a sensible profession, studying pharmacy at Purdue. Straight out of college, he scored a job as a pharmacy intern at a Walgreens in Houston.

Thirty years later, he was CEO.

A 2012 Chicago Tribune profile of Wasson ends with a few quick notes. One, "Loves: The desert," could easily go on a profile of myself. Another, "Hobbies: Visiting Walgreens across the country," is uncomfortably close as well. It's not that I have any particular affection for Walgreens; in fact, I've long thought it to be very poorly managed. But for reasons unclear to me, I cannot seriously consider entering a CVS. I don't know what they get up to, over there under the other red drug store sign. I hear it has something to do with long receipts. I don't want to find out.

I suppose some of Wasson's sensible, farm-and-country upbringing resonates with me as a consumer. It also makes it all the more surprising that he would become one of the principal agents behind Walgreens' most publicly embarrassing misstep to date. There must have been some sort of untoward influence, corruption by exposure to a Bad Element. Somehow, computers got to him.

Arsen Avakian came from Armenia as a Fulbright scholar. With a background in the most capitalistic corners of technology (software outsourcing and management consulting), he turned to the food industry and worked in supply chain management systems for years before deciding to strike out on his own. Steering sensibly away from technology, he chose tea. Argo Tea started out as a chain of cafes based in Chicago, but by 2020 had largely shifted focus to a "ready-to-drink premium tea line derived from one of its most popular café beverages." This meant bottled tea, sold prominently in Walgreens.

It seems to be this Walgreens connection that brought Wasson and Avakian together. Wasson retired from Walgreens in 2014, and joined with Avakian and two other tech-adjacent executives to disrupt the cooler door.

Several publications have investigated the origin of Cooler Screens, taking the unquestioningly positive view typical of business reporters that do not bother to actually look into the product. Avakian, researching the branding and presentation of his packaged premium teas, was dismayed at the appearance of retail beverage sections. "Where is the innovation?" he is often quoted as saying, apparently in reference to the glass doors that have long allowed shoppers to see the products that they might decide to buy.

Avakian reportedly observed that people in store aisles would frequently look at their phones. I have a theory to explain this behavior; it has more to do with text messages and TikTok and a million other things that distract people milling around in a Walgreens to kill time (who among us hasn't taken up a fifteen-minute gap by surveying a Walgreens? Fifteen minutes, perhaps, of waiting for the pharmacy in that very Walgreens to fill a prescription?). To Avakian's eyes, this was apparently a problem to be solved. People distracted from the tea are not, he seems to think, purchasing enough tea. The tea needs to fight back: "How do we make the cooler door a more engaging experience?" Cooler Screens CRO Lindell Bennett said in an interview with a consulting firm called Tinuiti that proclaims in their hero banner that "the funnel has collapsed."

Engagement is something like the cocaine of the computer industry. Perhaps in the future we will look back on it as the folly of quack practitioners, a cure-all for monetization as ill advised as the patent medicines of the 19th century. At the moment, though, we're still in the honeymoon phase. We are cramming engagement into everything to see where it sticks. It is fitting, then, that our cooler screens now obscure the inventory of Coca-Cola. It's crazy what they'll put into things, claiming it a cure for lethargy (of body or of sales). Coca into cola. Screens into coolers.


It's a little hard to tell what the cooler screens do. It comes down to the typical struggle of interpreting VC-fueled startups. Built In Chicago explains that "The company's digital sensors also help brands collect data on how consumers interact with their items." This is the kind of claim that makes me suspicious on two fronts: First, it probably strategically simplifies the nature of the data collected in order to understate the privacy implications. Second, it probably strategically simplifies how that data will be used in order to overstate its commercial value.

The simplest part of the cooler screen play is their use as an advertising medium. There seems to be a popular turn of phrase in the retail industry right now, that the store is a canvas. Cooler Screens' CRO, in the same interview, describes the devices as "a six-foot canvas with a 4K resolution where brands can share their message with a captive audience." I'm not sure that we're really captive in Walgreens, although the constant need to track down a Walgreens corrections officer to unlock the cell in which they have placed the hand lotion does create that vibe.

Cooler Screens launched with a slate of advertising partners, basically who you would expect. Nestlé, MillerCoors, and Conagra headlined. The Wall Street Journal, referring to a MillerCoors statement, reported that "a big barrier for MillerCoors is that half of shoppers aren't aware beer is available in drugstores." I find this a little surprising since it is plainly visible next to the other beverages, but, well, these days it isn't any more, so I'm sure there's still a consumer awareness gap to be closed.

The idea of replacing cooler doors with a big television so that you can show advertising is exactly the kind of thing I would expect to take off in today's climate, but doesn't yet have that overpromising energy of AdTech or, I am learning, BevTech. The Cooler Screens are equipped with front-facing sensors, but no cameras facing the outside world. Cooler Screens seems appropriately wary of anything that could attract privacy attention, and refers to its product as "identity-blind." This, of course, makes it a little confusing that they also refer to targeted advertising and even retargeting as consumers approach the cooler.

To resolve this apparent contradiction, Cooler Screens describes its approach as "contextual advertising." They target based not on information about the customer, but on information about the context. The CRO offers an example:

When you think about it within the context of "I'm in front of an ice cream door and I want to buy," you have the ability to isolate the message to exactly what a consumer is focused on at this point in time based on the distance that they are from the door.

Age-old advertising technology would use the context that you are in front of the ice cream door as a trigger to display the ice cream through the door. In the era of the Cooler Screen, though, the ice cream itself is hidden safely out of view while the screen contacts a cloud service to obtain an advertisement that is contextually related to it.

It should be clear by this point that the Cooler Screens as an advertising medium don't really have anything to do with how the items behind them are perceived by consumers. They have to do with how the advertising space is sold. Historically, brands looking to achieve prominence in a retail environment have done so through the set of practices known as "merchandising." Business agreements between brands and retailers often negotiate the physical shelf space that stores will devote to the brand's products, and brands throw in further incentives for retailers to use brand-provided displays and move products to more lucrative positions in the store. As part of the traditionally multi-layered structure of the retail industry, the merchandising of beverage products especially is often managed by the distributor instead of the retailer. This is one way that brands jockey for more display space: the retailer is more likely to take the deal if their staff don't have to do the work.

With Cooler Screens, though, the world of AdTech can entirely disrupt this tie between placing products and placing advertising. Regardless of what is behind the door, regardless of what products the store actually chooses to stock, regardless of the business incentives of the beverage distributor that actually puts things into the coolers, the coolers will display whatever ads they are paid for. Are the cooler screens controlled by a real-time auction system, like many online advertisements? I haven't been able to tell for sure, although several uses of phrases like "online-like advertising marketplace" make me think it is at least the goal.

The first, and I suspect primary, purpose of the Cooler Screens is therefore one of disintermediation and disconnection. By putting a screen in front of the actual shelves, store display space can function as an advertising market completely disconnected from the actual stocked products. It's sort of like the 3D online stores that occupied the time of VR entrepreneurs before Mark Zuckerberg brought us his Metaverse. The actual products in the store aren't the important thing; the money is in the advertising space.

Second, the Cooler Screens do have cameras on the inside. With these, they promise to offer value to the distributor. Using internal cameras they can count inventory of the cooler, providing real-time stock level data and intriguing information on consumer preference. Cooler Screens promises to tell you not only which products are out of stock, but also which products a consumer considers before making their purchase. Reading between the lines here I assume this means the rear-facing cameras are used not only to take inventory but also to perform behavioral analysis of individuals who open the doors; the details here are (probably intentionally) fuzzy.

The idea of reporting real-time inventory data back to distributors is a solid one, and something that retail technology has pursued for years with ceiling mounted cameras, robots, and other approaches that always boil down to machine vision. Whether or not it works is hard to say; the arrival of the Cooler Screens seems to have coincided with a rapid decline in the actual availability of cold beverages, but presumably that has more to do with the onset of COVID and the related national logistical crisis than with the screens themselves. The screens are, at least anecdotally, frequently wrong in their front-facing display of what is and isn't in stock. Generally they present the situation as being much better than it actually is. That this provides a degree of cover for Walgreens' faltering ability to keep Gatorade in stock is probably a convenient coincidence.


Cooler Screens was born of Walgreens, and seems to have benefited from familial affection. Placement of Cooler Screens in Walgreens stores started in 2018, the beginning of a multi-year program to install Cooler Screens in 2,500 stores. This would apparently come at an expense of $200 million covered by Cooler Screens themselves. Cooler Screens was backed by venture funding, including an $80 million round led by Verizon and Microsoft. Walgreens discussed Cooler Screens as part of their digital strategy, and Cooler Screens used Walgreens as a showcase customer. The Cooler Screens family was not a happy one, though.

The initial round of installations in 2018 reached 10,300 screens in 700 stores. Following this experience, Walgreens seemed to develop cold feet, with the pace of installation slowing along with Walgreens' broader participation in the overall joint venture. Walgreens complained of "freezing screens, incorrect product displays, failure to update stock data accurately, and safety concerns such as screens sparking and catching fire."

In statements to the press, Cooler Screens referred to mention of frozen and incorrect displays as "false accusations." I can only take that as anything other than an outright lie if I allow myself to believe that the leadership and legal counsel of Cooler Screens have never actually seen their product in use. Given the general tenor of the AdTech industry, that might be true.

If it has not become clear by this point, the poor performance and reliability of the Cooler Screens is not only a contention by Walgreens but also a firm belief of probably every Walgreens customer with the misfortune of coming across them. In an informal survey of four Albuquerque-area Walgreens that I occasionally use, more than half of the screens are now dark. It varies by location; in one store, there are two not working. In another, there are two working. The cooler screens that still cling to life are noticeably infirm. As best I can remember, animations and video have never played back smoothly, with over a second sometimes passing between frames.

The screens are supposed to show full-size ads (increasingly rare) or turn off (now the norm) when idle, and then as a customer approaches they are supposed to turn on and display a graphical representation of the products in the cooler that is similar to---but much worse than---what you would see if the cooler door was simply transparent. Since they were first installed this automatic transition has been a rocky one. Far from the smooth process shown in Cooler Screens demo videos, the real items as installed here in the desert (which look worse than the ones in the demo videos to begin with) noticeably struggle to update on cue. As you approach they either fail to notice at all or seem to lock up entirely for a few seconds, animations freezing, as they struggle to retrieve the images of stock they should display. What then appears is, more often than not, wrong.

Early on in the Cooler Screens experiment they were wrong in more subtle ways. They would display one product as out of stock when it was, in fact, physically present just behind the door. They would display three other products as in stock when there were none to be found. That was the peak performance the rear-camera-based intelligence would achieve. Today, it seems like the screens' basic information on cooler layout is no longer being maintained. They display the wrong products in the wrong places, sometimes even an entirely wrong category of products.

It's perhaps hard to understand how they work so poorly, unless you have seen any of the other innovations that the confluence of AdTech and digital signage have brought us. There seems to be some widespread problem where designers of digital advertising products completely forget about basic principles of mechanical reliability.

It is ironic, given the name and purpose of the cooler screens, that they are not at all cool. In fact they run very warm, hot to the touch. I cannot be entirely sure of my own senses but in a recent trip to a Walgreens I swear that I could feel the heat radiating from the Cooler Screens as I approached the section, the way you feel a masonry wall still warm from the day's sun on an evening walk. As a practical matter they are mounted to the outside of standard glass cooler doors. Yes, it is deeply ironic that behind the cooler screens are normal glass doors through which their cameras are allowed to see the contents the way that customers are not, but at least the door provides some insulation. Still, somewhere between the cooler refrigeration and the store air conditioning, the excess thermal output of the new cooler doors is being removed at Walgreens' expense.

I was a bit baffled at how hot they ran (and how loud the cooling fans can be) until I considered the impressive brightness of the displays. Cooler Screens does refer to them as vivid and engaging, and they must have thought that they needed to compete with store lighting to catch attention. They are bright, almost uncomfortably so when you are close up, and the wattage of the backlighting (and attendant heat dissipated) must be considerable. Based on some experience I have with small SoCs in warm environments, I suspect they have a thermal problem. The whole system probably worked fine on a bench, but once manufactured and mounted with one face against an insulated cooler door, heat accumulates to the point that the SoC goes into thermal throttling and gives up on real-time playback of 4K video. The punishing temperature of the display and computer equipment leads to premature failure, and the screens go dark.
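
For what it's worth, on an embedded Linux board you can watch this failure mode happen from userspace. Here is a rough sketch, assuming only the stock sysfs thermal and cpufreq interfaces; I have no idea what the Cooler Screens actually run, so treat the paths as generic rather than as anything specific to these units.

    # Rough sketch: watching an embedded Linux SoC for thermal throttling.
    # Assumes generic /sys/class/thermal and cpufreq interfaces, not
    # anything known about the actual Cooler Screens hardware.
    import glob
    import time

    def read_int(path):
        with open(path) as f:
            return int(f.read().strip())

    while True:
        # Thermal zones report millidegrees C; a panel bolted to an
        # insulated cooler door has nowhere to dump this heat.
        temps = [read_int(p) / 1000
                 for p in glob.glob("/sys/class/thermal/thermal_zone*/temp")]
        # When the governor throttles, the current clock falls below the
        # maximum and video playback starts dropping frames.
        cur = read_int("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
        top = read_int("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq")
        print(f"temps={temps} C, cpu0 at {100 * cur // top}% of max clock")
        time.sleep(5)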

At a level of personal observation, the manufacturing quality of the screens also seems poor. The fit and finish is lacking, the design much less refined than the ones Cooler Screens displays in its own marketing material. The problems may be more than skin-deep, based on Walgreens' reports of electrical problems leading to fire in more than one case. Cooler Screens contends that these cases were the result of failures on Walgreens' part; it can be hard to tell who to blame in these situations anyway. But design and software problems must be the fault of Cooler Screens and, besides, Walgreens doesn't even like them.


Walgreens pulled the plug, or at least tried, early this year. In February, Walgreens terminated the business partnership with Cooler Screens. Only one third of the planned displays had been installed: Walgreens had started to back out years earlier. In 2021, Roz Brewer took over as CEO of Walgreens. According to reporting, she "did not like how the screens looked" and "wanted them out of the stores." According to Cooler Screens themselves, Brewer described them as "'Vegas' in a derogatory way."

I am skeptical of corporations in general and especially of their executives, and I have a natural aversion to the kind of hero worship that brings people to refer to CEOs as "visionary." Still, how validating it is to find someone, anyone, in corporate leadership who sees what I see. Cooler Screens alleges that "when she realized that her opinion on how the doors looked was not enough to get out of the contract... she and her team began to fabricate excuses." As would I! They are so evidently terrible, I would be fabricating excuses in the way that one gets out of a bad date. "I am sorry about not installing the Cooler Screens on schedule but I have plans tomorrow with someone else who is not you." Perhaps I can install cooler screens in 500 more stores some other time? "Sure, call me, we'll work something out," I say, scrawling 505-FUCK-OFF on an old receipt.

Still, one does not typically start off a first date with a multi-year agreement in which one party commits $200 million in exchange for future revenue. Cooler Screens sued Walgreens, arguing that Walgreens has failed to perform on their 2018 contract by not installing additional screens. They're asking for an injunction to prohibit Walgreens from removing the currently installed units. Walgreens is contending that Cooler Screens failed to perform by installing screens that broke and occasionally caught fire; Cooler Screens retorts that the screens would have worked fine if Walgreens stores were in better condition.

The consumer, as always, is caught in the crossfire. As Cooler Screens continue to fail it seems unlikely that they will be repaired or replaced. As the lawsuit is ongoing, it seems unlikely that they will be removed. We just open every door and look behind it, thinking fondly of a bygone era when the cooler doors were clear and you could see through them. Now they are heavy and loud and uncomfortably warm. In the best case, we get to see a few scattered frames of a Coca Cola animation before they manage to present an almost shelf-like view of products that may or may not be in the cooler behind them.

Hope springs eternal. Earlier this year, Kroger announced the installation of Cooler Screens in 500 more of their stores, the result of a three-year pilot that apparently went better than the one at Walgreens. The screens have claimed Walgreens as their territory, leaving destruction in their wake. They are advancing into the Smith's next.


One of the strangest parts of Cooler Screens, to me, is Cooler Screens' insistence that consumers like them. I have never personally seen someone react to Cooler Screens with anything other than hostility. Everyday shoppers make rude remarks about the screens, speaking even in somewhat elevated tones, perhaps to be heard over the fans. Employees look sheepish. Everyone is in agreement that this is a bad situation.

"The retail experience consumers want and deserve," Cooler Screens says on their website. I would admire this turn of phrase if it was intended as a contemptful one. Cooler Screens promise to bring the experience of shopping online, "ease, relevance, and transparency." "Transparency" seems like a poor choice of language when promoting a product that infamously compares poorly to the transparent door it replaces. Relevance, too, is a bold claim given the unreliability of their inventory information. I suppose I don't have anything particularly mean to say about ease, although I have seen at least one elderly person struggle to open the heavy screens.

Still, "90%+ of consumers no longer prefer traditional glass cooler doors." What an intriguing claim! 90%+? How many plus? No longer prefer traditional glass? What exactly does that even mean?

Indeed, Cooler Screens presents a set of impressive numbers based on their market research. 94% of respondents say the screens impacted their shopping positively or neutrally (and the breakdown of positive/neutral in the graphic shows that this isn't even relying on a huge amount of neutral response; a good majority really did say positively). 82% said they found the content on the screens memorable. I certainly do find them memorable, but perhaps not how Cooler Screens intends.

I struggle to reconcile these performance numbers with the reality I have observed. Perhaps Albuquerque is a horrible backwater of Cooler Screens outcomes; I have not thoroughly inspected many out-of-town Walgreens. Maybe there exists, somewhere back East, a sort of Walgreens paradise where the screens are all in working order and actually look good and people like them. Or perhaps the surveys backing this data were only ever collected in the first two days following installation at Walgreens locations adjacent to dispensaries holding free pre-roll promotions. I don't know, because Cooler Screens shares no information on the methodology used to collect these metrics.

What I can tell you is this: customer experience data collected by Cooler Screens seems to reflect some world other than the one in which I exist.

I wish I lived there, the Walgreens must be exceptionally well-stocked. Out here, I am hoping the staff have fabricated crude signs so that I don't have to manually open every door. I am starting to memorize Walgreens shelf plans as an adaptation. I am nodding and appropriately chuckling when a stranger says "remember when you could see through these?" as they fight against retail innovation to purchase one of the products these things were supposed to promote. You cannot say they aren't engaged, in a sense.

--------------------------------------------------------------------------------

>>> 2023-10-15 go.com

Correction: a technical defect in my Enterprise Content Management System resulted in the email having a subject that made it sound like this post would be about the classic strategy game Go. It is actually about a failed website. I regret the error; the responsible people have been sacked. The link in the email was also wrong but I threw in a redirect so I probably would have gotten away with the error if I weren't telling you about it now.


The late 1990s were an exciting time in technology, at least if you don't look too late. The internet, as a consumer service, really hit its stride as major operating systems incorporated network stacks and shipped with web browsers, dial-up consumer ISPs proliferated, and the media industry began to view computers as a new medium for content delivery.

It was also a chaotic time: the internet was very new to consumers, and no one was quite sure how best to structure it. "Walled garden" services like AOL and Compuserve had been the first to introduce most people to internet usage. These early providers viewed the "open" internet of standard protocols as more commercial or academic, and less friendly to consumers. They weren't entirely wrong, although they clearly had other motives as well to keep users within their properties. Whether for good or for ill, these early iterations of "the internet" as a social concept presented a relatively tightly curated experience.

Launching AOL's bespoke browser, for example, one was presented with a "home page" that gave live content like news headlines, features like IM and search, and then a list of websites neatly organized into categories and selected by human judgment. To make a vague analogy, the internet was less like an urban commercial district and more like a mall: there existed the same general concept of the internet connecting you to services operated by diverse companies, but there was a management company guiding and policing what those services were. There was less crime and vice, but also just less in general.

By the mid-'90s, the dominance of these closed networks was faltering. Dial-up access to "the internet proper" became readily available from incumbent players like telephone companies. Microsoft invested heavily in the Information Superhighway, launching MSN as a competitor to AOL that provided direct access to the internet through a lightly managed experience with some of the friendliness of AOL but the power of the full internet. Media companies tended to prefer the open internet because of the lower cost of access and freedom from constraints imposed by internet operators. There was more crime, but also more vice, and we know today that vice is half the point of the internet anyway [1].

There was a problem with the full-on internet experience, though: where to start? The internet itself is more like a telephone than a television---it doesn't give you anything until you dial a number. Some attacked this problem the same way the telephone industry did, by publishing books with lists of websites. "As easy to use as your telephone book," the description of The Internet Directory (1993) claims, a statement that tells us a lot about the consumer information experience of the time.

From a modern perspective the whole book thing seems nonsensical... to deliver information about the internet, why not use the internet? That's what services like AOL had done with their home pages. On the open internet, anyone could offer users a home page, regardless of their ISP. It was a sound idea at the time: Yahoo built its homepage into a business that stayed solid for years. Microsoft's MSN was never quite the independent commercial success of Yahoo, but has the unusual distinction of being one of the few other homepages that's still around today.

Much like the closed services that preceded them, homepage or portal providers tried to give their users the complete internet experience in one place. That meant that email was a standard offering, as was search. Search is unsurprising but email seems a bit odd, when you could use any old email service. But remember, the idea of using an independent service just for email was pretty much introduced to the masses by Gmail. Before Google's breakout success, most people used the email account provided by either their ISP (for example Earthlink and Qwest addresses) or their homepage (Yahoo or Hotmail) [2].

Search quickly became an important factor in homepage success as well, being a much easier experience than browsing through a topic tree. It's no surprise, then, that the most successful homepage companies endure (at least to some degree) as search providers today.

Homepages started out as an internet industry concept, but the prominence of Yahoo and MSN in this rapidly expanding new media was a siren call to major, incumbent media companies. Whether by in-house development or acquisition, they wanted to have their own internet portals. They didn't tend to succeed.

A notable early example is Pathfinder, Time Warner's contender. Pathfinder developed some content in house but mostly took advantage of its shared ownership with Time Magazine to present exclusive news and entertainment. Time Warner put a large team and an ample budget behind Pathfinder, and it utterly failed. Surviving only from '94 to '99, Pathfinder is one of the notable busts of the Dot Com era. It had just about zero impact besides consuming a generous portion of Time Warner's money.

There were other efforts in motion, though. Paul Allen, better remembered today as the owner of several professional sports teams and even more yachts [3], had a side business in the mid-'90s called Starwave. Starwave developed video games and had some enduring impact in that industry through their early massively multiplayer game Castle Infinity. More importantly, though, Starwave was a major early web design firm. Web design, in say '95, was rather different from what it is today. There were no major off-the-shelf content management systems. Websites were either maintained, per-page, by hand, or generated by an in-house CMS. Websites with large amounts of regularly-updated content, typical of news and media companies, presented a real technical challenge and required a staff of engineers. Starwave provided those engineers, and they scored a very big client: the Walt Disney Company.

In 1996, Disney had just acquired ownership of Capital Cities Communications. You probably haven't heard of Capital Cities, but you have heard of their two major subsidiaries, ABC and ESPN. Disney's new subsidiary Walt Disney Television was a cable giant, and one focused on news and sports, two industries with a lot of rapidly updating content. The climate of the time demanded that they become not only major cable channels, but also major websites. Near-live sports scores, even just returns posted shortly after the end of games, were a big innovation in a time when you had to wait for scores to come around on the radio, or for the paper to come the next morning.

Starwave was a successful internet firm, and as was the way for successful internet companies even in the '90s, it had an Exit. Their biggest client, Disney, bought them.

At nearly the same time, Disney took interest in another burgeoning internet company: search engine Infoseek. Infoseek was one of the major search engines of the pre-Google internet, not quite with the name recognition of Ask Jeeves but prominent because of its default status in Netscape Navigator. Disney acquired Infoseek in 1999.

Here I have to take a brief break to disclose that I have lied to you for the sake of simplicity: What I'm about to describe actually started as a joint venture prior to Disney's acquisition of Starwave and Infoseek, but only very shortly. I suspect that M&A negotiations were already in progress when the joint venture was established, so we'll just say that Disney bought these companies and then the rest of this article happened. Okay? I'm sorry. '90s tech industry M&A happened so fast in so many combinations that it's often difficult to tell a tight story.

Disney was far from immune to the homepage mania that brought us Pathfinder. If they were going to have popular websites, they needed a way to get consumers to them, and "type espn.com into your web browser" was still a little iffy as an advertising call to action. A homepage of their own would provide the easiest path for users, and give Disney leverage to build their other internet projects. Disney got a homepage of their own: The Go Network, go.com.

Remember these acquisitions? Yahoo was a popular home page, and Yahoo had a search engine. Well, now Disney had a search engine. They had Starwave, developer of their largest web properties, on board as well. Disney had a plan: they took every internet venture under their corporate umbrella and combined them into what they hoped would be a dot com giant: The Go.com Company.

Disney's venture to combine their internet properties was impressively complete, especially considering their slow pace of online change today. Just like Pathfinder's leverage of Time, Disney would use ESPN and ABC as ready sources of first-party content. Over the span of 1999, every Disney web property became just part of the go.com behemoth. And go.com would not be behind Yahoo on features: it had search, and you can bet it had email. Perhaps the only major Internet feature it was missing was instant messaging, but that wasn't yet quite the killer app it would become in the '00s and Disney is famously allergic to IM (due to the abuse potential) anyway.

In true '90s fashion, go.com even got a volunteer-curated directory of websites in the style of DMOZ. These seem a bit odd today but were popular at the time, sort of the open internet response to AOL's tidy shopping mall.

Pathfinder made it from '94 to '99. Launched in '99, go.com was a slow start but a fast finish. In January of '00, they announced a pivot. "Internet site will quit portal race," the lede of a 2000 AP piece starts. Maybe Disney saw the fallout of Pathfinder; in any case, by '99 the writing was on the wall for the numerous homepage contenders that hadn't yet gained traction. Part of the reason was Google: Google took a gamble that consumers didn't really want news and a directory all in one place; they just wanted search. For novice internet users, Google might have actually been more approachable than "easy" options like Yahoo, due to its extremely minimal design. Most home pages were, well, noisy [4].

Go.com's 21st century strategy would be to focus on entertainment. It might seem pretty obvious that Disney, an entertainment behemoth, should focus its online efforts on entertainment. But it was a different time! The idea of the internet being a bunch of different websites was still sort of uncomfortable; the industry wanted to build the website, not just a website. Of course the modern focus on "everything apps," motivated mostly by the more recent success of this "homepage" concept in the form of mobile apps in China, shows that business ideas are evergreen.

Go.com's new focus didn't go well either. Continuing their impressively rapid pace of change for the worse (a true model of "move fast and break things"), go.com suffered a series of scandals. First, the go.com logo was suspiciously similar to the logo of similarly named but older homepage competitor goto.com. A judge agreed the resemblance was more than coincidental and ordered Disney to pay $21.5 million in damages. Almost in the same news cycle, an executive vice president of Infoseek, kept on as a Disney executive, traveled across state lines in pursuit of unsavory activities with a 13-year-old. In a tale perhaps literally as old as the internet, said 13-year-old was a good deal older than 13 and, even more to the EVP's dismay, a special agent of the FBI.

The widespread news coverage of the scandal was difficult for Disney's famously family friendly image. Newspaper articles quoted anonymous Starwave, Infoseek, and Disney employees describing the "high-flying," "high-testosterone" culture and a company outing to a strip club. "Everyone is going for gold. It's causing people to live in the present and disregard actions that could lead to real harm," one insider opined. The tone of the coverage would have fit right into an article about a collapsed crypto company were it not for a trailing short piece about upstart amazon.com introducing a multi-seller marketplace called "zShops."

The rapid decline seemed to continue. In January 2001, just another year later, Disney announced the end of go.com. They would charge off $800 million in investment and lay off 400. Go.com had been the ninth most popular website but a commercial failure, truly a humbling reminder of the problems of online monetization.

Here, though, things take a strange turn. After go.com's rapid plummet it achieved what we might call a zombie state. Just a couple of months later, in March, Disney announced a stay of execution for go.com. The $800 million had been marked down and the 400 employees laid off, but now that go.com had no staff and no budget to speak of, it just didn't cost that much to run.

Ironically, considering the trademark suit a year earlier, Disney's cost cutting included a complete shutdown of the former Infoseek search engine. Its replacement: goto.com, providing go.com search results under contract. In a Bloomberg piece, one analyst admits "I don't understand it." Another, Jeffrey Vilensky, gets at exactly what brings me to this topic: "People have definitely heard of Go, although there's been so many rounds of changes that people probably don't understand what it is or what to do with it at this point." Well, I'm not sure that Disney did either, because what they did was evidently to abandon it in place.

The odd thing about go.com is that, like Yahoo and MSN, it has survived to the modern age. But not as a homepage. The expenses, low as they were, must have added up, because Disney ended the go.com email service in 2010 and the search-and-news homepage itself in 2013.

But it's still there: to this day, nearly every Disney website is a subdomain of go.com. Go.com itself, apparently the top level of Disney's empire, is basically nothing. A minimally designed page with a lazily formatted list of all of the websites under it. Go.com used to be a homepage, a portal, a search engine, the capital of Disney's empire. Today, it's tech debt.

Go.com is not quite eternal. As early as 2014 some motion had apparently begun to move away from it. ESPN no longer uses espn.go.com; they now just use espn.com (which for many years had been a 301 redirect to the former). ABC affiliate stations have stopped using the abclocal.go.com domain they used to be awkwardly organized under, but the website of ABC News itself remains abcnews.go.com. I mostly encounter this oddity of the internet in the context of my unreasonable love of themed entertainment; the online presence of the world's most famous theme parks is anchored at disneyland.disney.go.com and disneyworld.disney.go.com.

This is an odd thing, isn't it, in the modern context where domain hierarchies are often viewed as poison to the consumer internet user. There are affordances to modernity: disney.com is Disney's main website even though a large portion of the links are to subdomains of disney.go.com. Disney.go.com itself actually redirects to disney.com, reversing the historic situation. Newer Disney websites, like Disney Plus, get their own top-level domains, as do all of the companies acquired by Disney after the go.com debacle.

But go.com is still a critical structural element of the Walt Disney online presence.

So what's up with that? A reading between the lines of Wikipedia and a bit of newspaper coverage suggests one motivation. Go.com had a user profile system that functioned as an early form of SSO for various Disney properties, and it has apparently been a protracted process to back out of that situation. I assume they relied on the shared go.com domain to make cookies available to their various properties. That system was apparently replaced when ESPN shifted to espn.com in 2016, but perhaps it's still in use by the Disney Resorts properties? I won't claim that technologies like OIDC or SAML are straightforward (a large portion of my day job is troubleshooting them), but still, over 20 years should be long enough to make a transition to a cross-domain SSO architecture.
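
My guess, and it is only a guess, is that the original scheme leaned on a session cookie scoped to the shared parent domain, which the browser then sends along to every subdomain. A minimal sketch, with the cookie name and value made up:

    # Sketch of a parent-domain cookie, the mechanism I assume the old
    # go.com shared login relied on. Cookie name and value are invented.
    from http import cookies

    sso = cookies.SimpleCookie()
    sso["go_session"] = "opaque-token-issued-at-login"
    sso["go_session"]["domain"] = ".go.com"  # sent to espn.go.com, disney.go.com, ...
    sso["go_session"]["path"] = "/"
    print(sso.output())
    # prints something like:
    # Set-Cookie: go_session=opaque-token-issued-at-login; Domain=.go.com; Path=/

Move a property off of go.com and that cookie simply stops arriving, which is exactly the sort of thing that keeps a domain migration on the back burner for twenty years.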

There are rumors that the situation is related to SEO, that Disney fears the loss of reputation from moving their properties to new domains. But when ESPN moved they dismissed that claim, and it doesn't seem that likely given the range of SEO techniques available to handle content moves. Do they worry about consumer behavior? Sure, people don't like it when domain names change [5], but is anyone really typing "disneyland.disney.go.com" into their URL bar in the era of unified search? There are bookmarks for sure, but 20 years is a long timeline to transition via redirects.

I assume it's just been a low priority. The modern reality of Disney and go.com is idiosyncratic and anachronistic, but it doesn't cause many problems. Search easily gets you to the right place, and obvious domain names (like disneyland.com) are redirects. Go.com is an incredible domain name, undoubtedly worth millions today, but Disney could probably never sell it; there would always be too many concerns around old bookmarks, missed links in Disney marketing materials, and so on.

And so here we are, go.com still the sad shade of a major internet portal. Join with me for a little bit of ceremony, a way that we honor our shared internet history and its particular traditions. Set your homepage to go.com.

[1] One is tempted to make a connection to the largely mythical story that VHS succeeded over Betamax because of its availability to the pornography industry. We know this urban legend to be questionable in part because there were adult films sold on Betamax; not as many as on VHS, but probably for the same reasons there weren't as many Betamax releases of any type. This invites the question: was smut a factor in the victory of the open internet over closed providers? Look forward to a lengthier investigation of this topic on my onlyfans.

[2] The lines here are a bit blurrier than I present them, because most major homepage providers had partnerships with ISPs to sell internet service under their brand. MSN, for example, had some presence as a pseudo-ISP because of Microsoft's use of Windows defaults to market it. This whole idea of defaults being an important aspect of consumer choice for internet homepages is, ironically, once again being litigated in federal court as I write.

[3] This is a joke. Paul Allen was a founder of Microsoft.

[4] Ironically, Google themselves would launch their own home page product in 2005. It was called iGoogle, an incredibly, tragically 2005 name. Its differentiator was customization; but other homepage websites like Yahoo also had customization by that point and it doesn't seem to have been enough to overcome the general tide against this type of website. Google discontinued it in 2013. That's actually still a pretty good lifespan for a Google product.

[5] see, for example, my ongoing complaints about Regrid, a useful service for cadastral data that I sometimes have a hard time physically finding because they are on at least their third name and domain.

--------------------------------------------------------------------------------