 _____                   _                  _____            _____       _ 
|     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
|   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
|_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2023-11-04 nuclear safety

Nuclear weapons are complex in many ways. The basic problem of achieving criticality is difficult on its own, but deploying nuclear weapons as operational military assets involves yet more challenges. Nuclear weapons must be safe and reliable, even with the rough handling and potential for tampering and theft that are intrinsic to their military use.

Early weapon designs somewhat sidestepped the problem: the weapons were stored in an inoperable condition. During the early phase of the Cold War, most weapons were "open pit" designs. Under normal conditions, the pit was stored separately from the weapon in a criticality-safe canister called a birdcage. The original three nuclear weapons stockpile sites (Manzano Base, Albuquerque NM; Killeen Base, Fort Hood TX; Clarksville Base, Fort Campbell KY) included special vaults to store the pits and assembly buildings where the pits would be installed into weapons. The pit vaults were designed not only for explosive safety but also to resist intrusion; the ability to unlock the vaults was reserved to a strictly limited number of Atomic Energy Commission personnel.

This method posed a substantial problem for nuclear deterrence, though. The process of installing the pits in the weapons was time-consuming, required specially trained personnel, and wasn't particularly safe. Particularly after the dawn of ICBMs, a Soviet nuclear attack would require a rapid response, likely faster than weapons could be assembled. The problem was particularly evident when nuclear weapons were stockpiled at Strategic Air Command (SAC) bases for faster loading onto bombers. Each SAC base required a large stockpile area complete with hardened pit vaults and assembly buildings. Far more personnel had to be trained to complete the assembly process, and to do it faster. Opportunities for mistakes that made weapons unusable, killed assembly staff, or contaminated the environment abounded.

As nuclear weapons proliferated, storing them disassembled became distinctly unsafe. It required personnel to perform sensitive operations with high explosives and radioactive materials, all under stressful conditions. It required that nuclear weapons be practical to assemble and disassemble in the field, which prevented strong anti-tampering measures.

The W-25 nuclear warhead, an approximately 220 pound, 1.7 kT weapon introduced in 1957, was the first to employ a fully sealed design. A relatively small warhead built for the Genie air-to-air missile, the W-25 was deployed in the several thousands, with units stored fully assembled at Air Force sites. The first version of the W-25 was, by the AEC's own admission, unsafe to transport and store. It could detonate by accident, or it could be stolen.

The transition to sealed weapons changed the basic model of nuclear weapons security. Open weapons relied primarily on the pit vault, a hardened building with a bank-vault door, as the authentication mechanism. Few people had access to this vault, and two-man policies were in place and enforced by mechanical locks. Weapons stored assembled, though, lacked this degree of protection. The advent of sealed weapons presented a new possibility: the security measures could be installed inside the weapon itself.

Safety elements of nuclear weapons protect against both accidents and intentional attacks on the weapon. For example, from early on in the development of sealed implosion-type weapons, "one-point safety" became common (it is now universal). One-point safe weapons have their high explosive implosion charge designed so that a detonation at any one point in the shell will never result in a nuclear yield. Instead, the imbalanced forces in the implosion assembly will tear it apart. This improper detonation produces a "fizzle yield" that will kill bystanders and scatter nuclear material, but produces orders of magnitude less explosive force and radiation dispersal than a complete nuclear detonation.

The basic concept of one-point safety is a useful example to explain the technical concepts that followed later. One-point safety is in some ways an accidental consequence of the complexity of implosion weapons: achieving a full yield requires an extremely precisely timed detonation of the entire HE shell. Weapons relied on complex (at the time) electronic firing mechanisms to achieve the required synchronization. Any failure of the firing system to produce a simultaneous detonation results in a partial yield because of the failure to achieve even implosion. One-point safety is essentially just a product of analysis (today, computer modeling) to ensure that detonation of a single module of the HE shell will never result in a nuclear yield.

This one-point scenario could occur because of outside forces. For example, one-point safety is often described in terms of enemy fire. Imagine that, in combat conditions, anti-air weapons or even rifle fire strike a nuclear weapon. The shock forces will reach one side of the HE shell first. If they are sufficient to detonate it (not an easy task as very insensitive explosives are used), the one-point detonation will destroy the weapon with a fizzle yield.

We can also examine one-point safety in terms of the electrical function of the weapon. A malfunction or tampering with a weapon might cause one of the detonators to fire. The resulting one-point detonation will destroy the weapon. Achieving a nuclear yield requires that the shell be detonated in synchronization, which naturally functions as a measure of the correct operation of the firing system. Correctly firing a nuclear weapon is complex and difficult, requiring that multiple components be armed and functioning correctly. This itself serves as a safety mechanism, since correct operation, difficult to achieve by intention, is unlikely to happen by accident.
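
To make the timing argument concrete, here is a minimal sketch in Python. The function name, the microsecond tolerance, and the numbers are all assumptions invented for illustration; they show only the shape of the idea, not anything about a real firing system.

    # Illustrative sketch only: the tolerance and values below are made up.
    # The point is that a full yield demands near-simultaneous detonation,
    # so the timing requirement itself acts as a crude correctness check.

    def detonation_synchronized(fire_times_us: list[float],
                                tolerance_us: float = 0.1) -> bool:
        """True only if every detonator fired within the tolerance window."""
        return max(fire_times_us) - min(fire_times_us) <= tolerance_us

    # A single stray detonator firing early (or alone) fails the check.
    print(detonation_synchronized([10.00, 10.02, 10.05, 10.03]))  # True
    print(detonation_synchronized([10.00, 10.02, 10.05, 250.0]))  # False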

Like most nuclear weapons, the W-25 received a series of modifications or "mods." The second, mod 1 (they start at 0), introduced a new safety mechanism: an environmental sensing device (ESD). The environmental sensing device allowed the weapon to fire only if certain conditions were satisfied, conditions that were indicative of the scenario the weapon was intended to fire in. The details of the ESD varied by weapon and probably even by application within a set of weapons, but the ESD generally required things like moving a certain distance at a certain speed (determined by inertial measurements) or a certain change in altitude in order to arm the weapon. These measurements ensured that the weapon had actually been fired on a missile or dropped as a bomb before it could arm.
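
As a rough illustration of the kind of check an environmental sensing device performs, consider the sketch below. The field names, thresholds, and the launched-or-dropped logic are hypothetical stand-ins; real ESD criteria are weapon-specific and not public.

    # Illustrative sketch only: hypothetical conditions and thresholds.
    from dataclasses import dataclass

    @dataclass
    class EnvironmentalReadings:
        peak_acceleration_g: float   # e.g. boost acceleration, measured inertially
        sustained_seconds: float     # how long that acceleration was held
        altitude_change_m: float     # climb or drop since release

    def environment_permits_arming(r: EnvironmentalReadings) -> bool:
        """Hypothetical check: do the readings look like an actual delivery?"""
        launched = r.peak_acceleration_g > 10 and r.sustained_seconds > 5
        dropped = r.altitude_change_m < -3000   # sustained fall after bomb release
        return launched or dropped

    # A weapon sitting in storage or on a truck never sees such a profile,
    # so the ESD never sends its arming signal.
    print(environment_permits_arming(EnvironmentalReadings(1.0, 0.0, 0.0)))      # False
    print(environment_permits_arming(EnvironmentalReadings(12.0, 20.0, 15000)))  # True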

The environmental sensing device provides one of two basic channels of information that weapons require to arm: indication that the weapon is operating under normal conditions, like flying towards a target or falling onto one. This significantly reduces the risk of unintentional detonation.

There is a second possibility to consider, though, that of intentional detonation by an unauthorized user. A weapon could be stolen, or tampered with in place as an act of terrorism. To address this possibility, a second basic channel of input was developed: intent. For a weapon to detonate, it must be proven that an authorized user has the intent to detonate the weapon.

The implementation of these concepts has varied over time and by weapon type, but from unclassified materials a general understanding of the architecture of these safety systems can be developed. I decided to write about this topic not only because it is interesting (it certainly is), but also because many of the concepts used in the safety design of nuclear weapons are also applicable to other systems. Similar concepts are used, for example, in life-safety systems and robotics, fields where unintentional operation or tampering can cause significant harm to life and property. Some of the principles are unsurprisingly analogous to cryptographic methods used in computer security, as well.

The basic principle of weapons safety is called the strong link, weak link principle, and it is paired with the related idea of an exclusion zone. To understand this, it's helpful to remember the W-25's sealed design. For open weapons, a vault was used to store the pit. In a sealed weapon, the vault is, in a sense, built into the weapon. It's called the exclusion zone, and it can be thought of as a tamper-protected, electrically isolated chamber that contains the vital components of the weapon, including the electronic firing system.

In order to fire the weapon, the exclusion zone must be accessed, in that an electrical signal needs to be delivered to the firing system. Like the bank vaults used for pits, there is only one way into the exclusion zone, and it is tightly locked. An electrical signal must penetrate the energy barrier that surrounds the exclusion zone, and the only way to do so is by passing through a series of strong links.

The chain of events required to fire a nuclear weapon can be thought of like a physical chain used to support a load. Strong links are specifically reinforced so that they should never fail. We can also look at the design through the framework of information security, as an authentication and authorization system. Strong links are strict credential checks that will deny access under all conditions except the one in which the weapon is intended to fire: when the weapon is in suitable environmental conditions, has received an authorized intent signal, and the fuzing system calls for detonation.

One of the most important functions of the strong link is to confirm that correct environmental and intent authorization has occurred. The environmental sensing device, installed in the body of the weapon, sends its authorizing signal when its conditions are satisfied. There is some complexity here, though. One of the key concerns in weapons safety was the possibility of stray electrical signals, perhaps from static or lightning or contact with an aircraft electrical system, causing firing. The strong link needs to ensure that the authorization signal received really is from the environmental sensing device, and not a result of some electrical transient.

This verification is performed by requiring a unique signal. The unique signal is a digital message consisting of multiple bits, even when only a single bit of information (that environmental conditions are correct) needs to be conveyed. The extra bits serve only to make the message complex and unique. This way, any transient or unintentional electrical signal is extremely unlikely to match the correct pattern. We can think of this type of unique signal as an error detection mechanism, padding the message with extra bits just to verify the correctness of the important one.
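
A small sketch may help show why a multi-bit unique signal defeats transients. The 24-bit width and the particular pattern here are made up for illustration; the point is only that a random electrical event is vanishingly unlikely to reproduce the exact expected pattern.

    # Illustrative sketch only: hypothetical pattern and width.
    import secrets

    UNIQUE_SIGNAL = 0b101101001110010110100011   # the one accepted pattern

    def strong_link_accepts(received: int) -> bool:
        """The strong link passes the signal only on an exact pattern match."""
        return received == UNIQUE_SIGNAL

    # A random 24-bit transient matches with probability 1 / 2**24,
    # roughly one in 16.8 million.
    print(strong_link_accepts(secrets.randbits(24)))   # almost certainly False
    print(strong_link_accepts(UNIQUE_SIGNAL))          # True

In the real devices this discrimination is done by dedicated circuitry rather than software, but the reasoning is the same: more bits make an accidental match less plausible.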

Intent is a little trickier, though. It involves human input. The intent signal comes from the permissive action link, or PAL. Here, too, the concept of a unique signal is used to enable the weapon, but this time the unique signal isn't only a matter of error detection. The correct unique signal is a secret, and must be provided by a person who knows it.

Permissive action links are fascinating devices from a security perspective. The strong link is like a combination lock, and the permissive action link is the key or, more commonly, a device through which the key is entered. There have been many generations of PALs, and we are fortunate that a number of older, out-of-use PALs are on public display at the National Museum of Nuclear Science and History here in Albuquerque.

Here we should talk a bit about the implementation of strong links and PALs. While newer designs are likely more electronic, older designs were quite literally combination locks: electromechanical devices where a stepper motor or solenoid had to advance a clockwork mechanism in the correct pattern. It was a lot like operating a safe lock by remote. The design of PALs reflected this. Several earlier PALs are briefcases that, when opened, reveal a series of dials. An operator has to connect the PAL to the weapon, turn all the dials to the correct combination, and then press a button to send the unique signal to the weapon.

Later PALs became very similar to the key loading devices used for military cryptography. The unique signal is programmed into volatile memory in the PAL. To arm a weapon, the PAL is connected, an operator authenticates themselves to the PAL, and then the PAL sends the stored unique signal. Like a key loader, the PAL itself incorporates measures against tampering or theft. A zeroize function is activated by tamper sensors or manually and clears the stored unique signal. Too many failures by an operator to authenticate themselves also result in the stored unique signal being cleared.
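
The key-loader-like behavior described above can be sketched as follows. Everything here, from the attempt limit to the method names, is a hypothetical illustration of the concepts (a stored secret, zeroize on tampering, lockout after failed attempts), not a description of any actual PAL.

    # Illustrative sketch only: all names and limits are assumptions.
    class ToyPAL:
        """Toy model of a PAL-style device: holds a secret unique signal in
        volatile storage, releases it only to an authenticated operator, and
        zeroizes itself on tampering or repeated failures."""

        MAX_FAILURES = 3

        def __init__(self, operator_code: str, unique_signal: bytes):
            self._operator_code = operator_code
            self._unique_signal = unique_signal   # volatile: lost if power is cut
            self._failures = 0

        def zeroize(self) -> None:
            # Triggered by tamper sensors, a manual switch, or the lockout below.
            self._unique_signal = None

        def release_signal(self, entered_code: str):
            if self._unique_signal is None:
                return None                        # already zeroized
            if entered_code != self._operator_code:
                self._failures += 1
                if self._failures >= self.MAX_FAILURES:
                    self.zeroize()
                return None
            return self._unique_signal             # delivered to the weapon's strong link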

Much like key loaders, PALs developed into more sophisticated devices over time, with the ability to store and manage multiple unique signals, to rekey weapons with new unique signals, and to authenticate the operator by more complex means. A late PAL-adjacent device on public display is the UC1583, a Compaq laptop docked to an electronic interface. This was actually a "PAL controller," meaning that it was built primarily for rekeying weapons and managing sets of keys. By this later era of nuclear weapons design, the PAL itself was typically integrated into communications systems on the delivery vehicle and provided a key to the weapon based on authorization messages received directly from military command authorities.

The next component to understand is the weak link. A strong link is intended to never fail open. A weak link is intended to easily fail closed. A very basic type of weak link would be a thermal fuse that burns out in response to high temperatures, disconnecting the firing system if the weapon is exposed to fire. In practice there can be many weak links and they serve as a protection against both accidental firing of a damaged weapon and intentional tampering. The exclusion zone design incorporates weak links such that any attempt to open the exclusion zone by force will result in weak links failing.

A special case of a weak link, or at least something that functions like a weak link, is the command disable feature on most weapons. Command disable is essentially a self-destruct capability. Details vary but, on the B61 for example, the command disable is triggered by pulling a handle that sticks out of the control panel on the side of the weapon. The command disable triggers multiple weak links, disabling various components of the weapon in hard-to-repair ways. An unauthorized user, without the expertise and resources of the weapons assembly technicians at Pantex, would find it very difficult to restore a weapon to working condition after the command disable was activated. Some weapons apparently had an explosive command disable that destroyed the firing system, but from publicly available material it seems that a more common design involved the command disable interrupting the power supply to volatile storage for unique codes and configuration information.

There are various ways to sum up these design features. First, let's revisit the overall architecture. Critical components of nuclear weapons, including both the pit itself and the electronic firing system, are contained within the exclusion zone. The exclusion zone is protected by an energy barrier that isolates it from mechanical and electrical influence. For the weapon to fire, firing signals must pass through strong links and weak links. Strong links are designed to never open without a correct unique signal, and to fail open only in extreme conditions that would have already triggered weak links. Weak links are designed to easily fail closed in abnormal situations like accidents or tampering. Both strong links and weak links can receive human input, strong links to provide intent authorization, and weak links to manually disable the weapon in a situation where custody may be lost.
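
Restated as a sketch: firing requires every strong link to see its correct unique signal while every weak link is still intact. The two-signal structure and the checks below are a simplified assumption for illustration only.

    # Illustrative sketch only: simplified firing-chain logic.
    def weapon_may_fire(env_signal: int, intent_signal: int, weak_links_intact: bool,
                        expected_env: int, expected_intent: int) -> bool:
        # Strong links: open only on exact unique signals (environment and intent).
        strong_links_open = (env_signal == expected_env and
                             intent_signal == expected_intent)
        # Weak links: accidents or tampering fail them closed, blocking firing.
        return strong_links_open and weak_links_intact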

The physical design of nuclear weapons is intricate and incorporates many anti-tamper and mechanical protection features, and their high explosives and toxic, radioactive materials make for hazardous working conditions. This makes the disassembly of modern nuclear weapons infamously difficult; a major challenge in the reduction of the nuclear stockpile is the backlog of weapons waiting for qualified technicians to take them apart. Command disable provides a convenience feature for this purpose, since it allows weapons to be written off the books before they can be carefully dismantled at one of very few facilities (often just one) capable of doing so. As an upside, these same properties make it difficult for an unauthorized user to circumvent the safety mechanisms in a nuclear weapon, or repair one in which weak links have failed.

Accidental arming and detonation of a nuclear weapon should not occur because the weapon will only arm on receipt of complex unique signals, including an intent signal that is secret and available only to a limited number of users (today, often only to the national command authority). Detonation of a weapon under extreme conditions like fire or mechanical shock is prevented by the denial of the strong links, the failure of the weak links, and the inherent difficulty of correctly firing a nuclear weapon. Compromise of a nuclear weapon, or detonation by an unauthorized user, is prevented by the authentication checks performed by the strong links and the tamper resistance provided by the weak links. Cryptographic features of modern PALs enhance custodial control of weapons by enabling rotation and separation of credentials.

Modern PALs particularly protect custodial control by requiring, before the weapons can be armed, keys that are unknown to the personnel handling them. These keys must be received from the national command authority as part of the order to attack, making communications infrastructure a critical part of the nuclear deterrent. It is for this reason that the United States has so many redundant, independent mechanisms of delivering attack orders, ranging from secure data networks to radio equipment on Air Force One capable of direct communication with nuclear assets.

None of this is to say that the safety and security of nuclear weapons is perfect. In fact, historical incidents suggest that nuclear weapons are sometimes surprisingly poorly protected, considering the technical measures in place. The widely reported story that the enable code for the Minuteman warhead's PAL was 00000000 is unlikely to be true as it was originally reported [1], but that's not to say that there are no questions about the efficacy of PAL key management. US weapons staged in other NATO countries, for example, have raised perennial concerns about effective custody of nuclear weapons and the information required to use them.

General military security incidents endanger weapons as well. Widely reported disclosures of nuclear weapon security procedures by online flash card services and even Strava do not directly compromise these on-weapon security measures but nonetheless weaken the overall, multi-layered custodial security of these weapons, making other layers more critical and more vulnerable.

Ultimately, concerns still exist about the design of the weapons themselves. Most of the US nuclear fleet is very old. Many weapons are still in service that do not incorporate the latest security precautions, and efforts to upgrade these weapons are slow and endangered by many programmatic problems. Only in 1987 was the entire arsenal equipped with PALs, and in 2004 all weapons were equipped with cryptographic rekeying capability.

PALs, or something like them, are becoming the international norm. The Soviet Union developed similar security systems for their weapons, and allies of the United States often use US-designed PALs or similar under technology sharing agreements. Pakistan, though, remains a notable exception. There are still weapons in service in various parts of the world without this type of protection. Efforts to improve that situation are politically complex and run into many of the same challenges as counterproliferation in general.

Nuclear weapons are perhaps safer than you think, but that's certainly not to say that they are safe.

[1] This "popular fact" comes from an account by a single former missileer. Based on statements by other missile officers and from the Air Force itself, the reality seems to be complex. The 00000000 code may have been used before the locking mechanism was officially placed in service, during a transitional stage when technical safeguards had just been installed but missile crews were still operating on procedures developed before their introduction. Once the locking mechanism was placed in service and missile crews were permitted to deviate from the former strict two-man policy, "real" randomized secret codes were used.

--------------------------------------------------------------------------------

>>> 2023-10-22 cooler screens

Audible even over the squeal of an HVAC blower with a suffering belt, the whine of small, high velocity fans pervades the grocery side of this Walgreens. Were they always this loud? I'm not sure; some of the fans sound distinctly unhealthy. Still, it's a familiar kind of noise to anyone who regularly works around kilowatt quantities of commercial IT equipment. Usually, though, it's a racket set aside for equipment rooms and IDF closets---not the refrigerator aisle.

The cooler screens came quickly and quietly. Walgreens didn't seem interested in promoting them. There was no in-store signage, no press announcements that I heard of. But they were apparently committed. I think I first encountered them in Santa Fe, and I laughed at this comical, ridiculous-on-its-face "innovation" in retailing, falsely confident that it would not cross the desert to Albuquerque's lower income levels. "What a ridiculous idea," I said, walking up to a blank cooler. The screens turn on in response to proximity, showing you an image of what is (supposedly) inside of the cooler, but not quickly enough that you don't get annoyed with not being able to just see inside.

I would later find that these were the good days, the first phase of the Cooler Screen's invasion, when they were both limited in number and merely mediocre. Things would become much worse. Today, the Cooler Screens have expanded their territory and tightened their grip. The coolers of Walgreens have gone dark, the opaque, black doors some sort of Kubrickian monolith channeling our basic primate understanding of Arizona Iced Tea. Like the monolith, they are faceless representatives of a power beyond our own, here to shape our actions, but not to explain themselves.

Like the monolith, they are always accompanied by an eerie sort of screeching.

Despite my leftist tendencies I am hesitant to refer to "late-stage capitalism." To attribute our modern situation to such a "late stage" is to suggest that capitalism is in its death throes, that Marx's contradictions have indeed heightened and that some popular revolution is sure to follow. Who is to say that things can't get worse? To wave away WeWork as an artifact of "late-stage capitalism" is an escape to unfounded optimism, to a version of reality in which things will not spiral further downward.

Still, I find myself looking at a Walgreens cooler that just two years ago was covered in clear glass, admitting direct inspection of which tall-boy teas were in stock. Today, it's an impenetrable black void. Some Walgreens employee has printed a sheet of paper, "TEA" in 96-point Cambria, and taped it to the wall above the door. Taking in this spectacle, of a multi-million dollar effort that took our coolers and made them more difficult to use, of a retail employee's haphazard effort to mitigate the hostility of their employer's merchandising, it is hard not to indulge in that escape. Surely, things can't get much worse. Surely, these must be the latter days.


Gregory Wasson is the sort of All-American success story that you expect from a neighborhood brand like Walgreens Also Known As Duane Reade In New York City. Born in 1958, he honed his business sense working the family campground near Delphi, Indiana. A first-generation college student, he aimed for a sensible profession, studying pharmacy at Purdue. Straight out of college, he scored a job as a pharmacy intern at a Walgreens in Houston.

Thirty years later, he was CEO.

A 2012 Chicago Tribune profile of Wasson ends with a few quick notes. One, "Loves: The desert," could easily go on a profile of myself. Another, "Hobbies: Visiting Walgreens across the country," is uncomfortably close as well. It's not that I have any particular affection for Walgreens; in fact, I've long thought it poorly managed. But for reasons unclear to me I cannot seriously consider entering a CVS. I don't know what they get up to, over there under the other red drug store sign. I hear it has something to do with long receipts. I don't want to find out.

I suppose some of Wasson's sensible, farm-and-country upbringing resonates with me as a consumer. It also makes it all the more surprising that he would become one of the principal agents behind Walgreens' most publicly embarrassing misstep to date. There must have been some sort of untoward influence, corruption by exposure to a Bad Element. Somehow, computers got to him.

Arsen Avakian came from Armenia as a Fulbright scholar. With a background in the most capitalist corners of technology (software outsourcing and management consulting), he turned to the food industry and worked in supply chain management systems for years before deciding to strike out on his own. Steering sensibly away from technology, he chose tea. Argo Tea started out as a chain of cafes based in Chicago, but by 2020 had largely shifted focus to a "ready-to-drink premium tea line derived from one of its most popular café beverages." This meant bottled tea, sold prominently in Walgreens.

It seems to be this Walgreens connection that brought Wasson and Avakian together. Wasson retired from Walgreens in 2014, and joined with Avakian and two other tech-adjacent executives to disrupt the cooler door.

Several publications have investigated the origin of Cooler Screens, taking the unquestioningly positive view typical of business reporters who do not bother to actually look into the product. Avakian, researching the branding and presentation of his packaged premium teas, was dismayed at the appearance of retail beverage sections. "Where is the innovation?" he is often quoted as saying, apparently in reference to the glass doors that have long allowed shoppers to see the products that they might decide to buy.

Avakian reportedly observed that people in store aisles would frequently look at their phones. I have a theory to explain this behavior; it has more to do with text messages and TikTok and a million other things that distract people milling around in a Walgreens to kill time (who among us hasn't taken up a fifteen minute gap by surveying a Walgreens? Fifteen minutes, perhaps, of waiting for the pharmacy in that very Walgreens to fill a prescription?). To Avakian's eyes, this was apparently a problem to be solved. People distracted from the tea are not, he seems to think, purchasing enough tea. The tea needs to fight back: "How do we make the cooler door a more engaging experience?" Cooler Screens CRO Lindell Bennett said in an interview with a consulting firm called Tinuiti that proclaims in their hero banner that "the funnel has collapsed."

Engagement is something like the cocaine of the computer industry. Perhaps in the future we will look back on it as the folly of quack practitioners, a cure-all for monetization as ill advised as the patent medicines of the 19th century. At the moment, though, we're still in the honeymoon phase. We are cramming engagement into everything to see where it sticks. It is fitting, then, that our cooler screens now obscure the inventory of Coca-Cola. It's crazy what they'll put into things, claiming it a cure for lethargy (of body or of sales). Coca into cola. Screens into coolers.


It's a little hard to tell what the cooler screens do. It comes down to the typical struggle of interpreting VC-fueled startups. Built In Chicago explains that "The company's digital sensors also help brands collect data on how consumers interact with their items." This is the kind of claim that makes me suspicious on two fronts: First, it probably strategically simplifies the nature of the data collected in order to understate the privacy implications. Second, it probably strategically simplifies how that data will be used in order to overstate its commercial value.

The simplest part of the cooler screen play is their use as an advertising medium. There seems to be a popular turn of phrase in the retail industry right now, that the store is a canvas. Cooler Screens' CRO, in the same interview, describes the devices as "a six-foot canvas with a 4K resolution where brands can share their message with a captive audience." I'm not sure that we're really captive in Walgreens, although the constant need to track down a Walgreens corrections officer to unlock the cell in which they have placed the hand lotion does create that vibe.

Cooler Screens launched with a slate of advertising partners, basically who you would expect. Nestlé, MillerCoors, and Conagra headlined. The Wall Street Journal, referring to a MillerCoors statement, reported that "a big barrier for MillerCoors is that half of shoppers aren't aware beer is available in drugstores." I find this a little surprising since it is plainly visible next to the other beverages, but, well, these days it isn't any more, so I'm sure there's still a consumer awareness gap to be closed.

The idea of replacing cooler doors with a big television so that you can show advertising is exactly the kind of thing I would expect to take off in today's climate, but doesn't yet have that overpromising energy of AdTech or, I am learning, BevTech. The Cooler Screens are equipped with front-facing sensors, but no cameras facing the outside world. Cooler Screens seems appropriately wary of anything that could attract privacy attention, and refers to its product as "identity-blind." This, of course, makes it a little confusing that they also refer to targeted advertising and even retargeting as consumers approach the cooler.

To resolve this apparent contradiction, Cooler Screens describes its approach as "contextual advertising." They target based not on information about the customer, but on information about the context. The CRO offers an example:

When you think about it within the context of "I'm in front of an ice cream door and I want to buy," you have the ability to isolate the message to exactly what a consumer is focused on at this point in time based on the distance that they are from the door.

Age-old advertising technology would use the context that you are in front of the ice cream door as a trigger to display the ice cream through the door. In the era of the Cooler Screen, though, the ice cream itself is hidden safely out of view while the screen contacts a cloud service to obtain an advertisement that is contextually related to it.

It should be clear by this point that the Cooler Screens as an advertising medium don't really have anything to do with how the items behind them are perceived by consumers. They have to do with how the advertising space is sold. Historically, brands looking to achieve prominence in a retail environment have done so through the set of practices known as "merchandising." Business agreements between brands and retailers often negotiate the physical shelf space that stores will devote to the brand's products, and brands throw in further incentives for retailers to use brand-provided displays and move products to more lucrative positions in the store. As part of the traditionally multi-layered structure of the retail industry, the merchandising of beverage products especially is often managed by the distributor instead of the retailer. This is one way that brands jockey for more display space: the retailer is more likely to take the deal if their staff don't have to do the work.

With Cooler Screens, though, the world of AdTech can entirely disrupt this tie between placing products and placing advertising. Regardless of what is behind the door, regardless of what products the store actually chooses to stock, regardless of the business incentives of the beverage distributor that actually puts things into the coolers, the coolers will display whatever ads they are paid for. Are the cooler screens controlled by a real-time auction system, like many online advertisements? I haven't been able to tell for sure, although several uses of phrases like "online-like advertising marketplace" make me think it is at least the goal.

The first, and I suspect primary, purpose of the Cooler Screens is therefore one of disintermediation and disconnection. By putting a screen in front of the actual shelves, store display space can function as an advertising market completely disconnected from the actual stocked products. It's sort of like the 3D online stores that occupied the time of VR entrepreneurs before Mark Zuckerberg brought us his Metaverse. The actual products in the store aren't the important thing; the money is in the advertising space.

Second, the Cooler Screens do have cameras on the inside. With these, they promise to offer value to the distributor. Using internal cameras they can count inventory of the cooler, providing real-time stock level data and intriguing information on consumer preference. Cooler Screens promises to tell you not only which products are out of stock, but also which products a consumer considers before making their purchase. Reading between the lines here I assume this means the rear-facing cameras are used not only to take inventory but also to perform behavioral analysis of individuals who open the doors; the details here are (probably intentionally) fuzzy.

The idea of reporting real-time inventory data back to distributors is a solid one, and something that retail technology has pursued for years with ceiling mounted cameras, robots, and other approaches that always boil down to machine vision. Whether or not it works is hard to say; the arrival of the Cooler Screens seems to have coincided with a rapid decline in the actual availability of cold beverages, but presumably that has more to do with the onset of COVID and the related national logistical crisis than with the screens themselves. The screens are, at least anecdotally, frequently wrong in their front-facing display of what is and isn't in stock. Generally they present the situation as being much better than it actually is. That this provides a degree of cover for Walgreens' faltering ability to keep Gatorade in stock is probably a convenient coincidence.


Cooler Screens was born of Walgreens, and seems to have benefited from familial affection. Placement of Cooler Screens in Walgreens stores started in 2018, the beginning of a multi-year program to install Cooler Screens in 2,500 stores. This would apparently come at an expense of $200 million covered by Cooler Screens themselves. Cooler Screens was backed by venture funding, including an $80 million round led by Verizon and Microsoft. Walgreens discussed Cooler Screens as part of their digital strategy, and Cooler Screens used Walgreens as a showcase customer. The Cooler Screens family was not a happy one, though.

The initial round of installations in 2018 reached 10,300 screens in 700 stores. Following this experience, Walgreens seemed to develop cold feet, with the pace of installation slowing along with Walgreens' broader participation in the overall joint venture. Walgreens complained of "freezing screens, incorrect product displays, failure to update stock data accurately, and safety concerns such as screens sparking and catching fire."

In statements to the press, Cooler Screens referred to mention of frozen and incorrect displays as "false accusations." I can only take that as anything other than an outright lie if I allow myself to believe that the leadership and legal counsel of Cooler Screens have never actually seen their product in use. Given the general tenor of the AdTech industry, that might be true.

If it has not become clear by this point, the poor performance and reliability of the Cooler Screens is not only a contention by Walgreens but also a firm belief of probably every Walgreens customer with the misfortune of coming across them. In an informal survey of four Albuquerque-area Walgreens that I occasionally use, more than half of the screens are now dark. It varies by location; in one store, there are two not working. In another, there are two working. The cooler screens that still cling to life are noticeably infirm. As best I can remember, animations and video have never played back smoothly, with over a second sometimes passing between frames.

The screens are supposed to show full-size ads (increasingly rare) or turn off (now the norm) when idle, and then as a customer approaches they are supposed to turn on and display a graphical representation of the products in the cooler that is similar to---but much worse than---what you would see if the cooler door was simply transparent. Since they were first installed this automatic transition has been a rocky one. Far from the smooth process shown in Cooler Screens demo videos, the real items as installed here in the desert (which look worse than the ones in the demo videos to begin with) noticeably struggle to update on cue. As you approach they either fail to notice at all or seem to lock up entirely for a few seconds, animations freezing, as they struggle to retrieve the images of stock they should display. What then appears is, more often than not, wrong.

Early on in the Cooler Screens experiment they were wrong in more subtle ways. They would display one product as out of stock when it was, in fact, physically present just behind the door. They would display three other products as in stock when there were none to be found. That was the peak performance the rear-camera-based intelligence would achieve. Today, it seems like the screens' basic information on cooler layout is no longer being maintained. They display the wrong products in the wrong places, sometimes even an entirely wrong category of products.

It's perhaps hard to understand how they work so poorly, unless you have seen any of the other innovations that the confluence of AdTech and digital signage have brought us. There seems to be some widespread problem where designers of digital advertising products completely forget about basic principles of mechanical reliability.

It is ironic, given the name and purpose of the cooler screens, that they are not at all cool. In fact they run very warm, hot to the touch. I cannot be entirely sure of my own senses but in a recent trip to a Walgreens I swear that I could feel the heat radiating from the Cooler Screens as I approached the section, like an evening walk approaching a masonry wall still warm from the day's sun. As a practical matter they are mounted to the outside of standard glass cooler doors. Yes, it is deeply ironic that behind the cooler screens are normal glass doors through which their cameras are allowed to see the contents the way that customers are not, but at least the door provides some insulation. Still, somewhere between the cooler refrigeration and the store air conditioning, the excess thermal output of the new cooler doors is being removed at Walgreens' expense.

I was a bit baffled at how hot they ran (and how loud the cooling fans can be) until I considered the impressive brightness of the displays. Cooler Screens does refer to them as vivid and engaging, and they must have thought that they needed to compete with store lighting to catch attention. They are bright, almost uncomfortably so when you are close up, and the wattage of the backlighting (and attendant heat dissipated) must be considerable. Based on some experience I have with small SoCs in warm environments, I suspect they have a thermal problem. The whole system probably worked fine on a bench, but once manufactured and mounted with one face against an insulated cooler door, heat accumulates to the point that the SoC goes into thermal throttling and gives up on real-time playback of 4K video. The punishing temperature of the display and computer equipment leads to premature failure, and the screens go dark.

At a level of personal observation, the manufacturing quality of the screens also seems poor. The fit and finish is lacking, the design much less refined than the ones Cooler Screens displays in its own marketing material. The problems may be more than skin-deep, based on Walgreens' reports of electrical problems leading to fire in more than one case. Cooler Screens contends that these cases were the result of failures on Walgreens' part; it can be hard to tell who to blame in these situations anyway. But design and software problems must be the fault of Cooler Screens and, besides, Walgreens doesn't even like them.


Walgreens pulled the plug, or at least tried, early this year. In February, Walgreens terminated the business partnership with Cooler Screens. Only one third of the planned displays had been installed: Walgreens had started to back out years earlier. In 2021, Roz Brewer took over as CEO of Walgreens. According to reporting, she "did not like how the screens looked" and "wanted them out of the stores." According to Cooler Screens themselves, Brewer described them as "'Vegas' in a derogatory way."

I am skeptical of corporations in general and especially of their executives, and I have a natural aversion to the kind of hero worship that brings people to refer to CEOs as "visionary." Still, how validating it is to find someone, anyone, in corporate leadership who sees what I see. Cooler Screens alleges that "when she realized that her opinion on how the doors looked was not enough to get out of the contract... she and her team began to fabricate excuses." As would I! They are so evidently horrible, I would be fabricating excuses in the sense that one does to get out of a bad date. "I am sorry about not installing the Cooler Screens on schedule but I have plans tomorrow with someone else who is not you." Perhaps we can install cooler screens in 500 more stores some other time? "Sure, call me, we'll work something out," I say, scrawling 505-FUCK-OFF on an old receipt.

Still, one does not typically start off a first date with a multi-year agreement in which one party commits $200 million in exchange for future revenue. Cooler Screens sued Walgreens, arguing that Walgreens has failed to perform on their 2018 contract by not installing additional screens. They're asking for an injunction to prohibit Walgreens from removing the currently installed units. Walgreens contends that Cooler Screens failed to perform by installing screens that broke and occasionally caught fire; Cooler Screens retorts that the screens would have worked fine if Walgreens stores were in better condition.

The consumer, as always, is caught in the crossfire. As Cooler Screens continue to fail it seems unlikely that they will be repaired or replaced. As the lawsuit is ongoing, it seems unlikely that they will be removed. We just open every door and look behind it, thinking fondly of a bygone era when the cooler doors were clear and you could see through them. Now they are heavy and loud and uncomfortably warm. In the best case, we get to see a few scattered frames of a Coca Cola animation before they manage to present an almost shelf-like view of products that may or may not be in the cooler behind them.

Hope springs eternal. Earlier this year, Kroger announced the installation of Cooler Screens in 500 more of their stores, the result of a three-year pilot that apparently went better than it did at Walgreens. The screens have claimed Walgreens as their territory, leaving destruction in their wake. They are advancing into the Smith's next.


One of the strangest parts of Cooler Screens, to me, is Cooler Screens insistence that consumers like them. I have never personally seen someone react to Cooler Screens with anything other than hostility. Everyday shoppers make rude remarks about the screens, speaking even in somewhat elevated tones, perhaps to be heard over the fans. Employees look sheepish. Everyone is in agreement that this is a bad situation.

"The retail experience consumers want and deserve," Cooler Screens says on their website. I would admire this turn of phrase if it was intended as a contemptful one. Cooler Screens promise to bring the experience of shopping online, "ease, relevance, and transparency." "Transparency" seems like a poor choice of language when promoting a product that infamously compares poorly to the transparent door it replaces. Relevance, too, is a bold claim given the unreliability of their inventory information. I suppose I don't have anything particularly mean to say about ease, although I have seen at least one elderly person struggle to open the heavy screens.

Still, "90%+ of consumers no longer prefer traditional glass cooler doors." What an intriguing claim! 90%+? How many plus? No longer prefer traditional glass? What exactly does that even mean?

Indeed, Cooler Screens presents a set of impressive numbers based on their market research. 94% of respondents say the screens impacted their shopping positively or neutrally (and the breakdown of positive/neutral in the graphic shows that this isn't even relying on a huge amount of neutral response; a good majority really did say positively). 82% said they found the content on the screens memorable. I certainly do find them memorable, but perhaps not how Cooler Screens intends.

I struggle to reconcile these performance numbers with the reality I have observed. Perhaps Albuquerque is a horrible backwater of Cooler Screens outcomes; I have not thoroughly inspected many out-of-town Walgreens. Maybe there exists, somewhere back East, a sort of Walgreens paradise where the screens are all in working order and actually look good and people like them. Or perhaps the surveys backing this data were only ever collected in the first two days following installation at Walgreens locations adjacent to dispensaries holding free pre-roll promotions. I don't know, because Cooler Screens shares no information on the methodology used to collect these metrics.

What I can tell you is this: customer experience data collected by Cooler Screens seems to reflect some world other than the one in which I exist.

I wish I lived there, the Walgreens must be exceptionally well-stocked. Out here, I am hoping the staff have fabricated crude signs so that I don't have to manually open every door. I am starting to memorize Walgreens shelf plans as an adaptation. I am nodding and appropriately chuckling when a stranger says "remember when you could see through these?" as they fight against retail innovation to purchase one of the products these things were supposed to promote. You cannot say they aren't engaged, in a sense.

--------------------------------------------------------------------------------

>>> 2023-10-15 go.com

Correction: a technical defect in my Enterprise Content Management System resulted in the email having a subject that made it sound like this post would be about the classic strategy game Go. It is actually about a failed website. I regret the error; the responsible people have been sacked. The link in the email was also wrong but I threw in a redirect so I probably would have gotten away with the error if I weren't telling you about it now.


The late 1990s were an exciting time in technology, at least if you don't look too late. The internet, as a consumer service, really hit its stride as major operating systems incorporated network stacks and shipped with web browsers, dial-up consumer ISPs proliferated, and the media industry began to view computers as a new medium for content delivery.

It was also a chaotic time: the internet was very new to consumers, and no one was quite sure how best to structure it. "Walled garden" services like AOL and CompuServe had been the first to introduce most people to internet usage. These early providers viewed the "open" internet of standard protocols as more commercial or academic, and less friendly to consumers. They weren't entirely wrong, although they clearly had other motives as well to keep users within their properties. Whether for good or for ill, these early iterations of "the internet" as a social concept presented a relatively tightly curated experience.

Launching AOL's bespoke browser, for example, one was presented with a "home page" that gave live content like news headlines, features like IM and search, and then a list of websites neatly organized into categories and selected by human judgment. To make a vague analogy, the internet was less like an urban commercial district and more like a mall: there existed the same general concept of the internet connecting you to services operated by diverse companies, but there was a management company guiding and policing what those services were. There was less crime and vice, but also just less in general.

By the mid-'90s, the dominance of these closed networks was faltering. Dial-up access to "the internet proper" became readily available from incumbent players like telephone companies. Microsoft invested heavily in the Information Superhighway, launching MSN as a competitor to AOL that provided direct access to the internet through a lightly managed experience with some of the friendliness of AOL but the power of the full internet. Media companies tended to prefer the open internet because of the lower cost of access and freedom from constraints imposed by internet operators. There was more crime, but also more vice, and we know today that vice is half the point of the internet anyway [1].

There was a problem with the full-on internet experience, though: where to start? The internet itself is more like a telephone than a television---it doesn't give you anything until you dial a number. Some attacked this problem the same way the telephone industry did, by publishing books with lists of websites. "As easy to use as your telephone book," the description of The Internet Directory (1993) claims, a statement that tells us a lot about the consumer information experience of the time.

From a modern perspective the whole book thing seems nonsensical... to deliver information about the internet, why not use the internet? That's what services like AOL had done with their home pages. On the open internet, anyone could offer users a home page, regardless of their ISP. It was a sound idea at the time: Yahoo built its homepage into a business that stayed solid for years. Microsoft's MSN was never quite the independent commercial success of Yahoo, but has the unusual distinction of being one of the few other homepages that's still around today.

Much like the closed services that preceded them, homepage or portal providers tried to give their users the complete internet experience in one place. That meant that email was a standard offering, as was search. Search is unsurprising but email seems a bit odd, when you could use any old email service. But remember, the idea of using an independent service just for email was pretty much introduced to the masses by Gmail. Before Google's breakout success, most people used the email account provided by either their ISP (for example Earthlink and Qwest addresses) or their homepage (Yahoo or Hotmail) [2].

Search quickly became an important factor in homepage success as well, being a much easier experience than browsing through a topic tree. It's no surprise, then, that the most successful homepage companies endure (at least to some degree) as search providers today.

Homepages started out as an internet industry concept, but the prominence of Yahoo and MSN in this rapidly expanding new media was a siren call to major, incumbent media companies. Whether by in-house development or acquisition, they wanted to have their own internet portals. They didn't tend to succeed.

A notable early example is Pathfinder, Time Warner's contender. Pathfinder developed some content in house but mostly took advantage of its shared ownership with Time Magazine to present exclusive news and entertainment. Time Warner put a large team and an ample budget behind Pathfinder, and it utterly failed. Surviving only from '94 to '99, Pathfinder is one of the notable busts of the Dot Com era. It had just about zero impact besides consuming a generous portion of Time Warner's money.

There were other efforts in motion, though. Paul Allen, better remembered today as the owner of several professional sports teams and even more yachts [3], had a side business in the mid-'90s called Starwave. Starwave developed video games and had some enduring impact in that industry through their early massively multiplayer game Castle Infinity. More importantly, though, Starwave was a major early web design firm. Web design, in say '95, was rather different from what it is today. There were no major off-the-shelf content management systems. Websites were either maintained, per-page, by hand, or generated by an in-house CMS. Websites with large amounts of regularly-updated content, typical of news and media companies, presented a real technical challenge and required a staff of engineers. Starwave provided those engineers, and they scored a very big client: the Walt Disney Company.

In 1996, Disney had just acquired ownership of Capital Cities Communications. You probably haven't heard of Capital Cities, but you have heard of their two major subsidiaries, ABC and ESPN. Disney's new subsidiary Walt Disney Television was a cable giant, and one focused on news and sports, two industries with a lot of rapidly updating content. The climate of the time demanded that they become not only major cable channels, but also major websites. Near-live sports scores, even just returns posted shortly after the end of games, were a big innovation in a time when you had to wait for scores to come around on the radio, or for the paper to come the next morning.

Starwave was a successful internet firm, and as was the way for successful internet companies even in the '90s, it had an Exit. Their biggest client, Disney, bought them.

At nearly the same time, Disney took interest in another burgeoning internet company: search engine Infoseek. Infoseek was one of the major search engines of the pre-Google internet, not quite with the name recognition of Ask Jeeves but prominent because of its default status in Netscape Navigator. Disney acquired Infoseek in 1999.

Here I have to take a brief break to disclose that I have lied to you for the sake of simplicity: What I'm about to describe actually started as a joint venture prior to Disney's acquisition of Starwave and Infoseek, but only very shortly. I suspect that M&A negotiations were already in progress when the joint venture was established, so we'll just say that Disney bought these companies and then the rest of this article happened. Okay? I'm sorry. '90s tech industry M&A happened so fast in so many combinations that it's often difficult to tell a tight story.

Disney was far from immune to the homepage mania that brought us Pathfinder. If they were going to have popular websites, they needed a way to get consumers to them, and "type espn.com into your web browser" was still a little iffy as an advertising call to action. A homepage of their own would provide the easiest path for users, and give Disney leverage to build their other internet projects. Disney got a homepage of their own: The Go Network, go.com.

Remember these acquisitions? Yahoo was a popular home page, and Yahoo had a search engine. Well, now Disney had a search engine. They had Starwave, developer of their largest web properties, on board as well. Disney had a plan: they took every internet venture under their corporate umbrella and combined them into what they hoped would be a dot com giant: The Go.com Company.

Disney's venture to combine their internet properties was impressively complete, especially considering their slow pace of online change today. Just like Pathfinder's leverage of Time, Disney would use ESPN and ABC as ready sources of first-party content. Over the span of 1999, every Disney web property became just part of the go.com behemoth. And go.com would not be behind Yahoo on features: it had search, and you can bet it had email. Perhaps the only major Internet feature it was missing was instant messaging, but IM wasn't yet quite the killer app it would become in the '00s, and Disney is famously allergic to it (due to the abuse potential) anyway.

In true '90s fashion, go.com even got a volunteer-curated directory of websites in the style of DMOZ. These seem a bit odd today but were popular at the time, sort of the open internet response to AOL's tidy shopping mall.

Pathfinder made it from '94 to '99. Launched in '99, go.com was a slow start but a fast finish. In January of '00, they announced a pivot. "Internet site will quit portal race," the lede of a 2000 AP piece reads. Maybe Disney saw the fallout of Pathfinder; in any case, by '99 the writing was on the wall for the numerous homepage contenders that hadn't yet gained traction. Part of the reason was Google: Google took a gamble that consumers didn't really want news and a directory all in one place; they just wanted search. For novice internet users, Google might have actually been more approachable than "easy" options like Yahoo, due to its extremely minimal design. Most home pages were, well, noisy [4].

Go.com's 21st century strategy would be to focus on entertainment. It might seem pretty obvious that Disney, an entertainment behemoth, should focus its online efforts on entertainment. But it was a different time! The idea of the internet being a bunch of different websites was still sort of uncomfortable; the industry wanted to build the website, not just a website. Of course the modern focus on "everything apps," motivated mostly by the more recent success of this "homepage" concept in the form of mobile apps in China, shows that business ideas are evergreen.

Go.com's new focus didn't go well either. Continuing their impressively rapid pace of change for the worse (a true model of "move fast and break things"), go.com suffered a series of scandals. First, the go.com logo was suspiciously similar to the logo of similarly named but older homepage competitor goto.com. A judge agreed the resemblance was more than coincidental and ordered Disney to pay $21.5 million in damages. Almost in the same news cycle, an executive vice president of Infoseek, kept on as a Disney executive, traveled across state lines in pursuit of unsavory activities with a 13 year old. In a tale perhaps literally as old as the internet, said 13 year old was a good deal older than 13 and, even more to the EVP's dismay, a special agent of the FBI.

The widespread news coverage of the scandal was difficult for Disney's famously family friendly image. Newspaper articles quoted anonymous Starwave, Infoseek, and Disney employees describing the "high-flying," "high-testosterone" culture and a company outing to a strip club. "Everyone is going for gold. It's causing people to live in the present and disregard actions that could lead to real harm," one insider opined. The tone of the coverage would have fit right into an article about a collapsed crypto company were it not for a trailing short piece about upstart amazon.com introducing a multi-seller marketplace called "zShops."

The rapid decline seemed to continue. In January 2001, just another year later, Disney announced the end of go.com. They would charge off $800 million in investment and lay off 400. Go.com had been the ninth most popular website but a commercial failure, truly a humbling reminder of the problems of online monetization.

Here, though, things take a strange turn. After go.com's rapid plummet it achieved what we might call a zombie state. Just a couple of months later, in March, Disney announced a stay of execution for go.com. The $800 million had been marked down and the 400 employees laid off, but now that go.com had no staff and no budget to speak of, it just didn't cost that much to run.

Ironically, considering the trademark suit a year earlier, Disney's cost cutting included a complete shutdown of the former Infoseek search engine. Its replacement: goto.com, providing go.com search results under contract. In a Bloomberg piece, one analyst admits "I don't understand it." Another, Jeffrey Vilensky, gets at exactly what brings me to this topic: "People have definitely heard of Go, although there's been so many rounds of changes that people probably don't understand what it is or what to do with it at this point." Well, I'm not sure that Disney did either, because what they did was evidently to abandon it in place.

The odd thing about go.com is that, like Yahoo and MSN, it has survived to the modern age. But not as a homepage. The expenses, low as they were, must have added up, because Disney ended the go.com email service in 2010 and the search-and-news homepage itself in 2013.

But it's still there: to this day, nearly every Disney website is a subdomain of go.com. Go.com itself, apparently the top level of Disney's empire, is basically nothing. A minimally designed page with a lazily formatted list of all of the websites under it. Go.com used to be a homepage, a portal, a search engine, the capital of Disney's empire. Today, it's tech debt.

Go.com is not quite eternal. As early as 2014 some motion had apparently begun to move away from it. ESPN no longer uses espn.go.com; they now just use espn.com (which for many years had been a 301 redirect to the former). ABC affiliate stations have stopped using the abclocal.go.com domain they used to be awkwardly organized under, but the website of ABC News itself remains abcnews.go.com. I mostly encounter this oddity of the internet in the context of my unreasonable love of themed entertainment; the online presence of the world's most famous theme parks is anchored at disneyland.disney.go.com and disneyworld.disney.go.com.

This is an odd thing, isn't it, in the modern context where domain hierarchies are often viewed as poison to consumer internet users. There are affordances to modernity: disney.com is Disney's main website, even though a large portion of the links on it point to subdomains of disney.go.com. Disney.go.com itself actually redirects to disney.com, reversing the historic situation. Newer Disney websites, like Disney Plus, get their own top-level domains, as do all of the companies acquired by Disney after the go.com debacle.

But go.com is still a critical structural element of the Walt Disney online presence.

So what's up with that? A reading between the lines of Wikipedia and a bit of newspaper coverage suggests one motivation. Go.com had a user profile system that functioned as an early form of SSO for various Disney properties, and it has apparently been a protracted process to back out of that situation. I assume they relied on the shared go.com domain to make cookies available to their various properties. That system was apparently replaced when ESPN shifted to espn.com in 2016, but perhaps it's still in use by the Disney Resorts properties? I won't claim that technologies like OIDC or SAML are straightforward, a large portion of my day job is troubleshooting them, but still, over 20 years should be long enough to make a transition to a cross-domain SSO architecture.
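As a rough illustration (and emphatically not Disney's actual implementation), here's the kind of thing a shared parent-domain cookie buys you. This is a minimal sketch assuming Flask and a made-up cookie name; the point is just that a cookie scoped to the parent domain is automatically presented to every *.go.com property, which is the cheapest possible form of SSO:

    # Minimal sketch (hypothetical, not Disney's real system): a login handler
    # that scopes its session cookie to the parent domain, so espn.go.com,
    # abcnews.go.com, etc. all receive it without any cross-domain dance.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        # ...authenticate the user somehow...
        resp = make_response("ok")
        # Domain=".go.com" makes the browser send this cookie to every
        # *.go.com subdomain, giving "free" single sign-on as long as
        # everything lives under the shared parent domain.
        resp.set_cookie("session_token", "opaque-session-id",
                        domain=".go.com", secure=True, httponly=True)
        return resp

Move a property to its own top-level domain and this trick stops working, which is exactly when you need a real cross-domain protocol like SAML or OIDC.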

There are rumors that the situation is related to SEO, that Disney fears the loss of reputation from moving their properties to new domains. But when ESPN moved they dismissed that claim, and it doesn't seem that likely given the range of SEO techniques available to handle content moves. Do they worry about consumer behavior? Sure, people don't like it when domain names change [5], but is anyone really typing "disneyland.disney.go.com" into their URL bar in the era of unified search? There are bookmarks for sure, but 20 years is a long timeline to transition via redirects.

I assume it's just been a low priority. The modern reality of Disney and go.com is idiosyncratic and anachronistic, but it doesn't cause many problems. Search easily gets you to the right place, and obvious domain names (like disneyland.com) are redirects. Go.com is an incredible domain name, undoubtedly worth millions today, but Disney could probably never sell it; there would always be too many concerns around old bookmarks, missed links in Disney marketing materials, and so on.

And so here we are, go.com still the sad shade of a major internet portal. Join with me for a little bit of ceremony, a way that we honor our shared internet history and its particular traditions. Set your homepage to go.com.

[1] One is tempted to make a connection to the largely mythical story that VHS succeeded over Betamax because of its availability to the pornography industry. We know this urban legend to be questionable in part because there were adult films sold on Betamax; not as many as on VHS, but probably for the same reasons there weren't as many Betamax releases of any type. This invites the question: was smut a factor in the victory of the open internet over closed providers? Look forward to a lengthier investigation of this topic on my onlyfans.

[2] The lines here are a bit blurrier than I present them, because most major homepage providers had partnerships with ISPs to sell internet service under their brand. MSN, for example, had some presence as a pseudo-ISP because of Microsoft's use of Windows defaults to market it. This whole idea of defaults being an important aspect of consumer choice for internet homepages is, ironically, once again being litigated in federal court as I write.

[3] This is a joke. Paul Allen was a founder of Microsoft.

[4] Ironically, Google themselves would launch their own home page product in 2005. It was called iGoogle, an incredibly, tragically 2005 name. Its differentiator was customization; but other homepage websites like Yahoo also had customization by that point and it doesn't seem to have been enough to overcome the general tide against this type of website. Google discontinued it in 2013. That's actually still a pretty good lifespan for a Google product.

[5] see, for example, my ongoing complaints about Regrid, a useful service for cadastral data that I sometimes have a hard time physically finding because they are on at least their third name and domain.

--------------------------------------------------------------------------------

>>> 2023-10-09 prolific counterfeiting

I'm working on a side project right now, one of several, which involves telematics devices (essentially GPS trackers with i/o) from a fairly reputable Chinese manufacturer. The device is endlessly configurable and so, like you see with a lot of radios, it has a UART for programming. The manufacturer provided a cable for this purpose, and when I plug it into my laptop running Windows, it appears in the device manager as "DOES NOT SUPPORT WINDOWS 11." What a world.

This is one of those things that I bump into perhaps once a month working on various odds and ends, especially radios. It's the collateral damage of the serial adapter wars.

Counterfeiting is an interesting aspect of the contemporary hardware industry, particularly for programmable microcontrollers sold for higher-level applications. My first exposure to this strange world was probably the ELM327. A company called ELM Electronics developed the ELM327, basically a PIC microcontroller with software they wrote. It provides a serial interface and command set for interacting with OBD-II diagnostic ports. The ELM327 was widely used for serial, USB, and Bluetooth OBD-II readers, and it was a lot more widely used after someone dumped the ROM from one and started selling their own PICs with the same software---cheaper.

Suddenly, online marketplaces were flooded with inexpensive USB and Bluetooth OBD-II readers, all described as ELM327. They were of course counterfeits. For the most part they seemed functional, but the counterfeits often had issues with unreliable Bluetooth and poor power consumption behavior. More to the point, the counterfeits were stuck, versionwise. Later ELM327 versions took standard measures to avoid the ROM being read out. Counterfeits often reported fake version numbers to look more in line with the genuine item, but they only had the featureset of the original. Of course this made it annoyingly hard to tell what capabilities an ELM327 device actually had.
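If you're ever curious what a given adapter actually supports, a rough approach (sketched below with pyserial) is to ignore the version string and probe commands directly: ELM327 firmware answers unrecognized commands with a "?", so a clone that claims a new version but balks at newer commands gives itself away. Which commands are worth probing depends on which firmware generation you care about, so the probe list here is just a placeholder:

    # Rough sketch: ask an "ELM327" what it claims to be, then probe commands
    # directly rather than trusting the version string.
    import serial  # pyserial

    def send_at(port, cmd):
        port.write((cmd + "\r").encode("ascii"))
        return port.read_until(b">").decode("ascii", errors="replace").strip(" \r\n>")

    with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as elm:
        send_at(elm, "ATZ")            # reset
        send_at(elm, "ATE0")           # echo off, for cleaner replies
        claimed = send_at(elm, "ATI")  # e.g. "ELM327 v2.1" (may be spoofed)
        print("claims to be:", claimed)
        for probe in ["ATCSM1"]:       # placeholder: commands from later firmware
            reply = send_at(elm, probe)
            print(probe, "->", "unsupported" if reply.endswith("?") else reply)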

The story of the PL-2303 is similar. The Prolific PL-2303 was almost certainly the most popular USB-UART controller on the market. Nearly every USB serial adapter for years was based on the PL-2303, most offboard programming cables for embedded devices used it, and there were even devices that used a PL-2303 internally as a simple way to present a USB interface. The problem is that not that many of these PL-2303s were actually PL-2303s.

I don't think this is quite as clearly a case of cloning as the ELM327. It seems like what happened is that a lot of vendors independently developed USB-UART chips that implemented the same interface as the PL-2303, based on reverse engineering. That might have made for some sort of defense were it not for the fact that a lot of these "PL-2303 compatibles" were being sold as outright counterfeits---with Prolific part numbers and trademarks.

Prolific responded to counterfeiting almost uniquely aggressively. To be clear, there is some debate about Prolific's motivations at the beginning of the serial adapter war. More skeptical commentators tend to accuse them of outright profiteering through a form of planned obsolescence. I don't necessarily think that Prolific was this clearly malicious, but they clearly had a certain disregard for the interests of their customers. Prolific's anti-counterfeit strategy evolved over a few years from guerrilla to scorched earth.

In the first wave, Prolific started modifying the device drivers they distributed to perform additional tests on PL-2303 chips before they would function. Sometime in the early '10s the first shells landed. "PL-2303" adapters that had worked fine suddenly became unreliable, and it was a crapshoot whether any given USB-serial adapter would function. As a user it was hard to understand why, and so a general view proliferated that USB serial adapters were extremely unreliable.

There was a bit of a weakness in Prolific's strategy, though: the "genuineness" checks were only present in newer versions of the driver. Counterfeit chips could be made to work just by rolling the driver back to a previous version. For several years, it was routine for USB-serial adapters to come with a mini CD containing a very old version of the Prolific driver, and a note explaining that use of the "special" driver was mandatory.

Around 2015, Prolific got more aggressive, introducing a new tactic they would return to again: the "new Windows support" scheme.

Prolific retired the PL-2303 variants they had been selling and introduced a new variant. There were some minor new features, but the main selling point on the new variant was "Windows 8 support"... a bit of a surprise since the original models had been working fine with Windows 8. But Prolific had an interesting idea of what "Windows 8 support" meant: they released a new driver version, distributed by Windows Update, that whitelisted PL-2303 variants based on Windows release. If the system was running Windows 8, the driver just checked if a PL-2303 was a variant that "supported" Windows 8. If it wasn't, the driver left it nonfunctional. Everyone upgrading to Windows 8 would also have to replace all of their PL-2303 devices with newer versions.

I'm not sure exactly what Prolific was thinking. The newer variants had some stronger anti-counterfeiting features, so I'm sure they were trying to motivate users to switch to the newer variants. That's not an easy task, since the function of USB UART chips has been basically the same for decades, so they had to force it.

I'm being charitable to Prolific by describing this scam first as an effort to deter counterfeiting. A lot of commentators figure that Prolific sales numbers were just lagging and they had to find some way to drive people to replace existing equipment. At the end of the day, Prolific probably had both motivations to a degree. It didn't really matter to the end users, the effect was the same. Prolific had invented a new type of suicide battery, just by distributing a new driver.

Of course, Prolific didn't have a way to fix the original workaround. You could still get older PL-2303s to work on Windows 8 by downgrading the driver version to one originally released for Vista or 7. A lot of people did. Forum guides went around on how to disable Windows Update driver selection for Prolific devices. Sellers switched from mini-CDs to slips of paper with a URL to download an old driver version. Things went on much as they had before.

And then Prolific did it again.

With the release of Windows 11, Prolific once again "ended support" for a long list of PL-2303 submodels. Windows Update, on Windows 11, now distributes a version of the Prolific driver that will outright refuse to work with nearly all PL-2303 variants. Everyone is expected to once again buy new USB UART chips.

But the same workaround still works... roll back the driver to an older version. The biggest thing that's changed is that Windows Update has gotten more aggressive in updating drivers. I have an older PL-2303 device, probably counterfeit, that I'm using right now. Every couple of days it stops working until I go into the device manager and uninstall the driver. Once the driver is uninstalled, it works again. Counterintuitive, but what I'm doing is really uninstalling the newer driver version placed by Windows Update so that an older driver version I had installed manually will be selected instead. Later in the day, Windows Update notices the device and puts the newer version back again.

This is a truly ridiculous state of affairs. There exists a wide range of devices, many not that old, that will only work on recent Windows releases with constant user intervention to get the correct driver to be selected. I think that, at this point, basically everyone with occasion to use these Prolific devices knows about this problem and the workaround. It's just sort of the accepted state of affairs in USB UART chips.

That's not entirely true, of course. Prolific does have one major competitor, FTDI. The FTDI chips are more expensive and thus less popular, but usually viewed as more reliable, pretty much entirely due to Prolific driver issues.

FTDI is not without their own complications, though. FTDI rather famously released a driver version that detected counterfeit FTDI chips and actually bricked them. I don't think this has ever been done on such a large scale, and it generated a lot of negative press back in 2014. Perhaps because of this nuclear attack on counterfeits, they seem a lot less common for FTDI branded parts.

There are a couple of interesting aspects of this story. First, I am surprised that Prolific doesn't worry about negative brand reputation impact. Online discussions of USB serial and UART chips often discourage the purchase of Prolific parts, although over the years most people have figured out what's happening and give a bit of backstory on Prolific's anti-counterfeiting efforts, rather than just saying the "drivers are unreliable." FTDI is often recommended as an alternative. Despite the price premium of FTDI, perhaps they are gaining market share off of this whole thing?

Second, Microsoft is distributing a driver update through their infrastructure that makes hardware less likely to work. You'd kind of think that Windows Update would have some policy against drivers that break hardware, but I also don't think Microsoft would have foreseen this situation. Still, it's been like this for almost a decade now... for USB UART chips, driver updates are often the enemy. I'm surprised that Microsoft hasn't taken some kind of action to reduce user impact, but Microsoft has its own anti-counterfeiting battles and probably takes a sympathetic view of Prolific's actions.

I'd like to say that this is a bit of history, but driver problems with USB UART chips are still common today. I have a radio programming cable I bought in the last four months with a counterfeit PL-2303 that won't work without a driver downgrade. Forum threads still give workaround instructions and recommend FTDI over Prolific. USB serial cables still come with a slip of paper with a URL for a PDF with a link to download an old version of the Prolific drivers. Life goes on.

Weird situation though, isn't it? Did anyone ever think out the militarization of driver updates? Did anyone anticipate that getting a successful serial connection would get even harder as time went on?

--------------------------------------------------------------------------------

>>> 2023-10-03 overheard overhead

Let's talk about overhead paging. The concept goes by various names: paging, public address, even intercom, although the accuracy of the latter term can be questionable. It's probably one of the aspects of business telephone systems that gets the most public attention, on account of the many stories (both true and mythical) of the exploits of people who have figured out the paging extension at a given WalMart.

Some form of public address is about as old as telephony, but voice paging is a relatively new innovation. Early telephone systems relied on the microphone, more properly called the transmitter, in a telephone to create the talking current. The amount of power produced by the transmitter was very small... small enough that the volume level of telephone calls would degrade over long loops, even with the very high efficiency of the telephone receiver (speaker).

Perhaps counter-intuitively, telephone technology predates useful sound reinforcement or amplification technology. This is perhaps not so surprising, though, when you consider that telephones basically predate electronics as we think of them today, although it's more accurate to say that telephones represent one of the earliest forms of electronic technology. Early telephone engineers struggled greatly to develop amplifiers that could be used to extend the range of telephone calls. The construction of the first transcontinental telephone line during the first half of the 1910s was enabled by the invention of Lee de Forest's "Audion," the first practical audio-frequency amplifier.

There was of course a problem, in that de Forest worked not for the Bell System but for his own competing company. After validating the technology, AT&T employed subterfuge to obtain ownership of the Audion technology, after construction of the First Transcontinental had begun and not that long before it was finished. Despite AT&T's guile we can thank de Forest for enabling everything from truly long-distance telephony to much of modern music [1].

Although vacuum tubes like the Audion were suitable for signal amplification, it would be some time further before audio amplifiers were developed that could produce enough power to drive a large speaker. Until that time, the electrical methods developed for the telegraph and telephone were also quite useful in the improvement of bell signal systems.

The electrically-sounded bell is surprisingly old, a direct consequence of the invention of the electromagnet in 1823. Over the following years a number of arrangements were devised for electromagnets to sound bells, and just as quickly electrical bells connected to switches were employed as a method of signaling across distances. The doorbell is a simple and common example that, in many homes, still uses essentially 19th century technology. It's hard to say when exactly, but it was probably well before the turn of the 20th century that electrical bells were used as a way to signal events in large buildings. In some cases bells displaced steam whistles for industrial timekeeping, but their smaller size and quieter operation made them suitable for environments like stores and schools, where bells became a common way to not only signal shift times but summon a manager or principal to the front office.

Electromagnetic bells were prominently used as a signaling device in telephones, and the fact that telephone wires could drive bells was a convenient one. It was common for bells to be integrated into telephone and especially interphone (internal-only telephone) systems. A store, for example, might have an interphone system that allowed the sales manager of each department to call to the telephone operator---but also bells, running from the same wiring, that sounded throughout the store to signal a manager that they were wanted on the phone. In apartment buildings, interphones were installed to connect the front door to various apartments---this is a surprisingly old technology, one that emerged naturally considering that the same wiring installed for door bells or buzzers could also support a telephone connection. Western Electric catalogs for interphone systems almost always discussed both the ways that existing bell wiring could be reused for interphones and how interphone systems could be extended with bells.

To understand these systems in the retail environment, it's useful to talk a bit about department stores. The department stores of the mid-century had more to them than what we're used to today. It's not necessarily that they were larger---the floor space of a WalMart Super Center is formidable. But in the 1930s and 1940s, undistinguished tilt-up warehouses leased from investment banks had not yet been perfected. Department stores were more substantial edifices, usually with multiple stories and a great, if not grand, staircase. Ceilings were relatively low, and interior walls far more common. These large stores were challenging environments for communication and logistics. They bred many building-scale logistical technologies like cash railways, cash balls, mail chutes, and pneumatic tubes. Most of these have not survived to the modern era, and the fate of the pneumatic tube is looking increasingly grim as robotics companies elbow into the space with solutions that are more expensive and less reliable, but require less architectural planning.

Bell systems became very popular in department stores, especially in the coded form. In a coded bell system, sequences of chimes indicate different messages. By convention, these are often written as two numbers separated by a hyphen. 2-3, for example, would indicate two chimes, a pause, and three more chimes. Other coding schemes existed as well but the n-n seems to have been ubiquitous in department stores. Unfortunately, despite being quite sure I remember finding a catalog from one of the manufacturers of these systems, I cannot figure out what company it was now. I was able to find various discussions of these systems online that confirm my recollection of how they worked; maybe I had read these originally and imagined the photos.

In any case, a typical system of the early 20th century looked something like this: at the telephone operator's position, there was a panel with a grid of holes. When a call was received with a question for a specific department, the operator would insert a peg or plug into the hole matching the code number for the sales manager in that department. The correct bell sequence would be automatically rung, repeating every so often, until the plug was removed.
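As a toy illustration of what that panel automated, here's roughly what ringing a "2-3" code looks like. None of this comes from a real system; the timings, repeat count, and function names are made up for the sake of the sketch:

    # Toy illustration of an n-n coded bell call: ring the code, pause,
    # and repeat, as the operator's panel would until the plug was removed.
    import time

    def strike_bell():
        print("ding")            # stand-in for energizing the bell solenoid
        time.sleep(0.5)

    def ring_code(code=(2, 3), repeats=3, group_pause=1.5, cycle_pause=10.0):
        for _ in range(repeats):             # repeats until "the plug is pulled"
            for i, strikes in enumerate(code):
                for _ in range(strikes):
                    strike_bell()
                if i < len(code) - 1:
                    time.sleep(group_pause)  # the pause that separates "2" from "3"
            time.sleep(cycle_pause)

    ring_code((2, 3))  # "2-3": two chimes, a pause, three more chimes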

Originally, these systems used physical bells placed throughout the store. As telephone technology advanced, though, it offered new ways of distributing sound. Likely the first public address system was announced in 1910 by Automatic Electric. The "Automatic Annunciator," as it was called, had to function on the most basic amplification technology and so must not have been very loud, but this didn't prevent a few large-scale installations in outdoor venues. In the two decades after, though, amplifiers improved rapidly.

Before long multiple voice address systems made it to market. The large-scale public address system perhaps reached its apex when Altec Lansing, a Western Electric spinoff best known for its theater speakers, responded to Cold War civil defense demands with Giant Voice. In the early 1960s, Giant Voice installations provided voice address across entire towns. They were so common in military applications that the term "Giant Voice" is still used on some military bases, decades after Federal Signal and Whelen assumed that market.

Voice public address systems seem to have become common in department stores by the 1940s, as companies like Magnavox and Altec Lansing had made speakers and powerful amplifiers relatively inexpensive. Voice address had the upside of flexibility, but intelligibility still wasn't that great, and overhead announcements seemed gauche in the polished environment of the department store. Coded bell systems made a transition: instead of bells distributed throughout the building, many systems used a single chime equipped with a magnetic pickup. The sound of the chime, quite clear compared to a voice over a microphone, was then amplified by the public address system.

This same concept, of a "chime" signal source directed into an audio address system, is still common today. Many school bell systems, for example, are now really an electronic tone generator connected to a voice address system. Of course the use of a tubular bell or vibratory chime struck by solenoid as a signal generator would be hard to find today.

Coded bell systems would come closer to telephone systems with time. From the 1930s, for example, Automatic Electric offered a coded bell system as an optional feature on their popular PABXs. Telephone calls could be dialed not just to phones but also to coded bell numbers, which would be rung out on a distributed bell system. Into the 1960s this feature was included on increasingly capable telephone systems.

While there was an obvious relationship, telephone and public address systems stayed mostly separate. Automatic Electric's involvement in the industry, for example, didn't last long. By the 1950s, public address was associated with companies like Bogen and Crown, not Western Electric and Automatic Electric (nor Northern Electric or any of the other Electrics, for that matter). The high power levels required for PA in large buildings required a different kind of electronics expertise.

Large-area audio systems are usually "constant-voltage" systems, with 70v being most common. Unlike a typical audio amplifier that produces only the voltage required to drive a four or eight ohm speaker, constant-voltage systems output audio at a high voltage which is decreased to a more appropriate coil driving range by a transformer in each speaker. There are a few advantages to constant-voltage audio: first, the higher voltage results in a lower current, and thus smaller wire gauge, for equivalent power. Second, most constant-voltage speakers have multiple transformer taps that allow individual speakers to be adjusted to be louder or quieter. Finally, the transformers in a constant-voltage audio system provide impedance matching, so an arbitrary number of speakers can be connected to the amplifier in parallel without the need for accommodations like a Strauss transformer.
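A quick back-of-the-envelope comparison shows why the lower current matters. This is idealized (it ignores line loss, transformer losses, and crest factor), but the numbers get the point across:

    # Back-of-the-envelope: current needed to deliver the same audio power to a
    # speaker driven directly at 8 ohms versus over a 70V constant-voltage line.
    import math

    power_w = 32.0   # arbitrary example: 32 W to one speaker

    # Direct low-impedance drive: P = V^2/R, so V = sqrt(P*R) and I = V/R
    v_low = math.sqrt(power_w * 8.0)     # 16 V
    i_low = v_low / 8.0                  # 2 A

    # 70V line: the speaker transformer tap draws P at 70 V, so I = P/V
    i_cv = power_w / 70.0                # ~0.46 A

    print(f"8-ohm direct: {v_low:.1f} V at {i_low:.2f} A")
    print(f"70V line:     70.0 V at {i_cv:.2f} A")
    # Roughly 4x less current means far less resistive loss over a long run,
    # so a much smaller wire gauge delivers the same power.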

A more traditional approach is to have one or a few amplifiers in the range of a few hundred watts that drive all of the speakers in a business. In more modern installations you will find "distributed" amplifiers, which might involve signal-level audio lines to transformers in different closets but today more often distribute the audio over IP. They have the advantage of more flexibility and less power-level wiring required. Companies like QSC make sophisticated systems where multiple channels of audio and control information are distributed by IP, allowing individual zones of speakers to be switched between different audio streams under network control.

There are a few ways of connecting the input to a PA system. Most PA amplifiers offer some variation on a two-input scheme. One input, sometimes called "program," is connected to a background music source like an audio player or the streaming appliance provided by a background music service like Mood Media. The other input, often called "page," has a special property: when an input is detected on the page input, the program input will either be muted or ducked (attenuated to be quieter). The page input thus overrides the program input, perfect for making occasional announcements over the overhead speakers.
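The priority logic is simple enough to sketch. Real amplifiers do this in analog circuitry or DSP with proper attack and release smoothing; the threshold and gain values below are arbitrary placeholders:

    # Minimal sketch of page-over-program priority ("ducking"), sample by
    # sample. A real amplifier uses an envelope detector with attack/release
    # rather than reacting to individual samples.
    def mix(program_sample, page_sample, page_threshold=0.02, duck_gain=0.1):
        # Signal present on the page input? Duck (attenuate) the program,
        # or set duck_gain to 0.0 to mute it entirely while paging.
        if abs(page_sample) > page_threshold:
            return page_sample + duck_gain * program_sample
        return program_sample

    # music playing, nobody paging:
    print(mix(program_sample=0.5, page_sample=0.0))   # -> 0.5
    # someone keys the page input:
    print(mix(program_sample=0.5, page_sample=0.4))   # -> 0.45 (music ducked)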

But where does the page input come from? It could be a microphone, and you do see this in some smaller businesses. But in most larger environments, the phone system is an obvious way to provide paging. Many business PABXs have a line-level paging output as a standard or optional feature. When a dedicated paging feature isn't available, there are specialized devices that act as an extension phone that immediately answers and provides a line-level output. The line-level output from the phone system goes to the page input of the PA amplifier, and you now have an overhead paging feature.

Increasingly common today are all-IP paging systems. Some of these use more audio-centric approaches based around UDP streams or RTP, but the telephone industry prefers a SIP-based solution in which paging speakers are essentially just weird-shaped phones. These have the advantage that the paging speakers can be PoE devices installed by the networking contractor, without the need to bring in an audio systems integrator. Despite these ostensibly being IP phones and pretty much always having the ability to act as a SIP endpoint, they usually also accept multicast RTP streams. For that matter, most IP desk phones can also accept a multicast RTP stream for paging purposes. Finally, telephone and distributed audio systems are converging.
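To make that concrete, here's a minimal sketch of pushing a page as multicast RTP: G.711 u-law audio (RTP payload type 0) sent to a multicast group that the speakers or phones are configured to monitor. The group address, port, and the "silence" payload are made up; a real deployment uses whatever the endpoints are actually provisioned with.

    # Minimal sketch of multicast RTP paging. Assumed group/port; payload is
    # placeholder u-law silence rather than real audio.
    import socket, struct, time

    MCAST_GROUP, MCAST_PORT = "239.1.1.10", 5004   # assumed paging group
    SAMPLES_PER_PACKET = 160                        # 20 ms of 8 kHz audio

    def rtp_packet(seq, timestamp, ssrc, payload):
        # RTP fixed header: V=2, no padding/extension/CSRC, marker=0, PT=0 (PCMU)
        header = struct.pack("!BBHII", 0x80, 0x00, seq & 0xFFFF,
                             timestamp & 0xFFFFFFFF, ssrc)
        return header + payload

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

    ssrc, seq, ts = 0x1234ABCD, 0, 0
    silence = b"\xff" * SAMPLES_PER_PACKET          # u-law near-silence
    for _ in range(50):                             # about one second of paging
        sock.sendto(rtp_packet(seq, ts, ssrc, silence), (MCAST_GROUP, MCAST_PORT))
        seq += 1
        ts += SAMPLES_PER_PACKET
        time.sleep(0.02)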

This whole thing has really been burying the lede, though. If the overhead paging is an output from the phone system, how do you "call" it? What most people want to know are the paging codes at the WalMart.

Well, it's hard to say exactly, because it depends on the specific phone system and the way it's configured. In some cases the paging output of the phone system is just a special extension number. Even when it's not, there can often be an extension assigned as a convenience.

Nearly all business phones will have some number of buttons, variously called line keys, BLFs, multifunction keys, etc. depending on the manufacturer and system. On modern phone systems the idea is pretty much the same: they can either be assigned to a line appearance (a "line" on the phone with which you can place a call) or to a feature, like a speed dial, voice mail, or paging. In a lot of offices where paging is used, one of the line keys will be assigned for paging, and all you need to do is pick up the phone and press that button.

In stores, though, that's usually a little too easy. Stores often have phones out on the floor where they're not very well secured, and people are tempted to experiment with them. This leads to a certain degree of obfuscation.

Sometimes paging will be a simple extension. Internet commentary and experience suggest that "4444" is common in some chain stores. Another approach is to assign paging to a feature code, a somewhat vaguely defined concept that generally means a number starting with * or #, like #968. These are just a matter of configuration, although some PABXs might limit your choices to one or the other. There are undoubtedly some chain stores that exercise complete central control of telephony (perhaps Target?), but this doesn't seem to be all that common. The way you page varies from store to store, depending on the phone system installed, corporate practice at the time, and the whims of the integrator that installed it.

Overhead paging is sort of dying out in retail, as a lot of stores have gone to radio systems. Fred Meyer, of my native Portland, seems to have been an early adopter of fully-addressable radio systems. By the late 2000s, they were issuing DECT handsets to nearly all employees and making full use of DECT's surprisingly good support for large systems. More recently, Walgreens is adopting an IP-based wireless communications platform called Theatro (basically a competitor to Vocera, focusing on retail rather than healthcare). Oddly, some of the largest retailers just stick to handheld radios on MURS channels.

There's still a bit of an overlap to our main topic here, though. It's surprisingly common for older retail overhead speaker systems to have a radio receiver of some sort on a page input to support "push for help" buttons. Many of the older products of this type are really just MURS radios that transmit a prerecorded message whenever the button is pressed. Of course, this means you don't even need to find a phone and figure out the extension to make your own announcement... you just need to figure out the MURS channel and squelch code.

[1] Incidentally, similar work on vacuum tubes was being performed by Irving Langmuir, who was ahead of de Forest on manufacturing techniques but a bit behind him on the electronic principles. Langmuir's research into electrical physics formed the basis of later work on lightning by E. J. Workman, the most prominent historical president of my alma mater New Mexico Tech. Workman founded a laboratory to research atmospheric phenomena, operated by Tech and adjacent to the observatory that briefly employed me. It is the closest thing I have ever seen in real life to a mad scientist's mountaintop lair, being equipped to literally draw down lightning. It is named the Langmuir Laboratory in Langmuir's honor. This satisfies my target of relating everything I write to some piece of New Mexico trivia.

--------------------------------------------------------------------------------