_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford
COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.

I have an MS in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.

You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.


>>> 2023-09-10 the Essex GWEN site

Programming note: this post is in color. I will not debase myself to the level of sending HTML email, so if you receive Computers Are Bad by email and want the benefit of the pictures, consider reading this online instead (the link is at the top of the email).

In the aftermath of a nuclear attack, United States military and government policy focuses on one key goal: retaliation. Nuclear policy has long been based on the concept of a credible deterrent, often referred to as mutually assured destruction. It is surprising to some that the technical history of the Cold War is so deeply intertwined with the history of telecommunications technology, but it's obvious in this context: a fundamental part of the nuclear deterrent is a robust, nationwide communications system. For destruction to be mutually assured, we must have confidence that the national command authority will be able to order a nuclear strike under all post-attack conditions.

The post-attack environment is difficult for communications infrastructure. Well, it's difficult for most everything, but communications systems were the topic of extensive research and engineering. During the height of the Cold War, most AT&T long distance facilities were hardened against the blast forces expected from a nuclear attack on a nearby city. Some AT&T facilities, particularly those critical to the operation of AUTOVON and other military services that AT&T offered, received more extensive protective measures. Gamma burst detectors would automatically close ventilation dampers in reaction to a nuclear detonation. Redundant, independent utilities and stockpiled supplies would support an underground crew for weeks. The telephone system was more robustly defended than you might think.

The telephone system faces specific challenges in nuclear survival, though, that by the 1980s raised questions about the value of all these efforts. We have previously discussed the rise of the EMP. In that context, I mentioned that the impacts of EMP on military and communications equipment were, in general, not very seriously considered until well after the nuclear attacks on Japan. Only in the late '60s did military research into EMP survivability begin on a large scale, much of it under the auspices of the Air Force Weapons Laboratory and its Nuclear Effects Directorate at Kirtland Air Force Base here in Albuquerque [1]. In a YouTube video, I lecture on the history of this work. Among the civilian equipment subjected to the various EMP simulators at Kirtland was telephone equipment furnished by Mountain States Telephone and Telegraph. Telephone systems, including rather substantial lengths of open wire, were also exposed to atmospheric nuclear detonations at the Nevada National Security Site.

Foreshadowing later research into EMP, these tests usually showed surprisingly little effect on the equipment under test. Nonetheless, telephone systems were probably the most vulnerable infrastructure: the many miles of open wire and cables were particularly likely to accumulate large potentials in a nuclear-induced EM field, and the amplifiers and filters used on long distance lines could be damaged by these surges. AT&T shielded equipment against RF and fitted extensive surge protection (important as a precaution against lightning anyway), but it was unclear if even the most hardened telephone systems would be fully functional after a nuclear attack.

Radio is an obvious alternative that comes without the hazards of long wires, but it too struggles with post-attack conditions. Multiple nuclear detonations would cause significant medium-term disturbance of the ionosphere and kick up huge amounts of fallout. These effects create a combination of poor atmospheric propagation and high attenuation that would make HF radio much less effective post-attack. The military and even executive branch agencies had reserve HF communications systems (and continue to today), but once again, it was unclear how effective these would be in the runup to a nuclear reprisal.

During the late '70s and early '80s, a series of trials was performed at Kirtland AFB to investigate the performance of different types of radio systems under high-altitude EMP. Low-frequency systems, like broadcast AM radio, tend to propagate by "groundwave." These long waves actually diffract as they pass over the ground and obstacles, and this way they can "hug the earth" well past the horizon. Researchers at Kirtland found that groundwave propagation was minimally affected by post-attack conditions, and so a low-frequency system could reliably offer long-range communication post-attack.

Unfortunately, groundwave propagation cannot typically span the nation. The reliable range of the proposed 150-175 kHz system (below the commercial AM band) was only around 200 miles. By 1980, computer technology (particularly as demonstrated by Western Union) offered a solution: routing. If radio "relays" were built about every 200 miles, they would be able to repeat messages from coast to coast.

The antennas required for LF operation were large, and nuclear reprisal commands could often come from mobile equipment. As a solution, each radio relay could be equipped with a VHF radio. A message originator, such as a Looking Glass aircraft or even the presidential motorcade, would transmit a message on VHF to be received by the nearest relay and then distributed throughout the system.

The military seldom met a survivable communications system it didn't love, and in 1981 GTE (General Telephone and Electronics) was contracted to construct the Ground-Wave Emergency Network: GWEN.

Illustration of a GWEN site

In theory, GWEN would consist of 240 sites across the country. Each would be equipped with a 300' LF antenna mast (and corresponding buried wire ground plane), a VHF/UHF receiver, and equipment enclosures for radio and computer equipment to receive, process, and transmit messages. GWEN could handle both voice and data traffic, with the data capability used mostly for text messages---already common in military communications thanks to AUTODIN, the military's large routed telex system. GWEN used packet-switching techniques pioneered in AUTODIN for reliability and redundancy, re-transmitting messages by another route in the event of a failure.

Due to their number and often remote locations (a good thing for survival), GWEN sites were unattended and surprisingly compact. Despite being equipped with an LF transmitter as powerful as 3kW and a diesel standby generator, GWEN sites needed only a few small equipment enclosures besides the large main mast. You would have driven by one without paying it much attention.

GWEN lived a short and ignominious life. The GWEN program was shut down by Congress in 1994 with only 58 of the planned sites built. There were several reasons for the cancellation; likely the biggest was simple doubt about the system's necessity. More advanced research into EMP conducted in the '80s and '90s tended to show that the effects of nuclear EMP were smaller than anticipated, and an attack wouldn't be likely to cause telecom damage as widespread as GWEN planners had feared. Sen. Harry Reid, to whom I so often return, called GWEN a "wasteful dinosaur" in comments to the press.

There were stranger reasons as well. GWEN was among the first battles in a quiet war that continues today: that over the safety of radio frequency radiation. Activists, some motivated by broad opposition to the Cold War but most by the concern that LF radiation would injure them, organized against construction of further GWEN sites by the turn of the '90s. In Kanab, UT, residents launched a significant media campaign against the construction of a GWEN site, leading to a city council resolution opposing the project. Many GWEN sites under construction were protested, with civil disobedience leading to arrests in a few cases.

Health concerns around GWEN presented an important challenge for the then relatively new National Environmental Policy Act, or NEPA. Activists opposed to GWEN, organized through newsletters like the "No-GWEN News," used the NEPA process to delay GWEN work until additional environmental analysis could be completed. This effort culminated in 1990, when Congress suspended the entire GWEN program until the Environmental Impact Statement was revised to better address health concerns. Construction resumed when a new EIS showed no adverse impact on public health. A 1993 independent report by the National Research Council again confirmed the finding. But the 1990 suspension seems to have been the beginning of the end for the program, solidifying opposition both among activists and congresspeople.

Public awareness of health concerns around GWEN and the related ELF program is perhaps best memorialized by S06E02 of The X-Files, titled "Drive." This memorable episode dramatized fears that low-frequency radio waves were a force not fully understood while bringing writer/producer Vince Gilligan together with actor Bryan Cranston, which would bring the story around to Albuquerque once again when Gilligan called on this experience to cast Cranston in Breaking Bad.

The more conspiratorial vein of GWEN opposition persists to this day. Contemporary online research into GWEN inevitably takes one to the edges of reason, with blog posts and Reddit threads drawing a direct through-line from mind control by GWEN to mind control by 5G. Some believe that the GWEN program was never really canceled, continuing today out of some black budget. Another blogger contends that the true purpose of GWEN was to control the weather. In either case, the harmful effects of GWEN might be counteracted by the careful placement of crystals.

In our present era I have come across an individual who placed a series of crystals throughout the state of New Mexico in order to create a protective bubble of energy that would counteract 5G. I wonder if they knew about GWEN? Oddly enough, opposition to GWEN locally seems to have been minimal, perhaps a result of the number of Albuquerque residents employed by the same installation that created the system. Southeastern Utah was a locus of activism, though. This was an enduring pattern: when the Air Force and Army fired hundreds of missiles from Green River, Utah into New Mexico's White Sands Missile Range, the bulk of the opposition came from the vicinity of the launch pad, not the target.

GWEN did not have the glory of becoming some clandestine program. Instead, it was quietly shut down, with most sites left as they were. Fortunately, many GWEN relays would find a second life. When the Department of Transportation proposed to expand the Coast Guard's NDGPS system inland in 1999, the forgotten GWEN sites offered conveniently ready-made towers and enclosures. Half of the constructed GWEN sites were converted to NDGPS use in the early '00s, including the original at Kirtland AFB. GWEN had significant enough coverage of the nation that only a handful of inland NDGPS sites required the construction of a new radio mast. If nothing else can be said for GWEN, its skeletal remains produced a great savings to the taxpayer.

NDGPS would not last long either. Increasingly obsolete in comparison to the FAA's WAAS, NDGPS was slated for retirement in 2015, and by 2020 all NDGPS sites had been removed.

And that brings us to Bakersfield, California.

An expanse of empty desert

Well, not really, or at least not quite. The NDGPS site called Bakersfield was variously called either Fenner or Essex by the GWEN system, and it's located about halfway between those two small settlements in the desert of southern California. Near the intersection of CA 66 with National Trails Highway (as these names suggest, this segment is a historic alignment of Route 66), the Essex site has very little by way of neighbors. Today, it has very little of anything else, either.

Aerial view of the Essex site

I visited the Essex GWEN site recently, having heard that it had been demolished but hoping for some interesting remains. For whatever reason, probably its (relative) proximity to Los Angeles, Essex is one of the better-known GWEN sites. Wikipedia, for example, illustrates the articles on both GWEN and NDGPS with photos of the NDGPS installation at Essex. Treasure those photos, because what they depict is long gone. The demolition contractor did an admirable job on the cleanup, and you could easily pass by without knowing there was ever anything near the size of a 300' radio mast in this clear spot in the desert.

Mysterious metal rod in a hole in the ground

Several depressions in the ground might reflect the locations of guy anchors for the tower; in some of them, what I believe to be remnants of the buried ground screen are visible.

A... thing, of unknown sort.

The most substantial artifact on the site is hard to identify but looks to my eyes like it could have been part of the hardware of an antenna. It is made of aluminum, has regular holes drilled in it, and gives the impression that it used to project from a plastic housing.

Coaxial cable and tire

The surest indication this is a former radio site comes from a pile of coaxial cable, occasionally tangled in the desert scrub. This is rather light cable and was probably connected to one of the receive antennas. There is fifty to a hundred feet of it scattered about. It may have been pulled out of a buried conduit connecting shelters.

A utility pole

The site was removed only recently, and the electric utility apparently kept the service in good shape up to that point. At the edge of the site, a utility pole provided three-phase service that once went into an underground conduit.

Old buildings, old junk, etc.

Despite my comments on its isolation, this GWEN site does have a neighbor: it appears to be an old service station, but its history is clearly linked to the airfield found just a short distance to the north.

A sheet metal shed

It seems to have been moved, but a sheet-metal shed adjacent to the old service station is recognizably the generator shed from a Civil Aeronautics Administration route beacon. The orange-painted roof, for visibility to aircraft, is just holding onto some paint. Originally this would have had a tower adjacent, and possibly would even have sat on an arrow-shaped orange concrete foundation.

Aerial view of the airfield

The Mojave Desert was extensively used for military training during World War II, and the remains of temporary military installations abound. The CAA-type beacon hut leads me to assume that there was an airfield at this location built by the Civil Aeronautics Administration, but it was clearly redeveloped by the Army as the Essex Army Airfield. This airfield supported the Essex Divisional Camp of the Desert Training Center, through which millions of men passed on their way to the Second World War. It's only one mile north of the Essex GWEN site, the kind of coincidence that is common in the Mojave Desert: it is barren of life but surprisingly dense with history.

Remains of a taxiway

The airfield is surprisingly large, reportedly built to accommodate A-20s. The actual runway, a steel mat down the center, is long gone. The dirt-concrete blend taxiways on the side have fared better, and I think a light aircraft with backcountry gear and an adventurous pilot could still set down on the eastern taxiway. I don't recommend the attempt.

Overall the site is very overgrown. The access road is clear in satellite imagery but, as is often the case, far less clear from the ground. We had trouble finding it but after a few passes navigated around scrub, through washouts, and over a berm onto a taxiway. Only as we left did we realize where the actual access road met the airfield; we had just driven up a coincidentally road-like wash.

Old tiedown and metal barrel

Twelve tie-downs surround the airfield, arranged in a curious asterisk pattern at each end. They're probably the best-preserved parts of the airfield. I looked over these peripheral areas carefully, hoping to find artifacts of the apparently fairly busy military operation here, and ideally the original foundation of the CAA beacon. Instead I found a 55-gallon drum, a plastic jerry can of recent origin, and a large desert tortoise enjoying the sun. The latter is a rare find, but not the one I expected.

Remains of the runway

The runway, never paved, is filling in with bushes. The middle has started to wash out. The airfield will be visible from above for a century, but in a matter of a decade it might be hard to make out on the ground.

The Defense Baseline Route, a long-distance telephone line quickly strung along the length of the West Coast as a wartime exigency, passes only a few miles west. This is one of the segments still standing, although most of the DBR has been removed. Just to the south, the Santa Fe Railroad's transcontinental line makes for Los Angeles. Essex exists because of the railroad; its water tank apparently supplied not only passing steam locomotives but also the army camp.

There's a lot out here, but you have to look carefully for it.

[1] This organization went through a succession of reorganizations and names over the years, including the Phillips Laboratory. Today, it's mostly the Directed Energy Directorate of the larger Air Force Research Laboratory.


>>> 2023-09-03 plastic money

You will sometimes hear someone say, in a loose conceptual sense, that credit cards have money in them. Of course we know that that isn't the case; our modern plastic card payment network relies on online transactions where the balance tracking and authorization decisions happen within a financial institution that actually has the money (whether it's your money or credit). There is an alternate approach though, one which has historically been associated with terms like "epurse" in the technology industry: what if the balance tracking and authorization decisions were actually made inside of the card?

Ten years ago this proposal might have seemed more absurd in the United States (amusing since, of course, the technology to facilitate this is much older). Payments in the US were made using magnetic cards with two tracks totaling hundreds of bits. Debit cards could contain a challenge value used to verify the PIN, but even then the decision of whether or not to accept a PIN offline was made entirely by the payment terminal. Fraud related to these cards became a problem quickly enough that offline processing of card transactions (for example by the use of a "kerchunker" impression machine) became very rare, and all transactions were conducted online. Even then, cards were vulnerable to duplication, and this was a fairly common form of fraud.

Europeans, though, had been using smart card technology dating back to as early as the '80s in France. These cards had an onboard microcontroller that could make decisions and even run applications. Inside the card was nonvolatile storage that could retain a cryptographic key, allowing the card to participate in a cryptographic challenge-response process that made duplication very difficult. Even better, PIN verification was performed inside the card, meaning that even a malicious terminal could not accept an invalid PIN during an offline transaction.

And today, that's the most widespread application of smart card technology: cryptographic challenge-response authentication. The technique is ubiquitous both in payment systems and in access control and ID verification, spanning a wide gamut of capabilities from DESFire keycards to the United States Government's behemoth of an identity credential standard, PIV.

It's sort of interesting that these less ambitious applications of smart cards are about as far as they've gotten in the United States. Their capabilities are much greater than modern applications suggest. Smart cards were, from the very beginning, conceived as much more powerful multi-application devices, capable of enough internal accounting logic to implement true stored value cards, or SVCs: cards that "contained" money in a balance that could be debited and credited fully offline, just by a terminal communicating with the card.

First, a bit of history of the smart card. One of the reasons that smart cards have made relatively few inroads in the US is their European origin. Nearly all of the development of smart card technology happened at European companies like Gemplus (France) and Axalto (Netherlands), today merged into Gemalto, part of French defense conglomerate Thales. Not to be overlooked either is the German company Giesecke+Devrient. Many early developments happened within the French Bull group as well, which through its mergers with Honeywell continues to make related products. Identity technology vendor Morpho, later Safran MorphoTrust, today Idemia, forms the backbone of the TSA and Border Patrol's ubiquitous travel surveillance from its headquarters in the suburbs of Paris. It is further accused of providing identification technology to Chinese government agencies for purposes of oppression. Identity is a sticky business.

These companies have a long-running relationship with secure identity. G+D has long been a major international center of expertise in currency manufacturing and security; the US Federal Reserve System, for example, relies on G+D equipment to detect counterfeits. Gemalto became one of the primary vendors in secure digital identity technology, and Thales carries this on today, providing major components of the US federal government's USAccess/HSPD-12 scheme.

It all began with payphones. Well, that's not true, there were plenty of developments in smart card technology before they were applied to payphones, but payphones introduced the technology to the French masses in 1986. France also pioneered chip-based online transactions, with a nationwide ATM network based on smart cards in 1988 and ubiquitous issuance of a precursor to EMV in 1993. We have to be careful to differentiate online and offline systems, though. One of the confusing issues around SVCs is their functional similarity to online transaction cards using an integrated chip for authentication purposes. To understand more clearly, let's take a closer look at one of the most common SVC applications in the US: the laundry card.

Laundromats are conceptually simple; each machine needs a coin acceptor (often limited to quarters only) and a coin vault. In practice, it's a little more complex. Most customers don't walk in with enough quarters any more, so the laundromat has to provide a change machine. Change machines, being stocked with hundreds of dollars in coins and bills, are an attractive target for theft. Besides, they aren't that reliable. Emptying the coin vaults on each machine daily is a time sink for staff, especially when the risk of theft requires multiple staff members as a precaution. Wouldn't it be easier if laundromat payments were electronic?

Today there are a number of ways to achieve that end, most of them worse than the old system of rolls of quarters, involving some combination of QR codes, smartphones, Bluetooth, and "The Cloud." These approaches were a nonstarter in the '00s, though. Wireless networking was in its infancy, and the cost of putting network connectivity in every machine was very high. A solution was needed that allowed the billing devices in machines to be offline, operating totally independently. A fleet of offline payment terminals is exactly the case that calls for SVCs.

So in many laundromats even today, there is a device on the wall somewhere called a value transfer machine, or VTM. Actually the term VTM is a trademark of one of the major vendors of these systems, ESD, but it's such a good generic term that I will disregard their claim and use it across vendors. At the VTM, a customer either inserts their smartcard or presses a button indicating that they want to purchase one. The VTM accepts a payment by either cash or payment card, and then "transfers" that "value" to the inserted smart card---or a new one dispensed from an internal stack. Pricing details vary, but smart cards aren't as cheap as anyone would like and so it's common to charge a few dollars for a new card. Customers are encouraged to keep their card for the long term.

What happens internally? A very simple implementation suffices to explain the concept. On the smart card, there is a value in nonvolatile memory that represents the amount of money on the card. When you add money, the VTM increments that value. When you insert the smart card into a laundry machine and start a cycle, the billing device in the machine (usually a drop-in replacement for a coin acceptor with the same electrical interface) decrements that value. And there you have it: the card is just like cash, representing value on its own, with no online operations required.
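To make that concrete, here's a minimal sketch of the naive scheme in Python. The names (CardMemory, vtm_add_value, and so on) are invented for illustration; a real terminal would speak APDUs to the card rather than calling Python methods.

    # A toy model of the naive (unauthenticated) stored value scheme.
    class CardMemory:
        """Stand-in for the card's nonvolatile storage: one balance field."""
        def __init__(self, balance_cents=0):
            self.balance_cents = balance_cents

    def vtm_add_value(card, amount_cents):
        """Value transfer machine: increment the stored balance."""
        card.balance_cents += amount_cents

    def machine_start_cycle(card, price_cents):
        """Billing device in the washer: decrement if funds suffice."""
        if card.balance_cents < price_cents:
            return False  # not enough value on the card
        card.balance_cents -= price_cents
        return True

    card = CardMemory()
    vtm_add_value(card, 1000)       # customer loads $10.00 at the VTM
    machine_start_cycle(card, 250)  # a $2.50 wash, fully offline
    print(card.balance_cents)       # 750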

Of course you can see the problems with this scheme: couldn't anyone just write a bigger number to the card? The earliest implementations tried to prevent this with simple password schemes or very elementary cryptography, and results were poor. The French payphone system of the late '80s, for example, was known to be vulnerable to duplication of cards and so naturally a black market emerged.

Early SVCs, mostly those of the '80s and '90s, were vulnerable to at least duplication if not outright forgery, and this history gave them a poor reputation for security that persists to this day. It doesn't need to be that way, though, and excepting some obsolete systems still in use, it isn't. If we can make the blockchain work we can certainly make SVCs work (admittedly this somewhat self-defeating argument presages the failure of SVCs to catch on for general purpose use). The problem with early SVC systems was the limited computational capability of the smart card, no match for the high complexity of strong cryptographic algorithms. Smart card technology advanced, though.

The term "smart card" is not very precisely defined but tends to refer to any card with an Integrated Circuit Chip (ICC) compliant with one of several specifications for physical and electrical interface, mostly ISO 7816 for contact operation and ISO 14443 for contactless operation. It's important to understand that while the term "smart card" is most often used to refer to contact operation, that's not a limitation of the technology. Historically some cards implemented contact and non-contact operation by having two separate chips, but that method is well obsolete. Modern smart cards, especially payment cards, are usually dual-interface cards where the same ICC is capable of communicating the same logical protocol over either the contact interface ("insert") or the noncontact interface ("tap"). Since the noncontact interface is compatible with NFC, smartphones are able to use their secure element to run an application similar to the one that runs on EMV cards.

If these cards are so smart, what do they actually do? Well, that part has varied a great deal over time. The earliest smartcards, developed in the '70s, were essentially memory and nothing else. Later on, though, smart card software evolved to multi-application cards in which a smart card operating system provides services and manages the selection of applications.

Perhaps the most famous smart card operating system is Java Card, a platform that allows smartcards to run constrained Java applets. Java Card was pioneered by French conglomerate Schlumberger, whose card division later spun out as Axalto and merged with Gemplus to form Gemalto (now part of Thales). Besides supporting very constrained devices, Java Card was designed for the high-security applications typical of smart cards. It provides full-featured cryptography up to ECC on modern devices, but more importantly enforces security isolation of applets and their communications and memory.

Java Card is particularly widely known because of its role in the "Java Ring," a chunky fashion accessory that presents a Java Card environment in the 1-Wire-based "iButton" form factor. iButtons are a topic for their own post one day, being surprisingly widely used in a couple of niches where their improved durability over ICC-type smart cards is an advantage.

Java Card is also widely used, being one of the most common operating environments on practical smart cards. There is a good chance that you have more than one Java Card environment on your person at this moment. Discussing the full scope of Java Card applications requires a bit more rambling on the smart card as a physical object, though.

If you are a dweeb about identity documents, you have probably read into ISO 7810. This standard describes a set of physical form factors for identification documents. Most notable is the ID-1 form factor, which is widely used for payment cards, driver's licenses, and in general any standard-sized wallet card. Size ID-3 from the same standard is the norm for passports. But then there's an apparent oddity, size ID-000, a small 25x15mm card with a notch out of one corner. Sound familiar? ISO 7810 ID-000 is the physical description of a conventional SIM card.

SIM cards are just smart cards. Big reveal, I know! GSM was standardized by an organization out of Paris in the same time period that France Telecom adopted smart cards for payphone payment. When looking for a transportable means of authenticating the phone owner, it was an obvious choice. SIM cards no longer conform to ISO 7810 in most cases (having migrated to the smaller micro and nano formats), but continue to be compliant with ISO 7816 for electrical and protocol compatibility. It is no coincidence that SIM cards are often shipped in an ISO 7810 ID-1 compliant carrier, since these make personalizing and testing in the factory easy to do with standard smart card interfaces.

ISO 7816, the standard for smart cards specifically, describes the physical position and layout of the contacts on the ICC. It also describes an electrical interface [1] and logical protocol for communication with smart cards. Smart card communication is based on APDUs, or application protocol data units, packets exchanged between the reader and card. APDUs can indicate a standardized cross-vendor operation code, or a proprietary operation specific to some application on the card. This is a little network protocol used within the confines of the card slot, and smart card applications specify which APDU commands must be supported by cards.
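As an illustration, here is the shape of a command APDU, using the standard SELECT-by-name command defined in ISO 7816-4. The AID shown is EMV's well-known payment system directory name; actually sending the APDU would require a reader library (pyscard, for example), which I omit here.

    # Building an ISO 7816-4 command APDU: CLA INS P1 P2 Lc <data> Le.
    def build_select_apdu(aid):
        return bytes([
            0x00,       # CLA: interindustry class
            0xA4,       # INS: SELECT
            0x04,       # P1: select by DF name (application identifier)
            0x00,       # P2: first or only occurrence
            len(aid),   # Lc: length of the command data field
        ]) + aid + bytes([0x00])  # Le: ask for response data

    # EMV's contact payment directory is named "1PAY.SYS.DDF01".
    print(build_select_apdu(b"1PAY.SYS.DDF01").hex())
    # 00a404000e315041592e5359532e444446303100
    # The card replies with response data plus a 2-byte status word;
    # 0x9000 is the standard "success" code.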

The abstraction of the fairly well-defined APDU protocol creates a healthy degree of separation between smart card uses and implementations. This is all to say that the software running on smart cards often varies by vendor, even within a common application. Java Card is very common, but not universal, for both SIM cards and EMV payment cards. It competes with "native" operating systems like MULTOS. These native operating systems tend to leave more memory and processor time for applications because of the lack of a bytecode interpreter (yes, Java Cards actually run a very constrained JVM), but usually lead to application development in C which is less appealing than "weird constrained Java" to many organizations.

As you might imagine given this range of applications, security expectations for smart cards are high. In fact, the modern concept of a "secure element" largely originates with smart cards, and many secure elements in things not shaped even remotely like cards continue to use the ISO 7816 logical interface and Java Card. The SIM card is really just a portable secure element, capable of running multiple applications with nonvolatile storage, and in some countries (mostly European) they have been used for broader identification and authentication purposes. Smart cards are expected to be resistant to both electronic and physical tampering. Smart cards were historically a common form factor for cryptographic secure elements, being used to protect key material of sensitivity ranging from satellite TV scramble codes to military communications equipment---although for reasons of both durability and not having been invented overseas, the US NSA has historically preferred more homegrown form factors for cryptographic elements.

Putting this all together, you can probably see that it is indeed possible to build a reasonably secure stored value smart card system. All increment and decrement operations can be cryptographically authenticated. Unique secret keys, "burned in" to cards as part of personalization and not readable from outside of the secure element, can be used to authenticate the card and prevent duplication. While it is conceptually possible to duplicate stored value cards through laboratory analysis, the cost is unlikely to be less than the value cap imposed by the SVC service.
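As a sketch of what "cryptographically authenticated" might look like, consider a MAC over the card's state under a per-card key derived from an issuer master key. This is a simplified illustration, not any deployed scheme: in a real card the per-card key and the MAC computation live inside the secure element, and the back office also tracks transaction counters to catch replayed states.

    import hashlib, hmac

    MASTER_KEY = b"issuer master key (demo only)"

    def card_key(card_id):
        # Per-card key diversification from the issuer master key.
        return hmac.new(MASTER_KEY, card_id, hashlib.sha256).digest()

    def seal(card_id, counter, balance):
        # MAC over the card state: a monotonic counter plus the balance.
        msg = card_id + counter.to_bytes(4, "big") + balance.to_bytes(4, "big")
        return hmac.new(card_key(card_id), msg, hashlib.sha256).digest()

    def terminal_debit(card_id, counter, balance, mac, price):
        # Refuse to trust a balance whose MAC doesn't verify.
        if not hmac.compare_digest(mac, seal(card_id, counter, balance)):
            raise ValueError("tampered or forged card state")
        if balance < price:
            raise ValueError("insufficient value")
        # Increment the counter so an old, higher-balance state
        # can't simply be copied back onto the card.
        counter, balance = counter + 1, balance - price
        return counter, balance, seal(card_id, counter, balance)

    state = (1, 1000, seal(b"card-0001", 1, 1000))
    state = terminal_debit(b"card-0001", *state, price=250)
    print(state[1])  # 750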

In the '90s, SVCs started to catch on. A marquee implementation went on display to the world in 1996: at the Summer Olympics, held in Atlanta, three banks partnered with the Olympic committee and businesses to offer an SVC payment system. It was particularly appealing to international visitors: debit and credit cards rarely worked overseas in 1996, and tourists in the US for the duration of the Olympics could hardly be expected to open US bank accounts. SVCs provided a convenient alternative to cash. Visitors could buy them in fixed denominations with cash or traveler's cheques, and value could be reloaded at kiosks around the Olympics sites. The SVC nature of the system allowed the offline payment terminals to be deployed to area businesses relatively cheaply, without a requirement for a phone line like the credit card terminals of the era.

The Olympics SVCs were manufactured by the usual suspects: Gemplus, Schlumberger, and G+D. The cards ran a cryptographic application generally based on the existing French payment card system, a precursor to EMV that was focused on supporting offline use-cases. The Olympics experiment was mostly considered a success, with few technical problems. The banks involved were apparently underwhelmed at the number of cards issued, and it was speculated at the time that they were perhaps more popular with collectors than users. One can imagine that the SVC technology, entirely new to locals and visitors not from Western Europe (and, to be fair, some from Australia), faced some challenges in gaining consumer confidence.

SVCs became a standard feature of the Olympics for a few years, making their last appearance (as far as I can tell) in 2002 at Salt Lake City. This was reportedly a very limited system based on magnetic stripe cards, and so I assume that it was not an SVC system at all but just a gift card system with the heritage of the 1996 and 1998 SVCs. It is likely impossible to design magstripe SVCs that are not vulnerable to trivial duplication; I know of only one method, and it is experimental (characterization of the weak permanent magnetic fields acquired by the magstripe during the manufacturing process, which seem to be unique enough to differentiate individual cards).

SVCs saw other experimental applications at the same time. The University of Michigan deployed a smart card SVC in 1996 as well, allowing students to load funds and spend them on campus and at nearby businesses. This type of program became fairly popular at large universities, but beware a terminological challenge: many universities still refer to their student ID payment card program as a stored value card for historic reasons, but none that I'm aware of today actually are. With universal acceptance of payment cards, it is far more cost effective to make an arrangement with a bank and processor to encode student IDs as Visa or MasterCard cards. They then function as specialty prepaid cards with whitelisted merchants and purchase types, a service readily available from the prepaid card issuance industry.

Another nascent application of SVCs in the US was welfare programs like SNAP and WIC, implemented through a system called Electronic Benefits Transfer or EBT (EBT replaced the physical "stamps" in "food stamps"). Once again, while a few states adopted SVCs and may even still call their EBT cards SVCs, every example that I know of today is processed on the Visa or MasterCard network as a prepaid card.

Why is it that SVCs gained so little traction for payments in the US? A 1999 Spectrum article rounds up the state of SVCs at the time, optimistically opening that the contents of your wallet "might be replaced by just two or three smartcards." One look at the typical wallet will show that this hasn't gone as hoped. The true promise of multi-application cards, that you could have your government ID, payment card, health information, etc. all as applications on a single physical card, is virtually nonexistent in practice. Outside of specialty systems like PIV, the multi-application capability of smart cards is mostly only used to interact with different kinds of payment networks. Perhaps the most common smart card application in the US is called "CHASE VISA" and it is basically the reference EMV application with the name changed [2]. If there's even a single other application on the card, it's probably for interaction with an EFT network.

It's fairly easy to see why this happened: different applications are issued by different organizations. The thought of your driver's license and credit card being one physical object almost certainly induces nightmares of having your credit card number stolen and then having to interact with the DMV (or, as it is pronounced in the New Mexico vernacular, the MVD). The practical logistics of multi-application cards are difficult to manage, and the cards are cheap enough that it's easier for everyone to keep different applications separate.

What of payment cards, though? Smart cards for payments are now the norm even in the backwards United States [3], but stored value systems are harder to find today than they were in 1999. Spectrum elaborates, after discussing the popularity of smart card systems (broadly defined) in Europe:

Why has it taken so much longer for smartcards to take off in the United States? In the first place, some of these cultural and political drivers are absent. The country has an excellent telecommunications infrastructure. There is no governmental or centralized mandate in any of the traditional application areas of smartcards. But the industry is evolving. The activities of Europay, MasterCard, and Visa (EMV) in developing specifications for financial-transaction cards will have a major impact on the U.S. market and the rest of the world. Nonetheless, it is felt that a smartcard will have to be able to handle several applications for the technology to gain widespread acceptance in the United States.

EMV sure did have an impact in the US, even if it took a solid decade. Multi-application cards seem dead in the water from a practical perspective, even though many more sophisticated smart card systems (like MULTOS) are designed for remote issuance and updating of applications. Anyone who has had access to their office and email at the mercy of the USAccess/HSPD-12 PIV scheme can attest that its Thales-built remote personalization system is... not exactly ready for the average consumer.

Besides, the telecommunications point is not to be underestimated. By the time SVC technology was competing in America, telephone-connected payment card terminals were already becoming the norm (mostly from American Verifone, although French Ingenico was a major player). Rates of telephone service and, not long after, internet service in the US were very high. These factors made offline systems much less attractive: merchants unhappy with the risk of offline processing of non-chip credit cards were just moving to online processing, not to smart cards.

The lack of a standards body to set the direction is also undeniably a factor. The introduction of EMV took as long as it did in large part because of the fragmentation of the payment card industry; different components of the market had different objectives and there was no one to push them along. To be fair, US payment card issuers cope with fraud better than most overseas observers seem to give them credit for. The inconvenience of card fraud is relatively low; I recently had credit card information stolen (how, I can only speculate) and there was no action involved on my part beyond responding to a text message alert and receiving a new card in the mail. Because of the card information update service the processors provide to qualified merchants, I haven't even had to reenter my card information on any subscriptions. A lot of effort has been put into smoothing over the fraud that occurs, even if it does seem that one of my cards is used fraudulently every two years or so.

Despite the lackluster adoption of SVCs in the US, they have a few strongholds, both here and abroad. First, although a somewhat minor detail, I cannot help but note the military overseas SVC program that my career once incidentally involved. The EagleCash system, operated by the Department of the Treasury, provides SVCs to members of the armed forces (sometimes branded NavyCash or Armed Forces EZPay due to variants of the program rules). The cards are mostly used in overseas military installations and aboard ships, situations in which offline processing can remain a big advantage. EagleCash was considered the most prominent deployment of SVCs in the US, and probably still is. EagleCash reaches nearly a billion dollars in annual turnover, mostly in the Middle East.

Much more widespread, though, are transit cards. Many transit systems globally use some sort of SVC for fare payment, under different names in different cities. Prominent US examples include Clipper (SF Bay area), Ventra (Chicago), MetroCard (New York City), and SmarTrip (national capital region). Overseas, Oyster (London), Rav-Kav (Israel), and Octopus (Hong Kong) are well-known. Many of these systems were pioneering when implemented, and some remain pioneering payments technologies today.

Many early US systems, such as Clipper, were implemented at least in part by the Cubic Corporation. If that sounds like an ominous defense contractor, it is. Cubic produces a wide range of C4ISR systems for the US military, but because of its location (in San Diego) and early involvement in transportation technologies, Cubic became a major US vendor in transit fare collection systems. The nearly identical fare gates of BART and the DC Metro, for example, were early models designed and built by Cubic (BART and DC Metro are twin projects in many ways). They originally used magstripe tickets, and I have read that they were controlled by PDP computers, although I am unsure if this is factual or just confusion with the better-documented use of PDP-8/E computers to drive the train arrival signs and announcements.

Cubic came back in the '00s with contactless SVC payment systems, which are now widely deployed in major US cities and many overseas systems. Of the systems I listed above, most had Cubic as at least a member of the consortium that implemented them, if not as the prime contractor. Oyster, for example, was implemented by Cubic alongside EDS, the company later absorbed into HP and perhaps best remembered for the political career of its founder Ross Perot.

How do these systems work, exactly? Offline systems simplify payment networks in some ways, but also add complexity, which is often apparent in transit systems that combine offline terminals (for example in buses) and online terminals (for example at train platforms). I will walk through a description of the operation of Clipper, with which I am most familiar. The details vary from system to system, depending on architectural decisions made during the original system design and on the modernization efforts that have been performed since. For example, some newer systems abandon offline operation almost entirely and have even the in-vehicle terminals perform online transactions via either public or municipal LTE networks.

A Clipper card can be purchased from a number of vendors, either first-party ticket windows in certain stations or private convenience stores that have opted to participate in the program. These stores can also add value to an existing Clipper card, from cash or a payment card transaction, by entering the value to add into a device very much like a credit card terminal (it is one, running custom software as many do) and then tapping the card to it to allow the write operation to complete. Similarly, cards can be purchased or value added via vending machines at stations, which usually require the card to be tapped twice: once to read the current value and determine eligibility to add value (there is a value cap, for example), and a second time to write the added value.

Because these transactions involve writing to the card, the new value is available immediately. It can be spent on fixed terminals like station fare gates, but also on vehicle terminals as in buses. Either way the terminal reads the value from the card, determines eligibility, and writes the new (decreased) value back to the card.

Things get a little bit strange, though, when you consider one of the most common user patterns in the modern era: you can create an account online and associate the card with your account, and then you can add value online. This is convenient, but confronts the offline nature of the system. You add value to the card, but there's no way to write the new value to it.

The solution, or at least partial solution, to this problem looks something like this: fare payment terminals have to receive a streaming log of value-add operations so that they can apply them the next time they are presented with the relevant card. Online systems, like vending machines and fare gates in train stations, find out about online value-adds almost immediately. If you mostly use a train, the operation is almost completely transparent, as you add value online and it is written to your card next time you pass through a fare gate.

For the offline terminals in buses, though, things aren't so smooth. These terminals operate fully offline while the vehicle is on a route. At the end of the day, as vehicles are stored in yards, the payment terminals connect to a local area wireless network (traditionally 802.11a). They upload on-vehicle transaction logs for reporting, but also download logs of online transactions. If you add value to a card with a zero (or near zero) balance and then try to board a bus, it is likely that you will be rejected: the value-add hasn't been written to the card yet, and the bus terminal hasn't been told about it. The transit operator often sets an expectation of one business day for online value-adds to become available if your first tap is at an offline terminal.
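Here is a sketch of that pending-log pattern, with invented names: each terminal keeps a table of online top-ups it has heard about, keyed by card ID, and applies any match before judging the fare.

    # Pending online value-adds, keyed by card ID, not yet written to cards.
    pending_adds = {}

    def nightly_sync(downloaded_log):
        # At the yard, the terminal downloads the online value-add log.
        for card_id, amount in downloaded_log:
            pending_adds[card_id] = pending_adds.get(card_id, 0) + amount

    def on_tap(card_id, card_balance, fare):
        # Apply any pending top-up before checking the fare.
        card_balance += pending_adds.pop(card_id, 0)
        if card_balance < fare:
            return card_balance, False    # rejected: top-up not yet known here
        return card_balance - fare, True  # new balance is written to the card

    nightly_sync([("card-42", 2000)])  # $20 added on the website yesterday
    print(on_tap("card-42", 25, 250))  # (1775, True): fare paid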

It may be that Bay Area transit operators are transitioning to online vehicle terminals to address this problem; it wouldn't surprise me, as IP connectivity in transit vehicles is becoming the norm for multiple reasons. But, of course, in an environment where all devices are online, the value of SVCs as a technology is greatly reduced. At some point the SVC nature of the system becomes more vestigial than anything else, although it can provide valuable fault tolerance.

The case of transit is more complex than just incrementing and decrementing, though. Passes (including automatically "earned" passes in many systems) and transfer discounts between operators can make fare logic surprisingly complex, and that's before considering the many rail systems that charge fare per zone traveled. To accommodate this kind of fare tabulation logic, transit SVCs typically store a history of the most recent transactions and cumulative counters of different types of transactions. This allows the system to compute distance-based fare (by comparing the current and previous transaction for their location), offer transfer discounts (by comparing the last two or three transactions to a table of discounts between operators), and automatically change to passes when they become most economical (by checking registers of accumulated fare per operator per time period).
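For example, here is how a transfer discount might be computed from the transaction history stored on the card. The operators, amounts, and time window are invented for illustration.

    # Illustrative history-based fare logic: a transfer discount derived
    # from the previous transaction stored on the card.
    TRANSFER_WINDOW_MIN = 120
    DISCOUNTS = {("bus", "train"): 50, ("train", "bus"): 50}  # cents

    def fare_with_transfer(history, operator, minute, base_fare):
        # history: list of (operator, minute-of-day, fare) tuples on the card
        if history:
            prev_op, prev_min, _ = history[-1]
            recent = minute - prev_min <= TRANSFER_WINDOW_MIN
            discount = DISCOUNTS.get((prev_op, operator), 0)
            if recent and discount:
                return max(base_fare - discount, 0)
        return base_fare

    history = [("bus", 540, 250)]                          # a 9:00 bus ride
    print(fare_with_transfer(history, "train", 570, 300))  # 250: discounted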

Put together, these programs can make fare calculation very complex, which is one of the advantages of computerized fare collection with usage history: the software can ensure that the fare paid is optimal, in the sense of being the lowest fare the customer is eligible for. Prior to these systems, features like transfer discounts often went unused because of the added complexity of presenting a ticket and payment or of determining validation procedures between transit agencies.

Even transit agencies are moving away from SVCs as IP connectivity to vehicles becomes more affordable and more common. Centralized systems, while they require network infrastructure, can be more flexible and more user-friendly.

It doesn't look like SVCs have much of a future. Despite being the dream of the '90s, they have gone the way of, well, so many other dreams of the '90s technology industry. By the time the ingredients for SVCs to succeed became widely available, they were somewhat of a solution in search of a problem. Network connectivity was spreading rapidly for other reasons; online processing of payments offered other advantages; there just weren't that many reasons to go the SVC route.

Smart cards are an important part of payments infrastructure today because of the EMV standard, and they continue to have applications in both their traditional form factor and embedded variants. Despite the power available from multi-application cards, MIFARE with its simple cryptographically protected read/write operation is more common in practice. So pour one out for SVCs, or more accurate to the tradition, put $10 on a laundry card, put it in a drawer, and move to a different city.

[1] The topic of electrical interface is actually slightly confusing because the standard describes 5v, 3v, and 1.8v logic levels. Modern cards are nearly always 1.8v, but fully compliant readers need to detect and provide the correct operating voltage to the card. This complexity is one of the factors that has led to occasional security vulnerabilities in smart cards around supply voltage.

[2] Most payment card terminals query the name of the selected application and display it. Often it is only "VISA" or "MASTERCARD" but a few issuers customize their card loads to brand the application name. Just a bit of trivia.

[3] Outside of certain more niche applications like cardlock fuel cards, which are broadly compatible with payment cards for ease of implementation but don't seem to be interested in making the move to chip-and-whatever.


>>> 2023-08-19 meanwhile elsewhere

I had meant to write something today, but I'm just getting over a case of the COVID and had a hard time getting to it. Instead I did the yard work, edited and uploaded a YouTube video, and then spewed out a Cohost thread as long as a blog post. So in lieu of your regularly scheduled content, I'd like to link you to the Cohost thread on the Monticello AT&T microwave site (complete with pictures!) and the YouTube version of the same (complete with many more pictures, in that it's a video, but not as well written!).

I'll return next time with, probably, something about smart cards.


>>> 2023-08-07 STIRred AND SHAKEN

In a couple of days, I pack up my bags to head for DEFCON. In a rare moment of pre-planning, perhaps spurred by boredom, I looked through the schedule to see what's in store in the world of telephony. There is a workshop on SS7, of course [1], and plenty of content on cellular, but as far as I can see, nothing on the biggest topic in telecom security: STIR/SHAKEN.

I can venture a guess as to why: STIR/SHAKEN is boring. So here we go!

The Nature of Circuit Switching

Understanding today's robocalling problem requires starting a long time ago. Taking you all the way back to the invention of the telephone would be a little gratuitous, but it is useful to start our discussion with the introduction of direct distance dialing in 1951. In that year, the first long-distance call was completed based only on the customer dialing a number. Over the following decades direct distance dialing became more common and fewer telephone users had to speak to an operator to have a long-distance call established. Today, it's universal.

Handling dial calls over long distance trunks is a bit complicated, though. For local calls, handling was relatively simple. The other customer was connected to the same exchange that you were, so the exchange just needed to be able to detect your dialing and select the correct local loop corresponding to the number you dialed. Step-by-step (SxS) switches have been handling this problem since the turn of the 20th century. For long distance calls, though, the recipient will not be on the same switch---they'll be on a foreign exchange.

To establish a long-distance call, your exchange needs to connect (via a trunk line) to the exchange of the person you are dialing. Most of the time there is no direct connection available between the two exchanges, so they need to call each other through a tandem switch, sort of a telephone exchange for telephone exchanges.

If you are familiar with the general concept of pulse dialing and SxS switches, you might already see why this is tricky. When you dial on an SxS switch, the dial in your telephone moves in a sort of lockstep with the selector mechanism in the telephone switch. Each digit is regarded separately. The switch doesn't know any "history" of your dialing: each selector switch just knows that you reached it somehow and it is responsible for connecting your call to the next step in the switch architecture. It doesn't know what digits you dialed before, and it won't know what digits you dial after. It just advances one step for each pulse, as long as it's connected.

The SxS switch is the main origin of the film and television myth of the 60-second telephone trace. "Hawaii Five-O" (the original) is one of the few television shows I know of to accurately depict the drama of a telephone trace on an SxS switch, with the cops trying to keep the killer on the phone as an exchange technician with a clipboard rushes around the aisles of switch frames writing down the positions of the selector switches. "Tracing" a call really involved tracing: backtracking through the switch from the local loop of the callee to the local loop of the caller, one selector at a time.

The calling process between local exchange switches and tandem switches is sort of similar, except that direct dialing requires some form of memory. When you dial a number, your local exchange switch matches it against patterns and determines that it needs to be sent to a tandem switch. It connects your call through a trunk to the tandem, but you've already stopped dialing, so there are no pulses for the tandem to use. For the tandem switch to know what to do, your local exchange switch must store the number you dialed, and then dial it again over the trunk... essentially, you dial your local exchange switch, your exchange switch dials the tandem, and the tandem (assuming it has a connection to the destination) dials the local exchange switch on the other end.
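A toy model of that store-and-repeat behavior, with invented switch names and an invented prefix-matching rule: each switch collects the full number, then "dials" it again toward the next switch, which repeats the process with no knowledge of the caller.

    class Switch:
        """A toy switch: completes calls matching its prefix, else outpulses."""
        def __init__(self, name, local_prefix=None, next_hop=None):
            self.name, self.local_prefix, self.next_hop = name, local_prefix, next_hop

        def place_call(self, digits):
            # The register stores the complete dialed number...
            if self.local_prefix and digits.startswith(self.local_prefix):
                print(f"{self.name}: completing {digits} on a local loop")
            else:
                # ...then repeats it over a trunk toward the next switch.
                print(f"{self.name}: outpulsing {digits} to {self.next_hop.name}")
                self.next_hop.place_call(digits)

    far_end = Switch("far-end exchange", local_prefix="555")
    tandem = Switch("tandem", next_hop=far_end)
    local = Switch("local exchange", local_prefix="832", next_hop=tandem)
    local.place_call("5551234")  # stored at each hop, dialed again onward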

There are a surprising number of distinct ways this has been implemented but I will use the general term outpulsing, most descriptive of a scheme where each switch sends pulses to the next just like your rotary phone. In practice, a more common system (prior to digital signaling) was "multifrequency" or MF signaling. MF is not to be confused with dual-tone multifrequency or DTMF, which is based on MF but different.

This communication between switches, required for long distance calls and today even on local calls for functions like local number portability, is known as signaling. Outpulsing and MF are examples of in-band signaling, where the signaling is sent through the same path as the voice conversation it's setting up. Today, in-band signaling has largely been replaced with out-of-band signaling, such as SS7 [1]. In these schemes, switches have separate data connections that they use to carry signaling that may be associated with a call, or may be non-call-associated and purely a data transaction.

This fundamental model of the telephone system has not changed. To connect a call from one location to another, a circuit must be established through multiple switches. To establish the circuit, each switch in the path signals to the next switch to indicate where it is trying to reach. This is sort of like the packet switching schemes more familiar to the computing industry, but there is an important difference, a result of the circuit-switching nature of the telephone system: call setup need only proceed in one direction. As each switch speaks to the next, they are establishing a circuit, which will remain established and carry voice in both directions. Each switch in the path only needs to know the destination; none needs to know the source, as the path back in that direction is already open.

It's sort of like Tor. The basic concept of Tor, onion routing, is that with encryption of signaling information each node only needs to know the node before it and the node after. In the telephone system, each switch does know the final destination of the call (there is no encryption to protect this information from switches earlier in the path), but only one switch knows the origin of the call: the caller's local exchange switch. The other switches just know which inbound trunk the call arrived on.

Well, it's not really quite that simple, but to be honest, it's almost that simple. Over time signaling between switches has expanded to convey more and more information. The most important payload is the destination number. But something that at least looks like source information has been added not once, but twice.

First, there is the matter of billing. The telephone system is concerned first with establishing phone calls, but a close second is billing for them. As calls pass between carriers, even within the Bell System with its separate operating companies, carriers further along in the path need to know who they should bill for carriage. When you place a long distance call (assume with me that you pay for long distance), equipment on your local exchange switch records the call's source, destination, and duration in order to bill you for the time. But there may be other carriers involved in connecting the call, and they want their share too.

Other carriers in the calling path will also record the call's details so that they can bill the originating carrier for their fractional portion of the rate (this sub-billing of long distance calls is one of the reasons telephone rate regulation is complex). In the era of manually connected long-distance calls, the operators at the different exchanges would speak with each other and give, among other things, an account identifier for the source of the call. After all, in a manual exchange it is the operator who does the signaling. Direct distance dialing requires automatic operation, though, so signaling had to be enhanced to convey the billing information. The solution is called ANI, or Automatic Number Identification.

ANI is basically what it sounds like: signaling is extended to include a number for the calling party, so that each exchange on the way can record it as part of the billing record (they already know the destination number and the duration, since they have to participate in establishing and maintaining the call). But it's very important to understand the origin of ANI. A lot of people hear of ANI and assume it to be a system that tells you the phone number of the caller. That's only really correct in sort of an incidental way. The real purpose of ANI is to give a billing account for the calling party, and telephone companies, for convenience, used telephone numbers to keep their customer ledgers.

ANI does not necessarily identify the caller, it only identifies who should be billed for the call [2]. Notably, the ANI concept does not span the diverse nature of the telephone network today. VoIP gateways and, in general, anyone carrying calls by means other than POTS, the Plain Old Telephone Service, have other ways of identifying customers and tracking and billing for calls. This obviously includes VoIP, but also includes most cellular calls today as carriers have transitioned to GSM and now IP architecture [3].

Internally, these systems don't use ANI at all, instead using the accounting features of their own protocols. When delivering calls to the POTS, they may provide the same ANI for every call (just to identify that they are the carrier to be billed) or an ANI that isn't a dialable number but just a phone number they have held to use as a billing ID. It may reflect their internal accounting organization much more than it does the calling party.

Here's something that's key to understand: ANI is a feature of the conventional POTS telephone system, originating with electromechanical switches and present in today's TDM and packet telephone switches. It is not a feature of the telephone system at large. Consider that most European countries never used ANI, instead using one of a couple of other approaches to the same problem, and the dominant form of telephony in the US today (cellular) is derived from European technology.

VoIP and cellular carriers are inconsistent about how they use and handle ANI, and provide ANI on calls into the POTS only for compatibility with POTS billing systems. ANI on calls from POTS to cellular or VoIP carriers may be totally discarded, depending on the specific setup. It's often frustratingly inconsistent; like some others I run a VoIP line that intentionally captures ANI information since it can be useful to understand how payphones are set up (besides the calling number, ANI identifies the type of caller, including whether or not it's a payphone). I specifically hunted for a VoIP provider that conveys ANI and still find that it's missing on a lot of calls. One likely factor is that a surprising number of surviving payphones are now cellular, and cellular carriers generally don't use ANI.
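
That "type of caller," incidentally, is carried as two "information digits" in front of the number; on Feature Group D trunks, as I understand it, the in-band ANI spill is framed as KP + II + number + ST. A parsing sketch (the II meanings below are a commonly cited subset; check NANPA's actual assignments before trusting any of them):

    # Parse an in-band ANI "spill": KP + two information digits (II) + the
    # billing number + ST. The II meanings are a commonly cited subset, not
    # an authoritative list.

    ANI_II = {
        "00": "plain POTS line",
        "27": "coin payphone",
        "29": "prison/inmate service",
        "61": "cellular (type 1)",
    }

    def parse_ani_spill(spill):
        s = spill.replace(" ", "")
        if not (s.startswith("KP") and s.endswith("ST")):
            raise ValueError("not framed by KP/ST")
        ii, number = s[2:4], s[4:-2]
        return ii, ANI_II.get(ii, "unknown class"), number

    print(parse_ani_spill("KP 27 505 555 1234 ST"))
    # -> ('27', 'coin payphone', '5055551234')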

And then there is another service, similar in concept but very different in its purpose: Caller ID, or CID. Caller ID is intended to show the recipient of a call the identity, or at least the phone number, of the caller. The "name" part of Caller ID (called CNAM) has always been a bit of a bust outside of POTS carriers, for reasons that could fill another post. But it is quite reliable in providing a phone number.

Here's the problem: the CID is neither required nor expected to correspond to the origin of the call. There are lots of reasons for this, but consider a common one: in a large business, customer service may be performed by multiple call centers. When customer service calls you, they want the CID to give the main toll-free number you should use to reach customer service, not the specific call center that made the call. There are about a million variations of this same idea, that all come down to large organizations wanting to be able to call you from many places while still having their intended inward number appear on CID.

There are other scenarios as well. It's not unusual for companies with a lot of telephone traffic to have arrangements with multiple long-distance carriers and use whichever is cheapest for a specific call. They may not even have inward phone numbers with these carriers, and even when they do they don't want the CID number to be different depending on which carrier the call happens to be routed through. The same scenario exists in more modern systems; Google Fi uses multiple distinct cellular carriers and assigns "ghost numbers" to identify their customers on each of them. When a Google Fi customer makes a call, the CID should be that customer's primary phone number, not an internal number used for US Cellular provisioning.

All of this is to explain why the CID value on a telephone call is whatever the caller says it should be. Well, assuming the caller is something like a commercial customer with the technical ability to provide a CID value. This isn't really a bad thing: CID was never intended to tell you where the call was from... it was intended to tell you who the call was from, and how to call them back. The reality of the phone network is that that won't always match the origin of the call.

I think this whole thing is easier to explain by analogy to a technology more of us know the intimate details of, email. The CID value on a phone call is analogous to the "Reply-To" in an email, telling you how to return an email (or call) to the person. The ANI is analogous to a "Received" header, telling you some information about one hop in the process but not necessarily the first hop, and not necessarily enough to identify the originator.
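
Concretely, picture the headers on an email you've received (everything here is made up):

    Reply-To: support@example.com          <- like CID: set by the sender,
                                              tells you how to respond
    Received: from mail.example.net
        by mx.example.org; Mon, 1 May 2023 <- like ANI: one hop's bookkeeping,
                                              not necessarily the true origin

Neither header proves where the message actually entered the system, and neither was designed to.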

Everything gets even more complicated when you consider the diversity of carriers. There are telephone carriers using multiple distinct technologies with their own signaling and billing infrastructure, and then there are other countries to contend with. Foreign countries often don't even have telephone numbers of the same length as North American numbers (or a fixed length at all for that matter), and so billing for international calls has always been a special case.

Spam, Robocalls, Scams, Etc.

This all works fine for the purposes of the telephone system. I mean, at least for a long time, it did. But have you noticed what's up with email lately? It seems that, given an open communications system, people will inevitably develop something called a "cryptocurrency" and badly want to make sure that you get in on something called an "ICO." The general term for this phenomenon is "spam," and the fact that it is only one letter away from "scam" is meaningful as the line between mere unsolicited advertising and outright crime is often razor thin.

In the email system, this problem has been elegantly solved by a system of ad-hoc, inconsistent, often-wrong heuristic classifiers glued to a trainwreck of different cryptographic attestation and policy metadata schemes that still haven't solved the problem. It is, perhaps, no surprise that the phone system is taking a generally similar approach.

Let's discuss a few differences between email spam mitigation and telephone robocall mitigation. The spam problem is arguably a bit harder for email because of the absolutely dizzying number of possible counterparties: every mail server out there. In the telephone system, the counterparties are limited to other telephone carriers, but that's still a very long list, to an extent that would probably surprise you. The FCC reports that there are 931 conventional telephone carriers and 1,787 interconnected VoIP carriers, and that's just the US. Most problematic traffic originates from overseas, where these numbers become far larger.

In the world of email, spam is nonetheless mitigated in part by completely blocking traffic from mail servers known to primarily originate spam. This is facilitated by a system of blacklists maintained by various companies and industry groups. One might wonder why the telephone system doesn't do the same. There are two things to consider.

First, the telephone industry does. The FCC maintains a blacklist of telephone carriers that have been found to take inadequate measures to prevent abuse (or more often intentionally facilitate abuse), and directs other carriers to block all traffic from them. One could argue that the FCC is overly conservative in their process for adding carriers to this list, but there are reasons.

Second, we have to consider that the telephone network is considerably deeper than the email network. What I mean by this is that, although email was designed to facilitate multi-hop routing, it is rare for email to actually pass through multiple distinct organizations en route (it often passes through multiple distinct MTAs, but these are all devices or service providers used by one of the two organizations involved). In the telephone system, multi-carrier routing is common, and all but universal for international traffic. The inevitable result is mixing of genuine and abusive traffic from the same carrier.

So why don't telephone carriers just block traffic from carriers that they receive robocalls from? For one, they are legally prohibited from doing so. This is under the doctrine of common carriage. If telephone carriers were allowed to pick and choose which carriers they would accept calls from, larger carriers would be able to use this as negotiating leverage to obtain unreasonably favorable terms from smaller carriers. This was a very real problem in the telephone system before common carriage rules were implemented, and it is a problem in the internet today, leading to the ongoing debate over "net neutrality."

But there are also practical reasons. Blocking traffic comes with risks. Anyone who has administered mail servers with a significant number of users will know that blacklists are far from foolproof, and some popular blacklists are very prone to listing mailservers that originate any spam whatsoever, even from a single compromised user account. Institutional mail administration can seem like it's roughly half getting back off of blacklists and half shoring up outbound spam detection to avoid the next incident... but as we know, heuristic spam detection doesn't work very well.

The problem is even more acute in telephony, as savvy telephone spam operations intentionally get their traffic mixed with genuine traffic, ensuring that a carrier cannot block the origin wholesale without losing calls their customers actually wanted to receive. The typical strategy is to originate calls in a foreign country with relatively lax telephone regulation; India has long been a top choice, since it offers both a loose regulatory environment and plenty of English speakers. A telephone spam operation need only find a commercial telephony provider with poor oversight and interconnection to a major national telephone carrier, and then the robocalls are being introduced from a carrier with millions of customers generating genuine traffic. These calls often arrive through low-cost international gateway services that route calls via VoIP, popular not only with scammers but with everyday users seeking lower international calling rates.

The core problem is mixing: telephone carriers receive abusive traffic mixed in with genuine traffic, and they have few ways to determine what's what. Some would suggest that foreign-originating calls with US CID numbers are inherently suspect, but did you know that India has an inexpensive labor market and a tremendous number of English speakers? Remember when I said that customer service call centers need the correct inbound number to appear on CID? A lot of those call centers are overseas! There are several other reasons as well for foreign calls to legitimately come with US CID numbers.

To really sort out spam, carriers need a consistent, reliable way of determining what carrier a call actually came from. Not just the carrier that handed the call to them, but the carrier that originated it. Hey, email has a thing sort of like that, DKIM. What if someone did DKIM for telephones?

Someone did. It's called STIR/SHAKEN.

STIR/SHAKEN stands for Secure Telephone Identity Revisited/Signature-based Handling of Asserted information using toKENs. Yes, it is a tortured acronym. STIR and SHAKEN are actually two separate standards, but they fit together closely. STIR comes from an IETF RFC (RFC 8224) and describes headers for VoIP traffic. SHAKEN comes from two industry groups (ATIS and the SIP Forum) and describes how to encode the same headers into SS7 messages. In other words, STIR/SHAKEN are the same logical protocol defined for VoIP and POTS, respectively.

STIR/SHAKEN describes a cryptographic attestation, which is attached to a call by a carrier (ideally the originating carrier) and signed with a private key belonging to that carrier. Through the magic of public-key cryptography, subsequent carriers that handle the call can verify the STIR header. In practice, STIR is based on JWTs---a STIR header is basically just a JWT with a few standard fields. Those fields are the destination phone number, the source phone number, and the type of attestation.
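
In fact, you can build and pick apart one of these with nothing but the standard library. A sketch, with claim names per my reading of RFC 8225 and the SHAKEN profile (the numbers, URL, and origid are made up, and the signature is a fake placeholder):

    # Build the JWT ("PASSporT") that rides in the SIP Identity header, then
    # decode it the way a receiving switch would. The signature would really
    # be ES256 over the first two parts; "FAKESIG" stands in for it here.

    import base64, json

    def b64url(obj):
        raw = json.dumps(obj, separators=(",", ":")).encode()
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

    header = {"alg": "ES256", "ppt": "shaken", "typ": "passport",
              "x5u": "https://certs.example-carrier.test/shaken.pem"}
    payload = {"attest": "A",                    # A, B, or C
               "dest": {"tn": ["15055551234"]},  # called number
               "iat": 1694000000,                # issued-at timestamp
               "orig": {"tn": "15055556789"},    # calling number, should match CID
               "origid": "de305d54-75b4-431b-adb2-eb6b9e546014"}

    identity = (f"{b64url(header)}.{b64url(payload)}.FAKESIG"
                f";info=<{header['x5u']}>;alg=ES256;ppt=shaken")

    # Receiving side: strip the SIP parameters, split, base64-decode.
    h_b64, p_b64, _sig = identity.split(";")[0].split(".")
    decode = lambda part: json.loads(
        base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))
    print(decode(p_b64)["attest"], decode(p_b64)["orig"]["tn"])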

There are three types of STIR/SHAKEN attestations, called A, B, and C. An A attestation is a statement from the original carrier that the call came from one of their customers and the carrier knows that they are entitled to use the STIR "from" telephone number attached (which must match the CID number). A B attestation states that the originating carrier knows the call came from one of their customers, but they don't have knowledge of the customer's entitlement to the source number. A C attestation is the fallback---it states that the originating carrier got the call from somewhere, but they don't know anything about it.

Keys for STIR/SHAKEN are distributed through a public-key infrastructure very similar to that used for TLS. Certificate authorities issue certificates to telephone carriers that give the carrier's public key and identifying information. That way, any STIR/SHAKEN attestation can be verified to have originated with a specific carrier as a legal entity. You can now know exactly which carrier a call came from.
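
Mechanically, verification is then ordinary JWT verification against the certificate named in the token's x5u field. A sketch assuming the PyJWT, cryptography, and requests packages (real SHAKEN verification adds freshness and certificate-policy checks that I'm glossing over):

    # Fetch the signer's certificate from the x5u URL in the JWT header and
    # check the ES256 signature. Cryptographic core only; not a complete
    # SHAKEN verification service.

    import jwt, requests
    from cryptography import x509

    def verify_passport(token):
        x5u = jwt.get_unverified_header(token)["x5u"]  # who claims to have signed
        cert = x509.load_pem_x509_certificate(
            requests.get(x5u, timeout=5).content)
        return jwt.decode(token, key=cert.public_key(), algorithms=["ES256"])

    # claims = verify_passport(token)  # raises if the signature doesn't verify
    # print(claims["attest"], claims["orig"]["tn"])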

STIR/SHAKEN immediately solves one problem. For any call with an A or B attestation, the originating carrier is known. If the call is abusive, you know exactly which carrier should get in trouble. It also takes a step towards solving another problem: the CID number should match the STIR/SHAKEN number, and if there is an A attestation you know that the carrier is promising the customer is really entitled to that phone number (e.g. they pay the carrier to hold that number for their inbound calls).

Unfortunately, there isn't currently a way to link a phone number directly to a STIR/SHAKEN certificate. That is, an A attestation is a promise from the carrier that the customer is authorized to use the phone number, but there's no way to actually check if that carrier is in a position to make that claim about the phone number in question. There is a system in development to address this issue (that basically provides a database to correlate phone numbers with STIR/SHAKEN carrier certificates), but it's also not that big of a problem in practice as any carrier issuing an improper type A attestation can easily be identified and shamed (actually, FCC policy is that they will be fined and, probably more significantly, their certificate will be revoked).

STIR/SHAKEN is a huge step forward because it facilitates two things, both now required by FCC mandate: carriers must attach attestations to the calls they originate, and they must validate the attestations on calls they receive, blocking calls where the attestation is invalid or belongs to a blacklisted carrier.

The bad news

So why are there still spam calls?

Unfortunately, STIR/SHAKEN is far from universal. The FCC made STIR/SHAKEN implementation mandatory for US telephone carriers as of June 30, 2022. That was over a year ago, but the FCC issued numerous exemptions to small and rural carriers that have difficulty affording the required equipment (remember that telephone switches can have fifty-year service lives, and there is some very old equipment still in use), and besides, the FCC mandate applied only to the United States.

A May 3, 2023 report from TransNexus estimates that only a bit over a quarter of phone calls terminating in the US bear STIR/SHAKEN attestations. Fortunately, more and more carriers are adopting STIR/SHAKEN, but despite the "mandatory" deadline there is still a long way to go. Many have criticized the FCC for being far too slow in enforcing attestations, but to be fair, the FCC is acutely sensitive to the fact that rural and small-market telephone carriers are often barely above water, and suddenly imposing costly requirements could lead to a minor crisis as smaller telephone carriers run out of money.

STIR/SHAKEN is also imperfect. TransNexus finds that calls with a type B attestation are actually more likely to be robocalls than those with no attestation at all. In a way, this makes sense, as these calls are apparently coming from carriers who do not keep track of customer entitlement to phone numbers. The problem is that that's a rather common situation, for example because of customers using multiple carriers for outbound calls to get optimal rates. There is a silver lining here, though. Those carriers placing attestations on robocalls are putting themselves at risk, as those attestations are tools for action against them.

That's an interesting aspect of STIR/SHAKEN: by forcing carriers to sign the calls they hand on, it gives them a level of responsibility and liability for the contents of those calls. This has introduced a sort of KYC system for telephone carriers. Around the time of the mandatory STIR/SHAKEN rollout, a VoIP termination provider I had used for years suddenly demanded that I send copies of my passport, incorporation documents, and FCC filings. Carriers signing calls are getting more cautious about the kinds of customers they will accept, and recent FCC enforcement actions will probably accelerate this trend. It's a bit unfortunate in that the barrier to entry for hobby VoIP operations is getting higher and higher, but, well, that's just like email.

And that's sort of the point. The world of telephony spam mitigation is very comparable to the world of email spam mitigation, but a couple of decades behind. Carriers have already begun to introduce extensive heuristic spam detection for SMS, but the industry and FCC have been hesitant to go that route for telephone calls. Experience with SMS might be a reason why; I used to work for a company that sent a lot of SMS, and we constantly struggled with carriers blocking our appointment and medication reminder messages, even getting to the point of "burning" a short-link domain name because a major carrier blocked all messages that contained it, without explanation. Heuristic detection really is imperfect, and while SMS might have relaxed reliability expectations, people want phone calls to work every time.

So instead, the telephone industry is going the cryptographic attestation route. Email has done this as well. But we have to temper our expectations: extensive heuristic detection, blacklisting, and cryptographic attestation schemes have failed to completely tame the phenomenon of spam email. Telephone spammers are in good company: their colleagues in the email industry have kept it going, despite huge effort in opposition, for almost thirty years.

But the telephone industry clearly needs to move faster if they expect to reach even the level of success email providers have. Unfortunately, "Faster" and "The FCC" are not famously friendly. Many jump to the conclusion that the telecom industry is complicit in the situation, but it's a little more complex. Some major telecom industry associations actively support STIR/SHAKEN, and in general most telecom industry associations have lobbied the FCC to move more quickly on the robocall issue and to allow carriers greater latitude to take their own actions to mitigate the problem.

It's hard to clearly lay blame in this situation. For the FCC's part, it has moved extremely slowly, extending STIR/SHAKEN deadlines almost indefinitely until Congress passed the TRACED Act to force its hand. The telecom industry continues to accuse the FCC of lethargy in its response to the problem. At the same time, some of the largest telephone carriers have been some of the most resistant to implementation, arguing that it's unreasonable to impose the enormous cost on them.

This argument gains a bit of weight when you consider that many in the industry are skeptical of STIR/SHAKEN as a technical approach; it was developed by organizations that are mostly controlled by telecom equipment and software vendors rather than telecom carriers. The carriers seem to feel that STIR/SHAKEN is an inadequate approach to the problem with a severe case of design-by-committee, and the design of STIR/SHAKEN and the FCC's regulations around it are both unclear when applied to common real-world situations.

If you want a single source of the robocall problem, perhaps it is this: the telecom industry is fiercely profit motivated. Carriers stand to save money by not implementing STIR/SHAKEN, telecom equipment and software vendors stand to make money by forcing carriers to do so. Whether or not it actually addresses the problem is largely orthogonal to this basic dynamic.

[1] SS7 is very interesting, but I often complain that the security community has an excessive focus on it considering the rarity of actual exploitation of SS7. People talk about how you shouldn't use SMS 2FA because of problems with SS7; that's total nonsense. You shouldn't use SMS 2FA because a thirteen year old will con your carrier into giving them access to your account.

[2] There is a bit of nuance here. It is possible to subscribe to ANI service on a trunk, which is usually done by businesses. It's also common for PSAPs, 911 call centers, to have ANI service as a way to determine the origin of calls. Both of these have become far less common as ANI has become less reliable. The modern E911 standard is a result of the fact that ANI is not capable of providing reliable caller identification.

[3] For the most part, Verizon is the only cellular carrier that still has traditional TDM telephone switches. Their days are presumably numbered now that Verizon has retired their legacy 3G service, which for historic reasons was far more based on traditional (American) telephone technology than AT&T's.


>>> 2023-07-29 Free Public WiFi

Remember Free Public WiFi?

Once, many years ago, I stayed on the 62nd floor of the Westin Peachtree Plaza in Atlanta, Georgia. This was in the age when the price of a hotel room was directly correlated with the price of the WiFi service, and as a high school student I was not prepared to pay in excess of $15 a day for the internet. As I remember, a Motel 6 that was not blocks away but within line of sight ended up filling the role. But even up there, 62 floors from the ground, there was false promise: Free Public WiFi.

I am not the first person to write on this phenomenon; I think I originally came to understand it as a result of a 2010 segment of All Things Considered. For a period of a few years, almost everywhere you went, there was a WiFi network called "Free Public WiFi." While it was both free and public in the most literal sense, it did not offer internet access. It was totally useless, and fell somewhere between a joke, a scam, and an accident of history. Since I'm not the first to write about it, I have to be the most thorough, and so let's start out with a discussion of WiFi itself.

The mid-2000s were a coming-of-age era for WiFi. It had become ubiquitous in laptops, and the 2007 launch of the iPhone established WiFi as a feature of mobile devices (yes, various phones had offered WiFi support earlier, but none sold nearly as well). Yet there weren't always that many networks out there. Today, it seems that it has actually become less common for cafes to offer WiFi again, presumably as LTE has reached nearly all cafe customers and fewer people carry laptops. In the 2010s, though, genuinely free public WiFi became far more available in US cities.

Some particularly ambitious cities launched wide-area WiFi programs, and for a brief time "Municipal WiFi" was a market sector. Portland, where I grew up, was one of these, with a wide-area WiFi network covering my childhood home for a couple of years. Like most, the program didn't survive to see 2020. Ironically, efforts to address the "digital divide" have led to a partial renaissance of municipal WiFi. Many cities now advertise free WiFi service at parks, libraries, and other public places. I was pleased to see that Mexico City has a relatively expansive municipal WiFi service, probably taking advantage of the municipal IP network they have built out for video surveillance and emergency phones.

The 2000s, though, were different. "Is there WiFi here?" was the sort of question you heard all the time in the background. WiFi was seen as a revenue source (less common today, although the hotel industry certainly still has its holdouts) and so facility-offered WiFi was often costly. A surprising number of US airports, for example, had either no WiFi or only a paid service even through the 2010s. I'm sure there are still some like this today, but paid WiFi seems on the way out [1], probably as a result of the strong competition it gets from LTE and 5G. The point, though, is that back in 2006 we were all hungry for WiFi all the time.

We also have to understand that the 802.11 protocol that underlies WiFi is surprisingly complex and offers various different modes. We deal with this less today, but in the 2000s it was part of computer user consciousness that WiFi came in two distinct flavors. 802.11 beacon packets, used to advertise WiFi networks to nearby devices, include a flag that indicates whether the network operates in infrastructure mode or ad-hoc mode.
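
That flag is easy to see for yourself: it's in the capability field of every beacon. A sketch with scapy, assuming a card already in monitor mode (and that your interface is called "wlan0mon", which it probably isn't):

    # Watch 802.11 beacons and report whether each network advertises itself
    # as infrastructure (ESS capability bit) or ad-hoc (IBSS bit). Needs root
    # and a monitor-mode interface.

    from scapy.all import sniff, Dot11Beacon, Dot11Elt

    def classify(pkt):
        if not pkt.haslayer(Dot11Beacon):
            return
        cap = pkt.sprintf("%Dot11Beacon.cap%")  # e.g. "ESS+privacy" or "IBSS"
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        mode = "ad-hoc" if "IBSS" in cap else "infrastructure"
        print(f"{mode:>14}: {ssid!r}")

    sniff(iface="wlan0mon", prn=classify, store=False)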

A network in infrastructure mode, basically the normal case, requires all clients to communicate with the access point (AP). When two clients exchange traffic, the AP serves as an intermediary, receiving packets from one device and transmitting them to the other. This might at first seem inefficient, but this kind of centralization is very common in radio systems as it offers a simple solution to a complex problem. If a WiFi network consists of three devices, an AP and two clients (A and B), we know that clients A and B can communicate with the AP because they are maintaining an association. We don't know if A and B can communicate with each other. They may be on far opposite sides of the AP's range, there may be a thick concrete wall between A and B, one device may have very weak transmit power, etc. Sending all traffic through the AP solves this problem the same way a traditional radio repeater does, by serving as an intermediary that is (by definition for an AP) well-positioned in the network coverage area.

The other basic WiFi mode is the ad-hoc network. In an ad-hoc network, devices communicate directly with each other. The main advantage of an ad-hoc network is that no AP is required. This allowed me and a high school friend to communicate via UnrealIRCd running on one of our laptops during our particularly engaging US Government/Economics class (we called this "Governomics"). The main disadvantage of ad-hoc networks is that the loss of a central communications point makes setup and routing vastly more complicated. Today, there is a much better established set of technologies for distributed routing in mesh networks, and yet ad-hoc WiFi is still rare. In the 2000s it was much worse; ad-hoc mode was basically unusable by anyone not ready to perform manual IP address management (yes, link local addresses existed and we even used them for our IRC client configurations, but most people evidently found these more confusing than helpful).
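
Those link-local addresses, for the record, are the 169.254.x.y ones. Per my reading of RFC 3927, a host picks a candidate pseudo-randomly, conventionally seeded from its MAC address so that it lands on the same address each time, then ARP-probes for conflicts. The selection step, as a sketch:

    # Pick an RFC 3927 link-local candidate: pseudo-random within
    # 169.254.1.0 - 169.254.254.255, seeded from the MAC so the same machine
    # gets the same candidate. Conflict probing (ARP) omitted.

    import hashlib

    def linklocal_candidate(mac):
        seed = int.from_bytes(hashlib.sha256(mac.encode()).digest()[:4], "big")
        host = seed % (254 * 256)  # excludes the .0.x and .255.x blocks
        return f"169.254.{1 + host // 256}.{host % 256}"

    print(linklocal_candidate("00:1f:3c:aa:bb:cc"))  # same MAC -> same address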

In general, ad-hoc networks are a bit of a forgotten backwater of consumer WiFi technology. At the same time, the promise of ad-hoc networks featured heavily in marketing around WiFi, compelling vendors to offer a clear route to creating and joining them. This has allowed some weird behaviors to hang around in WiFi implementations.

Another thing about WiFi networks in the 2000s, and I swear this is all building to a point, is that the software tools for connecting to them were not very good. On Windows, WiFi adapter vendors distributed their own software. Anyone with a Windows laptop in, say, 2005 probably remembers Dell QuickSet Wireless, Intel PROSet/Wireless (this is actually how they style the name), and Broadcom WLAN Utility. The main thing that these vendor-supported wireless configuration utilities shared was an astounding lack of quality control, even by the standards of the time. They were all terrible: bizarre, intrusive, over-branded UX on top of a network configuration framework that had probably never worked reliably, even in the original developer's test environment.

Perhaps realizing that this hellscape of software from hardware companies was undoubtedly having a negative impact on consumer perception of Windows [2], Microsoft creaked into action. Well, this part is kind of confusing, in a classically Microsoft way. Windows XP had a built-in wireless configuration management utility from the start, called Wireless Zero Configuration. The most irritating thing about the vendor utilities was that they were unnecessary; most of the time you could just uninstall them and use Wireless Zero and everything would work fine.

Wireless Zero was the superior software too, perhaps because it had fewer features and was designed by someone with more of the perspective of a computer user than a wireless networking engineer. Maybe I'm looking back at Wireless Zero through rose-colored glasses, but my recollection is that several people I knew sincerely struggled to use WiFi. The fix was to remove whatever garbage their network adapter vendor had provided and show them Wireless Zero, where connecting to a network meant clicking on it in a list rather than going through a five-step wizard.

So why did the vendor utilities even exist? Mostly, I think, because of the incredible urge PC vendors have to "add value." Gravis, in the context of "quick start" operating systems, gives a good explanation of this phenomenon. The problem with being a PC vendor is that all of the products on the market offer a mostly identical experience. For vendors to get any competitive moat bigger than loud industrial design (remember when you badly wanted a Vaio for the looks?), they had to "add value" by bolting on something they had developed internally. These value-adds were, almost without exception, worthless garbage. And wireless configuration utilities were just another example, a way for Intel to put their brand in front of your face (seemingly the main concern of Intel R&D to this day) despite doing the same thing everyone else did.

There was a second reason, as well. While it was a good fit for typical consumer use, Wireless Zero was not as feature-complete as many of the vendor utilities were. Until the release of Vista and SP3, Wireless Zero was basically its own proprietary solution just like the vendor utilities. There was no standard API to interact with wireless configuration on XP/SP1/2, so if a vendor wanted to offer anything Zero couldn't do, they had to ship a whole Product of their own. Microsoft's introduction of a WiFi config API in Vista (and basically backporting it to SP3) was a big blow to proprietary wireless utilities, but it probably had less of an impact than the general decline of crapware in Vista and later.

This is not to say that they're gone. A surprising number of PCs still ship with some kind of inane OEM software suite that offers a half-baked wireless configuration utility (just a frontend on the Windows API) alongside the world's worst backup service, a free trial offer for a streaming service you haven't heard of but represents the death throes of a once great national cable network, and something that tells you if your PC is "healthy" based on something about the registry that has never and will never impact your life??? God how is the PC industry still like this [3].

I think I have adequately set the stage for our featured story. In the late 2000s, huge numbers of people were (a) desperately looking for a working WiFi network even though they were in a place like an airport that should clearly, by civilized standards, have a free one; (b) using Wireless Zero on XP/SP1/2; and (c) in possession of only a vague understanding of ad-hoc networks, which were nonetheless actively encouraged by WiFi vendors and their software.

Oh, there is a final ingredient: Wireless Zero had an interesting behavior around ad-hoc networks. It's the kind of thing that sounds like an incredibly bad decision in retrospect, but I can see how Microsoft got there. Let's say that, for some reason and somehow, a consumer uses ad-hoc WiFi. It was ostensibly possible, not even really that hard, to use ad-hoc WiFi to provide internet access in a home (from e.g. a USB DSL modem, still common at the time). It's just that the boxes you had to check were enough clicks deep in the network control panel that I doubt many people ever got there.

One of the problems with ad-hoc WiFi, though, is that ad-hoc networks can be annoying to join. You've got to enter the SSID and key, which is already bad enough, but then you're going to be asked if it's WEP or WPA or WPA2 and then, insult to injury, if the WPA2 is in TKIP or AES mode. For ad-hoc networks to be usable, something had to broadcast beacons, and without an AP, that had to be the first computer in the network.

So, now that you have your working ad-hoc setup complete with beacons, you might want to take your laptop, unplug it from the DSL modem, and take it somewhere else. Maybe you go on a trip, use the WiFi at a hotel (probably $15 a day depending on your WORLD OF HYATT status), then come back home and plug things back in the way they were. You would expect your home internet setup to pick up where you left off, but people didn't have as many devices back then and especially not as many always-on. Your laptop, de facto "host" of the ad-hoc network, may be the only network participant up and running when you want to connect a new device. So what does it need to do? Transmit beacons again, even though the network configuration has changed a few times.

The problem is that it's really hard for a system in an ad-hoc network to know whether or not it should advertise it. Wireless Zero didn't really provide any way to surface this decision to the user, and the user probably wouldn't have understood what it meant anyway. So Microsoft took what probably seemed, in the naivety of the day, to be a reasonable approach: once a Windows XP machine had connected to an ad-hoc network, it "remembered" it the same way it did the "favorite" networks, for automatic reconnection. Assuming that it might just be the first device in the ad-hoc network to come up, if the machine had a remembered ad-hoc network and wasn't associated with anything else, it would transmit beacons.

Put another way, this behavior sounds far more problematic: if a Windows XP machine had an ad-hoc network favorited (which would be default if it had ever connected to one), then when it wasn't connected to any other WiFi network, it would beacon the favorited ad-hoc network to make it easier for other hosts to connect. Ad-hoc networks could get stuck in there, a ghost in Wireless Zero.
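
As pseudo-Python, my paraphrase of the rule (not anything resembling Microsoft's actual code):

    # The Wireless Zero idle behavior as described above, paraphrased.

    def wireless_zero_idle_tick(preferred_networks, associated):
        if associated:
            return None                 # connected somewhere; stay quiet
        for net in preferred_networks:  # the remembered "favorites", in order
            if net.mode == "ad-hoc":
                return net              # start transmitting its beacons
        return None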

You can no doubt see where this goes. "Free Public WiFi" was just some ad-hoc network that someone created once. We don't know why; most people assume ill intent, but I don't think that's necessary. Maybe some well-meaning cafe owner had an old computer with a USB DSL modem they used for Business and decided to offer cafe WiFi with the hardware they already owned. The easiest way (and probably only way, given that driver support for infrastructure mode AP behavior on computer WiFi adapters remains uneven today) would be to create an ad-hoc network and check the right boxes to enable forwarding. But who knows: maybe it was someone intercepting traffic for malicious purposes, maybe it was someone playing a joke. All we really know is that it happened sometime before 2006, when I find the first public reference to the phenomenon.

Whoever it was, they were patient zero. The first Windows XP machine to connect became infected, and when its owner took it somewhere else and didn't connect to a WiFi network, it helpfully beaconed Free Public WiFi. Someone else, seeing such a promising network name, connected. Frustrated by the lack of Hotmail access, they disconnected and moved on... but, unknowingly, they were now part of The Ad-Hoc Network.

The phenomenon must have spread quickly. In 2007, a wire service column of security tips (attributed to the Better Business Bureau, noted information security experts) warns that "this network may be an ad-hoc network used by hackers hunting for credit card information, Social Security numbers and account passwords." Maybe! Stranger things have happened! I would put good money on "no" (the same article encourages using a VPN, an early link in a chain that leads to the worst YouTube content today).

By 2008-2009, when I think I had reached a high level of owning a laptop and using it in strange places, it was almost universal. "Free Public WiFi" enchanted me as a teenager because it was everywhere. I could hardly open my laptop without seeing it there in the Wireless Zero list. Like the Morris worm, it exploited a behavior so widespread and so unprotected that I think it must have burned through a substantial portion of the Windows XP laptop fleet.

"Free Public WiFi" would reach an end. In Service Pack 3, as part of the introduction of the new WLAN framework, Microsoft fixed the beacon behavior. This was before the era of forced updates, though, and XP was particularly notorious for slow uptake of service packs. "Free Public WiFi" was apparently still widespread in 2010 when NPR's mention inspired a wave of news coverage. Anecdotally, I think I remember seeing it into 2012. One wonders: is it still around today?

Unfortunately, I always have a hard time with large-scale research on WiFi networks. WiGLE makes a tantalizing offer of an open data set to answer this kind of question, but the query interface is much too limited and the API has a prohibitively low quota. Maxing out my API limits every day, I think it'd take over a month to extract all the "Free Public WiFi" records so that I could filter them the way I want to. Perhaps I should make a sales inquiry for a commercial account for my enterprise blogging needs, but it's just never felt to me like WiGLE is actually a good resource for the security community. They're kind of like hoarders: they have an incredible wealth of data, but they don't want to give any of it up.
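
For what it's worth, the query itself is trivial; the quota is the problem. Something like this against the v2 API, assuming you've signed up for an API name and token:

    # Search WiGLE for networks named "Free Public WiFi". The daily query
    # allowance on a free account is small, hence the complaint above.

    import requests

    resp = requests.get(
        "https://api.wigle.net/api/v2/network/search",
        params={"ssid": "Free Public WiFi"},
        auth=("YOUR_API_NAME", "YOUR_API_TOKEN"),  # from your WiGLE account
    )
    resp.raise_for_status()
    for net in resp.json().get("results", []):
        print(net.get("netid"), net.get("type"), net.get("lastupdt"))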

I pulled the few thousand records I'm allowed to get today from WiGLE and then changed tracks to WifiDB, which is much less known than WiGLE but actually makes the data available. Unfortunately WifiDB has a much lower user count, and so the data is clearly impacted by collection bias (namely the impressive work of one specific contributor in Phoenix, AZ).

Still, I can find instances of ad-hoc "Free Public WiFi" spanning 2006 to as late as 2018! It's hard to know what's going on there. I would seriously consider beaconing "Free Public WiFi" today as a joke, but it may be that in 2018 there was still some XP SP2 laptop in the Phoenix area desperately hoping for internet access.

WifiDB data, limited though it is, suggests that The Ad-Hoc Network peaked in 2010. Why not a crude visualization?

2006    1   |
2007    0   
2008    39  |||||
2009    82  |||||||||
2010    93  ||||||||||
2011    20  |||
2012    2   |
2013    0
2014    1   |
2015    5   ||
2016    3   |
2017    2   |
2018    1   |

That 2006 detection is the first, which lines up with NPR's reporting, but could easily also be an artifact of WifiDB's collection. And 2018! The long tail on this is impressive, but not all that surprising. XP had a real reputation for its staying power. There are surely still people out there that hold that XP was the last truly good Windows release---and honestly I might be one of them. Every end-of-life announcement for XP triggered a wave of complaints in the industry rags. In 2018, some niche versions of XP (e.g. POSReady) were still under security support!

Most recent observations of "Free Public WiFi" are actually infrastructure-mode networks. It's an amusing outcome that "Free Public WiFi" has been legitimized over time. In Bloomington, Indiana I think it's actually the public WiFi at a government building. Some office buildings and gas stations make appearances. "Free Public WiFi" is probably more likely to work today than not... but no guarantee that it won't steal your credit card. Pay heed to the Better Business Bureau and take caution. Consider using a VPN... how about a word from our sponsor?

Postscript: I have been uploading some YouTube videos! None of them are good, but check it out. I'm about to record another one, about burglar alarms.

[1] Paid WiFi still seems alive and well at truck stops. Circumstances on a recent cross-country trip led to me paying an outrageous sum, something like $20, for one day of access to a nationwide truck stop WiFi service that was somewhere between "completely broken" and "barely usable to send an email" at the three successive TAs I tried. My original goal of downloading a several-GiB file was eventually achieved by eating at a restaurant proximate to a Motel 6. Motel 6 may be the nation's leading municipal WiFi operator.

[2] Can we think of another set of powerful hardware vendors consistently dragging down the (already questionably seaworthy) Windows ecosystem by shipping just absolute trash software that's mandatory for full use of their hardware? Companies that are considered major centers of computer innovation yet distribute a "driver" as an installer for an installer that takes over a minute just to install the installer? Someone with the gall to call their somehow even less stable release branch "ADRENALINE EDITION"?

[3] I used to have a ThinkPad with an extra button that did nothing because Lenovo decided not to support the utility that made it do things on Vista or later. This laptop was sold well after the release of Vista and I think shipped with 7. That situation existed on certain ThinkPad models for two generations. Things like this drive you to the edge of the Apple Store I swear, and Lenovo isn't as bad as some.
