_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford
COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.

I have an MS in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.

You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.

--------------------------------------------------------------------------------

>>> 2024-07-13 the contemporary carphone

Cathode Ray Dude, in one of his many excellent "Quick Start" videos, made an interesting observation that has stuck in my brain: sophisticated computer users, nerds if you will, have a tendency to completely ignore things that they know are worthless. He was referring to the second power button present on a remarkably large portion of laptops sold in the Windows Vista era. Indeed, I owned at least two of these laptops and never gave it any real consideration.

I think the phenomenon is broader. As consumers in general, we've gotten very good at completely disregarding things that don't offer us anything worthwhile, even when they want to be noticed. "Banner blindness" is a particularly acute form of this adaptation to capitalism. Our almost subconscious filtering of our perception to things that seem worth the intellectual effort allows a lot of ubiquitous features of products to fly under the radar. Buttons that we just never press, because sometime a decade ago we got the impression they were useless.

I haven't written for a bit because I've been doing a lot of traveling. Somewhere in the two thousand miles or so we covered, my husband gestured vaguely at the headliner of our car. "what is that button for?" He was referring to a button that I have a learned inability to perceive: the friendly blue "information" button, right next to the less friendly red "SOS" button. Most cars on the US market today have these buttons, and in Europe they're mandatory (well, at least the red one, but I suspect the value-add potential of the blue one is not one that most automakers would turn down). And there's a whole story behind them.

It all started in 1996 at General Motors. Wikipedia tells us that it actually started with a collaboration of General Motors (GM), Electronic Data Systems (EDS), and Hughes Electronics. That isn't incorrect, but misses the interesting point that EDS and Hughes were both subsidiaries of GM at the time. GM was a massive company, full of what you might call vim and vigor, and it happened to own both a major IT services firm (EDS) and a major communications technology company (Hughes). It was sort of inevitable that GM would try integrating these into some sort of sophisticated car-technology-communications platform. They went full-steam ahead on this ambitious project, and what they delivered is OnStar.

But first, a brief tangent into corporate history. I won't say much about GM because I am not an automotive history person at all, but I will say a bit about EDS and Hughes. Hughes was obviously the product of a notable and often enigmatic figure of history, Howard Hughes. GM owned Hughes Electronics because Howard Hughes had cleverly placed his business ventures under the ownership of a massive and brazen tax shelter called the Howard Hughes Medical Institute. Howard Hughes died without leaving a will or any successor as trustee of HHMI, putting HHMI into an awkward legal and organizational struggle as it abruptly pivoted from "Howard Hughes' personal tax scheme" to "independent foundation that incidentally owned a major defense contractor." HHMI ultimately made the decision to turn Hughes' business empire into an endowment, and sold Hughes Aircraft. General Motors was the high bidder. A set of confusing details of the Hughes amalgamation, like the fact that Hughes Aircraft didn't own all of the Hughes aircraft, led to the whole thing becoming the Hughes Electronics subsidiary of GM.

After GM essentially stripped it for parts, Hughes Electronics lives on today under the name DirecTV. The satellite internet company that actually markets under the name Hughes is, oddly enough, one of the parts that GM stripped off. Hughes Communications became part of EchoStar, operator of Hughes Electronics competitor DISH Network, which then spun Hughes Communications out, and then bought it back again. You can't make this stuff up. The point is that the strange legacy of Howard Hughes, the HHMI, and GM's ownership of Hughes Aircraft mean that the name "Hughes" is now sort of randomly splashed across the satellite communications industry. It's sort of like how Martin Marietta still paves freeways in Colorado.

Electronic Data Systems is not quite as interesting, but it was run by two-time minor presidential candidate Ross Perot, so that's something. GM dumped EDS almost immediately after launching OnStar. EDS eventually became part of Hewlett Packard, which, by that time, had become a sort of retirement home for enterprise technology companies. It more or less survives today as part of various large companies that you've never heard of but have nonetheless secured 9-digit contracts to do ominous things for the Department of Defense.

What a crowd, huh? It's a good thing that nothing strange and terrible happened to General Motors in approximately 2008.

So anyway, OnStar. OnStar was, basically, a straightforward evolution of the carphone backed by a concierge-like telephone service center. In that light, it's an unsurprising development: the carphone was just on its way out in the mid-'90s, falling victim to increasingly portable handheld phones. Hughes, by its division Hughes Network Systems, was an established carphone manufacturer but seems to have had few or no offerings in the mobile phone space [1]. To Hughes long-timers, OnStar was probably an obvious way to preserve the popularity of carphones: build them into the car at the factory, with factory-quality finish.

GM had their own goals. Ironically, it is in large part due to GM's efforts that built-in telephony is so common (and yet so ignored) in cars today. The situation was much different in 1996: OnStar was a new offering, only available from GM. It had the promise of competitive differentiation from other automakers, but for that to work, GM would have to differentiate it from the carphones widely available on the aftermarket. This tension, the conflict between "we built a carphone in at the factory" and "carphones are going out of style," probably explains why OnStar marketing focused on safety and security.

"General Motors has come up with the ultimate safety system" lead a '96 newspaper article. Marketing materials prominently positioned roadside assistance, automatic emergency calls on airbag deployment, remote door unlock, and locating stolen cars. These were features that your average carphone couldn't offer, because they required closer integration with the vehicle itself. OnStar was more than a carphone, it was a telematics system.

"Telematics" is one of those broad, cross-discipline concepts that we don't really talk about any more because it's become so ubiquitous as to be uninteresting. Like Cybernetics, but without a tantalizing but lost historical promise in Chile. Telematics has often been more or less synonymous with "putting phones into cars," but is more broadly concerned with communications technology as it applies to moving vehicles. There is a particular emphasis on the vehicle part, and telematics has always been interested in vehicle-specific concerns like positioning, navigation, and the collection of real-time data.

Telematics was already a developed field by the '90s, although the high cost and large size of communications equipment made it less universal than it is today. OnStar would lead one of the biggest changes in the modern automotive industry: the extension of telematics from commercial and industrial equipment to consumer automobiles. In doing so, it would introduce select GM drivers to an impressive set of benefits, almost a form of ambient computing. It would also start a cascade of falling dominoes that led, rather directly, to a remarkable lack of privacy in modern vehicles and getting an email that something mysterious is wrong with your car two to four hours after the tire pressure light comes on. The computer gives and takes.

And what of EDS? EDS provided the other half of OnStar's differentiation from a mere carphone. OnStar was not only integrated into your vehicle, it was backed by a team of Service Advisors with training and tools to use that integration. The OnStar equipment included a GPS receiver, still a fairly cutting-edge technology at the time, and continuously provided your location to the OnStar service center in Michigan. Advisors had access to maps and travel directories and the ability to dispatch tow trucks and emergency responders. They could even send a limited set of remote commands to OnStar vehicles. The infrastructure to support this modern telematic call center was built by EDS, and the staff of human advisors provided a friendly face and a level of flexibility that was difficult to achieve by automation alone.

Besides emergencies and roadside assistance, the advisors could solve one of the most formidable problems in automotive technology: navigation. When GM's advertising and press coverage strayed from emergency assistance, they focused on concierge-like services centered on navigation. OnStar could direct you to gas or food. They could not only reserve a hotel room, but get you to the hotel. If you have seen the wacky turn-by-turn navigation technology that proliferated in the late 20th century, you might wonder how exactly that worked. Did an advisor stay awkwardly on the line? No, of course not, that would be both awkward and costly. They read out driving directions, which the OnStar equipment recorded for playback.

I really wish I could find a complete description of the user experience, because I suspect it was bad. The basic idea of recording spoken guidance and playing it back for reference is a common feature in aviation radios, but that's mostly for dealing with characteristically terse and fast-talking ground controllers, and usually consists of a short playback buffer that always starts from the beginning. Given the technology available, I suspect the OnStar approach was similar, but just with a... longer playback buffer. Thinking about listening through the directions over and over again to find one turn gives me anxiety, but it was 1996.

Technology advanced like it always does, and by the mid '00s at least some GM vehicles had the ability to display turn-by-turn instructions, provided by OnStar, as the driver needed them. Fortunately there are videos from this era, so I know that the UX was... better than expected, but strange. It's odd to see an LCD-matrix radio display, with no promise of navigation features, start displaying large turn arrows and distances after an OnStar call. One of the interesting things about OnStar is that the "human in the loop" nature of OnStar features makes it sort of a transitional stage between cassette tapes and Apple CarPlay. OnStar allowed human operators and remote computer systems to do the hard parts, allowing cars to behave in a way that seemed very ahead of their time.

Another interesting thing about OnStar, given the constant mention of satellites in its marketing, was the lack of actual satellite communications. Hughes, a satellite technology company, was involved. Articles about OnStar coyly refer to satellite technology, or say it's "powered by satellites." Of course, OnStar cost $22.50 a month in 1996, and $22.50 a month didn't entitle you to so much as look at a satellite phone in 1996. The satellite technology was limited to the GPS receiver; all voice communications were cellular. AMPS, specifically. The first several generations of OnStar, into the early '00s, relied on AMPS.

Telematics, telemetry, and the applications we now call "IoT" often struggle with the realities of communications networks. AMPS, often just referred to as "analog," was the first cellular communications standard to reach widespread popularity. For over a decade, everything cellular used AMPS. Then CDMA and GSM and even, may we all shed a tear, iDEN took over. These were digital standards with improved capacity and capabilities. It was inevitable that they would replace AMPS, and with the short lifespan of a consumer cellular phone, devices without support for digital networks naturally faded away... except for a bunch of them. OnStar and burglar alarms are two famous AMPS-retirement scandals. The deactivation of AMPS networks in 2008 left cars and alarm communicators across the country unable to communicate, and prompted a series of replacement programs, lawsuits, trade-in deals, lawsuits, and more lawsuits that are influential on how cellular networks are retired today (meaning: as rarely as possible).

The obsolescence of OnStar equipment in older vehicles by AMPS retirement left a black mark on OnStar's history that still hangs over it today. It was, I think, a vanguard of the larger impacts of fast-changing technology being integrated into cars. While vehicles have indeed become more reliable over time, there is an ever-present anxiety that new cars are more like consumer electronics, built for a three-year replacement cycle. The forced retirement of half a million OnStar buttons is probably one of the most visible examples of automotive equipment failing due to industry change rather than age.

In 2022, 2G cellular service was retired in the United States. With it went another generation of OnStar-equipped vehicles. For a combination of reasons, though, including a more conservative approach to 2G retirement in the cellular industry and, likely, GM planning further ahead, only two model years were impacted.

Incidentally, Ford also had an offering very much like OnStar, called RESCU and introduced in 1996 as well. It was pretty universally agreed by automotive journalists at the time that RESCU was more primitive than OnStar and amounted mostly to a knee-jerk "we also have one of those" response to GM's launch. RESCU is perhaps worth mentioning, though, for its contribution to the lineage of Ford's SYNC platform, at least in the form of gratuitous all-caps.

In 2002, GM offered OnStar for licensing to other automotive manufacturers. Subarus, among others, began to sprout blue buttons in the overhead. But what had happened to competitive differentiation? Well, automotive technology tends to go through two phases: First, it differentiates. Second, it's mandated. The originating manufacturer can make quite a bit of money off of both.

In 1995, a year prior to the launch of OnStar, the National Highway Traffic Safety Administration (NHTSA) was already investigating the possibility of an Automated Collision Notification (ACN) system. ACN would automatically call 911 in the event of a dangerous crash, improving driver safety. As far as I can tell, GM is not the origin of the ACN concept. NHTSA's work on ACN started with the National Automated Highway System (NAHS), an ambitious technology development program launched in 1991 that imagined a very different self-driving car from the ones that we see today. The NAHS involved mesh networking between automated vehicles to form "platoons," close-following cars (for fuel efficiency) that synchronized their control actions. The mesh network would extend to road-side signaling systems, and would lead eventually to the end of traffic signals as cars automatically negotiated intersection time slots.

The NAHS never came to be and probably never will, but the NHTSA's retro-futuristic graphics of '90s sedans linked by blue waves echo through my childhood like they do through the pages of Popular Mechanics and the academic literature on self-driving. Or, they did, until a new generation of Silicon Valley companies coopted self-driving for their own purposes. This is not an entirely fair take on the history, I am certainly applying rose-colored (or is it cerulean blue?) glasses to the NAHS, but I think it is hard to argue that there has not been a loss of ambition in our vision of the self-driving future. For one, we stopped drawing blue waves on everything.

Anyway, GM may not have created ACN. If anyone, I think that honor might fall to Johns Hopkins University. But they sure did get involved: by 1996, the year OnStar launched, Delco Electronics was building ACN prototypes for NHTSA. Delco Electronics was a division of GM (its history is closely intertwined with that of Hughes in this period; parts of Delco were and would become parts of Hughes and vice-versa). Over the following years, GM really jumped in: OnStar was ACN, and ACN should be mandatory.

Here's the thing: it's never really worked. The move of introducing a technology and then pushing for it to become mandatory is a fairly well-known one in the automotive industry, and to its credit, has led to numerous safety advancements in consumer cars and no doubt a meaningful reduction in fatalities (to its discredit, it is often cited as one of the reasons for steeply rising prices on new cars).

Universal OnStar has come tantalizingly close. Europe mandated "eCall," functionally identical to ACN, in 2018. I'm not sure how directly GM was involved, but there are GM patents in the licensing pool required to implement eCall, so it's at least more than "not at all." But despite its increasing presence, ACN isn't required in the US. Automakers aren't even consistent on whether it's standard or a paid add-on.

GM is still hacking away at this one... as recently as last year, GM was taking federal grants to study ACN and propose standards. In collaboration with CDC, GM developed a system called AACN that uses accelerometer data to predict the severity of injury to occupants and difficulty of rescue. It's installed in newer OnStar vehicles, and Ford has even licensed it for Ford SYNC, but the data rarely goes anywhere at all... 911 PSAPs that receive the calls from ACN systems aren't equipped to receive the extra metadata; extensions to E911 to facilitate AACN data exchange are another thing that GM is actively involved in.

GM really seems to have put a 30-year effort into mandating OnStar in the US, but they just can't get it over the finish line. In the meantime, OnStar has stopped mattering.

GM's program to license OnStar to other automakers was short-lived. I'm not sure exactly why, but GM also gave up on their "OnStar For My Vehicle" aftermarket product. Even as OnStar continued to gain features, its ambition waned. I think that the problem was simple: by the mid-2000s, putting a phone into a car was becoming pretty easy. Besides, the "connected car" offered too many advantages for any automaker to turn down. Can you imagine the benefits of storing location history for the entire fleet of vehicles you've sold? You can sell that to the insurance industry! You know GM did, of course they did, and of course it's the subject of an ongoing class-action lawsuit.

OnStar just stopped being special. I was actually a little surprised to notice that the blue button in the overhead of my modern Subaru isn't an OnStar button; Subaru stopped licensing OnStar in '06. It's just another manifestation of Subaru StarLink, a confusing menagerie of vaguely-telematic features that are mostly built on contract by Samsung. Once the car has an LTE modem for remote start and maintenance telemetry and selling your driving habits to LexisNexis, throwing in a button that makes a phone call is hardly an engineering achievement.

You know, sometimes it feels like smartphones can only incidentally make phone calls. With the move to VoLTE, it's not even really a deeply-embedded functionality any more. "Phone" is just an application on the thing that, for reasons of habit, we call a "phone."

The legacy of OnStar is much the same: of course your car can make phone calls, GM shoved a carphone in the trunk in 1996 and it's still in there somewhere. It's just one of a million things modern vehicle telematics do, and frankly, it's one of the least interesting ones. Ironically, GM is taking the carphone back out: in 2022, GM discontinued the OnStar telephone service. It's no longer possible to have a phone number assigned to your car and use OnStar for routine calls. Everyone uses an app on their phone for that.

[1] I am excluding here their satellite phones, although they were surprisingly advanced for the mid-'90s and probably would have competed well with cellular phones if the service wasn't so costly.

--------------------------------------------------------------------------------

>>> 2024-06-08 dmv.org

The majority of US states have something called a "Department of Motor Vehicles," or DMV. Actually, the universality of the term "DMV" seems to be overstated. A more general term is "motor vehicle administrator," used for example by the American Association of Motor Vehicle Administrators to address the inconsistent terminology.

Not happy with merely noting that I live in a state with an "MVD" rather than a "DMV," I did the kind of serious investigative journalism that you have come to expect from me. Of These Fifty United States plus six territories, I count 28 DMVs, 5 MVDs, 5 BMVs, 2 OMVs, 2 "Driver Services," and the remainder are hard to describe succinctly. In fact, there's a surprising amount of ambiguity across the board. A number of states don't seem to formally have an agency or division called the DMV, but nonetheless use the term "DMV" to describe something like the Office of Driver Licensing of the Department of Transportation.

Indeed, the very topic of where the motor vehicle administrator is found is interesting. Many exist within the Department of Transportation or Department of Revenue (which goes by different names depending on the state, such as DTR or DFA). Some states place driver's licensing within the Department of State. One of the more unusual cases is Oklahoma, which recently formed a new state agency for motor vehicle administration but with the goal of expanding to other state customer service functions... leaving it with the generic name of Service Oklahoma.

The most exceptional case, as you'll find with other state government functions as well, is Hawaii. Hawaii has deferred motor vehicle administration to counties, with the Honolulu CSD or DCS (they are inconsistent!) the largest, alongside others like the Hawaii County VRL.

So, the point is that DMV is sort of a colloquialism, one that is widely understood since the most populous states (CA and TX for example) have proper DMVs. Florida, third most populous state, actually has a DHSMV or FLHSMV depending on where you look... but their online services portal is branded MyDMV, even though there is no state agency or division called the DMV. See how this can be confusing?

Anyway, if you are sitting around on a Saturday morning searching for the name of every state plus "DMV" like I am, you will notice something else: a lot of... suspicious results. guamtax.com is, it turns out, actually the website of the Guam Department of Revenue and Taxation. dmvflorida.org is not to be confused with the memorable flhsmv.gov, and especially not with mydmvportal.flhsmv.gov. You have to put "portal" in the domain name so people know it's a portal, it's like how "apdonline.com" has "online" in it so you know that it's a website on the internet.

dmvflorida.org calls itself the "American Safety Council's Guide to the Florida Department of Motor Vehicles." Now, we have established that the "Florida Department of Motor Vehicles" does not exist, but the State of Florida itself seems a little confused on that point, so I'll let it slide. But that brings us to the American Safety Council, or ASC.

ASC is... It's sort of styled to sound like the National Safety Council (NSC) or National Sanitation Foundation (NSF), independent nonprofits that publish standards and guidance. ASC is a different deal. ASC is a for-profit vendor of training courses. Based on the row of badges on their homepage, ASC wants you to know that they are "Shopper Approved" and "Certifiably Excellent (The Stats To Prove It)," that they have won a "5-Star Excellence Award" (from whom, not specified), and that the Orlando Business Journal included their own John Comly on its 2019 list of "CEOs of the Year."

This is the most impressive credential they have on offer besides accreditation by IACET, an industry association behind the "continuing education units" used by many certifications, which is currently hosting a webinar series on "how AI is reshaping learning landscapes from curriculum design to compliance." This does indeed mean that, in the future, your corporate sexual harassment training will be generated by Vyond Formerly GoAnimate based on a script right out of ChatGPT. The quality of the content will, surprisingly, not be adversely affected. "As you can see, this is very important to Your Company. Click Here [broken link] to read your organization's policy."

In reality, ASC is a popular vendor of driver safety courses that businesses need their employees to take in order to get an insurance discount. Somewhere in a drawer I have a "New Mexico Vehicle Operator's Permit," a flimsy paper credential issued to state employees in recognition of their having completed an ASC course that consisted mostly of memorizing that "LOS POT" stands for "line of sight, path of travel." Years later, I am fuzzy on what that phrase actually means, but expanding the acronym was on the test.

We can all reflect on the fact that the state's vehicle insurance program is not satisfied with merely possessing the driver's license that the state itself issues, but instead requires you to pass a shorter and easier exam on actually driving safely. Or knowing about the line of sight and the path of travel, or something. I once took a Motorcycle Safety Foundation course that included a truly incomprehensible diagram of the priorities for scanning points of conflict at an intersection, a work of such information density that any motorcyclist attempting to apply it by rote would be entirely through the intersection and to the next one before completing the checklist. We were, nonetheless, taught as if we were expected to learn it that way. Driver's education is the ultimate test of "Adult Learning Theory," a loose set of principles influential on the design of Adobe Captivate compliance courses, and the limitations of its ability to actually teach anyone anything.

This is all a tangent, so let's get back to the core. ASC sells safety courses and... operates dmvflorida.org?

Here's the thing: running DMV websites is a profitable business. Very few people look for the DMV website because they just wanted to read up on driver's license endorsements. Almost everyone who searches for "<state name> DMV" is on the way to spending money: they need to renew their license, or their registration, or get a driving test, or ideally, a driver's ed course or traffic school.

The latter are ideal because a huge number of states have privatized them, at least to some degree. Driver's ed and traffic school are both commonly offered by competitive for-profit ventures that will split revenue in exchange for acquiring a customer. I would say that dmvflorida.org is a referral scam, but it's actually not! It's even better: it's owned by ASC, one of the companies that competes to offer traffic school courses! It's just a big, vaguely government-looking funnel into ASC's core consumer product.

In some states, the situation is even better. DMV services are partially privatized or "agents" can submit paperwork on behalf of the consumer. Either of these models allows a website that tops Google results to submit your driver's license renewal on your behalf... and tack on a "convenience fee" for doing so. Indeed, Florida allows private third parties to administer the written exam for a driver's license, and you know dmvflorida.org offers such an online exam for just $24.95.

You can, of course, renew your driver's license online directly with the state, at least in the vast majority of cases. So how does a website that does the same thing, with the same rates, plus their own fee, compete? SEO. Their best bet is to outrank the actual state website, grabbing consumers and funneling them towards profitable offerings before they find the actual DMV website.

There's a whole world of DMV websites that operate in a fascinating nexus of SEO spam, referral farm, and nearly-fraudulent imitation of official state websites. This has been going on since, well, I have a reliable source that claims since 1999: dmv.org.

dmv.org is an incredible artifact of the internet. It contains an enormous amount of written content, much of it of surprisingly high quality, in an effort to maintain strong search engine rankings. It used to work: for many years, dmv.org routinely outranked state agency websites for queries that were anywhere close to "dmv" or "renew driver's license" or "traffic school." And it was all in the pursuit of referral and advertising revenue. Take it from them:

Advertise with DMV.ORG

Partner with one of the most valuable resource for DMV & driving - driven by 85% organic reach that captures 80% of U.S drivers, DMV.ORG helps organize the driver experience across the spectrum of DMV and automotive- related information. Want to reach this highly valued audience?

dmv.org claims to date back to 1999, and I have no reason to doubt them, but the earliest archived copies I can find are from 2000 and badly broken. By late 2001 the website has been redesigned, and reads "Welcome to the Department of Motor Vehicles Website Listings." If you follow the call to action and look up your state, it changes to "The Department of Motor Vehicles Portal on the Web!"

They should have gone for dmvportal.org for added credibility.

In 2002, dmv.org takes a new form: before doing pretty much anything, it asks you for your contact information, including an AOL, MSN, or Yahoo screen name. They promise not to sell your address to third parties but this appears to be a way to build their own marketing lists. They now prominently advertise vehicle history reports, giving you a referral link to CarFax.

Over the following months, more banner ads and referral links appear: vehicle history reports, now by edriver.com, $14.99 or $19.99. Driving record status, by drivingrecord.org, $19.99. Traffic School Online, available in 8 states, dmv-traffic-school.com and no price specified. The footer: "DMV.ORG IS PRIVATELY OPERATED AND MAINTAINED FOR THE BENEFIT OF ITS USERS."

In mid-2003, there's a rebranding. The header now reads "DMV Online Services." There are even more referral links. Just a month later, another redesign, a brief transitional stage, before in September 2003, dmv.org achieves the form familiar to most of us today: A large license-plate themed "DMV.ORG" logotype, referral links everywhere, map of the US where you can click on your state. "Rated #1 Site, CBS Early Show."

This year coincides, of course, with rapid adoption of the internet. Suddenly consumers really are online, and they really are searching for "DMV." And dmv.org is building a good reputation for itself. A widely syndicated 2002 newspaper article about post-marriage bureaucracy (often appearing in a Bridal Guide supplement) sends readers to dmv.org for information on updating their name. The Guardian, of London, points travelers at dmv.org for information on obtaining a handicap placard while visiting the US.

You also start to see the first signs of trouble. Over the following years, an increasing number of articles both in print and online refer to dmv.org as if it is the website of the Department of Motor Vehicles. We cannot totally blame them for the confusion. First, the internet was relatively new, and reporters had perhaps not learned to be suspicious of it. Second, states themselves sometimes fanned the flames. In a 2005 article, the director of driver services for the Mississippi Department of Transportation tells the reporter that you can now renew your driver's license online... at dmv.org.

dmv.org was operated by a company called eDriver. It's hard to find much about them, because they have faded into obscurity and search results are now dominated by the lawsuit that you probably suspected is coming. The "About Us" page of the dmv.org of this period is a great bit of copywriting, complete with dramatic stories, but almost goes out of its way not to name the people involved. "One of our principals likes to say..."

eDriver must not have been very large; their San Diego office address was a rented mailbox. Whether or not it started out that way is hard to say, but by 2008 eDriver was a subsidiary of Biz Groups Inc., along with Online Guru Inc and Find My Specialist Inc. These corporate names all have intense "SEO spam" energy, and they seem to have almost jointly operated dmv.org through a constellation of closely interlinked websites. In 2008, eDriver owned dmv.org but didn't even run it: they contracted Online Guru to manage the website.

Biz Groups Inc was owned by brothers Raj and Ravi Lahoti. Along with third brother David, the Lahotis were '00s domain name magnates. They often landed on the receiving end of UDRP complaints, ICANN's process for resolving disputes over the rightful ownership of domain names. Well, they were in the business: David Lahoti owns UDRP-tracking website udrpsearch.com to this day.

Their whole deal was, essentially, speculating on domain names. Some of them weren't big hits. An article on a dispute between the MIT Scratch project and the Lahotis (as owners of scratch.org) reads "Ravi updated the site at Scratch.org recently to includes news articles and videos with the word scratch in them. It also has a notice that the domain was registered in 1998 and includes the dictionary definition of scratch."

Others were more successful. In 2011, Raj Lahoti was interviewed by a Korean startup accelerator called beSuccess:

My older brother Ravi was the main inspiration behind starting OnlineGURU. Ravi owned many amazing domain names and although he didn't build a website on every one of his domains, he DID build a small website at www.DMV.org and this website started doing well. Well enough that he saw an opportunity to do something bigger with it and turn it into a bigger business.

And he is clear on how the strategy evolved to focus on SEO farming:

Search Engine Marketing and Search Engine Optimization has definitely been most effective in my overall marketing strategy. The beautiful thing about search engines is that you can target users who are looking for EXACTLY what you offer at the EXACT moment they are looking for it. Google Adwords has so many tools, such as the Google Keyword Tool where you can learn what people are searching for and how many people are searching the same thing. This has allowed me to learn about WHAT the world wants and gives me ideas on how I can provide solutions to help people with what they are looking for.

Also, San Diego Business Journal named Raj Lahoti "among the finalists of the publication's Most Admired CEO award" in 2011. So if he ever meets John Comly, they'll have something to talk about.

The thing is, the relationship between dmv.org and actual state motor vehicle administrators became blurrier over time... perhaps not coincidentally, just as dmv.org ascended to a top-ranking result across a huge range of Google queries. It really was a business built entirely on search engine ranking, and they seemed to achieve that ranking in part through a huge amount of content (that is distinctly a cut above the nearly incoherent SEO farms you see today), but also in part through feeding consumer confusion between them and state agencies. I personally remember ending up on dmv.org when looking for the actual DMV's website, and that was probably when I was trying to get a driver's license to begin with. It was getting a bit of a scammy reputation, actual DMVs were sometimes trying to steer people away from it, and in 2007 they were sued.

A group of website operators in basically the same industry, TrafficSchool.com Inc and Driver's Ed Direct, LLC, filed a false advertising suit against the Online Guru family of companies. They claimed not that dmv.org was fraudulent, but that it unfairly benefited from pretending to be an official website.

Their claim must have seemed credible. At the beginning of 2008, before the lawsuit made it very far, dmv.org's tagline changed from "No need to stand IN LINE. Your DMV guide is now ON LINE!" to "Your unofficial guide to the DMV." This became the most prominent indication that dmv.org was not an official website, supplementing the small, grey text that had been present in the footer for years.

The judge was not satisfied.

See, the outcome of the lawsuit was sort of funny. The court agreed that dmv.org was engaging in false advertising under the Lanham Act, but then found that the plaintiffs were doing basically the same thing, leaving them with "unclean hands." Incidentally, they would appeal and the appeals court would disagree on some details of the "unclean hands" finding, but the gist of the lower court's ruling held: the plaintiffs would not receive damages, since they had been pursuing the same strategy, but the court did issue an injunction requiring dmv.org to add a splash screen clearly stating that it was not an official website.

The lawsuit documents are actually a great read. The plaintiffs provided the court with a huge list of examples of confusion, including highlights like a Washington State Trooper emailing dmv.org requesting a DUI suspect's Oregon driving records. dmv.org admitted to the court that they received emails like this on "a daily basis," many of them being people attempting to comply with mandatory DUI reporting laws by reporting their recent DUI arrest... to Online Guru.

The court noted the changes made to dmv.org in early 2008, including the "Unofficial" heading and changing headings from, for example, "California DMV" to "California DMV Info." But those weren't sufficient: going forward, users would have to click "acknowledge" on a page warning them.

It is amusing, of course, that the SEO industry of the time interpreted the injunction mainly in the SEO context. This was, after all, a website that lived and died by Google rankings, part of a huge industry of similar websites. Eric Goldman's Technology and Marketing Law Blog wrote that "My hypothesis is that such an acknowledgment page wrecks DMV.org’s search engine indexing by preventing the robots from seeing the page content."

The takeaway:

This suggests a possible lesson to learn from this case. The defendants had a great domain name (DMV.org) that they managed to build nicely, but they may have too aggressive about stoking consumer expectations about their affiliation with the government.

It's wild that "get a good domain name and pack it with referral links" used to be a substantial portion of the internet economy. Good thing nothing that vapid survives today! Speaking of today, what happened to dmv.org?

Well, the court order softened over time, and the acknowledgment page ultimately went away. It was replaced by a large, top-of-page banner, almost comically reminiscent of those appearing on cigarettes. "DMV.ORG IS A PRIVATELY OWNED WEBSITE THAT IS NOT OWNED OR OPERATED BY ANY STATE GOVERNMENT AGENCY." Below that, the license plate dmv.org logotype, same as ever.

Besides, they reformed. At sustainablebrands.com we read:

Over our 10-year history, DMV.org’s mission has shifted entirely from profit to purpose. We not only want to bring value to our users by making their DMV experience easier, we ultimately want to reduce transportation-related deaths, encourage eco-friendly driving habits, and influence other businesses to reduce their carbon footprints and become stewards of change themselves.

This press-release-turned-article says that they painted "the company’s human values on our wall, to remind ourselves every day what we’re here for and why" and that, curiously, dmv.org "potentially aim[s] to" "eliminate Styrofoam from local eateries." The whole thing is such gross greenwashing, bordering on incoherent, that I might accuse it of being AI-generated were it not a decade old.

dmv.org lived by Google and, it seems, it will die by Google. Several SEO blogs report that, sometime in 2019, Google seems to have applied aggressive manual adjustments to a list of government-agency-like domain names that include irs.com (its whole own story) and dmv.org. Their search traffic almost instantaneously dropped by 80%.

dmv.org is still going today, but I'm not sure that it's relevant any more. I tried a scattering of Google queries like "new mexico driver's license" and "traffic school," the kind of thing where dmv.org used to win the top five results, and they weren't even on the first page. Online Guru still operates dmv.org, and "dmv.org is NOT your state agency" might as well be the new tagline. Phrases like that one constantly appear in headings and sidebars.

They advertise auto insurance, and will sell you online practice tests for $10. Curiously, when I look up how to renew my driver's license in New Mexico, dmv.org sends me to the actual NM MVD website. That's sort of a funny twist, because New Mexico does indeed allow renewal through private service providers that are permitted to charge a service fee. I don't think dmv.org makes enough money to manage compliance with all these state programs, though, so it's actually returned to its roots, in a way: just a directory of links to state websites.

Also, there's a form you can fill out to become a contributor! Computers Are Bad has been fun, but I'm joining the big leagues. Now I write for dmv.org.

--------------------------------------------------------------------------------

>>> 2024-06-02 consumer electronics control

In a previous episode, I discussed audio transports and mentioned that they have become a much less important part of the modern home theater landscape. One reason is the broad decline of the component system: most consumers aren't buying a television, home theater receiver, several playback devices, and speakers. Instead, they use a television and perhaps (hopefully!) a soundbar system, which often supports wireless satellites if there are satellites at all. The second reason for the decline of audio transports is easy to see when we examine these soundbar systems: most connect to the television by HDMI.

This is surprising if you consider that soundbars are neither sources nor sinks for video. But it's not so surprising if you consider the long-term arc of HDMI [1], towards being a general-purpose consumer AV interconnect. HDMI has become the USB of home theater, and I mean that as a compliment and an insult. So, like USB, HDMI comes in a confusing array of versions with various mandatory, optional, and extension features. The capabilities of HDMI vary by the devices on the ends, and in an increasing number of cases, even by the specific port you use on the device.

HDMI actually comes to this degree of complexity more honestly than USB. USB started out as a fairly pure and simple serial link, and then more use-cases were piled on, culminating in the marriage of two completely different interconnects (USB and Thunderbolt) in one physical connector. HDMI has always been a Frankenstein creation. At its very core, HDMI is "DVI plus some other things with a smaller connector."

DVI, or really its precursors that established the actual video format, was intended to be a fairly straightforward step from the analog VGA. As a result, the logical design of DVI (and thus HDMI) video signals is pretty much the same as the signals that have been used to drive CRT monitors for almost as long as they've existed. There are four TMDS data lines on an HDMI cable, each a differential pair with its own dedicated shield. The four allow for three color signals (which can be used for more than one color space) and a clock. Two data pins plus a shield, times four, means 12 pins. That's most, but not all, of the 19 pins on an HDMI connector.

A couple of other pins are used for an I2C connection, to allow a video source to query the display for its specifications. A couple more are used for the audio return channel or ethernet (you can't do both at the same time) feature of HDMI. There's a 5V and a general signal ground. And then there's the CEC pin.
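
For the sake of concreteness, here's a rough accounting of all 19 pins as a commented map. This is a sketch from memory of the Type A pinout rather than a substitute for the specification (pin 14 in particular has changed roles across HDMI versions), and the dictionary itself is just for illustration:

    # Rough map of the HDMI Type A connector, from memory; check the spec
    # before trusting any individual pin. Pins 1-12 are the four shielded
    # TMDS pairs described above, the rest is everything bolted on around them.
    HDMI_TYPE_A_PINS = {
        (1, 2, 3):    "TMDS Data2 (+, shield, -)",
        (4, 5, 6):    "TMDS Data1 (+, shield, -)",
        (7, 8, 9):    "TMDS Data0 (+, shield, -)",
        (10, 11, 12): "TMDS Clock (+, shield, -)",
        (13,):        "CEC",
        (14,):        "Utility / HEAC+ (reserved before HDMI 1.4)",
        (15,):        "DDC clock (SCL), the I2C link for reading display capabilities",
        (16,):        "DDC data (SDA)",
        (17,):        "DDC/CEC ground",
        (18,):        "+5V",
        (19,):        "Hot plug detect (also HEAC- in HDMI 1.4)",
    }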

The fact that CEC merits its own special pin suggests that it is an old part of the standard, and indeed it is. CEC was planned from the very beginning, although it didn't get a full specification as part of the HDMI standard until HDMI 1.2a. Indeed, CEC is older than HDMI, dating to at least 1998, when it was standardized as part of SCART. But let's take a step back and consider the application.

One of the factors in the decline of component stereo systems is the remote control. In the era of vinyl, when you had to get off the couch to start a record anyway, remote controls weren't such an important part of the stereo market. The television changed everything about the way consumers interact with AV equipment: now we all stay on the couch.

I think we all know the problem, because we all lived through it: the proliferation of remotes. When your TV, your VCR, and your home theater receiver all have remote controls, you end up carrying around a bundle of cheap plastic. You will inevitably drop them, and the battery cover will pop off, and the batteries will go under the couch. This was one of the principal struggles faced by the American home for decades.

There are, of course, promised solutions on the market. Many VCR remotes had the ability to control a TV, and often the reverse as well. If you bought your TV and VCR from the same manufacturer this worked. If you didn't, it might not, or at least setup would be more complex. This is because the protocols used by IR remotes are surprisingly unstandardized. Surprisingly unstandardized in that curious way where there are few enough IR transceiver ICs that a lot of devices actually are compatible (consider the ubiquitous Philips protocol), but no one documents it and detailed button codes often vary in small and annoying ways.
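
To make "surprisingly unstandardized" concrete, here's a sketch of roughly how one of the better-documented schemes, the Philips RC-5 protocol, puts a button press on the air. The function names are just for illustration, and the specifics (14-bit frames, Manchester encoding, 889 microsecond half-bits, command 16 for volume up) are from memory and worth double-checking against a real reference; the point is that other manufacturers' protocols differ in frame length, encoding, and carrier, which is exactly what made universal remotes hard.

    # Sketch of Philips RC-5 encoding: 14 bits (2 start bits, 1 toggle bit,
    # 5 address bits, 6 command bits), Manchester-encoded on a ~36 kHz
    # carrier with a 1.778 ms bit period. Details from memory.

    def rc5_frame(address: int, command: int, toggle: int) -> list[int]:
        """Return the 14 frame bits, most significant first."""
        bits = [1, 1, toggle & 1]                                 # start, start, toggle
        bits += [(address >> i) & 1 for i in range(4, -1, -1)]    # 5-bit device address
        bits += [(command >> i) & 1 for i in range(5, -1, -1)]    # 6-bit command
        return bits

    def manchester(bits: list[int]) -> list[tuple[bool, int]]:
        """Expand frame bits into (carrier_on, duration_us) half-bit pairs.
        In RC-5, a '1' is space-then-mark and a '0' is mark-then-space."""
        HALF_BIT_US = 889
        out = []
        for b in bits:
            first, second = (False, True) if b else (True, False)
            out.append((first, HALF_BIT_US))
            out.append((second, HALF_BIT_US))
        return out

    # e.g. "volume up" is commonly command 16 at TV address 0 -- a remote
    # from another brand would likely use a different protocol entirely.
    print(manchester(rc5_frame(address=0, command=16, toggle=0)))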

So we got the universal remote. These remotes, often thrown in with home theater receivers as a perk, have some combination of a database of remote protocols pre-reverse-engineered by the manufacturer and a "learn" mode in which they can record unknown protocols for naive playback. Results were... variable. I've heard that some of the expensive universal remotes like Logitech Harmony (dead) and Control4 (still around) were actually pretty good, but they required some emphasis on the word "expensive." Universal remotes were sort of a mixed bag, and keeping them working was fiddly enough that consumer adoption doesn't seem to have been high.

So, another approach came to us from the French. In the Europe of the 1970s, there was not yet a widely accepted norm for connecting a video source to a TV (besides RF modulation). France addressed the matter by legislation, mandating SCART in 1980. Over the following years, SCART became a norm in much of Europe. SCART is a bit of an oddity to Americans, as it never achieved a footprint on this continent. That's perhaps a bit disappointing, because SCART was ahead of its time.

For example, much like HDMI, SCART carried bidirectional audio. It supported multiple video formats over one cable. Most notably, though, SCART was designed for daisy chaining. Some simple aspects of the SCART design provided a basic form of video routing, where the TV could bidirectionally exchange video signals with one of several devices in a chain. The idea of daisy-chainable video interconnects continuously reappears but seldom finds much success, so I'd call this one of the more notable aspects of SCART.

That's not why we're here, though. Another interesting aspect of SCART was its communications channel between devices. The core SCART specification included a basic system of voltage signaling to indicate which device was active, but in 1998 CENELEC EN 50157-1 was standardized as a flexible serial link between devices over the SCART cable. Most often called AV.link, this channel could be used for video format negotiation, but also promised a solution to multiplying remotes: the AV.link channel can transmit remote control commands between devices. For example, your TV remote can have play/pause buttons, and when you push them the TV can send AV.link play/pause commands to whichever video source is active.

AV.link is a very simple design: a one-wire (plus ground) serial bus operating at a slow (but durable) 333 bps with collision detection. Devices are identified by four-bit addresses chosen at random (but checked for collision). Messages have a simple format: a one-byte header with the sending and receiving addresses, a one-byte opcode, and then whatever bytes are expected as parameters to the opcode.
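
That format is simple enough to sketch in a few lines of Python. This is a toy illustration of the layout just described (a header byte carrying the two 4-bit addresses, an opcode byte, then parameters), with function names of my own invention, and it doesn't attempt the electrical signaling or collision handling:

    # Toy sketch of the AV.link / CEC message layout: header byte with the
    # two 4-bit addresses, one opcode byte, opcode-specific parameter bytes.

    def build_message(src: int, dst: int, opcode: int, params: bytes = b"") -> bytes:
        """Pack a message; the header byte is (source << 4) | destination."""
        assert 0 <= src <= 0xF and 0 <= dst <= 0xF
        return bytes([(src << 4) | dst, opcode]) + params

    def parse_message(frame: bytes) -> tuple[int, int, int, bytes]:
        """Unpack a message into (source, destination, opcode, params)."""
        header, opcode = frame[0], frame[1]
        return header >> 4, header & 0xF, opcode, frame[2:]

    # e.g. the device at address 4 telling the device at address 0 to power
    # down (0x36 is the Standby opcode in CEC, if memory serves)
    print(parse_message(build_message(src=4, dst=0, opcode=0x36)))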

AV.link is one of those standards that never quite got its branding together. Unlike, say, USB, where a consistent trademark identity is used, AV.link goes by different names from different vendors. Wikipedia offers the names nexTViewLink (horrible), SmartLink (mediocre), Q-Link (lazy), and EasyLink (mediocre again). One wonders if consumers were confused by these different vendor brands for the same thing; it's not a situation that happens very often with consumer interconnects.

When HDMI was developed, the provision of a pin for AV.link was pretty much copied over from SCART. Originally, the functionality wasn't even really specified, and was just assumed to be similar to SCART. Later HDMI versions included a much more complete description of CEC as a supplement. Hardware support for CEC is mandated for devices like TVs as part of the HDMI certification process, but curiously, software support isn't really included. As a result, it is very common, but not universal, for TVs to fully support CEC. Other AV devices like home theater receivers almost universally have CEC support. Computers almost universally do not, as cost and licensing considerations mean that GPUs do not provide a CEC transceiver.

Inconsistent implementations are not the only way that CEC is a little sketchy. Remember how different vendors referred to SCART AV.link by different names? CEC has the same problem. I won't bother with the whole list, but the names you're more likely to have seen include Samsung Anynet+, LG SimpLink, and... well, Philips EasyLink is still with us. In practice, a lot of people seem to ignore these names, and CEC is a lot more common than Anynet+ when discussing Samsung TVs. That doesn't stop Samsung from pushing their own branding in their menus and port labeling, though.

Because CEC inherits the AV.link features designed for SCART, it has a surprisingly rich featureset. For example, if you have an HDMI switch with real CEC support (these don't seem to be that common!) and a TV with software support, the TV can discover the topology of connected devices and remote control the switch to use the switch inputs as an extension of its own input selection menu.

Most CEC features are more prosaic, though. Considering the list of high-level features in the specification, "One Touch Play" means that a device can indicate that it has video to show (causing a TV to turn on and select that input) while "System Standby" means that a device being turned off can tell the other devices on the bus to turn off as well. "One Touch Record," "Deck Control," "Tuner Control," "Device Menu Control," and "System Audio Control" are all variations on devices forwarding simple remote commands (play, pause, up, down, etc) to other devices that might care about them more. For example, when you use a TV with a stereo receiver or soundbar, it should forward volume up/down commands from the remote to the audio device via System Audio Control.
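
As a rough illustration of how a couple of those features turn into bus traffic, here's a sketch using the same frame layout as above. The opcode values (0x04 Image View On, 0x82 Active Source, 0x36 Standby) are from my recollection of the CEC command table, and the physical address is a made-up example, so treat this as illustrative rather than authoritative:

    # Sketch of the messages behind two CEC features. Opcodes from memory.
    BROADCAST = 0xF                       # logical address 15 means "everyone"

    def frame(src: int, dst: int, opcode: int, *params: int) -> bytes:
        return bytes([(src << 4) | dst, opcode, *params])

    # "One Touch Play": a playback device (logical address 4) wakes the TV
    # (logical address 0), then announces itself as the active source by
    # physical address (here 1.0.0.0, i.e. a device on TV input 1).
    one_touch_play = [
        frame(4, 0, 0x04),                      # Image View On
        frame(4, BROADCAST, 0x82, 0x10, 0x00),  # Active Source <1.0.0.0>
    ]

    # "System Standby": the TV tells everything else on the bus to power down.
    system_standby = [frame(0, BROADCAST, 0x36)]  # Standby

    for f in one_touch_play + system_standby:
        print(f.hex(" "))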

Considering the decline of component systems, there are basically two common scenarios where CEC is used today. These are really the same scenario in a lot of ways, but they vary in the details.

It is interesting, isn't it, that an interconnect with four very-high-speed serial video channels is often put into use in a scenario where those channels are useless. Instead, the much lower-rate ARC and CEC channels are the important ones. Well, think about USB as a power connector... these things happen.

CEC could be used in much more complex scenarios. For example, if you had a DVR connected to your TV via CEC, you could browse the electronic program guide (EPG) on your TV and choose a program to record. This would cause the TV to use CEC Timer Programming to send the program details from the TV EPG to the DVR to schedule the recording. How widely was this ever used? I don't know, I suspect not very, because these days DVRs are almost invariably provided by the cable or satellite company, who expect you to use the DVR's EPG rather than your TV's anyway.

This is actually one of the scenarios where you see CEC used for reasons other than synchronizing control of an audio output: set-top boxes (STBs). Media companies that distribute STBs, mostly cable and satellite operators, tend to be in a bit of a war to own your television watching experience. They face stiff competition from "Smart TVs." I have a suspicion that the complete proliferation of smart TVs is largely an artifact of the television manufacturers trying to win advertising surface area away from the STB manufacturers, who have traditionally held most of it via the EPG.

As some evidence of this fight, consider the case of Xfinity Xumo (formerly Flex), the compact STB that Xfinity offers to its internet customers for free. Since it's advertised to people who don't necessarily have any TV service from Xfinity, it's not really a conventional STB. It's more of a slightly-weird-but-free Roku or Amazon Fire Stick. It doesn't really offer anything that your TV doesn't already, but unlike your TV, it's controlled by Comcast. This gives them the opportunity to upsell you on IPTV services, but Comcast never seems to have pursued this route that far. Mostly it gives them the opportunity to advertise to you, and to grab some partner revenue from various streaming apps.

Anyway, that was a bit of a digression. The point is that Comcast and Dish Network and all of their compatriots don't want you using your TV, they want you using your STB. So they give you a big chunky remote ("With Voice Control!") and the STB attempts to use CEC to control the TV so you never have to touch its small, svelte remote ("With Voice Control!") and split their sponsored content revenue with LG.

That's an interesting detail of this whole landscape, isn't it? CEC was developed as a solution to a technical problem: people had multiple devices, and hauling around multiple remotes was frustrating. Over the decades since, it has evolved into a strategy to address a business problem: everyone that sells you AV equipment prefers that you passionately navigate their on-screen menus while completely forgetting about those of your other components.

That's pretty much what's happening with the audio devices as well. TV manufacturers want to capture as much of your entertainment attention and budget as possible, so ideally they sell you a TV and their matching soundbar system (which can be fairly inexpensive since it is closely coupled to the TV and needs very little of its own control logic). CEC here is an under-the-hood implementation detail, something that happens behind the scenes to make your soundbar do the few things it does.

Say you're a higher-end customer, though, with a home theater receiver. The AV receiver industry has been surprisingly unambitious about capturing Platform Revenue, probably because soundbars have pretty much eliminated everything but higher-end, "audiophile"-focused brands. These companies either lack the technical resources to develop a good Entertainment Platform or don't think their customers will respond well to yet another remote with a Pluto TV button. I would like to say it's mostly the latter, but given my experience with the on-screen design and mobile apps of several leading AV receiver manufacturers, I suspect it's mostly the former.

So CEC functions perhaps the most as it was originally intended: you can mainly interact with your TV, and CEC carries control messages to the receiver as needed so that you don't need to find its remote to select the right input. Conceptually you can even use the TV to control non-video functions. For example, my particular combination of a Samsung TV and Yamaha receiver implements CEC completely enough that I can turn on the receiver, select the turntable preamp input, and control the volume via the TV if I want to. Then I still have to get up to actually put a record onto the turntable, and now the TV is just on the whole time, so this isn't that appealing in practice. I am still rummaging for the receiver's own remote, or else using its terrible Android app.

In the STB scenario, something like Xfinity X1 or Dish Hopper, we have an inversion of control: the only remote you'll need, they hope, is the STB's remote. It will remote control the TV via CEC as needed. This inevitably sets up a power struggle where your Smart TV gets lonely and wants attention. I am mostly kidding about this emotional interpretation of the situation, but obviously the TV manufacturer does have an incentive to distract your attention from the STB, which probably has something to do with the tendency of Smart TVs to pop up a lot of on-screen chrome whenever you turn them on.

The coolest thing about CEC, in my mind, is that unlike HDMI's video channels, it is multi-drop. That is, when you connect a bunch of HDMI sources to a multi-input TV or receiver or another switching device, they can all be connected to the same unified CEC bus. That means that HDMI devices can communicate with each other via CEC even when there's no active video or audio connection.

CEC even has a fairly complex addressing mechanism to take advantage of this. CEC physical addresses are assigned based on bus topology, and a mapping protocol is used to advertise a correspondence between new physical addresses and logical addresses. Logical addresses, the same 4-bit addresses from AV.link, are assigned based on capabilities. Typically logical address 0 will be the TV, 1 and 2 will be recorders, 3 will be a tuner, 4 a playback device, 5 a home stereo receiver. You can have a fairly large component setup where everything is controllable by sending CEC messages to standard logical addresses.

And other aspects of CEC are designed to accommodate these kinds of more complex networks. For example, when the user selects a device to watch on their TV, the TV can send a "Set Stream Path" message (opcode 0x86). The parameter on this message is the physical CEC address of the desired device, and any CEC switches in the path are expected to see the message and select the appropriate input to form a path from the selected device to the TV. It's a little bit of centrally-controlled circuit switching right in your entertainment center. Neat!
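
It's worth seeing just how small these messages are. Here's a minimal sketch of assembling a Set Stream Path frame in Python, based on the addressing described above; it's purely illustrative, not any particular device's implementation.

    # CEC blocks are single bytes; the first is a header of
    # (initiator logical address << 4) | destination logical address.
    TV = 0x0          # logical address 0: the TV
    BROADCAST = 0xF   # Set Stream Path goes out as a broadcast
    SET_STREAM_PATH = 0x86

    def set_stream_path(physical_address: int) -> bytes:
        # Physical addresses are four nibbles (e.g. 1.2.0.0 -> 0x1200),
        # sent as two operand bytes following the opcode.
        header = (TV << 4) | BROADCAST
        return bytes([header, SET_STREAM_PATH,
                      (physical_address >> 8) & 0xFF,
                      physical_address & 0xFF])

    print(set_stream_path(0x1200).hex())  # -> 0f861200

On the wire each of those bytes gets an EOM and ACK bit tacked on, but the payload really is just four bytes: one header, one opcode, two operands.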

You can even do broadcast messaging across the entire CEC topology. TVs often use this to discover what devices they're connected to, saving the user some menu setup. That's about the only time you'd notice it, though: like CEC's other more advanced capabilities, routing and multi-device messaging are rarely used outside of its very simplest application.

I want to pass some sort of tidy moral judgment, but CEC is a hard case. It's kind of a mixed bag. The more basic functionality tends to work well and adds convenience. The more complex functionality tends to either not work or be buried deeply enough in configuration menus that no one uses it. It inevitably leads to some weird, inelegant behavior. My husband will put a cab ride video on the TV and then Spotify Cast to the receiver. But then what if you want to listen to the video audio? Easiest way to get the receiver switched back to the TV audio is, of course, to turn the TV off and on again. When you turn it on, it uses CEC "One Touch Play" to signal the receiver to select it again. The particular convergence of technologies here leads to a strange tic, sort of a superstitious behavior, that works fine but feels bad.

If you're a weirdo like me, you use your TV heavily as a monitor for a computer. You might find the gap here rather conspicuous: when I wiggle the mouse to wake the computer up, the TV doesn't turn on. HDMI keeps gaining features, video games are a big driver of high-end PC and television sales, there is an inevitable convergence happening between "monitor" and "TV," and between "video source" and "computer." But the computer video industry is, well, a little slow to catch on.

You might remember that it took an awkwardly long time for PC GPUs to have consistent support for HDMI audio, and then it was still weird and sketchy for a good few years. Well, we haven't even quite made it to that point on the CEC front. I don't think any conventional PCs have CEC transceivers. The solution, if you are mad enough to want one, is a USB CEC adapter. They're basically HDMI passthrough devices that just tap the CEC pin and hook it up to a UART. Not many companies make them but they're cheap enough. Software support is... minimal, but it'll let Kodi turn your TV on.
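
If you have one of these adapters, most of the out-of-box experience is libcec's cec-client utility. A rough sketch of driving it from Python, assuming a Pulse-Eight-style adapter and a libcec version that accepts these console commands (check yours; I'm hedging):

    import subprocess

    def cec(command: str) -> None:
        # "-s" runs a single command read from stdin, "-d 1" quiets the log.
        subprocess.run(["cec-client", "-s", "-d", "1"],
                       input=command.encode(), check=True)

    cec("on 0")   # power on logical address 0, the TV
    cec("as")     # declare ourselves the active source so the TV switches inputs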

It's fun to think about, though. You know how CEC is multi-drop? You could hook up multiple computers to an HDMI switch and they could talk to each other with CEC. You could use some vendor-specific opcodes to convey IP. You could log onto the internet over HDMI, at 333bps. You could put OpenSC over IP over HDMI CEC and turn your lights on via your stereo receiver. What a dream! I was going to say you could do DMX-512 over CEC but actually at CEC's slow speed the register-broadcast model of DMX would become a pretty significant problem.

You could also log onto the internet over HDMI at 100Mbps, but that's using different pins, your GPU definitely doesn't support it, and I don't even know of a way to do HDMI Ethernet from a PC. CEC may be a bit of an awkward cousin but at least it's more popular than HDMI Ethernet.

[1] pun not intended

--------------------------------------------------------------------------------

>>> 2024-05-25 grc spinrite

I feel like I used to spend an inordinate amount of time dealing with suspect hard drives. I mean, like, back in high school. These days I almost never do, or on the occasion that I have storage trouble, it's a drive that has completely stopped responding at all and there's little to do besides replacing it. One time I had two NVMe drives in two different machines do this to me the same week. Bad luck or quantum phenomenon, who knows.

What accounts for the paucity of "HDD recovery" in my adult years? Well, for one, no doubt HDD technology has improved over time and modern drives are simply more reliable. The well-aged HDDs I have running without trouble in multiple machines right now support this theory. But probably a bigger factor is my buying habits: back in high school I was probably getting most of the HDDs I used second-hand from the Free Geek thrift store. They were coming pre-populated with problems for my convenience.

Besides, the whole storage industry has changed. What's probably more surprising about my situation is how many "spinning rust" HDDs I still own. Conventional magnetic storage only really makes sense in volume. These days I would call an 8TB HDD a small one. The drives that get physical abuse, say in laptops, are all solid state. And solid state drives... while there is no doubt performance degradation over their lifetimes, failure modes tend to be all-or-nothing.

I was thinking about all of this as I ruminated on one of the "holy grail" tools of the late '00s: SpinRite, by Gibson Research Corporation.

The notion that HDDs aren't losing data like they used to is supported by the dearth of data recovery tools on the modern shareware market. Well, maybe that's more symptomatic of the complete hollowing out of the independent software industry by the interests of capitalism, but let's try to dwell on the positive. Some SEO-spam blog post titled "Best data recovery software of 2024" still offers some classic software names like "UnDeleteMyFiles Pro," but some items on the list are just backup tools, and options like Piriform Recuva and the open-source PhotoRec still rank prominently... as they did when I was in high school and my ongoing affection for Linkin Park was less embarrassing [1].

Back in The Day, freeware, shareware, and commercial (payware?) data recovery software proliferated. It was advertised in the back of magazines, the sidebar banner ads of websites, and even appeared in the electronics department of Fred Meyer's. You also saw a lot of advertisements for services that could perform more intensive methods, like swapping an HDD's controller for one from another unit of the same model. These are all still around today, just a whole lot less prominent. Have you ever seen an Instagram ad for UnDeleteMyFiles Pro?

First, we should talk a bit about the idea of data recovery in general. There are essentially two distinct fields that we might call "data recovery": consumers or business users trying to recover their Important Files (say, accounting spreadsheets) from damaged or failed devices, and forensic analysts trying to recover Important Files (say, the other accounting spreadsheets) that have been deleted.

There is naturally some overlap between these two ventures. Consumers sometimes accidentally delete their Important Files and want them back. Suspects sometimes intentionally damage storage devices to complicate forensics. But the two different fields use rather different techniques.

Let's start by examining forensics, both to set up contrast to consumer data recovery and because I know a lot more about it. One of the quintessential techniques of file system forensics is "file carving." A file carving tool examines an arbitrary sequence of bytes (say, from a disk image) and looks for the telltale signs of known file formats. For example, most common file formats have a fixed prefix of some kind. ZIP files start with 0x504B0304, the beginning of which is the ASCII "PK" for Phil Katz who designed the format. Some formats also have a fixed trailer, but many more have structure that can be used to infer the location of the end of the file. For example, in ZIP files the main header structure, the "central directory," is actually a trailer found at the end of the file.
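
A toy version of the idea, just for flavor; real carvers like PhotoRec do far more validation and can cope with some fragmentation:

    # Toy carver: find ZIP files in an arbitrary byte stream by pairing the
    # local file header signature with the end-of-central-directory trailer.
    LOCAL_HEADER = b"PK\x03\x04"   # 0x504B0304
    EOCD = b"PK\x05\x06"           # end of central directory record

    def carve_zips(image: bytes):
        start = image.find(LOCAL_HEADER)
        while start != -1:
            end = image.find(EOCD, start)
            if end == -1:
                break
            # The EOCD record is 22 bytes when the archive comment is empty;
            # assume that here for simplicity.
            yield image[start:end + 22]
            start = image.find(LOCAL_HEADER, end + 22)

    # usage: for blob in carve_zips(open("disk.img", "rb").read()): ...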

If you can find the beginning and end of a file, and it's stored sequentially, you've now got the whole file. When the file is fragmented in the byte stream (commonly the case with disk images), the problem is a little tougher, but still you can find a lot of value. A surprising number of files are stored sequentially because they are small, some filetypes have internal structure that can be used to infer related blocks and their order, and even finding a single block of a file can be useful if it happens to contain a spreadsheet row starting "facilitating payments to foreign officials" or, I don't know, "Fiat@".

You end up doing this kind of thing a lot because of a detail of file systems that all of my readers probably know. It's often articulated as something like "when you delete a file, it's not deleted, just marked as having been deleted." That's not exactly wrong but it's also an oversimplification in a way that makes it more difficult to understand why that is the case. There's a whole level of indirection due to block allocation, updating the bitmap on every file delete is a relatively time-consuming process that offers little value, actually overwriting blocks would be even more time consuming with even less value, etc. Read Brian Carrier for the whole story.

Actually, screw Brian Carrier, I've written before about the adjacent topic of secure erasure of computer media.

My point is this: these forensic methods are performed on a fully functional storage device (or more likely an image of one), where "recovery" is necessary and possible because of the design of the file system. The storage device, as hardware, is not all that involved. Well, that's really an oversimplification, and points to an important consideration in modern data recovery: storage devices have gotten tremendously more complex, and that's especially true of SSDs.

Even HDDs tend to have their own thoughts and feelings. They can have a great deal of internal logic dedicated to maintaining the disk surface, optimizing performance, working around physical defects on the surface, caching, encryption, etc. Pretty much all of this is proprietary to the manufacturer, undocumented, and largely a mystery to the person performing recovery. Thinking of the device as a "sequence of bytes" throws out a lot of what's really going on, but it's a necessary compromise.

SSDs have gone even further. Flash storage is less durable than magnetic storage but also more flexible. It requires new optimizations to maximize life and facilitates optimizations for access time and speed. Some models of SSDs vary from each other only by their software configuration (this has long been suspected of some HDDs as well, but I have no particular insight into Western Digital color coding). Even worse for the forensic analyst, the TRIM command creates a whole new level of active management by the storage device: SSDs know which blocks are in use, allowing them to constantly remap blocks on the fly. It is impossible, without hardware reverse engineering techniques, to produce a true image of an SSD. You are always working with a "view" of the SSD mediated by its firmware.

So let's compare and contrast forensic analysis to consumer data recovery. The problem for most consumers is sort of the opposite: they didn't delete the file. If they could get the sequence of bytes off the storage device, they could just access the file through the file system. The problem is that the storage device is refusing to produce bytes at all, or it's producing the wrong ones.

Techniques like file carving are not entirely irrelevant to consumer data recovery because it's common for storage devices to fail only partially. There are different ways of referring to the physical geometry of HDDs, and besides, modern storage devices (HDDs and SSDs alike) abstract away their true geometry. Different file systems also use different terminology for their own internal system of mapping portions of the drive to logical objects. So while you'll find people say things like "bad cluster" and "bad sector," I'm just going to talk about blocks. The block is the smallest elementary unit by which your file system interacts with the device. The size of a block is typically 512B for smaller devices and 4k for larger devices.

A common failure mode for storage devices (although, it seems, not so much today) is the loss of a specific block: the platter is damaged, or some of the flash silicon fails, and a specific spot just won't read any more. The storage device can, and likely will, paper over this problem by moving the block to a different area in the storage medium. But, in the process, the contents of the block are probably lost. The new location just contains... whatever was there before [2]. Sometimes the bad block is in the middle of a file, and that sucks for that file. Sometimes the bad block is in the middle of a file system structure like the allocation table, and that sucks for all of the files.

More complicated file systems tend to incorporate precautionary measures against this kind of thing, so the blast radius is mostly limited to single files. For example, NTFS keeps a second copy of the allocation table as a backup. Journaling can also provide a second source of allocation data when the table is damaged.

Simpler file systems, like the venerable FAT, don't have any of these tricks. They are, after all, old and simple. But old age and simplicity gives FAT a "lowest common denominator" status that sees it widely used on removable devices. PhotoRec, while oriented towards the consumer data recovery application, is actually a file carving tool. It's no coincidence that it's called PhotoRec. Removable flash devices like SD cards have simple controllers and host simple file systems. They are, as a result, some of the most vulnerable devices to block failures that render intact files undiscoverable.

What about the cases where the file isn't intact, though? Where the block that has become damaged is part of the file that we want? What about cases where a damaged head leaves an HDD unable to read an entire surface?

Well, the news isn't that great. Despite this being one of the most common types of consumer storage failure for a decade or more, and despite the enormous inventory of software that promises to help, your options are limited. A lot of the techniques that software packages used in these situations lack supporting research or are outright suspect. Let's start on solid ground, though, with the most obvious and probably safest option.

One of the problems you quickly encounter when working with a damaged storage device is the file system and operating system. File systems don't like damaged storage devices, and operating systems don't like file systems that refuse to give up a file they say exists. So you try to copy files off of the bad device and onto a good one using your daily-driver file browser, and it hits a block that won't read and gets stuck. Maybe it hangs almost indefinitely, maybe you get an obscure error and the copy operation stops. Your software is working against you.

One of the best options for data recovery from suspect devices is an open-source tool called ddrescue. ddrescue is very simple and substantially similar to dd. It has one critical trick up its sleeve: when reading a block fails, ddrescue retries a limited number of times and then moves on. With that little adaptation, you can recover all of the working blocks from a device and so likely recover all of the files but a few.
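
A typical invocation, assuming a reasonably recent GNU ddrescue (options do drift between versions, so treat this as a sketch):

    ddrescue -d -r3 /dev/sdX rescued.img rescue.map

The -d flag reads the device directly rather than through the kernel cache, -r3 allows three retry passes over the bad areas, and the mapfile records progress so an interrupted rescue can resume where it left off.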

Besides, just retrying a few times has value. Especially on magnetic devices, the result of reading the surface can be influenced by small perturbations. An unreadable sector might be readable every once in a while. This doesn't seem to happen as much with SSDs due to the dynamics of flash storage and preemptive correction of weak or ambiguous values, but I'm sure it still happens every once in a while.

At the end of the day, though, this method still means accepting the loss of some data. Losing some data is better than losing all of it, but it might not be good enough. Isn't there anything we can do?

HDDs used to be different. For one, they used to be bigger. But there's more to it than that. Older hard drives used stepper motors to position the head stack, and so head positioning was absolute but subject to some mechanical error. Although this was rarely the case on the consumer market, early hard drives were sometimes sold entirely uninitialized, without the timing marks the controller used to determine sector positions. You had to use a special tool to get the drive to write them [3]. It was common for older drives to come with a report (often printed on the label) of known bad sectors to be kept in mind when formatting.

We now live in a different era. Head stacks are positioned by a magnetic coil based on servo feedback from the read head; mechanical error is virtually absent and positioning is no longer absolute but relative to the cylinder being read. Extensive low-level formatting is required but is handled completely internally by the controller. Controllers passively detect bad blocks and reallocate around them. Honestly, there's just not a lot you can do. There are too many levels of abstraction between even the ATA interface and the actual storage to do anything meaningful at the level of the magnetic surface. And all of this was pretty much true in the late '00s, even before SSDs took over.

So what about SpinRite?

SpinRite dates back to 1987 and is apparently still under development by its creator Steve Gibson. Gibson is an interesting figure, one of the "Tech Personalities" that contemporary media no longer creates (insert comment about decay in the interest of capitalism here). Think Robert Cringely or Leo Laporte, with whom Gibson happens to cohost a podcast. In my mind, Gibson is perhaps most notable for his work as an early security researcher, which had its misses but also had its hits. Through the whole thing he's run Gibson Research Corporation. GRC offers a variety of one-off web services, like a password generator (generated, erm, server-side) and something that displays the TLS fingerprint of a website you enter. There's a user-triggered port scanner called ShieldsUp, which might be interesting were it not for the fact that its port list seems limited to the Windows RPC mapper and some items of that type... things that were major concerns in the early '00s but rarely a practical problem today.

The site is full of gems. Consider the password generator...

What makes these perfect and safe? Every one is completely random (maximum entropy) without any pattern, and the cryptographically-strong pseudo random number generator we use guarantees that no similar strings will ever be produced again. Also, because this page will only allow itself to be displayed over a snoop-proof and proxy-proof high-security SSL connection, and it is marked as having expired back in 1999, this page which was custom generated just now for you will not be cached or visible to anyone else. ... The "Techie Details" section at the end describes exactly how these super-strong maximum-entropy passwords are generated (to satisfy the uber-geek inside you).

You know I'm reading the Techie Details. They describe a straightforward approach using AES in CBC mode, fed by a counter and its own output. It's unremarkable except that just about any modern security professional would have paroxysms at the fact that he seems to have implemented it himself. Sure, there are better methods (like AES CTR), but this is the kind of thing where you shouldn't even really be using methods. "I read it from /dev/urandom" is a far more reassuring explanation than a block diagram of cryptographic primitives. /dev/urandom is a well-audited implementation, whatever is behind your block diagram is not. Besides, it's server side!
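
The boring alternative, in any language with a sane standard library, is a couple of lines; Python shown here purely as an illustration:

    import secrets

    # 256 bits straight from the operating system's CSPRNG, rendered as
    # 64 hex characters. No block diagram required.
    print(secrets.token_hex(32))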

My point is not so much to criticize Gibson's technical expertise, although I certainly think you could, but to say that he doesn't seem to have updated his website in some time. A lot of little details like references to WEP and the fact that the PDFs are Corel Ventura output support this theory. By association, I suspect that GRC's flagship product, SpinRite, doesn't get a lot of active maintenance either.

Even back around 2007 when I first encountered SpinRite it was already a little questionable, and I remember a rough internet consensus of "it likely doesn't do anything but it probably doesn't hurt to try." A little research finds that "is SpinRite snake oil?" threads date back to the Usenet era. It doesn't help that Steve Gibson's writing is pervaded by a certain sort of... hucksterism. A sort of ceaseless self-promotion that internet users associate mostly with travel influencers selling courses about how to make money as a travel influencer.

But what does SpinRite even claim? After a charming disclaimer that GRC is opposed to software patents but nonetheless involved in "extensive ongoing patent acquisition" related to SpinRite, a document titled "SpinRite: What's Under the Hood" gives some details. It's undated but has metadata pointing at 1998. That's rather vintage, and I see several reasons to think that there have been few or no functional changes in SpinRite since that time.

SpinRite is a bootable tool based on FreeDOS. It originated as an interleaving tool, which I won't really explain because it's quite irrelevant to modern storage devices and really just a historic detail of SpinRite. It also "introduc[ed] the concept of non-destructive low-level reformatting," which I won't really explain because I don't know what it means, other than it seems to fall into the broad category of no one really knowing what "low level formatting" means. It's a particularly amusing example, because most modern software vendors use "low level formatting" to refer explicitly to a destructive process.

SpinRite "completely bypasses the system's motherboard BIOS software when used on any standard hard disk system." I assume this means that SpinRite directly issues ATA commands, which probably has some advantages, although the specific ones the document calls out seem specious.

In reference to SpinRite's data recovery features, we read that "The DynaStat system's statistical analysis capability frequently determines a sector's correct data even when the data could never be read correctly from the mass storage medium." This is what I remember as the key claim of SpinRite marketing over a decade ago: that SpinRite would attempt rereading a block a very large number of times and then determine on a bit-by-bit basis what the most likely value is. It seems reasonable on the surface, but it wouldn't make much sense with a drive with internal error correction. That's universal today, but I'm not sure how long that's been true; presumably in the late '90s this was a better idea.
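
To be concrete about what's being claimed, the gist of that kind of statistical re-read looks something like this. This is my sketch of the idea as I understand the marketing copy, not GRC's algorithm:

    # Reread a suspect sector many times and take a per-bit majority vote
    # across the attempts. Only plausible if the drive hands back raw-ish
    # data; a modern drive's ECC either fixes the sector or refuses to
    # return it at all.
    def majority_vote(reads: list[bytes]) -> bytes:
        out = bytearray(len(reads[0]))
        for i in range(len(out)):
            for bit in range(8):
                ones = sum((r[i] >> bit) & 1 for r in reads)
                if ones * 2 > len(reads):
                    out[i] |= 1 << bit
        return bytes(out)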

That's probably the high point of this document's credibility. Everything from there gets more suspect. It claims that SpinRite has a proprietary system that models the internal line coding used by "every existing proprietary" hard drive, an unlikely claim in 1998 and an impossible one today without a massive reverse engineering effort. Consider also "its second recovery strategy of deliberately wiggling the drive's heads." It seems to achieve this by issuing reads to cylinders on either side of the cylinder in question, but it's questionable if that would even work in principle on a modern drive. You must then consider the use of servo positioning on modern drives, which means that the head will likely oscillate around the target cylinder before settling on it anyway.

This gives the flavor of the central problem with SpinRite: it claims to perform sophisticated analysis at a very low level of the drive's operation, but it claims to do that with hard drives that intentionally abstract away all of their low level details.

A lot of the document reads, to modern eyes, like pure flimflam, written by someone who knew enough about HDDs to sound technical but not enough to really understand the implications of what they were saying. The thing is, though, this document is from '98 and the software was already a decade old at the time! The document does note that SpinRite 3.0 was a complete rewrite, but I suspect it was the last complete rewrite and probably carried a lot of its functionality over from the first two versions.

I think that SpinRite probably does implement the functionality that it claims and that those features might have been of some value in the late '80s and much of the '90s. Then technology moved on and SpinRite became irrelevant. Probably the only thing that SpinRite does of any value on a modern drive is just rewriting the entire addressable area, which gives the controller an opportunity to detect bad blocks and remap them. That should also happen in the course of normal operation, though, and even tools dedicated to that purpose (like the open-source badblocks) are becoming rather questionable in comparison to the preemptive capabilities of modern HDDs. This type of bad-block-detecting rewrite pass is probably only useful in pathological cases on older devices, but it's also the only real claim of the vast majority of modern "hard drive repair" software.

It seems a little mean-spirited to go after GRC for their old software, but they continue to promote it at a cost of $89. The FAQ tells us that "SpinRite is every bit as necessary today as it ever was — maybe even more so since people store so much valuable personal 'media' data on today's massive drives." I resent the implication of the scare-quoted "media," Mr. Gibson, but what I do with my hard drives in my own home is none of your business.

The FAQ tells us "SpinRite is often credited with performing 'true miracles' of data recovery," but is oddly silent on the topic of SSDs. Some dedicated Wikipedia editor rounded up a number of occasions on which Gibson said that SpinRite was of limited or no use with SSDs, and yet the GRC website currently includes the heading "Amazingly effective for SSDs!" There is no technical explanation offered for how SpinRite's exceptionally platter-centric features affect an SSD, nor mention of any new functionality targeting flash storage. Instead, there are just anecdotal claims that SpinRite made SSDs faster and a suggestion that the reader google a well-known behavior of flash storage for which SSD controllers have considerable mitigations.

It is an odd detail of the GRC website that most of the new information about the product is provided in the form of video. Specifically, videos excerpted from recent episodes of Gibson and Laporte's podcast "Security Now." Security Now is weekly, so I don't think that SpinRite promotional material makes up a large portion of it, but it does seem conspicuous that Gibson uses the podcast as a platform for 15 minute stories about how SpinRite worked miracles. These segments, and their mentions of how SpinRite is a very powerful tool that one shouldn't run on SSDs too often, absolutely reek of the promotional techniques behind Orgone accumulators, Hulda Clark's "Zapper," and color therapy. It is, it seems, quack medicine for the hard drive.

I don't think SpinRite started as a scam, but I sure think it ended as one.

A lot of this was already apparent back in the late '00s, and I can't honestly say that bootleg copies of SpinRite ever improved anything for me. So why did I love it so much? The animations!

SpinRite's TUI was truly a work of art. Just watch it go!

[1] I recently bought the 20th anniversary vinyl box set of Meteora, which emphasizes that (1) 20 years have passed and (2) I am still a loser.

[2] This kind of visible failure seems uncommon with SSDs, likely because SSD controllers tend to read out the flash in a critical, suspicious way and take preemptive action when the physical state is less than perfectly clear cut. In a common type of engineering irony, the fact that flash storage is less reliable than magnetic media requires aggressive management of the problem that makes the overall system more reliable. Or at least that's what I tell myself when another SSD has gone completely unresponsive.

[3] Honestly this doesn't seem to have been typical with any hard drives by the microcomputer era, which makes perfect sense if you consider that these hard drives were sold with bad sector lists and therefore must have been factory tested. The whole "low level formatting" thing has been 70% a scam and 30% confusion with the very different technical tradition of magnetic diskettes, since probably 1990 at least.

--------------------------------------------------------------------------------

>>> 2024-05-15 catalina connections

Some things have been made nearly impossible to search for. Say, for example, the long-running partnership between Epson and Catalina: a query that will return pages upon pages of people trying to use Epson printers with an old version of MacOS.

When you think of a point of sale printer, you probably think of something like the venerable Epson TM-T88: a direct thermal printer that heats small sections of specially coated paper, causing them to turn black. Thermal paper of this type is made in various widths, but the 80mm or 3 1/8" used by the TM-T88 is the most common. The thermally-reactive coating on the paper incorporates some, umm, questionable chemicals, but moreover, the durability of direct thermal prints is poor. The image tends to fade over not that long of a timespan. Besides, the need for special paper is an irritation.

So, there are other technologies available. Thermal transfer, in which a ribbon of ink (I suspect actually a thermoplastic) is pressed against the paper and heated to cause the ink to stick, is often used for more durability-sensitive applications like warehouse labeling. The greater flexibility of paper (or plastic) stock sees thermal transfer used in specialty applications as well, like conference attendee badges. Thermal transfer printers tend to be more expensive and more complex than direct thermal, though, and are rarely used at the POS.

Impact printers are actually fairly common in a POS-adjacent application. These printers punch metal pins against an inked ribbon, pushing it against the paper to leave a mark. Impact printers were actually the norm for receipt printing prior to the development of inexpensive thermal printers. They remain popular in restaurant kitchens: the plain paper they use is less readily damaged by oils, and won't turn entirely black if exposed to too much heat, as might happen when a ticket is clipped above a grill. Impact receipt printers today are often referred to as kitchen ticket printers as a result.

Impact receipt printers, and many impact printers in general, have a neat trick: you can manufacture an ink ribbon in two colors, say, black on one half and red on the other. By either using two sets of impact pins or shifting the position of the impact head, either black or red can be printed. Dual-color printers with black and red ribbons became ubiquitous for kitchen tickets, although the red doesn't tend to reproduce well from an old, dry ribbon.

The ability of impact printers to use plain paper had another advantage: slip printing. A slip printer is a device intended to print characters on a small piece of paper inserted into it. Historically they were often used by bank tellers to print account and reference numbers onto deposit slips, for later auditing. In other applications they functioned as more sophisticated "received" stamps, adding not just the time and date but customer account or transaction numbers to received paperwork. The legal profession has a tradition of "Bates numbering," which traces its history to a rather different printing device, but Bates numbers could be applied by slip printers as well. In this case, of course, we would need to refer to them as Generic Sequential Page Numbers, Compare to Bates (TM).

A variant of the slip printer, really a receipt printer (often thermal) and slip printer (often impact) married into one box, is known as a check validator. Very common in grocery stores until recently, these printers both produced receipts and printed an audit number and endorsement on the back of the check a customer might offer in payment. It's difficult to imagine paying for groceries with a check, but it used to be a common practice. For many years, the practicalities of accepting checks were a major driver of POS technology. When a cashier rang you up, there were two options: they pushed the cash button, and the POS "bumped" the cash drawer open, or they pushed the check button, and the POS sent an endorsement to the check validator. The close coupling of these two features means that cash drawer bumping is traditionally the task of the receipt printer, and cash bump outputs are common to this day.
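
That legacy is still visible in the printer command sets. ESC/POS, Epson's receipt printer language, has a "generate pulse" command for firing the drawer kick output. Here's a minimal sketch using pyserial; the device path and baud rate are made-up placeholders for whatever your printer actually presents:

    import serial  # pyserial

    # ESC/POS "generate pulse": ESC p m t1 t2. m selects drawer kick pin
    # 0 or 1; t1 and t2 are the pulse on/off times in units of 2 ms.
    KICK_DRAWER = bytes([0x1B, 0x70, 0x00, 0x19, 0xFA])  # pin 0, 50 ms on, 500 ms off

    printer = serial.Serial("/dev/ttyUSB0", 38400, timeout=1)
    printer.write(KICK_DRAWER)
    printer.close()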

But where, exactly, is this tour of POS printing technology taking us? Well, you might notice the absence of the humble inkjet. It might seem surprising: inkjet mechanisms can actually be quite compact, and they tend to be a natural evolution of impact printing. Well, there are indeed inkjet printers in the receipt printer class, but there are some practical considerations. Moving a smaller print head across the paper in bands requires a more complex mechanism, and it's slow compared to printing in one pass. Inkjet heads large enough to span the whole width of the receipt tape are fairly expensive.

And after all that, inkjet seems high maintenance compared to the almost bulletproof reliability of direct thermal printers. Consider the state of the average gas pump "CRIND" (Card Reader In Dispenser) receipt, and then consider that the small thermal mechanism is still managing to produce that output after many years in the harsh conditions of the outdoors. Inkjets tend to quickly malfunction without some sort of automated mechanical cleaning, and that's under office conditions.

So, to put it succinctly, inkjet receipt printers just aren't popular.

You could make similar comments about office printers, where inkjet suffers in many ways when compared to laser or LED printers. But they have been a tremendous success at the lower end of the market. There are a few reasons for this outcome, but one of the bigger ones is color: for a laser or LED printer to produce color used to be rather complicated. In the '00s, many inexpensive color laser printers were "four-pass" printers: the page had to be looped through the print engine four times, one for each color! It saved a lot of parts but made printing more than four times slower. Inkjets were far from this problem. It's a fairly simple matter to make an inkjet print head that serves multiple colors in one assembly!

The same ideas are applicable to receipt printers. If you, for some reason, want a full-color receipt, inkjet is the way to go. But no one wanted a full-color receipt. Even dual-color impact printers disappeared into the kitchen.

And then a company called Catalina came along. Catalina keeps a somewhat low profile among consumers, certainly lower than the MacOS release. Search results suggest lower even than the island off of Los Angeles, for which the company, and the MacOS release, are named. There's no Wikipedia article about Catalina, and their own About Us is brief and made up mostly of nonsense like this:

Transforming data into insights, and insights into action through a seamless consumer experience that drives results.

Catalina is one of those companies that you never think about, but that is constantly thinking about you. Today we would call it ad-tech.

Catalina is tough to research. Obviously they did not intentionally choose a name that would become a MacOS release; they were using the Catalina name many years earlier. But it does seem like they have participated in a bit of obfuscation. Today, they continue to advertise a charming phone number: 1-800-8-COUPON. This "translates," of course, to 1-800-826-8766. During the 1990s they ran numerous classified ads using this phone number, but the numeric version instead of the easier to remember "vanity" representation. The ads were for advertising associate positions, but curiously did not mention the name of the company at all.

Actually, some of these ads give a slightly different phone number, 1-800-826-8768. It is quite conceivable that both phone numbers were issued to the company, given the different toll-free number industry of the '90s. But the fact that OCR frequently confuses these two numbers leads one to suspect that some of the 8768 ads may have been a copy mistake.

Even better, a few of the ads for the 8768 number, and one ad with the 8766 number, do give the name of a company, but an unfamiliar one: Aquarius Enterprises.

Aquarius Enterprises was a "register tape advertising" or "receipt back advertising" venture. In other words, they sold advertising on the backs of receipts. Curiously, while Catalina mentions their 40-year history, Aquarius Enterprises calls themselves "the most successful register tape advertising" for "over 25 years"... in 1993. Are they the same company? Well, they used the same phone number. Catalina is headquartered in St. Petersburg, Florida today, but seems to have moved, as early articles describe them as Anaheim-based... rather closer to the El Segundo address often used by Aquarius Enterprises.

Perhaps it is a coincidence of similar phone numbers and similar industries, but I strongly suspect that Catalina was a spin-out of Aquarius Enterprises. I tried finding shared employees, but there is remarkably little information about Aquarius Enterprises outside of their classified ads for sales associates. But then, once again, it's not an easy name to search for.

Whatever its origins, Catalina launched in 1985 with "Coupon $olutions." Besides the cringeworthy name, this venture was remarkably similar to what consumers will know them for today: Coupon $olutions consisted of software that recorded a consumer's purchases at the POS, and then printed on-demand targeted coupons.

Early articles about Catalina describe the system as relatively simple. Coupons would be printed for "complimentary items." For example, the purchase of baby food would result in a coupon for diapers. The coupons themselves were also simple: printed in monochrome on tape with a distinctive printed edge.

Coupon $olutions debuted at two Boys Markets stores in Los Angeles. It grew fast. By 1990, Catalina's coupon printers were installed in 3,300 grocery stores nationwide. Newspaper coverage started to mention privacy concerns in the 1990s, waving them away with Catalina's assurances that there was no privacy concern because they tracked only purchases and not the shopper's identity. Of course, in the late '80s Catalina had trialed a shopper loyalty card program that would rather change that situation, but it seems to have been unsuccessful.

As time passed, Catalina expanded further into retail technology. They opened their own clearinghouse service for coupons, and marketed their on-demand coupon system to stores as an analytics product, since it provided real-time reporting on purchases (in this era even large retailers would often not have granular, fast reporting from their POS system).

The 1990s treated Catalina well, but they seem to have flown a little too close to technology, and the dot com bust hit them as well. In the early '00s, they weathered layoffs, an accounting probe, and a stock dive. Still, 2005 brought a big step forward: color.

Yes, we're finally back to the point. Catalina Marketing partnered with Epson to introduce a special variant of the TM-C610 color receipt printer, called the TM-C600. Called the CMC-6 by Catalina, the printer uses a full-width inkjet head to produce 360 DPI full color on 57.5mm paper.

Lately, though, you may have noticed these printers yielding unsatisfactory results. When I've gotten Checkout Coupons at all, they've been barely legible or, increasingly, completely blank. Curious.

Catalina went bankrupt in 2018, and underwent a reorganization. The company emerged, but apparently not much healthier: it went bankrupt once again in 2023. Catalina offers a fully managed service, meaning that they ship stores new ink cartridges when remote monitoring of the printers indicates that they will be needed. I have a suspicion that Catalina's second bankruptcy has introduced some disruptions. And yet, in an article they claim:

Catalina is assuring clients and shoppers that it’s still business as usual, and ongoing promotions won’t be affected. “There will be no interruption in Catalina’s ability to serve its customers or any impact on how it works with them,” Catalina says.

I'm not sure that this is working out, even a year into the bankruptcy process. Safeway/Albertsons has apparently decided to remove the Catalina printers entirely. Smith's (Kroger) doesn't seem to maintain them at all. Walgreens is apparently more committed to the cause, as they are with the cooler screens, but even there checkout coupons have become inconsistent.

Besides, I don't think even Catalina views the printers as very important any more. They're relegated to a small corner of Catalina's website, with the vast majority of their marketing material dedicated to analytics, targeting, and digital marketing. Catalina seems to be a major player in the in-app digital coupons now emphasized by a lot of grocers, although I've personally found the system to be laughably unusable. But it's not surprising that you get a laughably unusable app from an industry that churns out this kind of copy:

84.51° currently delivers personalized promotional offers to Kroger’s digitally engaged shoppers via its website, mobile app, and more broadly via its Loyal Customer Mailer. Catalina Reach Extender is a complementary solution to the way current offers are delivered and will expand the impact of promotional offers by aligning those offers to the way customers shop – in-store, online or both.

As far as I can tell, this press release is just describing making digital coupons (managed by a company that is, improbably, called 84.51°) also print out on the Catalina printers. The ones that barely work any more. Well, that was January of '23, they didn't know about the second bankruptcy yet.

Catalina may date to 1985, but it's sort of a case study in the advertising industry. It's a huge, publicly traded company, with a market cap that's reached at least $1.7 billion, and two bankruptcies. They write such obtuse copy that it's hard to understand what exactly they do these days, which is probably mainly a way to distract from the fact that their main business is now collecting and selling consumer data. And I would say that no one likes them... subreddits of retail employees are full of comments expressing relief when the Catalina printers would break, since unplugging them would result in multiple phone calls a day from Catalina investigating the "problem."

BUT: there are couponers.

That's right, there's a whole internet subculture that is obsessed with these checkout coupons. They catalog the coupons on offer, and document the process for requesting a replacement coupon from Catalina when the one you expected failed to print. So very strange to me, a reminder of the many people out there and their many strange hobbies.

Why would you ever waste your time on these coupons? I have real things to do, like collecting thermal printers.

--------------------------------------------------------------------------------