>>> 2024-01-21 multi-channel audio part 1

Stereophonic or two-channel audio is so ubiquitous today that we tend to refer to all kinds of pieces of consumer audio reproduction equipment as "a stereo." As you might imagine, this is a relatively modern phenomenon. While stereo audio in concept dates to the late 19th century, it wasn't common in consumer settings until the 1960s and 1970s. Those were very busy decades in the music industry, and radio stations, records, and film soundtracks all came to be distributed primarily in stereo.

Given the success of stereo, though, one wonders why larger numbers of channels have met more limited success. There are, as usual, a number of factors. For one, two-channel audio was thought to be "enough" by some, considering that humans have two ears. Now it doesn't quite work this way in practice, and we are more sensitive to the direction from which sound comes than our binaural system would suggest. Still, there are probably diminishing returns, with stereo producing the most notable improvement in listening experience over mono.

There are also, though, technical limitations at play. The dominant form of recorded music during the transition to stereo was the vinyl record. There is a fairly straightforward way to record stereo on a record, by using a cartridge with coils on two opposing axes. This is the limit, though: you cannot add additional channels as you have run out of dimensions in the needle's free movement.

This was probably the main cause of the failure of quadraphonic sound, the first music industry attempt at pushing more channels. Introduced almost immediately after stereo in the 1970s, quadraphonic or four-channel sound seemed like the next logical step. It couldn't really be encoded on records, so a matrix encoding system was used in which the front-rear difference was encoded as phase shift in the left and right channels. In practice this system worked poorly, and especially early quadraphonic systems could sound noticeably worse than the stereo version. Wendy Carlos, an advocate of quadraphonic sound but harsh critic of musical electronics, complained bitterly about the inferiority of so-called quadraphonic records when compared to true four-channel recordings, for example on tape.

Of course, four-channel tape players were vastly more expensive than record players in the 1970s, as they ironically remain today. Quadraphonic sound was in a bind: it was either too expensive or too poor in quality to appeal to consumers. Quadraphonic radio using the same matrix encoding, while investigated by some broadcasters, had its own set of problems and never saw permanent deployment. Alan Parsons famously mixed Pink Floyd's "Dark Side of the Moon" in quadraphonic sound; the effort was a failure in several ways but most memorably because, by the time of the album's release in 1973, the quadraphonic experiment was essentially over.

Three-or-more-channel sound would have its comeback just a few years later, though, through the efforts of a different industry. Understanding this requires backtracking a bit to consider the history of cinema prints.

Many are probably at least peripherally aware of Cinerama, an eccentric-seeming film format that used three separate cameras, and three separate projectors, to produce an exceptionally widescreen image. Cinerama's excess was not limited to the picture: it involved not only the three 35mm film reels for the three screen panels, but also a fourth 35mm film that was entirely coated with a magnetic substrate and was used to store seven channels of audio. Five channels were placed behind the screen, effectively becoming center, left, right, left side, and right side. The final two tracks were played back behind the audience, as the surround left and surround right.

Cinerama debuted in 1952, decades before 35mm films would typically carry even stereo audio. Like quadraphonic sound later, Cinerama was not particularly successful. By the time stereo records were common, Cinerama had been replaced by wider film formats and anamorphic formats in which the image was horizontally compressed by the lens of the camera, and expanded by the lens of the projector. Late Cinerama films like 2001: A Space Odyssey were actually filmed in Super Panavision 70 and projected onto Cinerama screens from a single projector with a specialized lens.

There's a reason people talk so much about Cinerama, though. While it was not a commercial success, it was influential on the film industry to come. Widescreen formats, mostly anamorphic, would become increasingly common in the following decades. It would take years longer, but so would seven-channel theatrical sound.

"Surround sound," as these multi-channel formats came to be known in the late '50s, would come and go in theatrical presentations throughout the mid-century even as the vast majority of films were presented monaurally, with only a single channel. Most of these relied on either a second 35mm reel for audio only, or the greater area for magnetic audio tracks allowed by 70mm film. Both of these options were substantially more expensive for the presenting theater than mono, limiting surround sound mostly to high-end theaters and premiers. For surround sound to become common, it had to become cheap.

1971's A Clockwork Orange (I will try not to fawn over Stanley Kubrick too much but you are learning something about my film preferences here) employed a modest bit of audio technology, something that was becoming well established in the music industry but was new to film. The magnetic recordings used during the production process employed Dolby Type A noise reduction, similar to what became popular on compact cassette tapes, for a slight improvement in audio quality. The film was still mostly screened in magnetic mono, but it was the beginning of a profitable relationship between Dolby Labs and the film industry. Over the following years a number of films were released with Dolby Type A noise reduction on the actual distribution print, and some theaters purchased decoders to use with these prints. Dolby had bigger ambitions, though.

Around the same time, Kodak had been experimenting with the addition of stereo audio to 35mm release prints, using two optical tracks. They applied Dolby noise reduction to these experimental prints, and brought Dolby in to consult. This presented the perfect opportunity to implement an idea Dolby had been considering. Remember the matrix encoded quadraphonic recording that had been a failure for records? Dolby licensed a later-generation matrix decoder design from Sansui, and applied it to Kodak's stereo film soundtracks, allowing separation into four channels. While the music industry had placed the four channels at the four corners of the soundstage, the film industry had different tastes, driven mostly by the need to place dialog squarely in the center of the field. Dolby's variant of quadraphonic audio was used to present left, right, center, and a "surround" or side channel. This audio format went through several iterations, including much improved matrix decoding, and along the way picked up a name that is still familiar today: Dolby Stereo.
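
To make the matrix idea concrete, here is a minimal numerical sketch of a 4:2:4 encode and passive decode in the general style of Dolby Stereo. It is deliberately simplified: the real encoder applies a 90-degree phase shift to the surround before folding it in, and Pro Logic adds active steering on top, so treat this as an illustration of the principle rather than Dolby's actual signal chain.

    import numpy as np

    def encode_lt_rt(left, center, right, surround):
        """Fold four channels into two (Lt/Rt). Simplified: a real Dolby
        Stereo encoder phase-shifts the surround; here it just gets
        opposite signs in the two totals."""
        k = 1 / np.sqrt(2)  # -3 dB for content shared between Lt and Rt
        lt = left + k * center + k * surround
        rt = right + k * center - k * surround
        return lt, rt

    def passive_decode(lt, rt):
        """Recover approximate center and surround as sum and difference.
        Crosstalk between adjacent channels is the price of the matrix."""
        k = 1 / np.sqrt(2)
        return {"left": lt, "right": rt,
                "center": k * (lt + rt), "surround": k * (lt - rt)}

    # Dialog-only test signal: all of the energy in the center channel.
    t = np.linspace(0, 1, 48000)
    center = np.sin(2 * np.pi * 440 * t)
    silence = np.zeros_like(center)
    lt, rt = encode_lt_rt(silence, center, silence, silence)
    decoded = passive_decode(lt, rt)
    print(np.max(np.abs(decoded["surround"])))  # ~0: dialog stays out of the surround

Played on a plain stereo system, Lt and Rt on their own still sound reasonable, which is the backwards compatibility that comes up below.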

That Dolby Stereo is, in fact, a quadraphonic format reflects a general atmosphere of terminological confusion in the surround sound industry. Keep this in mind.

One of Dolby Stereo's most important properties was its backwards compatibility. The two optical tracks could be played back on a two-channel (or actually stereo) system and still sound alright. They could even be placed on the print alongside the older magnetic mono audio, providing compatibility with mono theaters. This compatibility with fewer channels became one of the most important traits in surround sound systems, and somewhat incidentally served to bring them to the consumer. Since the Dolby Stereo soundtrack played fine on a two-channel system, home releases of films on formats like VHS and Laserdisc often included the original Dolby Stereo audio from the print. A small industry formed around these home releases, licensing the Dolby technology to sell consumer decoders that could recover surround sound from home video.

For cost reasons these decoders were inferior to Dolby's own in several ways, and to avoid the hazard of damage to the Dolby Stereo brand, Dolby introduced a new marketing name for consumer Dolby Stereo decoders: Dolby Surround.

By the 1980s, Dolby Stereo, or Dolby Surround, had become the most common audio format on theatrical presentations and their home video releases. Even some television programs and direct-to-video material were recorded in Dolby Surround. Consumer stereo receivers, in the variant that came to be known as the home theater receiver, often incorporated Dolby Surround decoders. Improvements in consumer electronics brought the cost of proper Dolby Stereo decoders down, and so the home systems came to resemble the theatrical systems as well. Seeking a new brand to unify the whole mess of Dolby Stereo and Dolby Surround (which, confusingly, were often 4 and 3 channel, respectively), Dolby seems to have turned to the "Advanced Logic" and "Full Logic" terms once used by manufacturers of quadraphonic decoders. Dolby's surround sound solution came to be known as Dolby Pro Logic. A Dolby Pro Logic decoder processed two audio channels to produce a four-channel output. According to a modern naming convention, Dolby Pro Logic is a 4.0 system: four full-bandwidth channels.

This entire thing, so far, has been a preamble to the topic I actually meant to discuss. It's an interesting preamble, though! I'll just apologize that, since I didn't set out to write a history of multi-channel audio distribution, this one isn't especially complete. I left out a number of interesting attempts at multi-channel formats, of which the film industry produced a surprising number, and instead focused on the ones that were influential and/or used for Kubrick films [1].

Dolby Pro Logic, despite its impressive name, was still an analog format, based on an early '70s technique. Later developments would see an increase in the number of channels, and the transition to digital audio formats.

Recall that 70mm film provided six magnetic audio channels, which were often used in an approximation of the seven-channel Cinerama format. Dolby experimented with the six-channel format, though, confusingly also under the scope of the Dolby Stereo product. During the '70s, Dolby observed that the ability of humans to differentiate the source of a sound is significantly reduced as the sound becomes lower in frequency. This had obvious potential for surround sound systems, enabling something analogous to chroma subsampling in video. The lower-frequency component of surround sound does not need to be directional, and for a sense of directionality the high frequencies are most important.
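
The practical upshot of that observation is closely related to what home theater gear now calls bass management: low frequencies from every channel can be pooled into a single, non-directional feed. A rough sketch of the idea (a generic crossover, not any particular cinema processor):

    import numpy as np
    from scipy.signal import butter, sosfilt

    def bass_manage(channels, sample_rate=48000, crossover_hz=120):
        """Split each channel at the crossover: highs stay directional,
        lows are summed into one shared bass feed."""
        lowpass = butter(4, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")
        highpass = butter(4, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
        bass = sum(sosfilt(lowpass, ch) for ch in channels)
        mains = [sosfilt(highpass, ch) for ch in channels]
        return mains, bass

    # e.g. mains, bass = bass_manage([left, center, right, surround_l, surround_r])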

Besides, bassheads were coming to the film industry. The long-used Academy response curve fell out of fashion during the '70s, in part due to Dolby's work, in part due to generally improved loudspeaker technology, and in part due to the increasing popularity of bass-heavy action films. Several 70mm releases used one or more of the audio channels as dedicated bass channels.

For the 1979 film Apocalypse Now in its 70mm print, Dolby premiered a 5.1 format in which three full-bandwidth channels were used for center, left, and right, two channels with high-pass filtering were used for surround left and surround right, and one channel with low-pass filtering was used for bass. Apocalypse Now was not, in fact, the first film to use this channel configuration, but Dolby promoted it far more than the studios had.

Interestingly, while I know less about live production history, the famous cabaret Moulin Rouge apparently used a 5.1 configuration during the 1980s. Moulin Rouge was prominent enough to give the 5.1 format a boost in popularity, perhaps particularly important because of the film industry's indecision on audio formats.

The seven-channel concept of the original Cinerama must have hung around in the film industry, as there was continuing interest in a seven-channel surround configuration. At the same time, the music industry widely adopted eight-channel tape recorders for studio use, making eight-channel audio equipment readily available. The extension to 7.1 surround, adding left and right side channels to the 5.1 configuration, was perhaps obvious. Indeed, what I find strangest about 7.1 is just how late it was introduced to film. Would you believe that the first film released (not merely remastered or mixed for Blu-Ray) in 7.1 was 2010's Toy Story 3?

7.1 home theater systems were already fairly common by then, a notable example of a modern trend afflicting the film industry: the large installed base and cost avoidance of the theater industry mean that consumer home theater equipment now evolves more quickly than theatrical systems. Indeed, while 7.1 became the gold standard in home theater audio during the 2000s, 5.1 remains the dominant format in theatrical sound systems today.

Systems with more than eight channels are now in use, but haven't caught on in the consumer setting. We'll talk about those later. For most purposes, eight-channel 7.1 surround sound is the most complex you will encounter in home media. The audio may take a rather circuitous route to its 7.1 representation, but, well, we'll get to that.

Let's shift focus, though, and talk a bit about the actual encodings. Audio systems up to 7.1 can be implemented using analog recording, but numerous analog channels impose practical constraints. For one, they are physically large, making it infeasible to put even analog 5.1 onto 35mm prints. Prestige multi-channel audio formats like that of IMAX often avoided this problem by putting the audio onto an entirely separate film reel (much like Cinerama back at the beginning), synchronized with the image using a pulse track and special equipment. This worked well but drove up costs considerably. Dolby Stereo demonstrated that it was possible to matrix four channels into two channels (with limitations), but considering the practical bandwidth of the magnetic or optical audio tracks on film you couldn't push this technique much further.

Remember that the theatrical audio situation changed radically during the 1970s, going from almost universal mono audio to four channels as routine and six channels for premiers and 70mm. During the same decade, the music reproduction industry, especially in Japan, was exploring another major advancement: digital audio encoding.

In 1982, the Compact Disc launched. Numerous factors contributed to the rapid success of CDs over vinyl and, to a lesser but still great extent, the compact cassette. One of them was the quality of the audio reproduction. CDs were a night-and-day change: records could produce an excellent result but almost always suffered from dirt and damage. Cassette tapes were better than most of us remember but still had limited bandwidth and a high noise floor, requiring Dolby noise reduction for good results. The CD, though, provided lossless digital audio.

Audio is encoded on an audio CD in PCM format. PCM, or pulse code modulation, is a somewhat confusing term that originated in the telephone industry. If we were to reinvent it today, we would probably just call it digital modulation. To encode a CD, audio is sampled (at 44.1 kHz for historic reasons) and quantized to 16 bits. A CD carries two channels, stereo, which was by then the universal format for music. Put together, those add up to 1.4Mbps. This was a very challenging data rate in the early 1980s, and indeed, practical CD players relied on the fact that the data did not need to be read perfectly (error correcting codes were used) and did not need to be stored (going directly to a digital-to-analog converter). These were conveniently common traits of audio reproduction systems, and the CD demonstrated that digital audio was far more practical than the computing technology of the time would suggest.
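
The arithmetic behind those numbers is simple enough to check:

    sample_rate = 44_100   # samples per second
    bit_depth = 16         # bits per sample
    channels = 2           # stereo

    bitrate = sample_rate * bit_depth * channels
    print(bitrate)                  # 1,411,200 bits/s, i.e. about 1.4 Mbps
    print(bitrate * 60 / 8 / 1e6)   # ~10.6 MB per minute of audio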

The future of theatrical sound would be digital. Indeed, many films would be distributed with their soundtracks on CD.

There remained a problem, though: a CD could encode two channels. Even four channels wouldn't fit within the data rate CD equipment was capable of, much less six or eight. The film industry would need formats that could encode six or eight channels of audio into either the bandwidth of a two-channel signal or into precious unused space on 35mm film prints.

Many ingenious solutions were developed. A typical 35mm film print today contains three distinct representations of the audio: a two-channel optical signal between the sprocket holes and the frame (which could encode Dolby Stereo), continuous 2D barcodes running along the outer edges of the film, outside the sprocket holes, which carry the SDDS (Sony Dynamic Digital Sound) digital signal, and individual 2D barcodes between the sprocket holes which encode the Dolby Digital signal. Finally, a small dashed pattern alongside the optical tracks provides a time code used for synchronization with audio played back from a CD, the DTS system.

But then, a typical 35mm film print today wouldn't exist, as 35mm film distribution has all but disappeared. Almost all modern film is played back entirely digitally from some sort of flexible stream container. You would think, then, that the struggles of encoding multi-channel audio are over. Many media container formats can, after all, contain an arbitrary number of audio channels.

Nothing is ever so simple. Much like a dedicated audio reel adds cost, multiple audio channels inflate file sizes and media costs, and, in the era of playback from optical media, could stress the practical read rate. Besides, constraints of the past have a way of sticking around. Every multichannel audio format to find widespread success in the film industry has done so by maintaining backwards compatibility with simple mono and stereo equipment. That continues to be true today: modern multi-channel digital audio formats are still mostly built as extensions of an existing stereo encoding, not as truly new arbitrary-channel formats.

At the same time, the theatrical sound industry has begun a transition away from channel-centric audio formats and towards a more flexible system that is much further removed from the actual playback equipment.

Another trend has emerged since 1980 as well, which you probably already suspected from the multiple formats included in 35mm prints. Dolby's supremacy in multi-channel audio was never as complete as I made it sound, although they did become (and for some time remained) the most popular surround sound solution. They have always had competition, and that's still true today. Just as 35mm prints came with the audio in multiple formats, current digitally distributed films often do as well.

In Part 2, I'll get to the topic I meant to write about today before I got distracted by history: the landscape of audio formats included in digitally distributed films and common video files today, and some of the ways they interact remarkably poorly with computers. We're going to talk about:

Postscript: Film dweebs will of course wonder where George Lucas is in this story. His work on the Star Wars trilogy led to the creation of THX, a company that will long be remembered for its distinctive audio identity. The odd thing is that THX was never exactly a technology company, although it was closely involved in sound technology developments of the time. THX was essentially a certification agency: THX theaters installed equipment by others (Altec Lansing, for much of the 20th century), and used any of the popular multi-channel audio formats.

To be a THX-certified theater, certain performance requirements had to be met, regardless of the equipment and format in use. THX certification requirements included architectural design standards for theaters, performance specifications for audio equipment, and a specific crossover configuration designed by Lucasfilm.

In 2002, Lucasfilm spun out THX and it essentially became a rental brand, shuffled into the ownership of gamer headphone manufacturer Razer today. THX certification still pops up in some consumer home theater equipment but is no longer part of the theatrical audio industry.

[1] Incidentally, Kubrick never adopted Dolby Stereo. Despite his early experience with Dolby noise reduction, all of his films would be released in mono except for 2001 (six-channel audio only in the Cinerama release) and Eyes Wide Shut (edited in Dolby Stereo after Kubrick's death).

--------------------------------------------------------------------------------

>>> 2024-01-16 the tacnet tracker

Previously on Deep Space Nine, I wrote that "the mid-2000s were an unsettled time in mobile computing." Today, I want to share a little example. Over the last few weeks, for various personal reasons, I have been doing a lot of reading about embedded operating systems and ISAs for embedded computing. Things like the NXP TriMedia (Harvard architecture!) and pSOS+ (ran on TriMedia!). As tends to happen, I kept coming across references to a device that stuck in my memory: the TacNet Tracker. It prominently features on Wikipedia's list of applications for the popular VxWorks real-time operating system.

It's also an interesting case study in the mid-2000s field of mobile computing, especially within academia (or at least the Department of Energy). You see, "mobile computing" used to be treated as a field of study, a subdiscipline within computer science. Mobile devices imposed practical constraints, and they invited more sophisticated models of communication and synchronization than were used with fixed equipment. I took a class on mobile computing as an undergraduate, although the material already felt dated at the time.

Today, with the ubiquity of smartphones, "mobile computing" is sort of the normal kind. Perhaps future computer science students will be treated to a slightly rusty elective in "immobile computing." The kinds of strange techniques you use when you aren't constrained by battery capacity. Busy loop to blink the cursor!

Sometime around 2004, Sandia National Laboratories' department 6452 started work on the TacNet Tracker. The goal: to develop a portable computer device that could be used to exchange real-time information between individuals in a field environment. A presentation states that an original goal of the project was to use COTS (commercial, off-the-shelf) hardware, but it was found to be infeasible. Considering the state of the mobile computing market in 2004, this isn't surprising. It's not necessarily that there weren't mobile devices available; if anything, the opposite. There were companies popping up with various tablets fairly regularly, and then dropping them two years later. You can find any number of Windows XP tablets, but the government needed something that could be supported long-term. That perhaps explains the "Life-cycle limitations" bullet point the presentation wields against COTS options.

The only products with long-term traction were select phones and PDAs like the iPaq and Axim. Even this market collapsed almost immediately with the release of the iPhone, although Sandia engineers couldn't have known that was coming. Still, the capabilities and expandability of these devices were probably too limited for the Tracker's features. There's a reason all those Windows XP tablets existed. They weighed ten pounds, but they were beefy enough to run the data entry software that was the major application of commercial mobile computing at the time.

The TacNet Tracker, though, was designed to fit in a pocket and to incorporate geospatial features. Armed with a Tracker, you could see the real-time location of other Tracker users on a map. You could even annotate the map, marking points and lines, and share these annotations with others. This is all very mundane today! At the time, though, it was an obvious and yet fairly complex application for a mobile device.

The first question, of course, is of architecture. The Tracker was built around the XScale PXA270 SoC. XScale, remember, was Intel's marketing name for their ARMv5 chips manufactured during the first half of the '00s. ARM was far less common back then, but was already emerging as a leader in power-efficient devices. The PXA270 was an early processor to feature speed-stepping, decreasing its clock speed when under low load to conserve power.

The PXA270 was attached to 64MB of SDRAM and 32MB of flash. It supported more storage on CompactFlash, had an integrated video adapter, and a set of UARTs that, in the Tracker, would support a serial interface, a GPS receiver, and Bluetooth.

A rechargeable Li-Poly pack allowed the Tracker to operate for "about 4 hours," but the presentation promises 8-12 hours in the future. Battery life was a huge challenge in this era. It probably took about as long to charge as it did to discharge, too. There hadn't been much development in high-rate embedded battery chargers yet.

The next challenge was communication. 802.11 WiFi was achieving popularity by this time, but suffered from a difficult and power-intensive association process even more than it does today. Besides, in mobile applications like those the Tracker was intended for, conventional WiFi's requirement for network infrastructure was impractical. Instead, Sandia turned to Motorola. The Tracker used the WMC6300, a PCMCIA-format MEA modem marketed for Pocket PC devices. MEA stands for "Mesh Enabled Architecture," which seems to have been the period term for something Motorola later rebranded as MOTOMESH.

Marketed primarily for municipal network and public safety applications, MOTOMESH is a vaguely 802.11-adjacent proprietary radio protocol that provides broadband mesh routing. One of the most compelling features of MEA and MOTOMESH is its flexibility: MOTOMESH modems will connect to fixed infrastructure nodes under central management, but they can also connect directly to each other, forming ad-hoc networks between adjacent devices. 802.11 itself was conceptually capable of the same, but in practice, the higher-level software to support this kind of use never really emerged. Motorola offered a complete software suite for MOTOMESH, though, and for no less than Windows CE.

Yes, it really reinforces the period vibes that the user manual for the WMC6300 modem starts by guiding you through using Microsoft ActiveSync to transfer the software to an HP iPaq. One did not simply put files onto a mobile device at the time; you had to sync them. Microsoft tried to stamp out an ecosystem of proprietary mobile device sync protocols with ActiveSync. Ultimately none of them would really see much use; PDAs were always fairly niche.

Sandia validated performance of the Tracker's MEA modem using an Elektrobit Propsim C2. I saw one of these at auction once (possibly the same one!), and sort of wish I'd bid on it. It's a chunky desktop device with a set of RF ports and the ability to simulate a wide variety of different radio paths between those ports, introducing phenomena like noise, fading, and multipath that will be observed in the real world. The results are impressive: in a simulated hilly environment, Trackers could exchange a 1MB test image in just 13.6 seconds. Remember that next time you are frustrated by LTE; we really take what we have today for granted.
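
For a sense of scale, the implied throughput (taking 1MB as an even million bytes) works out to well under a megabit per second:

    image_bits = 1_000_000 * 8         # the 1MB test image
    seconds = 13.6
    print(image_bits / seconds / 1e6)  # ~0.59 Mbps effective throughput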

But what of the software? Well, the Tracker ran VxWorks. Actually, that's how I ran into it: it seems that Wind River (developer of VxWorks) published a whitepaper about the Tracker, which made it onto a list of featured applications, which was the source a Wikipedia editor used to flesh out the article. Unfortunately I can't find the original whitepaper, only dead links to it. I'm sure it would have been a fun read.

VxWorks is a real-time operating system mostly used in embedded applications. It supports a variety of architectures, provides a sophisticated process scheduler with options for hard real-time and opportunistic workloads, offers network, peripheral bus, and file system support, and even a POSIX-compliant userspace. It remains very popular for real-time control applications today, although I don't think you'd find many UI-intensive devices like the Tracker running it. A GUI framework is actually a fairly new feature.

The main application for the Tracker was a map, with real-time location and annotation features. It seems that a virtual whiteboard and instant messaging application were also developed. A charmingly cyberpunk Bluetooth wrist-mounted display was pondered, although I don't think it was actually made.

But what was it actually for?

Well, federal R&D laboratories have a tendency to start a project for one application and then try to shop it around to others, so the materials Sandia published present a somewhat mixed message. A conference presentation suggests it could be used to monitor the health of soldiers in-theater (an extremely frequent justification for grants in mobile computing research!), for situational awareness among security or rescue forces, or for remote control of weapons systems.

I think a hint comes, though, from the only concrete US government application I can find documented: in 2008, Sandia delivered the TacNet Tracker system to the DoE Office of Secure Transportation (OST). OST is responsible for the over-road transportation of nuclear weapons and nuclear materials in the United States. Put simply, they operate a fleet of armored trucks and accompanying security escorts. There is a fairly long history, back to at least the '70s, of Sandia developing advanced radio communications systems for use by OST convoys. Many of these radio systems seemed ahead of their time or at least state of the art, but they often failed to gain much traction outside of DoE. Perhaps this relates to DoE culture, perhaps to the extent to which private contractors have captured military purchasing.

Consider, for example, that Sandia developed a fairly sophisticated digital HF system for communication between OST convoys and control centers. It seemed rather more advanced than the military's ALE solution, but a decade or so later OST dropped it and switched to ALE like everyone else (likely for interoperability with the large HF ALE networks operated by the FBI and CBP for domestic security use, although at some point the DoE itself also procured its own ALE network). A whole little branch of digital HF technology that just sort of fizzled out in the nuclear weapons complex. There are a lot of things like that; it's what you get when you put an enormous R&D capability into a particularly insular and secretive part of the executive branch.

Sandia clearly hoped to find other applications for the system. A 2008 Sandia physical security manual for nuclear installations recommends that security forces consider the TacNet Tracker as a situational awareness solution. It was pitched for several military applications. It's a little hard to tell because the name "TacNet" is a little too obvious, but it doesn't seem that the Sandia device ever gained traction in the military.

As it does with many technical developments that don't go very far, Sandia licensed the technology out. A company called Homeland Integrated Security Systems (HISS) bought it, a very typical name for a company that sells licensed government technology. HISS partnered with a UK-based company called Arcom to manufacture the TacNet Tracker as a commercial product, and marketed it to everyone from the military to search and rescue teams.

HISS must have found that the most popular application of the Tracker was asset tracking. It makes sense: the Tracker device itself lacked a display, on the assumption that it would sit in a dock or be used with an accessory body-worn display. By the late 2000s, HISS had rebranded the TacNet Tracker as the CyberTracker, and re-engineered it around a Motorola iDEN board. I doubt they actually did much engineering on this product; it seems to have been pretty much an off-the-shelf Motorola iDEN radio that HISS just integrated into their tracking platform. It was advertised as a deterrent to automotive theft and a way to track hijacked school buses in real time---the Chowchilla kidnapping was mentioned.

And that's the curve of millennial mobile computing: a cutting-edge R&D project around special-purpose national security requirements, pitched as a general purpose tactical device, licensed to a private partner, turned into yet another commodity anti-theft tracker. Like if LoJack had started out for nuclear weapons. Just a little story about telecommunications history.

Sandia applied for a patent on the Tracker in 2009, so it's probably still in force (ask a patent attorney). HISS went through a couple of restructurings but, as far as I can tell, no longer exists. The same goes for Arcom; a company by the same name that makes cable TV diagnostic equipment seems to be unrelated. Like the OLPC again, all that is left of the Tracker is a surprising number of used units for sale. I'm not sure who ever used the commercial version, but they sure turn up on eBay. I bought one, of course. It'll make a good paperweight.

--------------------------------------------------------------------------------

>>> 2024-01-06 usb on the go

USB, the Universal Serial Bus, was first released in 1996. It did not achieve widespread adoption until some years later; for most of the '90s RS-232-ish serial and its awkward sibling the parallel port were the norm for external peripherals. It's sort of surprising that USB didn't take off faster, considering the significant advantages it had over conventional serial. Most significantly, USB was self-configuring: when you plugged a device into a host, a negotiation was performed to detect a configuration supported by both ends. No more decoding labels like 9600 8N1 and then trying both flow control modes!

There are some significant architectural differences between USB and conventional serial that come out of autoconfiguration. Serial ports had no real sense of which end was which. Terms like DTE and DCE were sometimes used, but they were a holdover from the far more prescriptive genuine RS-232 standard (which PCs and most peripherals did not follow) and often inconsistently applied by manufacturers. All that really mattered to a serial connection is that one device's TX pin went to the other device's RX pin, and vice versa. The real differentiation between DCE and DTE was the placement of these pins: in principle, a computer would have them one way around, and a peripheral the other way around. This meant that a straight-through cable would result in a crossed-over configuration, as expected.

In practice, plenty of peripherals used the same DE-9 wiring convention as PCs, and sometimes you wanted to connect two PCs to each other. Some peripherals used 8p8c modular jacks, some peripherals used real RS-232 connectors, and some peripherals used monstrosities that could only have emerged from the nightmares of their creators. The TX pin often ended up connected to the TX pin and vice versa. This did not work. The solution, as we so often see in networking, was a special cable that crossed over the TX and RX wires within the cable (or adapter). For historical reasons this was referred to as a null modem cable.

One of the other things that was not well standardized with serial connections was the gender of the connectors. Even when both ends featured the PC-standard DE-9, there was some inconsistency over the gender of the connectors on the devices and on the cable. Most people who interact with serial with any regularity probably have a small assortment of "gender changers" and null-modem shims in their junk drawer. Sometimes you can figure out the correct configuration from device manuals (the best manuals provide a full pinout), but often you end up guessing, stringing together adapters until the genders fit and then trying with and without a null modem adapter.

You will notice that we rarely go through this exercise today. For that we can thank USB's very prescriptive standards for connectors on devices and cables. The USB standard specifies three basic connectors, A, B, and C. There are variants of some connectors, mostly for size (mini-B, micro-B, even a less commonly used mini-A and micro-A). For the moment, we will ignore C, which came along later and massively complicated the situation. Until 2014, there were only A and B. Hosts had A, and devices had B.

Yes, USB fundamentally employs a host-device architecture. When you connect two things with USB, one is the host, and the other is the device. This differentiation is important, not just for the cable, but for the protocol itself. USB prior to 3, for example, does not feature interrupts. The host must poll the device for new data. The host also has responsibility for enumeration of devices to facilitate autoconfiguration, and for flow control throughout a tree of USB devices.

This architecture makes perfect sense for USB's original 1990s use-case of connecting peripherals (like mice) to hosts (like PCs). In fact, it worked so well that once USB1.1 addressed some key limitations it became completely ubiquitous. Microsoft used the term "legacy-free PC" to identify a new generation of PCs at the very end of the '90s and early '00s. While there were multiple criteria for the designation, the most visible to users was the elimination of multiple traditional ports (like the game port! remember those!) in favor of USB.

Times change, and so do interconnects. The consumer electronics industry made leaps and bounds during the '00s and "peripheral" devices became increasingly sophisticated. The introduction of portables running sophisticated operating systems pushed the host-device model to a breaking point. It is, of course, tempting to talk about this revolution in the context of the iPhone. I never had an iPhone though, so the history of the iDevice doesn't have quite the romance to me that it has to so many in this space [1]. Instead, let's talk about Nokia. If there is a Windows XP to Apple's iPhone, it's probably Nokia. They tried so hard, and got so far, but [...].

The Nokia 770 Internet Tablet was not by any means the first tablet computer, but it was definitely a notable early example. Introduced in 2005, it premiered the Linux-based Maemo operating system beloved by Nokia fans until iOS and Android killed it off in the 2010s. The N770 was one of the first devices to fall into a new niche: with a 4" touchscreen and OMAP/ARM SoC, it wasn't exactly a "computer" in the consumer sense. It was more like a peripheral, something that you would connect to your computer in order to load it up with your favorite MP3s. But it also ran a complete general-purpose operating system. The software was perfectly capable of using peripherals itself, and MP3s were big when you were storing them on MMC. Shouldn't you be able to connect your N770 to a USB storage device and nominate even more MP3s as favorites?

Obviously Linux had mainline USB mass storage support in 2005, and by extension Maemo did. The problem was USB itself. The most common use case for USB on the N770 was as a peripheral, and so it featured a type-B device connector. It was not permitted to act as a host. In fact, every PDA/tablet/smartphone type device with sophisticated enough software to support USB peripherals would encounter the exact same problem. Fortunately, it was addressed by a supplement to the USB 2.0 specification released in 2001.

The N770 did not follow the supplement. That makes it fun to talk about, both because it is weird and because it is an illustrative example of the problems that need to be solved.

The N770 featured an unusual USB transceiver on its SoC, seemingly unique to Nokia and called "Tahvo." The Tahvo controller exposed an interface (via sysfs in the Linux driver) that allowed the system to toggle it between device mode (its normal configuration) and host mode. This worked well enough with Maemo's user interface, but host mode had a major limitation. The N770 wouldn't provide power on the USB port; it didn't have the necessary electrical components. Instead, a special adapter cable was needed to provide 5v power from an alternate source.
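
As a sketch of what that kind of interface looks like from userspace, switching modes amounts to writing a string to a sysfs attribute. The path and the accepted values below are hypothetical stand-ins; I don't have the actual Tahvo driver's attribute names in front of me.

    # Hypothetical sysfs attribute standing in for whatever the Tahvo
    # driver actually exposed; this illustrates the idea, not the real path.
    TAHVO_MODE_ATTR = "/sys/devices/platform/tahvo-usb/otg_mode"

    def set_usb_mode(mode: str) -> None:
        if mode not in ("host", "peripheral"):
            raise ValueError("mode must be 'host' or 'peripheral'")
        with open(TAHVO_MODE_ATTR, "w") as f:
            f.write(mode)

    # set_usb_mode("host")  # the N770 still needs 5v supplied externally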

So there are several challenges for a device that needs to be able to operate as either USB host or USB device: the physical connector (hosts are supposed to have A sockets, devices B sockets), bus power (the host end is expected to supply 5v), and the host/device role itself (one end has to enumerate and manage the other).

Note that "special cable" involved in host mode for the N770. You might think this was the ugliest part of the situation. You're not wrong, but it's also not really the hack. For many years to follow, the proper solution to this problem would also involve a special cable.

As I mentioned, since 2001 there has been a supplemental USB specification called USB On-The-Go, commonly referred to as USB OTG, perhaps because On-The-Go is an insufferably early '00s name. It reminds me of, okay, here goes a full-on anecdote.

Anecdote

I attended an alternative middle school in Portland that is today known as the Sunnyside Environmental School. I could tell any number of stories about the bizarre goings-on at this school that you would scarcely believe, but it also had its merits. One of them, which I think actually came from the broader school district, was a program in which eighth graders were encouraged to "job shadow" someone in a profession they were interested in pursuing. By good fortune, a friend's father was an electrical engineer employed at Intel's Jones Farm campus, and agreed to be my host. I had actually been to Jones Farm a number of times on account of various extracurricular programs (in that era, essentially every STEM program in the Pacific Northwest operated on the largess of either Intel or Boeing, if not both). This was different, though: this guy had a row of engraved brass patent awards lining his cubicle wall and showed me through labs where technicians tinkered with prototype hardware. Foreshadowing a concerning later trend in my career, though, the part that stuck with me most was the meetings. I attended meetings, including one where this engineering team was reporting to leadership on the status of a few of their projects.

I am no doubt primed to make this comparison by the mediocre movie I watched last night, but I have to describe the experience as Wonka-esque. These EEs demonstrated a series of magical hardware prototypes to some partners from another company. Each was more impressive than the last. It felt like I was seeing the future in the making.

My host demonstrated his pet project, a bar that contained an array of microphones and used DSP methods to compare the audio from each and directionalize the source of sounds. This could be used for a sophisticated form of noise canceling in which sound coming from an off-axis direction could be subtracted, leaving only the voice of the speaker. If this sounds sort of unremarkable, that is perhaps a reflection of its success, as the same basic concept is now implemented in just about every laptop on the market. Back then, when the N770 was a new release, it was challenging to make work. My host explained that the software behind it usually crashed before he finished the demo, and that it sometimes turned the output into a high-pitched whine he hadn't quite figured out yet. I suppose that meeting was lucky.

But that's an aside. A long presentation, and then a debate with skeptical execs, revolved around a new generation of ultramobile devices that Intel envisioned. One, which I got to handle a prototype of, would eventually become the Intel Core Medical Tablet. It featured a chunky, colorful design that is clearly of the same vintage as the OLPC. It was durable enough to stand on, which a lab technician demonstrated with delight (my host, I suspect tired of this feat, picked up some sort of lab interface and dryly remarked that he could probably stand on it too). The Core Medical Tablet shared another trait with the OLPC: the kind of failure that leaves no impact on the world but a big footprint at recyclers. Years later, as an intern at Free Geek, I would come across at least a dozen.

Another facet of this program, though, was the Mobile Metro. The Metro was a new category of subnotebook, not just small but thin. A period article compares its 18mm profile to the somewhat thinner Motorola Razr, another product with an outsize representation in the Free Geek Thrift Store. Intel staff were confident that it would appeal to a new mobile workforce, road warriors working from cars and coffee shops. The Mobile Metro featured SideShow, a small e-ink display (in fact, I believe, a full Windows Mobile system) on the outside of a case that could show notifications and media controls.

The Mobile Metro was developed around the same time as the Classmate PC, but seems to have been even less successful. It was still in the conceptual stages when I heard of it. It was announced, to great fanfare, in 2007. I don't think it ever went into production. It had WiMax. It had inductive charging. It only had one USB port. It was, in retrospect, prescient in many ways both good and bad.

The point of this anecdote, besides digging up middle school memories while attempting to keep others well suppressed, is that the mid-2000s were an unsettled time in mobile computing. The technology was starting to enable practical compact devices, but manufacturers weren't really sure how people would use them. Some innovations were hits (thin form factors). Some were absolute misses (SideShow). Some we got stuck with (not enough USB ports).

End of anecdote

As far as I can tell, USB OTG wasn't common on devices until it started to appear on Android smartphones in the early 2010s. Android gained OTG support in 3.1 (codenamed Honeycomb, 2011), and it quickly appeared in higher-end devices. Now OTG support seems nearly universal for Android devices; I'm sure there are lower-end products where it doesn't work but I haven't yet encountered one. Android OTG support is even admirably complete. If you have an Android phone, amuse yourself sometime by plugging a hub into it, and then a keyboard and mouse. Android support for desktop input peripherals is actually very good and operating mobile apps with an MX Pro mouse is an entertaining and somewhat surreal experience. On the second smartphone I owned, which I hazily recall being a Samsung from 2012 or 2013, I used to take notes with a USB keyboard.

iOS doesn't seem to have sprouted user-exposed OTG support until the iPhone 12, although it seems like earlier versions probably had hardware support that wasn't exposed by the OS. I could be wrong about this; I can't find a straightforward answer in Apple documentation. The Apple Community Forums seem to be... I'll just say "below average." iPads seem to have gotten OTG support a lot earlier than the iPhone despite using the same connector, making the situation rather confusing. This comports with my general understanding of iOS, though, from working with Bluetooth devices: Apple is very conservative about hardware peripheral support in iOS, and so it's typical for iOS to be well behind Android in this regard for purely software reasons. Ask me about how this has impacted the Point of Sale market. It's not positive.

But how does OTG work? Remember, USB specifies that hosts must have an A connector, and devices a B connector. Most smartphones, besides Apple products and before USB-C, sported a micro-B connector as expected. How OTG?

The OTG specification decouples, to some extent, the roles of A/B connector, power supply, and host/device role. A device with USB OTG support should feature a type AB socket that accommodates either an A or a B plug. Type AB is only defined for the mini and micro sizes, typically used on portable devices. The A or B connectors are differentiated not only by the shape of their shells (preventing a type-A plug being inserted into a B-only socket), but also electrically. The observant among you may have noticed that mini and micro B sockets and plugs feature five pins, while USB2.0 only uses four. This is the purpose of the fifth pin: differentiation of type A and B plugs.

In a mini or micro type B plug, the fifth pin is floating (disconnected). In a mini or micro type A plug, it is connected to the ground pin. When you insert a plug into a type AB socket, the controller checks for continuity between the fifth pin (called the ID pin) and ground. If continuity is present, the controller knows that it must act as an OTG A-device---it is on the "A" end of the connection. If there is no continuity, the more common case, the controller will act as an OTG B-device, a plain old USB device [2].
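
In controller terms the initial decision really is that simple; here's a sketch of the logic (not any particular controller's API):

    from enum import Enum

    class OtgRole(Enum):
        A_DEVICE = "A-device: supply 5v, start out as host"
        B_DEVICE = "B-device: draw power, start out as device"

    def role_from_id_pin(id_pin_grounded: bool) -> OtgRole:
        """A mini/micro A plug ties the ID pin to ground;
        a B plug leaves it floating."""
        return OtgRole.A_DEVICE if id_pin_grounded else OtgRole.B_DEVICE

    print(role_from_id_pin(True))   # an A plug was inserted: power the bus, act as host
    print(role_from_id_pin(False))  # a B plug (or no OTG awareness at all): act as device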

The OTG A-device is always responsible for supplying 5v power (see exception in [2]). By default, the A-device also acts as the host. This provides a basically complete solution for the most common OTG use-case: connecting a peripheral like a flash drive to your phone. The connector you plug into your phone identifies itself as an A connector via the ID pin, and your phone thus knows that it must supply power and act as host. The flash drive doesn't need to know anything about this; it has a B connection and acts as a device as usual. This simple case only becomes confusing when you consider the few flash drives sold specifically for use with phones, which had a micro-A connector right on them. These were weird and I don't like them.

In the more common situation, though, you would use a dongle: a special cable. A typical OTG cable (enough Android phones of the era included one in the box that I have a couple in a drawer without ever having purchased one) provides a micro-A connector on one end and a full-size A socket on the other. With this adapter, you can plug any USB device into your phone with a standard USB cable.

Here's an odd case, though. What if you plug two OTG devices into each other? USB has always had this sort of odd edge-case. Some of you may remember "USB link cables," which don't really have a technical name but tend to get called Laplink cables after a popular vendor. Best Buy and Circuit City used to be lousy with these things, mostly marketed to people who had bought a new computer and wanted to transfer their files. These special cables had two A connectors, which might create the appearance that they connected two hosts, but in fact the cable (usually via a chunky bit in the middle) acted as two devices, one presented to each host. The details of how these actually worked varied from product to product, but the short version is "it was proprietary." Most of them didn't work unless you found the software that came with them, but there are some pseudo-standard controllers supported out of the box by Windows or Linux. I would strongly suggest that you protect your mental state by not trying to use one.

OTG set out to address this problem more completely. First, it's important to understand that this in no way poses an exception to the rule that a USB connection has an A end and a B end. A USB cable you use to connect two phones together might, at first glance, appear to be B-B. But if you inspect more closely, you will find that one end is mini or micro A, and the other is mini or micro B. You may have to look closely; the micro connectors in particular have very similar shells!

If you are anything like me, you are most likely to have encountered such a cable in the box with a TI-84+. These calculators had a type AB connector and came with a micro A->B cable to link two units. You might think, by extension, that the TI-84+ used USB OTG. The answer is kind of! The USB implementation on the TI-84+ and TI-84+SE was very weird, and the OS didn't support anything other than TIConnect. Eventually the TI-84+CE introduced a much more standard USB controller, although I think support for any OTG peripheral still has to be hacked on to the OS. TI has always been at the forefront of calculator networking, and it has always been very weird and rarely used.

This solves part of the problem: it is clear, when you connect two phones, which should supply power and which should handle enumeration. The A-device is, by default, in charge. There are problems where this interacts with common USB device types, though. One of the most common uses of USB with phones is mass storage (and its evil twin MTP). USB mass storage has a very strong sense of host and device at a logical level; the host can browse the device's files. When connecting two smartphones, though, you might want to browse from either end. Another common problem case here is that of the printer, or at least it would be if printer USB host support were ever usable. If you plug a printer into a phone, you might want to browse the phone as mass storage on the printer. Or you might want to use conventional USB printing to print a document from the phone's interface. In fact you almost certainly want to do the latter, because even with Android's extremely half-assed print spooler it's probably a lot more usable than the file browser your printer vendor managed to offer on its 2" resistive touchscreen.

OTG adds Host Negotiation Protocol, or HNP, to help in this situation. HNP allows the devices on a USB OTG connection to swap roles. While the A-device will always be the host when first connected, HNP can reverse the logical roles on demand.

This all sounds great, so where does it fall apart? Well, the usual places. Android devices often went a little off the script with their OTG implementations. First, the specification did not require devices to be capable of powering the bus, and phones couldn't. Fortunately that seems to have been a pretty short lived problem, only common in the first couple of generations of OTG devices. This wasn't the only limitation of OTG implementations; I don't have a good sense of scale but I've seen multiple reports that many OTG devices in the wild didn't actually support HNP: they just determined a role when connected, based on the ID pin, and could not change it after that point.

Finally, and more insidiously, the whole thing about OTG devices having an AB connector didn't go over as well as intended. We actually must admire TI for their rare dedication to standards compliance. A lot of Android phones with OTG support had a micro-B connector only, and as a result a lot of OTG adapters used a micro-B connector.

There's a reason this was common; since A and B plugs are electrically differentiable regardless of the shape of the shell, the shell shape arguably doesn't matter. You could be a heavy OTG user with such a noncompliant phone and adapter and never notice. The problem only emerges when you get a (rare) standards-compliant OTG adapter or, probably more commonly, an OTG A-B cable. Despite being electrically compatible, the connector won't fit into your phone. Of course this behavior feeds itself; as soon as devices with an improper B port were common, manufacturers of cables were greatly discouraged from using the correct A connector.

The downside, conceptually, is that you could plug an OTG A connector (with a B-shaped shell) into a device with no OTG support. In theory this could cause problems; in practice they don't seem to have been common, since both devices would think they were B-devices and (if standards compliant) not provide power. Essentially these improper OTG adapters create a B-B cable. It's a similar problem to an A-A cable but, in practice, less severe. Like an extension cord with two female ends. Home Depot might even help you make one of those.

While trying to figure out which iPhones had OTG support, I ran across an Apple Community thread where someone helpfully replied "I haven't heard of OTG in over a decade." Well, it's not a very helpful reply, but it's not exactly wrong either. No doubt the dearth of information on iOS OTG is in part because no one ever really cared. Much like the HDMI-over-USB support that a generation of Android phones included, OTG was an obscure feature. I'm not sure I have ever, even once, seen a human being other than myself make use of OTG.

Besides, it was completely buried by USB-C.

The thing is that OTG is not gone at all; in fact, it's probably more popular than ever before. There seems to be some confusion about how OTG has evolved with USB specifications. I came across more than one article saying that USB 3.1 Dual Role replaced OTG. This assertion is... confusing. It's not incorrect, but there's a good chance of it leading you in the wrong direction.

Much of the confusion comes from the fact that Dual-Role doesn't mean anything that specific. The term Dual-Role and various resulting acronyms like DRD and DRP have been applied to multiple concepts over the life of USB. Some vendors say "static dual role" to refer to devices that can be configured as either host or device (like the N770). Some vendors use dual role to identify chipsets that detect role based on the ID pin but are not actually capable of OTG protocols like HNP. Some articles use dual role to identify chipsets with OTG support. Subjectively, I think the intent of the changes in USB 3.1 was mostly to formally adopt the "dual role" term that was already the norm in informal use---and hopefully standardize the meaning.

For USB-C connectors, it's more complicated. USB-C cables are symmetric; they do not identify a host or device end in any way. Instead, the ports themselves indicate their capabilities with termination resistances. When one end indicates that it is only capable of the device role, the situation is simple and works basically the way OTG did: the other end detects that a device is attached and takes the host role.

When both ends support the host role, things work differently: the Dual Role feature of USB-C comes into play. The actual implementation is reasonably simple; a dual-role USB-C controller will attempt to set up a connection both ways and go with whichever succeeds. There are some minor complications on top of this; for example, the controller can be configured with a "preference" for host or device role. This means that when you plug your phone into your computer via USB-C, the computer will assume the host role: although the phone is capable of either role, it is configured with a preference for the device role. That matches consumer expectations. When both devices are capable of dual roles and neither specifies a preference, the outcome is random. This scenario is interesting but not all that common in practice.
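To make that concrete, here is my own sketch of how two dual-role ports might settle things, with the usual caveat that this is an illustration and not anything out of the Type-C spec. A dual-role port alternates between presenting itself as a host (a pull-up on CC, commonly called Rp) and as a device (a pull-down, Rd) until it sees the complementary termination from the other end; a configured preference just biases the toggling. Real controllers toggle on timers rather than at random; the randomness here only stands in for two unsynchronized devices.

    import random

    def advertise(port):
        """What this port presents on its CC pin at this instant."""
        if port["kind"] == "host":
            return "Rp"              # pull-up: "I can source power and host"
        if port["kind"] == "device":
            return "Rd"              # pull-down: "I am a device, power me"
        # Dual-role: toggle between the two, biased by any configured preference.
        bias = {"none": 0.5, "host": 0.9, "device": 0.1}[port["preference"]]
        return "Rp" if random.random() < bias else "Rd"

    def resolve(p1, p2, tries=10_000):
        """Keep toggling until the two ends present complementary terminations."""
        for _ in range(tries):
            a, b = advertise(p1), advertise(p2)
            if {a, b} == {"Rp", "Rd"}:
                host, dev = (p1, p2) if a == "Rp" else (p2, p1)
                return host["name"], dev["name"]
        return None    # kept colliding; real hardware just keeps trying

    laptop = {"name": "laptop", "kind": "dual", "preference": "host"}
    phone = {"name": "phone", "kind": "dual", "preference": "device"}
    print(resolve(laptop, phone))    # almost always ('laptop', 'phone')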

The detection of host or device role by USB-C is based on the CC pins, basically a more flexible version of OTG's ID pin. There's another important difference between the behavior of USB-C and A/B: USB-C interfaces provide no power until they detect, via the CC pins, that the other device expects it. This is an important mitigation of the A-A cable problem, in which both devices attempt to power the same bus.
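The power half of that can be put in one line. Again a simplification of my own, but the gist is that a would-be host doesn't put power on the bus until it actually sees a device's pull-down on a CC pin:

    def should_enable_vbus(cc1: str, cc2: str) -> bool:
        # Drive bus power only once a device-side pull-down (Rd) shows up on
        # one of the CC pins; which pin it shows up on also tells the port
        # which way the cable is flipped.
        return "Rd" in (cc1, cc2)

    assert should_enable_vbus("open", "Rd")        # a device attached: power up
    assert not should_enable_vbus("open", "open")  # nothing attached: stay dark
    assert not should_enable_vbus("Rp", "open")    # the far end also looks like a host: stay dark

The last case is the A-A scenario: two hosts connected together both just sit there politely instead of fighting over the bus.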

The USB-C scheme of using CC pins and having dual-role controllers try either role according to their preference is, for the most part, much more elegant. There are a couple of oddities. First, in practice, cables from C to A or B connectors are extremely common. These cables must provide the appropriate terminations on the CC pins to allow the USB-C controller to correctly determine its role, both for data and power delivery.
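As I understand it, those values are baked into the cable or adapter itself, so the C end can tell what kind of legacy thing is on the far side. Something like this, in the same shorthand as above (and with the caveat that the exact resistor values are set by the standard, not by me):

    # What the C end of a legacy cable presents on CC, in my shorthand:
    LEGACY_CABLE_CC = {
        "C-to-A": "Rp",   # the far end is a legacy host, so act as the device
        "C-to-B": "Rd",   # the far end is a legacy device, so act as the host
    }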

Second, what about role reversal? For type A and B connectors, this is achieved via HNP, but HNP is not supported on USB-C. Application notes from several USB controller vendors explain that, oddly enough, the only way to perform role reversal with USB-C is to implement USB Power Delivery (PD) and use the PD negotiation protocol to change the source of power. In other words, while OTG allows reversing host and device roles independently of the bus power source, USB-C does not. The end supplying power is always the host end. This apparent limitation probably isn't that big of a deal, considering that the role reversal feature of OTG was reportedly seldom implemented.
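For what it's worth, the exchange those application notes describe looks roughly like this. The message names are from USB Power Delivery as I understand it, but the sequencing is simplified and illustrative, and the tie between power source and host role here is the arrangement described above rather than anything this sketch proves.

    def send(src, dst, msg):
        print(f"{src['name']} -> {dst['name']}: {msg}")

    def power_role_swap(initiator, partner):
        """Illustrative sketch of a PD power role swap between two ports."""
        send(initiator, partner, "PR_Swap")           # ask to trade source and sink
        if not partner.get("will_accept", True):
            send(partner, initiator, "Reject")
            return False
        send(partner, initiator, "Accept")
        old_source = initiator if initiator["sourcing"] else partner
        new_source = partner if old_source is initiator else initiator
        old_source["sourcing"] = False                # old source drops VBUS...
        send(old_source, new_source, "PS_RDY")        # ...and says it's ready
        new_source["sourcing"] = True                 # new source brings VBUS up...
        send(new_source, old_source, "PS_RDY")        # ...and confirms
        # Per the description above, the host role follows the power source,
        # so swapping power also swaps the logical roles.
        initiator["host"] = initiator["sourcing"]
        partner["host"] = partner["sourcing"]
        return True

    laptop = {"name": "laptop", "sourcing": True, "host": True}
    phone = {"name": "phone", "sourcing": False, "host": False}
    power_role_swap(phone, laptop)   # the phone takes over power, and hosting with it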

That's a bit of a look into what happens when you plug two USB hosts into each other. Are you confused? Yeah, I'm a little confused too. The details vary, and they vary far more with the capabilities of the individual devices than with the USB version in use. This has been the malaise of USB for a solid decade now, at least: the specification has become so expansive, with so many non-mandatory features, that it's a crapshoot what capabilities any given USB port actually has. The fact that USB-C supports a bevy of alternate modes like Thunderbolt and HDMI only adds further confusion.

I sort of miss when the problem was just inappropriate micro-B connectors. Nonetheless, USB-C dual role support seems ubiquitous in modern smartphones, and that's the only place any of this ever really mattered. Most embedded devices still seem to prefer to just provide two USB ports: a host port and a device port. And no one ever uses the USB host support on their printer. It's absurd, no one ever would. Have you seen what HP thinks is a decent file browser? Good lord.

[1] My first smartphone was the HTC Thunderbolt. No one, not even me, will speak of that thing with nostalgia. It was pretty cool owning one of the first LTE devices on the market, though. There was no contention at all in areas with LTE service and I was getting 75+Mbps mobile tethering in 2011. Then everyone else had LTE too and the good times ended.

[2] There are actually several additional states defined by fixed resistances that tell the controller that it is the A-device but power will be supplied by the bus. These states were intended for Y-cables that allowed you to charge your phone from an external charger while using OTG. In this case neither device supplies power; the external charger does. The details of how this works are quite straightforward, but it would be confusing to keep adding them as an exception, so I'm going to pretend the whole feature doesn't exist.

--------------------------------------------------------------------------------

>>> 2023-12-23 ITT Technical Institute

Programming note/shameless plug: I am finally on Mastodon.

The history of the telephone industry is a bit of an odd one. For the greatest part of the 20th century, telephony in the United States was largely a monopoly of AT&T and its many affiliates. This wasn't always the case, though. AT&T held patents on their telephone implementation, but Bell's invention was not the only way to construct a practical telephone. During the late 19th century, telephone companies proliferated, most using variations on the design they felt would fall outside of Ma Bell's patent portfolio. AT&T was aggressive in challenging these operations but not always successful. During this period, it was not at all unusual for a city to have multiple competing telephone companies that were not interconnected.

Shortly after the turn of the 20th century, AT&T moved more decisively towards monopoly. Theodore Newton Vail, president of AT&T during this period, adopted the term "Universal Service" to describe the targeted monopoly state: there would be one universal telephone system, operated under the policies and, by implication, the ownership of AT&T. AT&T's path to monopoly involved many political and business maneuvers, the details of which have filled more than a few dissertations in history and economics. By the 1920s the deal was done: there would be virtually no (and in a legal sense literally no) long-distance telephone infrastructure in the United States outside of The Bell System.

But what of the era's many telephone entrepreneurs? For several American telephone companies struggling to stand up to AT&T, the best opportunities were overseas. A number of countries, especially elsewhere in the Americas, had telephone systems built by AT&T's domestic competitors. Perhaps the most neatly named was ITT, the International Telephone and Telegraph company. ITT was formed from the combination of Puerto Rican and Cuban telephone companies, and through a series of acquisitions expanded into Europe.

Telefónica, for example, is a descendant of an early ITT acquisition. Other European acquisitions led to wartime complications, like the C. Lorenz company, which under ITT ownership functioned as a defense contractor to the Nazis during WWII. Domestically, ITT also expanded into a number of businesses outside of the monopolized telephone industry, including telegraphy and international cables.

ITT had been bolstered as well by an effect of AT&T's first round of antitrust cases during the 1910s and 1920s. As part of one of several settlements, AT&T agreed to divest several overseas operations to focus instead on the domestic market. They found a perfect buyer: ITT, a company which already seemed like a sibling of AT&T and through acquisitions came to function as one.

ITT grew rapidly during the mid-century, and in the pattern of many industrial conglomerates of the time ITT diversified. Brands like Sheraton Hotels and Avis Rent-a-Car joined the ITT portfolio (incidentally, Avis would be spun off, conglomerated with others, and then purchased by previous CAB subject Beatrice Foods). ITT was a multi-billion-dollar American giant.

Elsewhere in the early technology industry, salesman Howard W. Sams worked for the P. R. Mallory Company in Indianapolis during the 1930s and 1940s. Mallory made batteries and electronic components, especially for the expanding radio industry, and as Sams sold radio components to Mallory customers he saw a common problem and a sales opportunity: radio technicians often needed replacement components, but had a hard time identifying them and finding a manufacturer. Under the auspices of the Mallory company Sams produced and published several books on radio repair and electronic components, but Mallory didn't see the potential that Sams did in these technical manuals.

Sams, driven by the same electronics industry fervor as so many telephone entrepreneurs, struck out on his own. Incorporated in 1946, the Howard W. Sams Company found quick success with its Photofact series. Sort of the radio equivalent of Haynes and Chilton in the auto industry, Photofact provided schematics, parts lists, and repair instructions for popular radio receivers. They were often found on the shelves of both technicians and hobbyists, and propelled the Sams Company to million-dollar revenues by the early 1950s.

Sams would expand along with the electronics industry, publishing manuals on all types of consumer electronics and, by the 1960s, books on the use of computers. Sams, as a technical press, eventually made its way into the ownership of Pearson. Through Pearson's InformIT, the Sams Teach Yourself series remains in bookstores today. I am not quite sure, but I think one of the first technical books I ever picked up was an earlier edition of Sams HTML in 24 Hours.

The 1960s were an ambitious era, and Sams was not content with just books. Sams had taught thousands of electronics technicians through their books. Many radio technicians had demonstrated their qualifications and kept up to date by maintaining a membership in the Howard Sams Radio Institute, a sort of correspondence program. It was a natural extension to teach electronics skills in person. In 1963, Sams opened the Sams Technical Institute in Indianapolis. Shortly after, they purchased the Acme Institute of Technology (Dayton, Ohio) and the charmingly named Teletronic Technical Institute (Evansville, Indiana), rebranding both as Sams campuses.

In 1965, the Sams Technical Institute had 2,300 students across five locations. Sams added the Bramwell Business College to its training division, signaling a move into the broader world of higher education. It was a fast growing business; it must have looked like a great opportunity to a telephone company looking for more ways to diversify. In 1968, ITT purchased the entire training division from Sams, renaming it ITT Educational Services [1].

ITT approached education with the same zeal it had applied to overseas telephone service. ITT Educational Services spent the late '60s and early '70s on a shopping spree, adding campus after campus to the ITT system. Two newly constructed campuses expanded ITT's business programs, and during the '70s ITT introduced formal curriculum standardization programs and a bureaucratic structure to support its many locations. Along with expansion came a punchier name: the ITT Technical Institute.

"Tri-State Businessmen Look to ITT Business Institute, Inc. for Graduates," reads one corner of a 1970 full-page newspaper ad. "ITT adds motorcycle repair course to program," 1973. "THE ELECTRONICS AGE IS HERE. If your eyes are on the future, ITT Technical institute can prepare you for a HIGH PAYING, EXCITING career in... ELECTRONICS," 1971. ITT Tech has always known the value of advertising, and ran everything from full-page "advertorials" to succinct classified ads throughout their growing region.

During this period, ITT Tech clearly operated as a vocational school rather than a higher education institution. Many of its programs ran as short as two months, and they were consistently advertised as direct preparation for a career. These sorts of job-oriented programs were very attractive to veterans returning from Vietnam, and ITT widely advertised to veterans on the basis of its approval (clearly by 1972 based on newspaper advertisements, although some sources say 1974) for payment under the GI Bill. Around the same time ITT Tech was approved for the fairly new federal student loan program. Many of ITT's students attended on government money, with or without the expectation of repayment.

ITT Tech flourished. By the mid-'70s the locations were difficult to count, and ITT had over 1,000 students in several states. ITT Tech was the "coding boot camp" of its day, advertising computer programming courses that were sure to lead to employment in just about six months. Like the coding boot camps of our day, these claims were suspect.

In 1975, ITT Tech was the subject of investigations in at least two states. In Indiana, three students complained to the Evansville municipal government after ITT recruiters promised them financial aid and federally subsidized employment during their program. ITT and federal work study, they were told, would take care of all their living expenses. Instead, they ended up living in a YWCA off of food stamps. The Indiana board overseeing private schools allowed ITT to keep its accreditation only after ITT promised to rework its entire recruiting policy---and pointed out that the recruiters involved had left the company. ITT refunded the tuition of a dozen students who joined the complaint, which no doubt helped their case with the state.

Meanwhile, in Massachusetts, the Boston Globe ran a ten-part investigative series on the growing for-profit vocational education industry. ITT Tech, they alleged, promised recruits to its medical assistant program guaranteed post-graduation employment. The Globe claimed that almost no students of the program successfully found jobs, and the Massachusetts Attorney General agreed. In fact, the AG found, the program's placement rate didn't quite reach 5%. In a settlement, ITT Tech agreed to change its recruiting practices and refund nearly half a million dollars in tuition and fees.

ITT continued to expand at a brisk pace, adding more than a dozen locations in the early '80s and beginning to offer associate's degrees. Newspapers from Florida to California ran ads exhorting readers to "Make the right connections! Call ITT Technical Institute." As the 1990s dawned, ITT Tech enjoyed the same energy as the computer industry, and aspired to the same scale. In 1992, ITT Tech announced their "Vision 2000" master plan, calling for bachelor's programs, 80 locations, and 45,000 students by the beginning of the new millennium. ITT Tech was the largest provider of vocational training in the country.

In 1993, ITT Tech was one of few schools accepted into the first year of the Direct Student Loan program. The availability of these new loans gave enrollment another boost, as ITT Tech reached 54 locations and 20,000 students. In 1994, ITT Tech started to gain independence from its former parent: an IPO sold 17% ownership to the open market, with ITT retaining the remaining 83%. The next year, ITT itself went through a reorganization and split, with its majority share of ITT Tech landing in the new ITT Corporation.

As was the case with so many diversified conglomerates of the '90s (see Beatrice Foods again), ITT's reorganization was a bad portent. ITT Hartford, the spun-out financial services division, survives today as The Hartford. ITT Industries, the spun-out defense contracting division, survives today as well, confusingly renamed to ITT Corporation. But the third part of the 1995 breakup, the ITT Corporation itself, merged with Starwood Hotels and Resorts. The real estate and hospitality side-business of a telephone and telegraph company saw the end of its parent.

Starwood had little interest in vocational education, and over the remainder of the '90s sold off its entire share of ITT Tech. Divestment was a good idea: the end of the '90s hit hard for ITT Tech. Besides the general decline of the tech industry as the dot com bubble burst, ITT Tech's suspect recruiting practices were back. This time, they had attracted federal attention.

In 1999, two ITT Tech employees filed a federal whistleblower suit alleging that ITT Tech trained recruiters to use high-pressure sales tactics and outright deception to obtain students eligible for federal aid. Recruiters were paid a commission for each student they brought in, and ITT Tech obtained 70% of its revenue from federal aid programs. A federal investigation moved slowly, apparently protracted by the Department of Education's nervous approach following the criticism it received for shutting down Computer Learning Centers, a similar operation. In 2004, federal agents raided ITT Tech campuses across ten states, collecting records on recruitment and federal funding.

During the early 2000s, ITT Tech students defaulted on $400 million in federal student loans. The result, that a large portion of ITT Tech's revenue effectively came from loans its students would never repay, attracted ongoing attention. ITT Tech was deft in its legal defense, though, and through a series of legal victories and, more often, settlements, ITT Tech stayed in business.

ITT Tech aggressively advertised throughout its history. In the late '90s and early '00s, ITT Tech's constant television spots filled a corner of my brain. "How Much You Know Measures How Far You Can Go," a TV spot proclaimed, before ITT's distinctive block-letter logo faded onto the screen in metallic silver. By the year 2000, International Telephone and Telegraph, or rather its scattered remains, no longer had any relationship with ITT Tech. Starwood agreed to license the name and logo to the independent, publicly traded ITT Technical Institutes corporation, though, and with the decline of ITT's original business the ITT name and logo became associated far more with the for-profit college than with the electronics manufacturer.

For-profit universities attracted a lot of press in the '00s---the wrong kind of press. ITT Tech was far from unique in suspicious advertising and recruiting, high tuition rates, and frequent defaults on the federal loans that covered that tuition. For-profit education, it seemed, was more of a scam on the taxpayer dollar than a way to secure a promising new career. Publicly traded colleges like DeVry and the University of Phoenix had repeated scandals over their use, or abuse, of federal aid, and a 2004 criminal investigation into ITT Tech for fraud on federal student aid made its future murky.

ITT Tech was a survivor. The criminal case fell apart, the whistleblower lawsuit led to nothing, and ITT Tech continued to grow. In 2009, ITT Tech acquired the formerly nonprofit Daniel Webster College, part of a wave of for-profit conversions of small colleges. ITT Tech explained the purchase as a way to expand their aeronautics offerings, but observers suspected other motives, ones that had more to do with the perceived legitimacy of what was once a nonprofit, regionally accredited institution. A series of suspect expansions of small colleges to encompass large for-profit organizations during the '00s led to a tightening of the rules; today, regional accreditors re-investigate institutions that are purchased.

ITT Tech, numerically, achieved an incredible high. In 2014, ITT Tech reported a total cost of attendance of up to $85,000. I didn't spend that much on my BS and MS combined. Of course, I attended college in impoverished New Mexico, but we can make a comparison locally. ITT Tech operated here as well, and curiously, New Mexico tuition is specially listed in an ITT Tech cost estimate report because it is higher. At its location in Albuquerque's Journal Center office development, ITT Tech charged more than $51,000 in tuition alone for an Associate's in Criminal Justice. The same program at Central New Mexico Community College would have cost under $4,000 over the two years [2].

That isn't even the most remarkable figure, though. A Bachelor's in Criminal Justice would run over $100,000---more than the cost of a JD at the UNM School of Law, for an out-of-state student, today.

In 2014, more than 80% of ITT Tech's revenue came from federal student aid. Their loan default rate was the highest even among for-profit programs. With their extreme tuition costs and notoriously poor job placement rates, ITT Tech increasingly had the appearance of an outright fraud.

Death came swiftly for ITT Tech. In 2016, they were a giant with more than 130 campuses and 40,000 students. The Consumer Financial Protection Bureau sued. State Attorneys General followed, with New Mexico's Hector Balderas one of the first two. The killing blow, though, came from the Department of Education, which revoked ITT Tech's eligibility for federal student aid. Weeks later, ITT Tech stopped accepting applications. The next month, they filed for bankruptcy, chapter 7, liquidation.

Over the following years, the ITT Tech scandal would continue to echo. After a series of lawsuits, the Department of Education agreed to forgive the federal debt of ITT Tech attendees, although a decision by Betsy DeVos to end the ITT Tech forgiveness program produced a new round of lawsuits over the matter in 2018. Private lenders faced similar lawsuits, and made similar settlements. Between federal and private lenders, I estimate almost $4.5 billion in loans to pay ITT Tech tuition were written off.

The Department of Education decision to end federal aid to ITT Tech was based, in part, on ITT Tech's fraying relationship with its accreditor. The Accrediting Council for Independent Colleges and Schools (ACICS), a favorite of for-profit colleges, had its own problems. That same summer in 2016, the Department of Education ended federal recognition of ACICS. ACICS accreditation reviews had been cursory, and it routinely continued to accredit colleges despite their failure to meet even ACICS's lax standards. ITT Tech was not the only large ACICS-accredited institution to collapse in scandal.

Two years later, Betsy DeVos reinstated ACICS to federal recognition. Only 85 institutions still relied on ACICS, among them such august names as the Professional Golfers Career College and certain campuses of the Art Institutes that were suspect even by the norms of the Art Institutes (the Art Institutes folded just a few months ago following a similar federal loan fraud scandal). ACICS lost federal recognition again in 2022. Only time will tell what the next presidential administration holds for the for-profit college industry.

ITT endured a long fall from grace. A leading electronics manufacturer in 1929, a diversified conglomerate in 1960, scandals through the 1970s. You might say that ITT is distinctly American in all the best and worst ways. They grew to billions in revenue through an aggressive program of acquisitions. They were implicated in the CIA coup in Chile. They made telephones and radios and radars and all the things that formed the backbone of the mid-century American electronics industry.

The modern ITT Corporation, descended from spinoff company ITT Industries, continues on as an industrial automation company. They have abandoned the former ITT logo, distancing themselves from their origin. The former defense division became Exelis, later part of Harris, now part of L3Harris, doomed to slowly sink into the monopolized, lethargic American defense industry. German tool and appliance company Kärcher apparently holds a license to the former ITT logo, although I struggle to find any use of it.

To most Americans, ITT is ITT Tech, a so-called college that was actually a scam, an infamous scandal, a sink of billions of dollars in federal money. Dozens of telephone companies around the world, tracing their history back to ITT, are probably better off distancing themselves from what was once a promising international telephone operator, a meaningful technical competitor to Western Electric. The conglomeration of the second half of the 20th century put companies together and then tore them apart; they seldom made it out in as good of condition as they went in. ITT went through the same cycle as so many other large American corporations. They went into hotels, car rentals, then into colleges. They left thousands of students in the lurch on the way out. When ITT Tech went bankrupt, other schools had already started the semester and weren't accepting applicants. They wouldn't have accepted transfer credit from ITT anyway; ITT's accreditation was suspect.

"What you don't know can hurt you," a 1990s ITT Tech advertisement declares. In Reddit threads, ITT Tech alums debate if they're better off telling prospective employers they never went to college at all.

[1] Sources actually vary on when ITT purchased Sams Training Institute, with some 1970s newspaper articles putting it as early as 1966, but 1968 is the year that ITT's involvement in Sams was advertised in the papers. Further confusing things, the former Sams locations continued to operate under the Sams Technical Institute name until around 1970, with verbiage like "part of ITT Educational Services" inconsistently appearing. ITT may have been weighing the value of its brand recognition against Sams' but apparently made a firm decision during 1970, after which ads virtually always used the ITT name and logo above any other.

[2] Today, undergraduate education across all of New Mexico's public universities and community colleges is free for state residents. Unfortunately 2014 was not such an enlightened time. I must take every opportunity to brag about this remarkable and unusual achievement in our state politics.

--------------------------------------------------------------------------------

>>> 2023-12-05 vhf omnidirectional range

VORTAC site

The term "VHF omnidirectional range" can at first be confusing, because it includes "range"---a measurement that the technology does not provide. The answer to this conundrum is, as is so often the case, history. The "range" refers not to the radio equipment but to the space around it, the area in which the signal can be received. VOR is an inherently spatial technology; the signal is useless except as it relates to the physical world around it.

This use of the word "range" is about as old as instrument flying, dating back to the first radionavigation devices in the 1930s. We still use it today, in the somewhat abstract sense of an acronym that is rarely expanded: VOR.

This is Truth or Consequences VOR. Or, perhaps more accurately, the transmitter that defines the center of the Truth or Consequences VOR, which extends perhaps two hundred miles around this point. The range can be observed only by instruments, but it's there, a phase shift that varies like terrain.

The basic concept of VOR is reasonably simple: a signal is transmitted with two components, a 30 Hz tone in amplitude modulation and a 30 Hz tone in frequency modulation. The two tones are out of phase by an amount that is determined by your position in the range, and more specifically by the radial from the VOR transmitter to your position. This apparent feat of magic, a radio signal that is different in different locations, is often described as "space modulation."
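If the "two tones out of phase" idea feels too abstract, here's a small numerical demonstration in Python. It glosses over everything that makes a real receiver interesting (the FM component rides on a 9960 Hz subcarrier, there's an RF carrier to demodulate first, and so on); it just shows that measuring the phase difference between two 30 Hz tones gives you back the radial. The numbers are made up for the example.

    import numpy as np

    fs = 10_000                       # sample rate in Hz, arbitrary for the demo
    t = np.arange(0, 1, 1 / fs)       # one second of signal
    radial_deg = 135                  # where our hypothetical aircraft sits

    reference = np.cos(2 * np.pi * 30 * t)                          # same everywhere
    variable = np.cos(2 * np.pi * 30 * t - np.deg2rad(radial_deg))  # lags by the radial

    # Measure each tone's phase at 30 Hz by correlating against a complex
    # exponential, then take the difference.
    probe = np.exp(-1j * 2 * np.pi * 30 * t)
    phase_ref = np.angle(np.sum(reference * probe))
    phase_var = np.angle(np.sum(variable * probe))
    print(f"{np.rad2deg(phase_ref - phase_var) % 360:.1f}")   # 135.0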

The first VOR transmitters achieved this effect the obvious way, by rapidly spinning a directional antenna in time with the electronically generated phase shift. Spinning anything quickly becomes a maintenance headache, and so VOR quickly transitioned to solid-state techniques. Modern VOR transmitters are electronically rotated, by one of two techniques. They rotate in the same sense that images move on a screen: a set of discrete changes in a solid-state system that produces the effect of rotation.

Warning sign

The Truth or Consequences VOR operates on 112.7 MHz, near the middle of the band assigned for this use. Named for the nearby Truth or Consequences Airport, KTCS, it identifies itself by transmitting "TCS" in Morse code. Modern charts give this identifier in dots and dashes, an affordance to the poor level of Morse literacy among contemporary pilots.

In the airspace, it defines the intersection of several airways. They all go generally north-south, unsurprising considering that the restricted airspace of White Sands Missile Range prevents nearly all flight to the east. Flights following the Rio Grande, most north-south traffic in this area, will pass directly overhead on their way to VOR transmitters at Socorro or Deming or El Paso, where complicated airspace leads to two such sites very nearby.

This is the function that VORs serve: for the most part, you fly to or from them. Because the radial from the VOR to you stays constant as long as you fly directly toward or away from it, a VOR provides a reliable and easy-to-use indication that you are still on the right track. A warning sign, verbose by tradition, articulates the significance:

This facility is used in FAA air traffic control. Loss of human life may result from service interruption. Any person who interferes with air traffic control or damages or trespasses on this property will be prosecuted under federal law.

The sign is backed up by a rustic wooden fence. Like most VOR transmitters, this one was built in the late 1950s or 1960s. The structure has seen only minimal changes since then, although the radio equipment has been improved and simplified.

Antennas

The central, omnidirectional antenna of a VOR transmitter makes for a distinctive silhouette. You have likely noticed one before. I must admit that I have somewhat simplified; most of the volume of the central antenna housing is actually occupied by the TACAN antenna. Most VOR sites in the US are really VORTAC sites, combining the civilian VOR and military TACAN systems into one facility. TACAN has several minor advantages over VOR for military use, but one big advantage: it provides not only a radial but a distance. The same system used by TACAN for distance information, based on an unusual radio modulation technique called "squitter," can be used by civilian aircraft as well in the form of DME. VORTAC sites thus provide VOR, DME, and TACAN service.
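The distance part works on a simple time-of-flight principle: the aircraft sends interrogation pulses, the ground transponder replies after a fixed delay, and the airborne unit converts the round-trip time into distance. A back-of-the-envelope version, assuming the fixed 50 microsecond reply delay I believe DME transponders use:

    # Rough illustration of DME-style ranging; the 50 us reply delay is my
    # recollection of the standard value, and real equipment does a lot of
    # pulse-pair matching that this ignores.
    C_KM_PER_US = 0.299792458      # speed of light, km per microsecond

    def dme_distance_km(round_trip_us: float, reply_delay_us: float = 50.0) -> float:
        return (round_trip_us - reply_delay_us) * C_KM_PER_US / 2

    print(f"{dme_distance_km(1050.0):.0f} km away")    # about 150 km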

True VOR sites, rare in the US but plentiful across the rest of the world, have smaller central antennas. If you are not used to observing the ring of radial antennas, you might not recognize them as the same system.

The radial antennas are placed in a circle some distance away, leaving open space between them. This reduces, but does not eliminate, the effect of each antenna's radiated power being absorbed by its neighbors. They are often on the roof of the equipment building, and may be surrounded by a metallic ground plane that extends still further. Most US VORTAC sites, originally built before modern RF technology, rely on careful positioning on suitable terrain rather than a ground plane.

Intriguingly, the radial antennas are not directional designs. In a modern Doppler VOR, each antenna in the ring transmits the same signal, and the transmitter rapidly switches which of them is active, stepping around the ring. The space modulation is created not by rotating a directional antenna, but by moving the apparent source through a circular path and letting the Doppler effect vary the phase of the received signal.
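Because I find the Doppler trick delightful, here's a toy version of that too. This is not how a DVOR is actually built (the ring carries a subcarrier, the receiver runs an FM demodulator, and I'm waving away sign conventions and the real commutation details); it only shows the geometric core of the idea: hop a source around a ring 30 times a second and the path length to a distant receiver wobbles at 30 Hz, with a phase equal to the receiver's bearing.

    import numpy as np

    fs = 100_000
    t = np.arange(0, 1, 1 / fs)
    rot_hz = 30              # commutation rate: trips around the ring per second
    ring_radius = 6.7        # metres; roughly the scale of a DVOR ring, from memory
    bearing_deg = 240        # where our distant receiver sits

    # Angle of the currently active antenna as it moves around the ring.
    alpha = 2 * np.pi * rot_hz * t
    # The carrier arrives a little earlier whenever the active antenna is on
    # the receiver's side of the ring; this 30 Hz wobble is what the receiver
    # ultimately recovers from the FM component.
    wobble = ring_radius * np.cos(alpha - np.deg2rad(bearing_deg))

    # The reference tone is radiated identically in all directions.
    reference = np.cos(2 * np.pi * rot_hz * t)

    probe = np.exp(-1j * 2 * np.pi * rot_hz * t)
    phase_ref = np.angle(np.sum(reference * probe))
    phase_wobble = np.angle(np.sum(wobble * probe))
    print(f"{np.rad2deg(phase_ref - phase_wobble) % 360:.1f}")   # 240.0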

Central Antenna

The lower part of the central antenna, the more cone shaped part, is mostly empty. It encloses the structure that supports the cylindrical radome that houses the actual antenna elements. In newer installations it is often an exposed frame, but the original midcentury sites all provide a conical enclosure. I suspect the circular metallic sheathing simplified calculation of the effective radiation pattern at the time.

An access door can be used to reach the interior to service the antennas; the rope holding this one closed is not standard equipment but is perhaps also not very unusual. These are old facilities. When this cone was installed, adjacent Interstate 25 wasn't an interstate yet.

Monitor antennas

Aviation engineers leave little to chance, and almost never leave a system without a spare. Ground-based infrastructure is no exception. Each VOR transmitter is continuously tested by a monitoring system. A pair of antennas mounted on a post near the fence line feeds redundant monitors, which check that the signal received at this fixed, known point indicates the correct radial. If a failure or an out-of-tolerance signal is detected, the monitor switches the antennas over to a second, redundant set of radio equipment. The problem is reported to the FAA, and Tech Ops staff are dispatched to investigate.

Occasionally, the telephone lines VOR stations use to report problems are, themselves, unreliable. When Tech Ops is unable to remotely monitor a VOR station, they issue a NOTAM that it should not be relied upon.

Rear of building

The rear of the building better shows its age. The wall is scarred where old electrical service equipment has been removed; the weather-tight light fixture is a piece of incandescent history. It has probably been broken for longer than I have been alive.

A 1,000-gallon propane tank to one side supplies the generator in the enclosure in case of a power failure. Records of the Petroleum Storage Bureau of the New Mexico Environment Department show that an underground fuel tank was present at this site but has been removed. Propane is often selected for newer standby generator installations where an underground tank, no longer up to environmental safety standards, had to be removed.

It is indeed in its twilight years. The FAA has shut down about half of the country's VOR transmitters. TCS was spared this round, as were all but one of the VOR transmitters in sparsely covered New Mexico; it is part of the "minimum operational network." It remains to be seen how long VOR's skeleton crew will carry on. A number of countries have now announced the end of VOR service. Another casualty of satellite PNT, joining LORAN wherever dead radio systems go.

Communications tower

The vastness and sparse population of southern New Mexico pose many challenges. One the FAA has long had to contend with is communications. Very near the Truth or Consequences VOR transmitter is an FAA microwave relay site. This tower is part of a chain that relays radar data from southern New Mexico to the air route traffic control center in Albuquerque.

When it was first built, the design of microwave communications equipment was much less advanced than it is today. Practical antennas were bulky and often pressurized for water tightness. Waveguides were expensive and cables were inefficient. To ease maintenance, shorten feedlines, and reduce tower loading, the actual antennas were installed on shelves near the bottom of the tower, pointing straight upwards. At the top of the tower, two passive reflectors acted like mirrors to redirect the signal into the distance. This "periscope" design was widely used by Western Union in the early days of microwave data networking.

Today, this system is partially retired, replaced by commercial fiber networks. This tower survives, maintained under contract by L3Harris. As the compound name suggests, half of this company used to be Harris, a pioneer in microwave technology. The other half used to be L3, which split off from Lockheed Martin, which had bought it when it was called Loral. Loral was a broad defense contractor, but had its history and focus in radar, another application of microwave RF engineering.

Two old radio sites, the remains of ambitious nationwide systems that helped create today's ubiquitous aviation. A town named after an old radio show. Some of the great achievements of radio history are out there in Sierra County.

--------------------------------------------------------------------------------