Programming note: the subscribe link was broken for a while because I am
bad at computers (yet another case of "forgot to enable the systemd unit").
It's fixed now. The unsubscribe link was also broken and is now fixed but,
you know, maybe that was a feature. Did wonders for reader retention.
You may have seen some recent press coverage about events surrounding the
Titanic and another notable loss at sea. I'm not going to rehash much of
anything around the Titan because it's sort of an exhaustively covered topic
in the mainstream press... although I will defend the Logitech controller by
noting that PlayStation-style controllers are extremely popular interfaces in
robotics and 3D navigation (two symmetric analog sticks, unlike other major
game controllers), and considering the genuine PS4 controller's terrible
Bluetooth pairing UX with non-PlayStation devices, the Logitech is probably a
more reliable choice. And they did have spares on board!
I actually want to talk a bit about remote sensing, but of a rather different
kind than I usually mention: hydrophones and wide-area sonar. This
little-discussed military surveillance technology played a major role in the
saga of the Titan, and it's one that seems poorly understood by both
journalists and internet randos. I've seen a lot of Bad Takes about the Navy's
involvement in Titan and I want to suggest a few things that might cause you
to interpret the situation differently.
Submarines are very difficult to detect. This is a bad property for tourist
ventures to the deep sea, but a very useful property to the military. Further,
radio communications underwater are extremely difficult. Salt water attenuates
radio signals very quickly, and while the effect decreases as you go to lower
frequencies, it never goes away. Even the US Navy's sophisticated VLF systems
require submarines to be relatively close to the surface (or rather to trail a
wire antenna relatively close to the surface) for reception---VLF signals only
penetrate seawater to a depth of about 40 meters. ELF offers better
penetration, to hundreds of meters, but ELF facilities are extremely expensive to build and
operate and the receive antennas are formidably large, so the US Navy retired
its ELF infrastructure in 2004.
For this reason, submersibles like Titan communicate with their surface
support vessels via acoustic modems. This method is surprisingly reliable but
produces a very low bitrate, hence the limitation to text messaging. Similar
technology is used in deep-sea oil exploration; Titan likely used a
commercial product for the data link.
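To put "very low bitrate" in perspective, here's a back-of-the-envelope
sketch (the rates are invented for illustration; I don't know the actual
parameters of Titan's link):

    # Rough time to send a short text message over an acoustic link.
    # The rates here are illustrative, not any particular modem's specs.
    message_bits = 100 * 8                # a 100-character ASCII message
    for rate_bps in (80, 1200, 9600):
        print(f"{rate_bps:>5} bps: {message_bits / rate_bps:6.1f} s")

Even at the slow end a short status message gets through in seconds; the
practical consequence is just that nothing richer than text is feasible.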
The thing that propagates best underwater, in fact far better than above water
and even better as you get deeper, is sound. The potential of sound for
detecting and locating submarines is well-known. The first prominent use of
this approach, widely called sonar, came about during the First World War when
an anti-submarine surface ship successfully detected a submarine directly below
it via reflected sound. This type of sonar works well for locating nearby
submarines, but it is an active technique. That is, an active sonar must
emit a sound in order to receive the reflection. This is actually quite
undesirable for many military applications, because emitting a sound reveals
the presence (and with sufficient receiving equipment, location) of the sonar
device. Anti-submarine ships stopped using active sonar on a regular basis
fairly quickly, since it prominently advertised their presence to all of the
submarines in the area.
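The arithmetic of echo ranging is at least pleasantly simple: sound travels
at roughly 1500 m/s in seawater, and the echo covers the distance twice.

    # Range from echo delay, as a one-liner (delay value invented).
    round_trip_s = 2.4
    print(f"target range: {1500 * round_trip_s / 2:.0f} m")   # -> 1800 m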
Much more appealing is passive sonar, which works by listening for the sounds
naturally created by underwater vehicles. With a sensitive directional
hydrophone (an underwater microphone), you can hear the noise created by the
screws of a submarine. By rotating the directional hydrophone, you can find the
point of peak amplitude and thus the bearing to the submarine. This basic
submarine hunting technique remains the state of the art today, but the receiving
equipment has become far more capable and automated.
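As a toy illustration of the bearing-finding idea (numbers invented, and
modern systems beamform across hydrophone arrays rather than literally
rotating a single hydrophone):

    import numpy as np

    rng = np.random.default_rng(0)
    bearings = np.arange(0, 360, 5)    # degrees swept by the hydrophone
    true_bearing = 215                 # hypothetical target bearing

    # Simulated directional response: a broad lobe toward the target
    # plus background sea noise.
    lobe = np.cos(np.radians(bearings - true_bearing)).clip(min=0) ** 4
    amplitude = lobe + 0.2 * rng.random(bearings.size)

    # "Rotate to the point of peak amplitude": the bearing estimate is
    # simply the direction with the strongest received signal.
    print("estimated bearing:", bearings[np.argmax(amplitude)], "degrees")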
There is an arms race here, an arms race of quietness. I am resisting here the
urge to quote the entire monologue from the beginning of The Hunt for Red
October, but rest assured that [the Americans] will tremble again, at the
sound of [the Soviets'] silence. In practice the magnetohydrodynamic propulsion
technology depicted on the Red October has never proven very practical for
submarines, although it was demonstrated in one very futuristic surface vessel
built by Mitsubishi and called Yamato 1 (fortunately it fared better than the
battleship by that name). Instead, the battle of submarine silence has mostly
revolved around obscure technical problems of fluid dynamics, since one of the
loudest noises made by submarines is the cavitation around the screw. I don't
know if this is true today, but at least years ago the low-noise design of the
screw on modern US submarines was classified, and so the screw was covered by a
sheath whenever a submarine was out of the water.
Passive sonar can be performed from ships and even aircraft-deployed buoys, but
for the purpose of long-term maritime sovereignty it makes sense to install
permanent hydrophones that function as a defensive perimeter. Just such a
system was designed in the 1950s by (who else?) AT&T. AT&T had the expertise
not only in acoustic electronics, but also undersea cable laying, a key
component of any practical underwater surveillance system. Large arrays of
hydrophones, spaced along cables, were laid on the ocean floor. The sounds
detected by these hydrophones were printed on waterfall diagrams and inspected
by intelligence analysts, who relied on experience and no small amount of
educated guessing to recognize different types of marine life, geological
phenomena, and vessels at sea.
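For intuition, a waterfall display is just repeated spectra stacked in
time. A toy version of the idea (all parameters invented; the processing
behind the real printed displays was far more sophisticated):

    import numpy as np

    fs = 1000                            # sample rate, Hz (made up)
    t = np.arange(0, 10, 1 / fs)
    # Hypothetical hydrophone signal: a 60 Hz machinery tonal in noise.
    signal = 0.5 * np.sin(2 * np.pi * 60 * t) + np.random.randn(t.size)

    block = 1000                         # one spectrum per second
    for i in range(0, t.size - block + 1, block):
        window = signal[i:i + block] * np.hanning(block)
        spectrum = np.abs(np.fft.rfft(window))
        # One printed row per block; darker characters are louder bins.
        print("".join(" .:*#"[min(4, int(v / 40))] for v in spectrum[:100]))

A steady tonal shows up as a dark vertical stripe down the page, exactly
the sort of feature analysts learned to pick out.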
This system, called SOSUS for Sound Surveillance System, remained secret until
1991. The secrecy of SOSUS is no great surprise, as it was one of the most
important military intelligence systems of the Cold War. It presented a problem
as well, though, as few in the Navy were aware of the details of the system and
ship crews sometimes found the abbreviated, zero-detail intelligence messages
from SOSUS confusing and unreliable. They were being told of likely
submarine detections, but knowing nothing about the system they had come from,
they didn't know whether or not to take them seriously.
By the 1960s, SOSUS consisted of hundreds of individual hydrophones installed
in long, cable-tethered arrays. Cables connected the hydrophone arrays to
highly secured terminal facilities on the coast, which the Navy explained with
a rather uninspiring cover story about undefined survey work. Over the
following decades, computers were applied to the task, automatically detecting
and classifying acoustic signatures. This early automation work inspired
significant research and development on signal processing and pattern matching
in both the military and Bell Laboratories, creating early precedents for the
modern field of machine learning. Additionally, computer and telecommunications
advancements allowed for remote control of the arrays, significantly reducing
the staff required for the program and leading to the eventual closure of many
of the terminal naval facilities.
In 1984, SOSUS was renamed to IUSS, the Integrated Underwater Surveillance
System. This new name reflected not only the increasing automation, but also
the inclusion of several surface vessels in the system. These vessels,
initially the USNS Stalwart and USNS Worthy, functioned as mobile IUSS
arrays and could be moved around to either expand coverage or provide more
accurate locating of a suspected target.
The existence of IUSS was finally declassified in 1991, although it was well
known before that point due to several prominent press mentions. Since the
declassification of IUSS it has enjoyed a dual-use role with the scientific
research community, and IUSS is one of the primary sources of hydrophone
data for marine biology. Today, IUSS automatically detects and classifies
both submarines and whales.
The potential of passive sonar systems to detect submarine accidents is
well-known. The 1968 loss of Soviet submarine K-129 was detected by SOSUS,
and the location estimate produced by SOSUS facilitated the recovery of K-129
by the Hughes Glomar Explorer, one of the most fascinating naval intelligence
operations in American history. 1968 was a bad year for submarines, with four
lost with all hands, and SOSUS data was used to locate two of them (the Soviet
K-129 and the US Scorpion; the French Minerve and the Israeli Dakar would not
be found for decades).
This all brings us to the modern era. Titan was lost on, presumably, the
18th of June. It was not located on the sea floor until the 22nd, four days
later. Press reporting after the discovery included a Navy statement that
IUSS had detected and located the implosion.
This has led to a somewhat common internet hot take: that the Navy had
definitive information on the fate of Titan and, for some reason, suppressed
it for four days. I believe this to be an unwarranted accusation, and the
timing of the location of the wreck and the statement on IUSS are readily
explainable.
First, we must consider the nature of remote sensing. Remote sensing systems,
whether space-based or deep underwater, produce a large volume of data. The
primary source of actionable information in modern real-time remote sensing
are computer systems that use machine learning and other classification
methods to recognize important events. These computer systems must be trained
on those events, using either naturally or artificially created samples, in
order to correctly classify them. A major concern in naval intelligence is the
collection of up-to-date acoustic signatures for contemporary vessels so that
IUSS can correctly identify them.
A secondary method is retrospective analysis, in which human intelligence
analysts review historic data to look for events that were not classified by
automation when they occurred. Retrospective analysis, particularly with new
signature information, can often yield additional detections. Consider the case
I have previously discussed of the Chinese spy balloons: once signature
information (almost certainly RF emissions) was collected, retrospective
analysis yielded several earlier incidents that were not detected at the time
due to the lack of signatures.
Like the RF spectrum, the ocean is full of noise: it comes from wildlife,
from geological processes, and from commercial shipping, all quite apart from
naval operations. The Navy does not rigorously investigate every sound
underwater; it can't possibly do so.
When the Navy became aware of the missing Titan, analysts almost certainly
began a retrospective analysis of IUSS data for anything that could indicate
its fate. They apparently detected loud noises and were able to locate the
source as near the Titanic wreckage, probably fairly quickly after the
Titan was first reported missing.
Here is the first challenge, though: the Titan was a new submersible of novel
(if not necessarily well thought out) construction. The Navy has some
familiarity with the acoustic signatures of imploding military submarines based
on incidentally lost submarines and, in at least one case, the intentional
torpedoing of a submarine to record the resulting acoustics (the Sterlet).
This data is used to produce a signature against which new signals can be
compared. Because of the significant differences in size and construction
between Titan and military submarines, the Navy likely had very low
confidence that known acoustic signatures of catastrophic losses were
applicable. The total number of submarines to have ever imploded underwater is
quite small, and none were of similar size and construction to Titan. The
point is that while intelligence analysts likely suspected they had evidence
of implosion, they probably had low confidence in that conclusion.
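To make "compared against a signature" concrete, here is a cartoon of the
idea using normalized cross-correlation. Real classification works on
spectral and transient features rather than raw waveforms, so take this as
a sketch of the principle only:

    import numpy as np

    def signature_score(recording, signature):
        # Peak normalized cross-correlation: near 1.0 means the recording
        # contains something closely matching the known signature.
        sig = (signature - signature.mean()) / signature.std()
        rec = (recording - recording.mean()) / recording.std()
        corr = np.correlate(rec, sig, mode="valid") / sig.size
        return float(np.abs(corr).max())

    # Hypothetical test: embed a known "signature" in a longer noisy
    # recording and score the match.
    rng = np.random.default_rng(1)
    known = rng.standard_normal(500)
    event = np.concatenate([rng.standard_normal(300), known,
                            rng.standard_normal(300)])
    print(f"match score: {signature_score(event, known):.2f}")   # ~1.0

Low confidence, in these terms, is a mediocre score against everything on
file---which is roughly the position the Navy's analysts were in.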
It is unwise, in the course of a search and rescue operation, to report that
you think the vessel was irrecoverably lost. Doing so can compromise search
operations by creating political pressure to end them, while making the
situation of families and friends worse. It is customary to be very cautious
with the release of inconclusive information in events like this. The problems
are exemplified by the Coast Guard's announcement that another passive sonar
system had detected possible banging sounds, which motivated a lot of reporting
making wild conclusions based on acoustic signatures that were likely
unrelated.
The more damning accusation, though, is this: did the Navy withhold information
on the detection from searchers out of concern for secrecy? Setting aside that
this makes little sense considering that SOSUS and its capabilities have been
widely known to the public for decades, and the search site was well within
historically published coverage estimates for SOSUS, this accusation doesn't
align with the timeline of the search.
The first search vessel capable of deep undersea exploration, Pelagic Research
Services' ROV Odysseus 6k, arrived on the scene on the morning of the 22nd.
Just five hours
later, Odysseus had located the wreckage. Considering that the descent to depth
alone would have taken Odysseus over an hour, the wreckage was located extremely
quickly in the challenging undersea environment. One reason is obvious:
the wreckage of Titan was close to the Titanic, although the Titanic debris
field is large and searching it all would have taken hours. The second reason
became known shortly after: when Odysseus began its search, its operators had
almost certainly already been tipped off by the Navy as to the location of the
possible implosion.
The Navy did not withhold information on the detection for four days out of some
concern for secrecy. Instead, the information was not known to the public for
four days because that was when the search team was first able to actually
investigate the Navy's possible detection.
Indeed, the idea that the Navy suppressed the information seems to come only
from the rumor mill and internet repetition of half-read headlines. The
original press coverage of the IUSS detection, from the WSJ, states that the
Navy reported the finding to the Navy commander on-scene at the search effort
immediately. It does include the amusing sentence that "the Navy asked that the
specific system used not be named, citing national security concerns." This
might seem like a huge cover up to those unfamiliar with intelligence programs,
but it's perfectly in line with both normal military concerns around classified
systems (which are often known by multiple names which must be kept
compartmentalized for unclassified contracting) and the specific history of
IUSS, which during its period of secrecy had problems with being accidentally
named in unclassified reports multiple times.
IUSS is now a smaller system than it once was, although with improving
technology its coverage has probably expanded rather than contracted. It still
serves as a principal method of detecting submarines near the US, an
important concern since submarines are one of the main delivery mechanisms
for nuclear weapons. IUSS is just one of several semi-secret underwater sensing
systems used by the Navy.
A not totally related system that will nonetheless be of interest to many of my
readers (who I suspect to be somewhat concentrated in the San Francisco Bay
Area) is the San Francisco Magnetic Silencing Range. A small building in the
parking lot of Marina Green, complete with a goofy little control tower from
the era of manned operation, is the above-water extent of this system that uses
underwater magnetometers to measure the magnetic field of Navy vessels passing
through the Golden Gate. Since underwater mines are often triggered by
magnetometers, the Navy ensures that the magnetization of vessel hulls does not
exceed a certain limit. If it does, the vessel can be degaussed at one of
several specially-equipped Navy berths---inspiration for at least one episode
of The Next Generation. Similar arrays exist at several major US ports.
The building itself is long-disused, and the array is now fully remote
controlled. When I lived in San Francisco it was abandoned, but I see that it
has apparently been restored to function as the harbormaster's office. I
appreciate the historic preservation effort but something is lost with the
removal of the Navy's sun-faded signage.
Occasionally, research into the history of telephony takes you into some
strange places. There are conspiracy theories, of course, and there are people
who insist on their version of events so incessantly that details of dates and
places can become heated arguments. There is also the basic nature of the
internet: the internet has a wealth of historical information but it is
scattered across many sources of varying quality. Part of the role of the
historian has always been assessing the credibility of sources, but this is
particularly difficult in fields like technology history where so much
information comes from archived AOL customer homepages (perhaps the best
sources there are) and Usenet discussions (rarely correct about anything).
And then there is the Beatrice Foods Company.
I first discovered this oddity of the internet sometime last year, but I was
reminded of it on a recent trip in Canada (I will probably write something
about this, but the objective was mostly to hike on mountains, and urban^wrural
exploration of an AT&T microwave relay site was only incidental). In Canada, at
least in eastern British Columbia, Beatrice is a major dairy brand. Like A&W
(charming slogan: "American Food"), Canadian Beatrice had the good sense to
become completely independent of its American parent in 1978. It did not have
quite the success that A&W did but after a series of acquisitions the Beatrice
brand name (and its charmingly '80s futurist logo) remains commonplace under
the Canadian division of French dairy conglomerate Lactalis.
Here in these United States, though, the situation is quite different.
The Beatrice Creamery was established in 1894, Wikipedia tells us, named after
its home town of Beatrice, Nebraska. This Beatrice, in the late 19th to early
20th centuries, was a runaway success. A 1913 move to Chicago, and ongoing
acquisitions of dairies and food processors, made Beatrice a major food company
across the nation. Ongoing ambition through the Second World War and post-war
expansion made Beatrice, like many large companies of its era, a conglomerate.
At its peak, in perhaps 1980, Beatrice was one of the great institutions of
American industry. Consumer brands owned and operated by Beatrice in the '70s
and '80s included food brands like Tropicana, Meadow Gold, Shedd's (now Country
Crock), and Hunt-Wesson (perhaps best known for their ketchup). Beatrice seemed
to find more revenue, though, in non-food consumer brands. After their 1984
acquisition of consumer conglomerate Esmark, the Beatrice empire included Avis
car rental, consumer electronics manufacturer Jensen, automotive brand STP, and
diversified women's goods manufacturer Playtex. Beatrice introduced their own
logo to advertising for their many products, and sponsored auto racing teams.
The company seemed unstoppable but, as was the case with many of its peers,
Beatrice had achieved impressive growth on the back of escalating debt.
In 1985, Beatrice was struggling to service the debt they had taken on to
acquire Esmark. The conglomerate started to break up. Beatrice's efforts to
save itself by divesting its entire chemical division (brands included Stahl
and STP) and a few odd items like World Dryer (manufacturer of those older hot
air hand dryers found in every rest stop bathroom) weren't enough to salvage
the company's stock value. Investment firm KKR mounted a leveraged buyout---the
largest in US history at the time---in 1986. The purchase price was $8.7
billion, and KKR's plan was a fire sale.
They sold Beatrice's bottled water division (Arrowhead, Ozarka) back to
Coca-Cola, under whose license those brands had operated. They sold Playtex to a group
of investors who spun it out. They sold Beatrice's entire dairy division, the
historical core of the company, to Borden (another large dairy conglomerate
known for having invented condensed milk). A hodgepodge of consumer brands
(Samsonite, Culligan, Day-Timer) were combined into a new entity and sold.
Tropicana was sold. International operations (besides Canada) were combined and
sold to a new conglomerate. Canadian operations, independent for decades, were
sold into Canadian ownership. Finally, in 1987, KKR took most of what remained
and sold it to Conagra.
Up to this point I have basically been summarizing Wikipedia and this is where
it ends. Wikipedia reassures us that most Beatrice brands still exist under
different ownership, but it leaves hanging an interesting question: what of the
Beatrice Foods Company itself? The article offers only a mystery: "The original
Beatrice Companies went dormant in the late 1980s, but was revived in 2007."
Today, the Beatrice website (beatriceco.com) presents a strange front. A hero
image shows a wide variety of consumer food brands that were once owned by
Beatrice, but aren't any more. At first, I thought the website might simply
have been carried forward from before divestiture, but no... the banner image
contains products that Beatrice hasn't owned since the late '80s, in much newer
packaging. The "Contact Us" page lists five offices and phone numbers, several
noting the consumer brand that used to operate from that office. What brings
me there, though, is "The Porticus Centre": a reasonably useful (although not
especially unique) archive of historic materials on AT&T and the Bell System,
besides Beatrice Foods itself, and Borden, who acquired the dairy division.
One of the oldest news releases on the Beatrice Companies website, dated 2009,
announces the merger of Beatrice and The Porticus Centre. Porticus apparently
has a long history (in internet terms), as the release mentions its 2003 award
from USA Today. This makes Porticus older than the current Beatrice website,
which seems to have appeared for the first time in 2008. The press release
describes Beatrice as a multi-brand food manufacturer (as far as I can tell, in
2007, there were no food brands operating under the Beatrice name or
ownership), but also as an installer of Avaya PBX systems and structured
cabling.
Here's the complication: the Beatrice Companies of today may (attempt to) trade
on the reputation and brands of the historic Beatrice Companies, but it bears
almost no relationship whatsoever to them.
The Porticus Centre, and specifically its "Bell System Memorial" website
(bellsystemmemorial.com), dates back to at least 2002 and I am told really to
1997. It was originally maintained by Dave Massey, but for whatever reason he
handed the reins to Ben Jackson in 2005. Around a year later, the website
changed into an announcement that it had "changed IP addresses" (I am confused
by this language as well), and redirected viewers to a copy hosted by The
Porticus Centre. By 2007, both Porticus and Beatrice listed one DeWitt Hoopes
as their president.
I had hoped to explain exactly how Hoopes, a resident of Phoenix AZ, came to be
the President and CEO of a once multi-billion dollar company, but this key
detail has stubbornly eluded me. He seems to have had a relationship with
several people in the Arizona food industry and at some points has called
himself an investor, and I speculate that he may have bought the largely
forgotten brand for a bargain. Press releases on Beatrice's website start in
2008, somberly noting the death of the creator of a line of herbal salad
dressing. This salad dressing seems to be one of the only food products
ever manufactured by Beatrice under Hoopes.
A press release the next year announces that Beatrice's structured cabling
business is being divested, to satisfy a decision by the board of directors to
eliminate Beatrice's non-food ventures. Beatrice still advertises structured
cabling today and so it's not clear if this sale actually happened. The release
does raise an obvious question: who is on the board of directors? Not easy to
answer, as most corporate entities related to Beatrice went defunct in the
'80s. An active Arizona foreign entity registration gives an address in
Chicago, but the Illinois Beatrice company is long defunct. A few active
Wyoming corporations with Beatrice in their names are clearly related (listing
the same Chicago address) but list only Hoopes as director and incorporator.
What do we know about DeWitt Hoopes? The website of his personal business,
DeWitt Hoopes LLC, gives the headline "Mac & Linux Programming and Service." He
recommends Ubuntu, which apparently can make the internet up to three times
faster. Hoopes' background seems to explain Beatrice's odd focus on technology
products, and the decision to divest from technology was apparently reversed as
later press releases show Beatrice doubling down.
A 2011 release, two years after Beatrice announced the end of its structured
cabling business, tells us that it has selected Anixter as its structured
cabling supplier. Anixter is a major electrical supply house and this situation
seems not dissimilar to my announcing that I have selected The Home Depot as my
new structured cabling supplier, and this theory of making your shopping trip
into a press release seems backed by a quote from Hoopes: "With our previous
vendor, we just were not getting the support and pricing that Beatrice needed,
and it was only getting worse." Take that, Graybar. The same year, Beatrice
announced that the Porticus Centre and its Avaya PBX business were being moved
from Beatrice Consumer Products, Inc. to Beatrice Technologies, Inc. Both are
incorporated in Wyoming, but oddly, Beatrice Technologies was made inactive on
account of overdue tax filings. None list any directors other than Hoopes.
I must offer the disclaimer that, while I cannot locate a good-standing
corporate registration for Beatrice Companies, Inc., it is possible that one
exists. Because of the many states involved, researching corporations in the
US can be tricky, and the fact that Beatrice's once great history leads to
inactive foreign filings in nearly every US state with dates ranging from the
1930s to the 1980s only makes it more difficult. What I can tell you is this:
Beatrice seems to exist in and do business almost exclusively in Arizona, where
it has only a foreign entity registration (but one in good standing). Most
Beatrice entities seem to have been incorporated in Wyoming, although many are
now inactive. All give the same address, in Chicago, which is the type of
address that appears on many dozens of business registrations (the building
contains at least a couple of law firms, the suite number on the corporate
filings likely belongs to a registered agent or incorporation service). A
surprising number of these corporate entities show filing dates of 2017, well
after they are mentioned in Beatrice press releases. That may very well just be
a quirk of Wyoming's online entity search, but it's certainly odd.
The full set of corporate entities associated with Beatrice and Hoopes is hard
to follow. The customary "About Beatrice" trailer on every press release seems
to list a different set of subsidiaries almost every time, but sometimes omits
any details and instead says only "eight business units" or similar. I have also
found corporate filings for entities that I have never seen mentioned on the
Beatrice website or press releases. The number of business lines attached to
Beatrice seems large and ever-changing. A 2013 press release announces that
Beatrice Technologies is being reorganized, and as a result will no longer offer
residential cabling or computer maintenance.
Perhaps like Beatrice of the '80s, Beatrice of the '00s seemed to have
overextended. A series of press releases announces the divestiture of its web
design business, its PBX business, and the "physical assets" of the Porticus
Centre (this seems to have included various historic documents from Bell
companies). It's hard to tell what business ventures were actually operational
in that time period. Most of the actual business information I can find online
relates to retirement and pension benefits for former Beatrice employees---the
acquisition of Beatrice seems to have included a responsibility to service some
of these retirement benefits. Beatrice does seem to have owned some sort of
wholesale food supply business, perhaps more than one, that may have been
serving legacy customers.
In 2016, Beatrice expanded in the food market, announcing its new Gourmet
Popcorn brand. A later press release lauds that Beatrice found a retailer to
carry the product. It lasted until 2020, when Beatrice Gourmet Popcorn was sold
to a small operation called 2Di4. During the same time period, and perhaps
today, a subsidiary called Beatrice Distribution seems to have been dealing in
commercial cleaning supplies.
Sometime around 2018, Beatrice must have completely reversed its 2008 board's
policy on food vs. technology. Although it proudly announced having put two
recipes on its website in 2017 and relaunched its salad dressing business in
2018 (it was apparently shuttered again two years later), the future of
Beatrice would be Bittium.
Bittium may be familiar to some. A Finnish company, Bittium has several product
lines in the vague area of software and technology, but has attracted press for
its secure mobile device offering. Frankly, it's just one "encrypted
white-label Android phone" offering in a crowded market with a reputation for
crime rings and FBI stings, but it's probably one of the more reputable.
Beatrice announced in 2019 that it had become a reseller of Bittium's phone and
VPN products. Later, Beatrice Technologies entered a partnership and then
merged with a Canadian Bittium reseller called DEC (Digitally Encrypted
Communications, not Digital Equipment Corporation). Indeed, Bittium is pretty
much the only concrete product that Beatrice seems to offer today, although
numerous references to structured cabling still hang around.
More recently, well, I'm not sure what happened. The most recent press release
from Beatrice, dated 2020, is headed "We Are Witnessing History in The Making –
The Devolution of Western Culture." It spans five pages, making it the longest
press release by far, and calls for making the Ten Commandments mandatory
study material in universities.
There are other "odd" details. One subsidiary of Beatrice, Beatrice Premier
Foods, was renamed to Cuppedia and made independent in 2016. Confusingly,
Cuppedia seems to have two completely unrelated online presences. One of them
matches the branding of other older (late-2000s) Beatrice websites and is
presumably operated by Hoopes. The other is operated by a woman who sometimes
styles herself as "HRH Queen Dr. Anna Carter" and claims to hold an exclusive
contract for construction supply to Neom, the Saudi city of the future. She
also claims to be a minister, have founded a garden club, and, well, evidently
to be the queen of something. I leave it to the reader to evaluate the
credibility of these various titles.
The connection between the two is, surprisingly, validated by the Beatrice
press release announcing the spin-out of Cuppedia. "Our decision to make this
change is reflected in our review of Beatrice assets and business structure,
and the Beatrice Premier Foods business would be better suited under the direct
control of both Ms. Carter and Mr. Ligidakis," Hoopes said. Ligidakis is a
Phoenix-area restaurateur who has apparently held various titles at Beatrice,
although the Carter-operated Cuppedia doesn't seem to mention him today.
This has taken an odd turn, hasn't it? Somehow, in the cultural climate of
2023, it all made more sense to me when I discovered that Hoopes is active in
politics, and a very specific subset of them. "Is the Arizona Human Trafficking
Council Preventing Child Trafficking, or Facilitating it?," reads the headline
of an article written around statements Hoopes made to the Arizona Human
Trafficking Council. The council was formed in 2015, offering a bit of
political context to those familiar. After an introduction referring to his
illustrious family history, Hoopes was to the point: "I am addressing the
Council today on behalf of my friend and business colleague Neal David Sutz,
who has been trying for over two years to blow the whistle on child sex abuse
and trafficking in our state among powerful business leaders and members of the
Mormon Church."
And that sort of puts a button on this story. As best I can tell, what remained
of Beatrice (I suspect intellectual property with no active product lines, but
possibly the active distributing businesses) was purchased by Hoopes, who hoped
to use its brand and reputation to promote his various business ventures. This
effort seems to have been plagued by his indecision about whether he was
running a food company or a technology company---fitting since, it seems, much
of the downfall of Beatrice was its over-diversification. After years of Hoopes
using the Beatrice Company more or less as a personal homepage, it went in an
inevitable direction.
Political conspiracy theories are sort of like dust on the internet. Leave
anything untouched for long enough and it is prone to accumulate them. "Of
course," I said aloud in a coffee shop. "Of course Beatrice Foods is a QAnon
thing now." Deeper corners of the internet fill in the picture further. Hoopes
says that "my company" (Beatrice?) launched a conservative social media and
video streaming platform called "Inkd Social." As usual for "conservative
social media," Inkd seems to have been short lived, perhaps a precursor to the
baffling "streaming service" BView still available today that consists of
videos embedded from YouTube and Vimeo under Beatrice branding. Inkd is one of
two failed conservative social media projects by Hoopes, the other called Right
Social. On his own social media, he posts Islamophobic Chick tracts punctuated
by strong opinions on the corporate history of AT&T. Oh, yes, the telephone
history.
Perhaps I am being unfair. There is no doubt a kernel of truth to Hoopes's
claims of human trafficking, as there is no doubt a kernel of truth to his
claims of running a multi-brand consumer food business. The tagline of the
Beatrice Companies is "a Reputation thru all the Nation," a claim that was true
(in a good way) in 1980 and true (in a bad way) in 1987. In the 2000s, Beatrice
was simply forgotten. And then DeWitt Hoopes came along, apparently moving his
small IT business into Beatrice the way a hermit crab moves into a shell.
I felt like I had to write something about this situation; it has occupied so
much of my brain for the last few weeks. But when it came time to put pen to
paper (or keyboard to vim, as it were), I really struggled. Where to start?
Where to end? What details are even worth mentioning? What actually is the
story here?
Because there isn't really a story, at least not a remarkable one. Hoopes has a
website. Of course there's an archive of historic Bell System documents, why
not? Of course John 14:6 is quoted in ten languages under a photo spread of
packaged food products, why not? Of course there's a press release about the
decline of western society. Of course there's a tangential connection to Neom,
and of course there's an unprompted complaint about the customer service at
Graybar. It's just another weird website on the internet, just like a million
others.
The only thing that makes this one stand out is that, by accident of history,
it shares a name and logo with the milk at the Real Canadian Superstore.
I'd like to leave you with a quote from the January 1, 2005 edition of "Feed &
Teach Newsletter," a publication of Almus Nutrition and Health Sciences, a
division of Beatrice Companies that I think once supplied wholesale food to
K-12 districts:
The world is in crisis and we are threatened from many sources. The threat is
as great from outside our country as it is from within. The threat is from
outside ourselves and correspondingly within ourselves. We are afraid of
attacks from outside but in responding to our fears, the fear from within can
be as dangerous and as immobilizing as from outside.
This comes from Gladys McGarey, a "holistic doctor" who's still writing at the
age of 102, so something she's doing must be working. But not for Beatrice any
more: the Almus name seems to have been dropped, and Beatrice Nutrition &
Health Sciences last published "Feed & Teach" in 2015. They're still in the publishing
business, though. Beatrice's online store carries four products: books by Gene
Hoopes, founding member of the American Legion and great grandfather of DeWitt.
Because of course it does. I wonder how the Board of Directors feels about that
line of business.
I currently find myself on vacation in the Canadian Rockies, where internet is
hard to come by. But here's something short while I'm temporarily back in the
warm embrace of 5G: more about burglar alarms. I recently gave a presentation
on this topic and I'll probably record it for YouTube when I'm back home, but
I think the time has finally come to write a post on a specific and niche
element of intrusion alarms that I find particularly interesting: alarm
reporting protocols.
Let's briefly recap the architecture of a typical intrusion alarm. An intrusion
alarm system (and essentially the same goes for fire alarms as well) consists
of a controller that monitors sensor zones. When the controller detects that a
sensor has been violated while the system is armed, it enters the alarm
state. Once this happens, it reports the alarm to a Central Alarm Receiver
(CAR) at a Central Alarm Station (CAS). The CAS is then responsible for
dispatching emergency services (or private security) to respond to the alarm.
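In pseudocode-made-runnable form, that architecture is about this deep
(all names and interfaces here are invented for illustration):

    import enum

    class State(enum.Enum):
        DISARMED = "disarmed"
        ARMED = "armed"
        ALARM = "alarm"

    class AlarmController:
        # A toy controller: it watches sensor zones and, on a violation
        # while armed, enters the alarm state and reports to the CAR.
        def __init__(self, zones, report):
            self.zones = zones    # zone name -> is_violated() callable
            self.report = report  # callable that reaches the CAR/CAS
            self.state = State.DISARMED

        def poll(self):
            if self.state is not State.ARMED:
                return
            for name, violated in self.zones.items():
                if violated():
                    self.state = State.ALARM
                    self.report(f"ALARM zone={name}")
                    return

    # Hypothetical wiring: one door contact, print() standing in for
    # the central alarm receiver.
    door_open = [False]
    ctl = AlarmController({"front-door": lambda: door_open[0]}, print)
    ctl.state = State.ARMED
    door_open[0] = True
    ctl.poll()                    # -> ALARM zone=front-door

Everything that follows is about that last step: how the report actually
gets to the CAS.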
Early intrusion alarms relied on dedicated wiring to report to the CAS. In a
major city, the municipal government often operated alarm infrastructure for
fire (most commonly a Gamewell fire telegraph system), but for security private
companies were more common. One of the biggest names in alarms to this day,
ADT, clearly reflects this heritage: the acronym stands for American District
Telegraph. The company originally provided stock quotations over their private
telegraph networks, but later made a lot of money using their telegraph
infrastructure for alarm monitoring.
The oldest alarm systems reported to the CAS using the "polarity reversal"
scheme, which could be used either over privately owned wiring or a leased
telephone line specified as "dry" (meaning that the telephone exchange did not
apply battery power or dial tone). The burglar alarm controller normally put a
voltage on this pair. When the alarm was triggered, the polarity of the voltage
was reversed. In the CAS, the change in polarity caused a metal flag held in
place by an electromagnet coil to drop down, informing the CAS operator that an
alarm had occurred. The major advantage of the polarity reversal scheme is
that, with an appropriately designed system of coils around the flag, the CAS
operator could tell whether the polarity reversed (an alarm) or the voltage
went away entirely (a trouble). During an armed state both of these conditions
merited a response, but knowing the difference was useful for troubleshooting
the alarm and reporting infrastructure.
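The receiving logic amounts to three states, easy enough to sketch
(voltages and thresholds invented):

    def classify_line(voltage):
        # Steady voltage of normal polarity: all is well. Reversed
        # polarity: the controller is signaling an alarm. No voltage
        # at all: the line is cut or failed, a trouble.
        if voltage > 3.0:
            return "normal"
        if voltage < -3.0:
            return "alarm"
        return "trouble"

    for v in (12.0, -12.0, 0.0):
        print(v, "->", classify_line(v))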
The concept of applying a voltage to the monitoring line at all times is a
simple form of supervision. It means that any interruption in the connection
can be detected. Supervision is one of the most important concepts in technical
security [1]: supervision is a set of techniques that allow an alarm system to
ensure that it has not malfunctioned, been damaged, or been intentionally
tampered with. Supervision is the key thing that differentiates a "life safety
system" from other types of electronics: fire alarms (for safety reasons) and
intrusion alarms (for security reasons) have to be highly tamper- and
failure-evident.
Polarity reversal is a simple and effective scheme for alarm reporting, and
it's still used in some institutional environments where the CAS is on-site and
the alarm wiring from each building to the CAS is already in place. It suffers
a major limitation, though: there is no support for multiplexing. In other
words, every alarm system has to be connected to the CAS by its own dedicated
pair of wires, or the CAS will not be able to differentiate where an alarm came
from.
In the early 20th century, telegraph technology was producing the first digital
communications protocols. Pulse-based telegraph systems could be connected to
mechanical receivers that used clockwork mechanisms to count pulses,
differentiating which signal had been received. For example, while early fire
telegraphs required station personnel to interpret the paper tape manually, by
the 1930s some fire stations were equipped with telegraph receivers that could
count out pulse patterns to identify certain box calls and sound a bell in the
station automatically. Similarly, railroads began to use "coded" signal
circuits where multiple signals were connected to a shared bus (wired in
parallel) and counted pulses to recognize addressed commands. These geartrain
telegraph receivers, steampunk by modern standards, were the genesis of digital
communications. The same methods were applied to intrusion alarms.
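What the geartrains did is easy to restate in software. A sketch of
pulse-count decoding (timings invented):

    def decode_pulses(timestamps, gap=1.0):
        # Count pulses, treating any inter-pulse gap longer than `gap`
        # seconds as a boundary between digits---in software, what the
        # clockwork receivers did in brass.
        if not timestamps:
            return []
        digits, count = [], 1
        for prev, cur in zip(timestamps, timestamps[1:]):
            if cur - prev > gap:
                digits.append(count)
                count = 1
            else:
                count += 1
        digits.append(count)
        return digits

    # A hypothetical box "2-4": two pulses, a pause, then four pulses.
    print(decode_pulses([0.0, 0.3, 2.0, 2.3, 2.6, 2.9]))   # -> [2, 4]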
While the Gamewell system was designed for fire reporting, some areas used
Gamewell telegraphs for intrusion reporting as well, and many proprietary
intrusion reporting systems were substantially similar to the Gamewell design.
A Gamewell fire box had a "hook" that, when pulled, would compress a spring.
The spring then released its force into a clockwork mechanism that rotated a
notched wheel---emitting a pulse onto the telegraph line every time a notch
passed a switch. The internal design was fairly similar to a telephone dial,
but with a different interface since the pulse train that was sent was fixed
for each box, determined by the position of the notches on the wheel.
Originally these Gamewell boxes were mounted streetside where passersby could
pull the hook, but Gamewell systems proved surprisingly durable and were in
service well into the era of electronic fire alarms. Gamewell sold boxes which
were electrically activated, meaning that they could be wired to a fire alarm
system so that the "hook dropped" automatically when the fire alarm sounded.
Look around in back alleys of a city and it is not unusual to find lonely
Gamewell boxes still mounted on the backs of buildings, often a legacy of when
they functioned as the reporting system for the fire alarm.
This basic design became common in life safety and intrusion alarms. In modern
terms, the electrically-activated Gamewell box functioned as a "communicator"
to report the alarm to a CAS when activated. Many intrusion alarms used very
similar designs, with an electromechanical telegraph communicator triggered by
a voltage output from the alarm controller. During the era of telegraph systems,
central alarm receivers (CARs)---the equipment that actually receives the alarm
signal---became progressively more complex, producing paper tape logs of all
calls in addition to activating appropriate signals to CAS operators based on
the received code.
One might wonder how supervision worked with telegraph systems, since it's not
possible for the CAS to simply monitor for the presence of voltage when
multiple alarm controllers share the same lines. Telegraph systems introduced
periodic supervision: the alarm periodically sent a signal, and the CAS
interpreted the lack of any message over a certain time period as an
indication of trouble. Early periodic supervision was actually very simple, as
it was common for early alarm systems to report any entry (authorized or not),
as well as arming, to the CAS. In a common bank vault alarm, for example, the
CAS would be informed when the vault was closed at night, and when it was
opened in the morning, regardless of armed/disarmed status [2]. Operators at
these CASs often had a checklist of sorts, where they expected to see each bank
vault being opened in the morning. If they didn't, it likely indicated trouble
with the reporting system.
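In modern terms, that operator's checklist is a timeout check. A minimal
CAS-side sketch (the interval and account names are invented):

    import time

    def supervision_check(last_heard, interval=3600.0):
        # Flag every account that hasn't reported within its expected
        # supervision interval as a possible trouble.
        now = time.time()
        return [acct for acct, t in last_heard.items()
                if now - t > interval]

    # first-national checked in ten minutes ago; mercantile-trust has
    # been silent for two hours.
    last = {"first-national": time.time() - 600,
            "mercantile-trust": time.time() - 7200}
    print(supervision_check(last))   # -> ['mercantile-trust']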
Later on, timers were used to send "supervision reports" at configurable
intervals. For dedicated alarm wiring, supervision might be very frequent,
perhaps multiple times per hour. As telegraphic alarm reporting systems
matured, though, dedicated wiring started to fade away. This wasn't a total
abandonment of dedicated alarm reporting infrastructure, which can still be
found in commercial areas of some cities and is especially common on
institutional campuses where dedicated fiber lines might be run for fire and
burglar alarm monitoring. But during the '60s and '70s, burglar alarms caught
on in the home, and for home users a leased telephone line (or the cost of
building out private alarm wiring) would significantly drive up the cost.
Just about every home had a telephone line, though, and telephones were
already the most common way to reach emergency services.
The vast majority of home burglar alarms, until perhaps the last 15 years, were
installed with telephone communicators. Like the old Gamewell boxes these were
dedicated modular devices, but they were reduced over time to a single PCB
mounted in the alarm controller cabinet. The telephone communicator would be
connected to a phone jack so that, in the event of an alarm, it could dial a
call to the CAS and report the event.
There's a surprising amount of nuance to telephone communicators, perhaps
unsurprisingly since they were in common use for a period of some fifty years.
First, many telephone communicators could be configured for use with a
dedicated telephone line (with the advantage of much more frequent supervision
reporting without tying up the phone) or for use on a line shared with
telephones (a much lower-cost option, typical for residential installations).
Sharing a phone line with telephones posed a problem, though. Say a fire or
breakin (especially the activation of a panic button) happened when a phone was
off-hook. Fire alarm codes in particular required that central reporting still
work in this situation. The solution is simple but also surprisingly obscure:
a special telephone jack.
Many homes built in the late '60s through the '80s, the golden era of
residential intrusion alarms, include an "RJ31X" or burglar alarm telephone
jack. It's usually found in a strange place for a telephone jack, like the
master bedroom closet [3]. The RJ31X jack (this is actually a technically
correct description of the jack, unlike most modern uses of the "RJ"
identifications) is an 8P8C modular connector (same as Ethernet) with two phone
lines terminated to it. One phone line goes directly to the telco via the
network interface device (demarc), while the other goes to the house's internal
telephone wiring. An RJ31X jack has a special shorting bar inside the jack
housing that bridges the outside and inside phone lines together. When a plug
is inserted, it pushes the shorting bar out of the way, disconnecting the
inside and outside phone lines and routing them to the alarm communicator
instead.
The alarm communicator now has complete control of the household phone line.
All telephones in the house are connected to the "inside" wiring. Normally the
communicator connects the inside and outside lines together the same way that
the jack's shorting bar had, but when an alarm occurs, the communicator's "line
seizure relay" disconnects the internal phone wiring from the telco. After
waiting a moment for the telephone exchange's line card to reset the line to
its on-hook state (assuming a call might have just been cut off), the alarm
communicator can go back off-hook and dial its own call.
RJ31X jacks are now mostly a thing of history, but like many parts of history
they sometimes protrude into the present. It's not unusual for scratchy,
intermittent phone lines to be tracked down to an RJ31X in a closet with a
loose or dirty shorting bar. Today the jack is usually removed rather than
fixed.
Once the alarm communicator has dialed a call and waited for the far end (a CAR)
to pick up, it has to transmit the details of what has happened. There is a
confusing range of different standards here. Older communicators often used
DTMF, sending a series of digits. A common example is "Contact ID," developed
by Ademco and standardized by the Security Industry Association (SIA). When
the CAR picks up the phone line, it is expected to send a pulse of 1400Hz,
silence, and then a pulse of 2300Hz. This informs the alarm communicator that
it has indeed reached a Contact ID endpoint, an important issue since some
residential alarms especially were also configured to call the homeowner
directly and supported other methods like voice recordings for non-Contact ID
endpoints. After hearing the pickup tones, the alarm communicator sends 16
digits by DTMF. After the 16 digits, the CAR responds with a 1400Hz tone to
confirm receipt.
The 16 digits consist of a 4-digit account ID (used to identify the specific
alarm), a 2-digit message type (typically 18, which identifies the SIA Contact
ID standard), a one-digit event qualifier (distinguishing new events from
restorals), and a three-digit event code indicating what happened (fire,
burglary, panic, and so on). The message ends with a 2-digit partition ID
(used for multi-partition alarms such as in multifamily housing), a 3-digit
zone ID, and then a checksum digit.
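As a sketch, here's how a communicator might assemble one of these
messages. The field layout follows the description above; the checksum
rule (all digit values summing to a multiple of 15, with "0" counted as 10
and a computed value of 15 sent as "F") is my recollection of the
standard, so treat it as an assumption:

    def contact_id(acct, qualifier, event, partition, zone):
        # Assemble a 16-digit Contact ID message: account, message type
        # "18", event qualifier, event code, partition, zone, checksum.
        body = f"{acct:0>4}18{qualifier}{event:0>3}{partition:0>2}{zone:0>3}"
        # "0" is valued as 10; other digits count at face (hex) value.
        total = sum(10 if d == "0" else int(d, 16) for d in body)
        check = 15 - (total % 15)     # bring the sum to a multiple of 15
        if check == 15:
            digit = "F"               # value 15
        elif check == 10:
            digit = "0"               # "0" carries the value 10
        else:
            digit = format(check, "X")
        return body + digit

    # Hypothetical report: account 1234, new event (qualifier 1), event
    # code 130 (burglary), partition 01, zone 005.
    print(contact_id("1234", "1", "130", "01", "005"))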
The 4-digit account ID might be a bit surprising. Some of the older alarm
protocols really didn't support that large of a namespace, and early on most
CAS were local operations with relatively small customer counts. As CAS became
increasingly monopolized, many CAS had to have a large number of incoming phone
lines so that they could differentiate alarms by the phone number they were
configured to dial as well. One motivating factor in the development of more
advanced alarm reporting protocols was to increase the address space and make
reporting calls shorter, both of which allowed for more alarms reporting to the
same phone line.
Indeed, other telephone reporting protocols used more advanced digital methods
similar to data modems. For example, another SIA-standardized protocol (often
referred to just as "SIA") uses frequency-shift keying compatible with the Bell
103 modem. SIA messages are short ASCII sequences with a several-character
preamble identifying the protocol and giving an address for the intended
receiver (a way to allow multiple logical receivers to share phone numbers), an
account ID of up to 16 characters of hexadecimal, and then event and zone IDs.
Once again, the message ends with a checksum to confirm correct receipt, and
the communicator will retransmit if it does not receive an acknowledge tone
from the CAR.
The use of digital modems over telephone lines starts to sound a lot like
dial-up internet, and you might wonder if intrusion alarms used similar
techniques. The answer is yes, but in several different ways.
One of the most interesting innovations in alarm reporting was a system called
DCX, for Derived Channel MultipleX. I think I've mentioned previously that
"derived channel" is a common term in the telecommunications industry for the
use of any technique to get an additional data channel out of a medium. The
most widely known example of a derived channel on the telephone network is DSL,
and indeed DCX works very similarly.
A DCX communicator is connected to the telephone line much like a conventional
telephone communicator, but it doesn't dial. Instead, it sends high-frequency
FSK messages regardless of the state of the telephone line. Much like DSL, the
FSK is outside of the normal voice passband of the phone system, so telephone
calls won't interfere with it. That said, DCX communicators weren't completely
inaudible like DSL---they used low enough frequencies that, if a DCX
communicator happened to send a message while you were on the phone, you would
hear it. This was a much more likely situation because DCX took advantage of
the lower connection setup cost from not having to dial a call: the biggest
advantage of DCX was significantly more frequent supervision, with DCX
communicators reporting status to the CAS as often as every twenty minutes.
DCX signals can't pass through the telephone network, so just like DSL, DCX
requires the customer's telephone line to be directly connected to a DCX
receiver. CAS that used the DCX technology arranged to install receivers in
telephone exchanges, and these DCX receivers used leased lines to send
real-time reports to the CAS. Altogether the system was fairly elegant, but
the need for specialized phone exchange equipment meant that DCX was only ever
available in certain cities. It was mostly popular with commercial customers,
where insurance companies often required frequent supervision intervals that
made it impractical for the alarm to share a phone line with the normal
business phones.
DCX has fallen out of use, but I can't resist the urge to share a charming
detail of the implementation. The DCX receiver system was implemented as
software on a normal IBM-compatible PC, but UL standards for burglar alarms
require a hardware failsafe on all central alarm reporting and receiving
systems. DCX's solution is the kind of wonderful PC-era computer accessory you
rarely see today: a small box that went inline with the computer power supply
and connected to a serial port. The DCX software pulsed a line on the serial
connection (I suspect, from experience with this kind of thing, not even a data
pin but likely a control pin), and the box functioned as a watchdog timer,
probably using a simple delay relay. If too long elapsed without a pulse, the
accessory box cut power to the PC for a few seconds. This is, of course,
mundane, and today we are still configuring switches to cycle PoE on ports when
ping checks fail. What really delighted me about it is the fact that this
device is the only custom hardware involved in the receiver system, so the
manual really sells it as a major DCX innovation. It has a blinking LED, so you
know it's working.
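The software side of that arrangement might have looked something like
this sketch (pyserial here; that the pulsed line was a control pin rather
than a data pin is, again, my speculation):

    import time
    import serial  # pyserial

    def do_receiver_work():
        # Placeholder for the receiver's actual processing loop.
        time.sleep(1)

    port = serial.Serial("/dev/ttyS0")   # hypothetical port
    while True:
        port.rts = not port.rts          # heartbeat the watchdog box sees
        do_receiver_work()               # if this hangs, the pulses stop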
DCX's major limitation, and what ultimately killed it off, is the fact that it
functions much like an internet connection but without the benefit of providing
IP transit---or even coexisting with it, since DCX and DSL cannot be combined
on the same line due to near-overlapping frequency ranges. By the mid-2000s,
alarm communicators were making the transition to the internet.
Most modern intrusion alarm systems use SIA DC-09, the SIA-standardized
protocol for alarm reporting over either TCP or UDP. DC-09 optionally supports
AES encryption, and it's reassuring to know that the connection is optionally
secure. DC-09 is very similar to the SIA FSK protocol in terms of the message
structure; no surprise since it was designed in part for easy implementation in
existing communicator and receiver systems. DC-09 is extremely simple and not
really all that interesting of a protocol. The alarm communicator sends a
single packet containing the message and waits for an acknowledge packet from
the receiver.
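For a sense of just how simple, here is that send-and-acknowledge shape as
a sketch. This is not the real DC-09 framing (actual frames carry a CRC,
length, sequence number, and timestamp, plus the optional encryption), and
the receiver address is a placeholder:

    import socket

    def report_event(message, receiver=("203.0.113.1", 9999),
                     tries=3, timeout=2.0):
        # Send the event and wait for the receiver's acknowledgment,
        # retransmitting a few times before giving up.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(tries):
            sock.sendto(message.encode(), receiver)
            try:
                sock.recvfrom(1024)   # any reply counts as the ack here
                return True
            except socket.timeout:
                continue              # no ack; retransmit
        return False                  # reporting failed: local trouble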
What is interesting about DC-09 is the substantial benefit that intrusion
alarms get from our modern world of pervasive IP connectivity. One of the key
threat models to intrusion alarms has always been isolation. Going back
decades, films have depicted burglars cutting the phone line before entering a
house. It's well known that alarm communicators can be disconnected from the
CAS, and with the long supervision intervals used by many alarm systems the
disappearance of the alarm isn't likely to be noticed before the burglars have
finished their work. The best protection against isolation is redundancy. Alarm
communicators that support multiple redundant connections are called
dual-path, and dual-path communicators weren't common in residential
installations until the internet era... there weren't many options besides a
second phone line, and that would connect to the house by the same drop cable
anyway.
Many burglar alarm systems today will attempt to report events through both
the homeowner's internet service and a cellular carrier, which makes it a lot
harder for an intruder to isolate the alarm system. It also provides valuable
redundancy for life-safety purposes, making it more likely that a fire or
flooding alarm will be reported even if there has already been damage to the
building or infrastructure in the area. IoT cellular service has gotten so
inexpensive that there's no reason not to have dual-path reporting in most
alarm installations.
And the fact that internet connections are inherently multiplexed yields a big
advantage when it comes to supervision. One of the big problems with consumer
burglar alarms was their infrequent supervision intervals... since a
supervision report tied up the household phone line, many alarms were
configured to never send supervision reports at all. This meant that a cut
phone line or, perhaps more likely, a malfunctioning communicator could go
unnoticed until it was too late.
Unfortunately, central alarm reporting suffers a lot from its legacy. Modern
alarm systems often report surprisingly few events and supervise surprisingly
infrequently, considering that the IP connection is inexpensive and has very
little contention. One of the issues that I most often complain about is the
rarity of disarm supervision.
If a burglar enters a house and is able to locate and destroy the alarm
controller before it reports the alarm (which is typically after the entry
delay of 30-60 seconds and a post-alarm reporting delay of 20-60 seconds), the
CAS may never know that there was a problem. Hiding the alarm controller and
surrounding it with immediate zones is the traditional solution to this
problem, but an obvious modern one (that has been used in high-security
environments going back decades) is to have the alarm report that it has begun
the entry delay immediately, and then report when it has been disarmed. If the
CAS receives an entry delay message and then doesn't receive a disarm message
within a minute or so, it can assume that the communicator has failed and treat
the situation as an alarm. Many intrusion alarms and receivers support this
functionality, but vendors have inconsistent names for it (when they advertise
it at all) and it's not often enabled. Hesitation to use disarm supervision is
understandable when each message requires dialing a phone call, but today we
have the internet and packets are cheap.
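The receiver-side logic is almost trivially simple. A sketch, with
illustrative names and timings:

    # Receiver-side disarm supervision, sketched with a timer per account.
    # The window, names, and alarm handling are all illustrative.
    import threading

    DISARM_WINDOW = 90.0  # seconds: entry delay plus a little slack

    pending = {}  # account id -> running timer

    def on_entry_delay(account: str):
        # The panel reported that its entry delay has started.
        timer = threading.Timer(DISARM_WINDOW, on_window_expired, [account])
        pending[account] = timer
        timer.start()

    def on_disarm(account: str):
        # A disarm report arrived in time; stand down.
        timer = pending.pop(account, None)
        if timer:
            timer.cancel()

    def on_window_expired(account: str):
        # No disarm: assume the communicator was smashed and treat it
        # as an alarm (dispatch handling is hypothetical).
        pending.pop(account, None)
        print(f"ALARM: no disarm from account {account} within window")

Two extra messages per entry, and the whole scheme costs a handful of cheap
packets.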
Burglar alarms aren't, though, and that's part of the problem. Consumer
interest in burglar alarms has decidedly moved to the low-end, with even cheap
wireless systems failing to produce the sales volume of burglar alarms in the
'70s. Consumers have little awareness of the practicalities of alarm reporting,
and alarm vendors advertise their smartphone apps and home automation features
but barely mention the actual security properties of the system.
Unsurprisingly, those properties are usually poor, and a lot of burglar alarms
today have limited value against an informed actor who has planned ahead.
That's the thing, though---not many home burglars plan ahead, and so the most
primitive of alarm systems can do a pretty good job.
[1] The terminology in security can be confusing, especially since "cyber
security" has come into the landscape and coopted a lot of the existing
terminology. "Technical security" tends to refer to more "old-school" forms of
electronic security, particularly intrusion detection and technical
counter-intelligence.
[2] This concept is still common in high-security institutional environments
like military installations, where many alarms have no sense of "disarming" and
will report any entry at all to the CAS. Authorized users are expected to
contact the CAS either before or immediately after entering to identify
themselves and explain their purpose. This can be a much more secure
arrangement since it allows the CAS to audit every entry against work orders or
duty assignments, discouraging insider theft.
[3] For various reasons, mostly for "smash-resistance," it's a good idea to
install the alarm controller in a somewhat hidden location. Of course most home
builders and alarm installers were not very creative, and so the hidden
location is virtually always a closet, most often the closet of the master
bedroom. Fortunately, due to the longstanding architectural principle of
private-public space separation, the master bedroom closet is often one of the
furthest points from an entrance to the house. Unfortunately, every burglar
knew that's where the alarm controller was most likely to be, which made the
master bedroom window an appealing point of entry. This highlights the
importance of "immediate" alarm zones on nonstandard entry points like windows
and motion sensors in rooms without exterior doors.
Like many people in my generation, my memories of youth are heavily defined by
cable television. I was fortunate enough to have a premium cable package in
my childhood home, Comcast's early digital service based on Motorola equipment.
It included a perk that fascinated me but never made that much sense: Music
Choice. Music Choice was around 20 channels, somewhere in the high numbers, of
still images with music. It was really ad-free, premium radio, but in the era
before widespread adoption of SiriusXM that wasn't an easy product to explain.
And SiriusXM, of course, has found its success selling services to moving
customers. Music Choice was stuck in your home. The vast majority of Music
Choice customers must have had it only as part of a cable package, and part of
it that they probably barely even noticed.
This kind of thing seems to happen a lot with consumer products: a
little-noticed value-add that, once you start pulling at it, opens a rabbit
hole into the history of consumer music technology. Music Choice is an odd
and, it seems, little-loved aspect of
premium cable packages, but with a history stretching back to 1987, it also
claims to be the first consumer digital music streaming technology... and I
think they're even right about that claim.
The '80s was an exciting time in consumer audio. The Compact Disc was becoming
the dominant form of music distribution, and CDs offered a huge improvement in
sound quality. Unlike all of the successful consumer audio formats before it,
CDs were digital. This meant no signal noise in the playback process and an
outstanding frequency response.
Now, some have expressed surprise at the fact that CDs were a digital audio
format and yet weren't recognized as a practical way to store computer data for
years after. There are a few reasons for this, but one detail worth remembering
is that audio playback is a fairly fault-tolerant application. Despite error
correction, CD players will sometimes fail to decode a specific audio sample.
They just skip it and move along, and the problem isn't all that noticeable to
listeners. Of course this kind of failure is much more severe with computer
data and so more robust error tolerance was needed.
That's a bit beside the point except that it illustrates a very convenient
property of music as an application for digital storage and transmission: it's
inherently fault tolerant, and digital decoding errors in audio can come off
much the same way that noise and other playback faults did with analog formats.
Music is a fairly comfortable way to test the waters of digital distribution,
and the CD was a hugely successful experiment. Digital audio became an everyday
experience for many consumers, and suddenly analog distribution formats like
radio were noticeably inferior.
It was quite natural that various parts of the consumer electronics industry
started to investigate digital radio. Digital radio has a troublesome history
in the United States and has only really seen daylight in the form of the
in-band on-channel HD Radio protocol, which I have discussed previously.
HD Radio launched in 2002, so it was a latecomer to the radio scene (probably
a big part of its lackluster adoption). Satellite radio, also digital, didn't
launch until 2001. So there was a wide gap, basically all of the '90s, where
consumers were used to digital audio from CDs but had no way of receiving
digital broadcasts.
This was just the opportunity for Jerrold Communications.
Jerrold Communications is not likely to be a name you've heard before, despite
the company's huge role in the cable TV industry. Jerrold was a very early cable
television operator and developed a lot of their own equipment. Eventually,
equipment (head end transmitters and set-top boxes) became Jerrold's main
business, and most of the modern technological landscape of cable TV has
heritage in Jerrold designs. The reason you've never heard of them is because
of acquisitions: in 1967, Jerrold became part of General Instrument. In 1997,
General Instrument fractured into several companies, and the cable equipment
business was purchased by Motorola in 2000. In 2012, the Motorola business unit
that produced cable equipment became part of ARRIS. In 2019, ARRIS was acquired
by CommScope, ironically one of the other fragments that spun off of General
Instrument in '97.
What matters to us is that, for whatever reason, General Instrument continued
to use the Jerrold brand name on some of their cable TV products into the '90s
[1].
In 1987 Jerrold announced their new "Digital Cable Radio," which apparently had
pilot installations in DeLand, FL; Sacramento, CA; and Willow Grove, PA. They
expected expanded service in 1989.
In fact, Jerrold was not alone in this venture. At the same time, International
Cablecasting Technologies announced its similar service "CD-8" (the name was
apparently explained as "it's like having eight CD players"; it was later
changed to CD-18 to reflect additional channels, before the scheme was dropped
entirely). CD-8 launched in Las Vegas, and we will discuss it more
later, as it survived into the 21st century under a different name. Finally,
a company called Digital Radio launched "The Digital Radio Channel" in Los
Angeles.
All three of these operations were discussed together in a number of syndicated
newspaper pieces that ran in 1987 to present the future of radio. They reflect,
it seems, just about the entire digital radio industry of the '80s.
Digital Radio, the company, is a bit of a mystery. Perhaps mostly due to their
extremely generic name, it's hard to find much information about the company or
its fate. Los Angeles had a relatively strong tradition of conventional cable
radio (meaning analog radio delivered over cable TV lines), so it may have
helped The Digital Radio Channel gain adoption even without the multi-channel
variety of the competition. My best guess is that Digital Radio of California
did not survive long and failed to expand out of the LA market. I have so far
failed to find any advertisements or press mentions after 1987, and the press
coverage in '87 was extremely slim.
This left us with two late-'80s competitors for the new digital cable radio
market: Jerrold's "Digital Cable Radio" and ICT's "CD-8." Both of these
services worked on a very similar basis. A dedicated set-top box would be
connected to a consumer's cable line, either with a passive splitter or
daisy-chained with the television STB. The STB functioned like a radio tuner
for a component stereo system, allowing the listener to select a channel which
was then sent to their stereo amplifier (or hi-fi receiver, etc) as analog
audio. CD-8 went an impressive step beyond Digital Cable Radio, offering a
remote with a small LCD matrix display that showed the artist and track title
(this was apparently an added-cost upgrade).
I have seen mention that the STBs for these services cost around $100. That's
$270 in today's so-called money, not necessarily unreasonable for a hi-fi
component but still no doubt a barrier to adoption. On top of that, neither
service seems to have been bundled with cable plans. Instead, they were
separate subscriptions. Monthly subscriptions seem to have been in the range of
$6-8, reasonably comparable to SiriusXM subscriptions today. But once again we
have to ponder the customer persona.
SiriusXM is a relatively obscure service but still runs a reasonable profit on
the back of new cars with bundled plans, long-haul truckers, and business jet
pilots (SiriusXM has a live weather data service that is popular with the
business aviation crowd, besides the ability to offer SiriusXM music to
passengers). In other words, satellite radio is attractive to people who are in
motion, especially since the same channels are available across different radio
markets and even in the middle of nowhere (except underpasses). I'm not sure
I'll renew my SiriusXM service once I get onto normal post-promotion rates, but
still, there is undeniably something magical about SiriusXM working fine in a
canyon in the Mojave desert when I have no phone service and Spotify has
mysteriously lost all of my downloaded tracks again.
I'm unconvinced that digital audio quality is really that much of a selling
point to most SiriusXM customers. Instead, the benefit is coverage: even "in
town" here in Albuquerque, SiriusXM offers more consistent coverage than many
of the commercial radio stations that have seen some serious cost-cutting in
their transmitter operations. But digital radio over cable television doesn't
move... it's only available in the home. I don't think a lot of people ever
signed up for it as a dedicated subscription.
Still, the industry marched on. By 1990, The Digital Radio Channel seems to
have disappeared. But there is some good news: Jerrold's Digital Cable Radio
is still a contender and now offers 17 channels. CD-8 has been rebranded as
CD-18 and then rebranded again as Digital Music Express, or DMX. And there is
a new contender, Digital Planet. It is actually possible, although I don't find
it especially likely due to the lack of mentions of this history, that Digital
Planet is the same company as Digital Radio. It also operated exclusively in
Los Angeles, but had an impressive 26 channels.
Let's dwell a little more on DMX, because there is something interesting here
that represents a broader fact about this digital cable radio industry. CD-8,
later CD-18 (or CD/18 depending on where you look), was launched by
International Cablecasting Technologies or ICT. Based on newspaper coverage in
the 1990s, it quickly became apparent that DMX's best customers were
businesses, not consumers. In 1993, DMX cost consumers $4.95 a month (plus $5 a
month in equipment rental if the customer did not buy the set-top box outright
for $100). Businesses, though, paid $50-75 a month for a DMX appliance that
would provide background music from specially programmed channels. DMX was a
direct competitor to Muzak, and by the late '90s one of the biggest companies
in the background music market.
Background music makes a whole lot more sense for this technology. There's a
long history of "alternative" broadcast audio formats, like leased telephone
lines and FM radio subcarriers, being used to deliver background music to
businesses. Muzak had a huge reputation in this industry, dating back to
dedicated distribution wiring in the 1930s, but by the 1980s was increasingly
perceived as stuffy and old-fashioned. Much of this related to Muzak's
programming choices: Muzak was still made up mostly of easy-listening covers of
popular tracks, hastily recorded by various contracted bands. DMX, though,
offered something fresh and new: the popular tracks, in their original form.
Even better, DMX focused from the start on offering multiple channels, so that
businesses could choose a genre that would appeal to their clients. There was
smooth jazz for dentists, and rock and roll for hip retailers. The end of
"elevator music" as a genre was directly brought about by DMX and its
contemporary background music competitor, AEI.
Several late-'90s newspaper pieces describe the overall competitive landscape
of background music as consisting of Muzak, DMX, and Audio Environments Inc
(AEI). Unsurprisingly, given the overall trajectory of American business, these
three erstwhile competitors would all unify into one wonderful monopoly. The
path there was indirect, though. Various cable carriers took stakes in DMX, and
by the late '90s it was being described as a subsidiary of Turner Cable and
AT&T. Somehow (the details are stubbornly unclear), DMX and AEI joined
forces in the late '90s. By 2000 they were no longer discussed as competitors.
I have really tried to figure out what exactly happened, but an afternoon with
newspaper archives has not revealed to me the truth. Here is speculation:
AEI appears to have used satellite distribution for their background music from
the start, while DMX, born of the cable industry, relied on cable television.
In the late '90s, though, advertorials for DMX start to say that it is available
via cable or satellite. I believe that at some point in '98 or '99, DMX and
AEI merged. They unified their programming, but continued to operate both the
cable and satellite background music services under the DMX brand.
For about the next decade, the combined DMX/AEI Music would compete with Muzak.
In 2011-2012, Canadian background music (now usually called "multisensory
marketing") firm Mood Media bought both Muzak and DMX/AEI, combining them all
into the Mood Media brand. This behemoth would enjoy nearly complete control of
the background music industry, were it not for the cycle of technology bringing
in IP-based competitors like Pandora for Business. Haha, no, I am kidding,
Pandora for Business is also a Mood Media product. This is the result of
essentially a licensing agreement on the brand name; Pandora itself is a
SiriusXM subsidiary. Pandora for Business is a wholly different product sold by
Mood Media "in partnership with" Pandora, and seems to be little more than a
rebranding of the DMX service to match its transition to IP. Actually, SiriusXM
and DMX used to have shared ownership as well (DMX/AEI, by merger with Liberty
Media, had half ownership of SiriusXM, as well as of Live Nation concert
promotion, Formula One racing, etc), although they don't seem to currently. The
American media industry is like this, it's all just one big company with an
aggressive market-segment brand strategy.
So what about those set-top boxes, though? Digital Cable Radio and DMX both
relied on special hardware, while the service of my youth did not. Well, the
problem doesn't seem to have so much been the special hardware as the whole
concept of a separate subscription for digital cable radio. By the end of the
'90s, Jerrold and DMX were both transitioning to the more traditional structure
of the cable TV industry. They sold their product not to consumers but to cable
carriers, who then bundled it into cable subscriptions. This meant that shipping
users dedicated hardware was decidedly impractical, but the ATSC digital cable
standard offered a promising new approach.
This might be surprising in terms of timeline. ATSC wasn't all that common
over-the-air until the hard cutover event in 2009. This slow implementation was
a result of the TV tuners built into OTA consumer televisions, though. Cable
companies, since the genesis of cable TV, had been in the habit of distributing
their own set-top boxes (STB) even though many TVs had NTSC (and later ATSC)
tuners built-in. Carrier-provided STBs were a functional necessity due to
"scrambling" or encryption of cable channels, done first to prevent "cable
theft" (consumers reconnecting their cable drop to the distribution amplifier
even though they weren't paying a bill) and later to enable multiple cable rate
tiers.
The pattern of renting STBs meant that cable carriers had a much greater degree
of control over the equipment their customers would use to receive cable, and
that allowed the cable industry to "go digital" much earlier. The first ATSC
standard received regulatory approval in 1996 and spread relatively quickly
into the cable market after that. By the end of the '90s, major carriers like
Comcast had begun switching their customers over to digital ATSC STBs, mostly
manufactured by Motorola Mobility Home Solutions---the direct descendant of
Jerrold Communications.
Digital cable meant that everything was digital, including the audio. Suddenly
a "digital cable radio" station could just be a normal digital cable station.
And that's what they did: Jerrold and DMX both dropped their direct-to-consumer
services and instead signed deals to distribute their channels to entire cable
companies. Along with this came rebranding: Jerrold's Digital Cable Radio
adopted the name "Music Choice," while DMX kept the DMX name for some carriers
and adopted the brand "Sonic Tap" for at least DirecTV and possibly others.
As an aside, Sonic Tap's Twitter account is one of those internet history
gems that really makes me smile. Three tweets
ever, all in one day in 2013. Follows DirecTV and no one else. 33 followers, a
few of which even appear to be real. These are the artifacts of our
contemporary industrialists: profoundly sad Twitter profiles.
Music Choice had always enjoyed a close relationship with the cable industry.
It was born at General Instrument, the company that manufactured much of the
equipment in a typical cable network, and that ownership transitioned to
Motorola. As Music Choice expanded in the late '90s and '00s, it began to give
equity out to cable carriers and other partners in exchange for expanded
distribution. Today, Music Choice is owned by Comcast, Charter, Cox, EMI,
Microsoft, ARRIS (from Motorola), and Sony. Far from its '80s independent
identity, it's a consortium of the cable industry, maintained to provide a
service to the carriers that own it. Music Choice is carried today by Comcast
(Xfinity), Spectrum, Cox, Verizon, and DirecTV, among others. It is the
dominant cable music service, but not the only one!
A few cable companies have apparently opted to side with Stingray instead.
Stingray has so far not featured in this history at all. It's a Canadian
company, and originated as the Canadian Broadcasting Corporation's attempt at
digital cable radio, called Galaxie. I will spare you the full corporate history of
Stingray, in part because the details are sort of fuzzy, but it seems to be a
parallel story to what happened in the US. Galaxie eventually merged with
competing service Max Trax, and then the CBC seems to have divested Stingray
(which had operated Galaxie as a subsidiary of the CBC). In the late 2010s,
Stingray started an expansion into the US. Amusingly, Comcast apparently
delivered Stingray instead of Music Choice for several years (despite being
part owner of Music Choice!). Stingray does seem to still exist on a handful
of smaller US cable carriers, although the company seems invested in a switch
to internet streaming.
Cable is dying. Not just because of the increasing number of "cord cutters"
abandoning their $80 cable bill in favor of $90 worth of streaming subscription
services, but because the cable industry itself is slowly abandoning ATSC. In
the not-too-distant future, conventional cable broadcasting will disappear,
replaced by "over the top" (OTT) IPTV services like Xfinity Flex. This
transition will allow the cable carriers full freedom in bandwidth planning,
enabling DOCSIS cable internet to achieve the symmetric multi-Gbps speeds the
protocol is capable of [2].
Consumers today get virtually all of their music over IP. The biggest
competitor to Music Choice is Spotify, and the two are not especially
comparable businesses. The "linear broadcast" format seems mostly dead, and
while Music Choice does offer on-demand services, it will probably never get
ahead of the companies that started out with an on-demand model. That's sort of
funny, in a way. The cable industry, with its advanced ATSC features,
introduced the on-demand content library concept, but the cable industry is far
behind the companies that launched with the same idea a decade later... but
with the benefit of the internet and agility.
It's sad, in a way. I love coaxial cable networks, it's a fascinating way to
distribute data. I am a tireless defender of DOCSIS, constantly explaining to
people that we don't need to eliminate cable internet---there's no reason to,
DOCSIS offers better real-world performance than common PON (optical)
internet distribution systems. What we need to get rid of is the cable
industry. While giants like Comcast do show some signs of catching up to the
21st century, they remain legacy companies with a deeply embedded rent-seeking
attitude. Major improvements to cable networks across the country are underway,
but they started many years too late and proceed too slowly now, a result of
severe under-investment in outside plant.
I support community internet, I'm just saying that maybe, just maybe, municipal
governments would achieve far more by ending cable franchises and purchasing
the existing cable plant than by installing new fiber. "Fiber" internet isn't
really about "fiber" at all. "Fiber" is used as a political euphemism for "not
a legacy utility" (somewhat ironic since one of the largest fiber internet
providers, Verizon FiOS, is now very much a legacy utility). In fact, good old
cable TV is a remarkably capable medium. It brought us the first digital music
broadcasting. It brought us the first on-demand media streaming. Cable is now
poised to deliver 5Gbps+ internet service over mostly existing infrastructure.
The problem with cable internet is not technical; it's political. Send me your
best picket signs for the cable revolution.
[1] The history here is a little confusing. It seems like GI mostly retired the
Jerrold name as GI-branded set-top boxes are far more common than Jerrold ones.
But for whatever reason, when GI launched their cable digital radio product in
1987, it was the Jerrold name that they put on the press releases.
[2] Existing speed limitations on DOCSIS internet service, such as the 35Mbps
upload limit on Xfinity internet service in most markets, are a result of
spectrum planning problems in the cable network rather than limitations in
DOCSIS. DOCSIS 3.1, the version currently in common use, is easily capable of
symmetric 1Gbps. DOCSIS 4.0, currently being introduced, is easily capable of
symmetric 5Gbps. The problem is that upstream capacity in particular is
currently limited by the amount of "free space" available outside of delivering
television channels, a problem that is made particularly acute by legacy STBs
(mostly Motorola branded, of Jerrold heritage) that have fixed channel
requirements for service data like the program guide. These conflict with
DOCSIS 3.0+ upstream channels, such that DOCSIS cannot achieve Gbps upstream
speed until these legacy Motorola STBs are replaced. Comcast has decided to
skip the ATSC STB upgrade entirely by switching customers over to the all-IP
Flex platform. I believe they will need to apply for regulatory approval to end
their ATSC service and go all-IP, so this is probably still at least a few
years out.
First, a disclaimer of sorts: I am posting another article on UAPs, yet I am
not addressing the recent claims by David Grusch. This is for a couple of
reasons. First, I am skeptical of Grusch. He is not the first seemingly
well-positioned former intelligence official to make such claims, and I think
there's a real possibility that we are looking at the next Bob Lazar. Even
without impugning his character by comparison to Lazar, Grusch claims only
secondhand knowledge and some details make me think that there is a real
possibility that he is mistaken or excessively extrapolating. As we have seen
previously with the case of Luis Elizondo, job titles and responsibilities in
the intelligence community are often both secretive and bureaucratically
complex. It is very difficult to evaluate how credible a former member of the
IC is, and the media complicates this by overemphasizing weak signals.
Second, I am hesitant to state even my skepticism as Grusch's claims are very
much breaking news. It will take at least a month or two, I think, for there to
be enough information to really evaluate them. The state of media reporting on
UAP is extremely poor, and I already see Grusch's story "growing legs" and
getting more extreme in the retelling. The state of internet discourse on UAP
is also extremely poor, the conversation almost always being dominated by the
most extreme of both positions. It will be difficult to really form an opinion
on Grusch until I have been able to do a lot more reading and, more
importantly, an opportunity has been given for both the media and the
government to present additional information.
It is frustrating to say that we need to be patient, but our first impressions
of individuals like Grusch are often dominated by our biases. The history of
UFOlogy provides many cautionary tales: argumentation based on first
impressions has both led to clear hoaxes gaining enormous hold in the UFO
community (profoundly injuring the credibility of UFO research) and to UAP
encounters being ridiculed, creating the stigma that we are now struggling to
reverse. In politics, as in science, as in life, it takes time to understand a
situation. We have to keep an open mind as we work through that process.
Previously on Something Up There
I have previously written
two parts in which
I present an opinionated history of our current era of UAP research. To present
it in tight summary form: a confluence of factors around the legacy of WWII,
the end of the Cold War, and American culture created a situation in which UFOs
were ridiculed. Neither the civilian government nor the military performed any
meaningful research on the topic, and throughout the military especially a
culture of suppression dominated. Sightings of apparent UAPs were almost
universally unreported, and those reports that existed were ignored.
This situation became untenable in the changing military context of the 21st
century. A resurgence of military R&D in Russia and the increasing capabilities
of the Chinese defense establishment have made it increasingly likely that
rival nations secretly possess advanced technology, much like the US fielded
several advanced military technologies, in secret, during the mid-20th century.
At the same time, the lack of any serious consideration of unusual aerial
phenomena meant that the US had near zero capability to detect these systems,
outside of traditional espionage methods which must be assumed to be limited
(remember that despite the Soviet Union's considerable intelligence
apparatus, the US managed to field significant advancements without their
knowledge).
As a result of this alarming situation, the DoD began to rethink its
traditional view of UFOs. Unfortunately, early DoD funding for UAP research was
essentially hijacked by Robert Bigelow, an eccentric millionaire and friend of
the powerful Senator Reid with a hobby interest in the paranormal (not just
UFOs but ghosts, etc). Bigelow has a history of similar larks, and his UAP
research program (called AATIP) ended the same way his previous paranormal
ventures have: with a lot of press coverage but no actual results. A
combination of typical DoD secrecy and, I suspect, embarrassment over the
misspent funds resulted in very little information on this program reaching the
public until Bigelow and surprise partner Tom DeLonge launched a publicity
campaign in an effort to raise money.
AATIP was replaced by the All-Domain Anomaly Resolution Office (AARO), a more
standard military intelligence program, which has only seriously started its
work in the last two years. The AARO has collected and analyzed over 800
reports of UAPs, unsurprisingly finding that the majority are uninteresting
(i.e. most likely a result of some well-known phenomenon), but finding that a
few have properties which cannot be explained by known aviation technology.
The NASA UAP advisory committee
The activities of the AARO have not been sufficient to satisfy political
pressure for the government to Do Something about UAPs. This was already true
after the wave of press generated by DeLonge's bizarre media ventures, but
became even more true as the Chinese spy balloon made the limitations of US
airspace sovereignty extremely apparent.
Moreover, many government personnel studying the UAP question agree that one of
the biggest problems facing UAP research right now is stigma. The military has
a decades-old tradition of suppressing any reports that might be classified as
"kookie," and the scientific community has not always been much more
open-minded. This is especially true in the defense industry, where Bigelow's
lark did a great deal of reputational damage to DoD UAP efforts. In short,
despite AARO's efforts, many were not taking AARO seriously.
Part of the problem with AARO is its slow start and minimal public work product
to date. Admittedly, most of this is a result of some funding issues and then
the secretive nature of work carried out within military intelligence
organizations. But that underscores the point: AARO is an intelligence
organization that works primarily with classified sources and thus produces
classified products. UAPs, though, have become a quite public issue. Over the
last two years it has become increasingly important to not only study UAPs but
to do so in a way that provides a higher degree of public assurance and public
information. That requires an investigation carried out by a non-intelligence
organization. The stigmatized nature of UAP research also demands that any
serious civilian investigation be carried out by an organization with credibility
in aerospace science.
The aerospace industry has faced a somewhat similar problem before: pilots not
reporting safety incidents for fear of negative impacts on their careers. It's
thought that a culture of suppressing safety incidents in aviation led to
delayed discovery of several aircraft design and manufacturing faults. The best
solution that was found to this problem of under-reporting was the introduction
of a neutral third-party. The third-party would need to have the credibility to
be considered a subject-matter expert in aerospace, but also needed to be
removed from the regulatory and certification process to reduce reporters' fears
of adverse action being taken in response to their reports. The best fit was
NASA: a federal agency with an aerospace science mission and without direct
authority over civil aviation.
The result is the Aviation Safety Reporting System, which accepts reports of
aviation safety incidents while providing confidentiality and even a degree of
immunity to reporters. Beyond the policy protections around ASRS, it is widely
believed that NASA's brand reputation has been a key ingredient in its
success. NASA is fairly well regarded in both scientific and industry circles
as a research agency, and besides, NASA is cool. NASA operates ASRS to this
day.
I explain this little bit of history because I suspect it factored into the
decision to place a civilian, public UAP investigation in NASA. With funding
from a recent NDAA, NASA announced around this time last year that it would
commission a federal advisory committee to make recommendations on UAP research.
As a committee formed under the Federal Advisory Committee Act, the "UAP Study
Team" would work in public, with unclassified information, and produce a public
report as its final product.
It is important to understand that the scope of the UAP study team is limited.
Rather than addressing the entire UAP question, the study team was tasked with
a first step: examining the data sources and analytical processes available to
investigate UAPs, and making recommendations on how to advance UAP research.
Yes, an advisory committee to make recommendations on future research is an
intensely bureaucratic approach to such a large question, but this is NASA
we're talking about. This is how they work.
In October of last year, NASA announced the composition of the panel. Its
sixteen members consist of aerospace experts drawn primarily from universities,
although there are some members from think tanks and contractors. Most members
of the committee have a history of either work with NASA or work in aerospace
and astrophysical research. The members are drawn from fairly diverse fields,
ranging from astronaut Scott Kelly to oceanographer Paula Bontempi. Some
members of the committee are drawn from other federal agencies, for example
Karlin Toner, an FAA executive.
On May 31st, the UAP Study Team held its first public meeting. Prior to this
point the members of the committee had an opportunity to gather and study
information about data sources, government programs, and UAP reports. This
meeting, although it is the first public event, is actually relatively close
to the end of the committee's work: they are expecting to produce their final
report, which will be public, in July. This has the advantage that the meeting
is far enough along that the members have had the opportunity to collect a lot
of information and form initial positions, so there was plenty to discuss.
The meeting was four hours long if you include the lunch break (the NASA
livestream did!), so you might not want to watch all of it. Fear not, for I
did. And here are my thoughts.
The Public Meeting on Unidentified Anomalous Phenomena
The meeting began on an unfortunate note. First one NASA administrator, and
then another, gave a solemn speech: NASA stands by the members of the panel
despite the harassment and threats they have received.
UAPs have become sort of a cultural third rail. You can find almost any online
discussion related to UAPs and observe extreme opinions held by extreme people,
stated so loudly and frequently that they drown out anything else. If I could
make one strong statement to the collective world of people interested in UAP,
it would be this:
Calm the fuck down.
The extent to which any UAP discourse inevitably devolves into allegations
of treachery is completely unacceptable. Whether you believe that the government
is in long-term contact with aliens that it is covering up, or you believe that
the entire UAP phenomenon of the 21st century is fabrication for political
reasons, accusing anyone who dares speak aloud of UAPs of being a CIA plant or
an FSB plant or just a stooge of the New World Order is perpetuating the
situation that you fear.
The reason that so much of UAP research seems suspect, seems odd, is because
political and cultural forces have suppressed any meaningful UAP research since
approximately 1970. The reason for that is the tendency of people with an
opinion, one way or the other, to doubt not only the integrity or loyalty but
even the identity of anyone who ventures onto the topic. UFOlogy is a deeply
troubled field, and many of those troubles have emerged from within, but just
as many have been imposed by the outrageously over-the-top reactions that UFO
topics produce. This kind of thing is corrosive to any discourse whatsoever,
including the opinions you agree with.
I will now step down from my soapbox and return to my writing couch.
NASA administrator Daniel Evans, the assistant deputy associate administrator
(what a title!) responsible for the committee, provides a strong pull quote:
"NASA believes that the study of unidentified anomalous phenomena represents an
exciting step forwards in our quest to uncover the mysteries of the world
around us." While emphasizing the panel's purpose of "creating a roadmap" for
future research as well as NASA's intent to operate its research in the most
public and transparent way possible, he also explains an oddity of the
meeting's title.
UAP, as we have understood it, meant Unidentified Aerial Phenomena. The recent
NDAA changed the meaning of UAP, within numerous federal programs, to
Unidentified Anomalous Phenomena. The intent of this change seems to have been
to encompass phenomena observed on land or at (and even under) the sea, but in
the eyes of NASA, one member points out, it also becomes more inclusive of the
solar system and beyond. That said, the panel was formed before that change and
consists mostly (but not entirely) of aerospace experts, and so understandably
the panel's work focuses on aerial observations. Later in the meeting one panel
member points out that there are no known reports of undersea UAPs through federal
channels, although it is clear that some panel members are aware of the
traditional association between UFOs and water.
Our first featured speaker is David Spergel, chair of the committee. Spergel is
a highly respected researcher in astrophysics, and also, we learn, a strong
personality. He presents a series of points which will be echoed throughout
the meetings.
First, federal efforts to collect information on UAPs are scattered,
uncoordinated, and until recently often nonexistent. It is believed that many
UAP events are unreported. For example, there are indications that a strong
stigma remains which prevents commercial pilots reporting UAP incidents through
formal channels. This results in an overall dearth of data.
Second, of the UAP data that does exist, the majority takes the form of
eyewitness reports. While eyewitness reports do have some value, as broad trends
can be gleaned from them, they offer too little data (especially quantitative data)
to be useful for deeper analysis. Some reports do come with more solid data
such as photos and videos, but these are almost always collected with consumer
or military equipment that has not been well-studied for scientific use. As a
result, the data is uncalibrated---that is, the impacts of the sensor system on
the data are unknown. This makes it difficult to use these photos and videos
for any type of scientific analysis. This point is particularly important since
it is well known that many photos and videos of UAP are the result of defects
or edge-case behaviors of cameras. Without good information on the design and
performance of the sensor, it's hard to know if a photo reflects a UAP at all.
Finally, Spergel emphasizes the value of the topic. "Anomalies are so often the
engine of discovery," one of the other panel members says, to which Spergel
adds that "if it's something that's anomalous, that makes it interesting and
worthy of study." This might be somewhat familiar to you, if you have read my
oldest UFO writings, as it echoes a fundamental part of the "psychosocial
theory" of UFOs: whether UFOs are "real" or not, the fact that they are a
widely reported phenomenon makes them interesting. Even if nothing unusual has
ever graced the skies of this earth, the fact that people keep saying they saw
UFOs makes them real, in a way. That's what it means to be a phenomenon, and
much of science has involved studying phenomena in this sense.
Besides: while there's not a lot of evidence, there is an increasing portion of
modern evidence suggesting that there is something to some UAP sightings,
even if it's most likely to be of terrestrial origin. This is still
interesting! Even if you find the theory that UAPs represent extraterrestrial
presence to be utterly beyond reason (a feeling that I largely share), there is
good reason to believe that some people have seen something. One ought to
be reminded of sprites, an atmospheric phenomenon so rarely observed that their
existence was subject to a great deal of doubt until the first example was
photographed in 1989. What other rare atmospheric phenomena remain to be
characterized?
The next speaker is Sean Kirkpatrick, director of the AARO. He presents to the
committee the same slides that he recently presented to a congressional panel,
so while they are not new, the way he addresses them to this committee is
interesting. He explains that a current focus of the AARO is the use of
physical testing and modeling to determine what types of objects or phenomena
could produce the types of sightings AARO has received.
The AARO has received some reports that it struggles to explain, and has summed
up these reports to provide a very broad profile of a "typical" UAP: 1-4 meters
in size, moving between Mach 0 and 2, emitting short and medium-wave infrared,
intermittent X-band radar returns, and emitting RF radiation in the 1-3GHz and
8-12GHz ranges (the upper one is the X-band, very typical of compact radar
systems). He emphasizes that this data is based on a very limited set of
reports and is vague and preliminary. Of reported UAPs, the largest portion
(nearly half) are spherical. Reports come primarily from altitudes of 15-25k
feet and the coasts of the US, Europe, and East Asia, although he emphasizes
that these location patterns are almost certainly a result of observation bias.
They correlate with common altitudes for military aircraft and regions with
significant US military operations.
To make a point about the limitations of the available data, he shows a video.
There's a decent chance you've seen it, it's the recently released video of an
orb, taken by the weapons targeting infrared camera on a military aircraft. It
remains unexplained by the AARO and is considered one of the more anomalous
cases, he says, but the video---just a few seconds long---is all there is. We
can squint at the video, we can play it on repeat at 1/4 speed, but it is the
sum total of the evidence. To determine whether the object is a visitor from
the planet Mars or a stray balloon that has caught the sunlight just right will
require more data from better sensors.
The AARO has so far had a heavy emphasis on understanding the types of sensor
systems that collected the best-known UAP sightings. Military sensor systems,
Kirkpatrick explains, are very distinct from intelligence or scientific sensor
systems. They are designed exclusively for acquiring and tracking targets for
weapons, and so the data they produce is of poor resolution and limited
precision compared to the sensors used in the scientific and intelligence
communities. Moreover, they are wholly uncalibrated: for the most part, their
actual sensitivity, actual resolution, actual precision remains unstudied. Even
the target-tracking and stabilization behavior of gimbal-mounted cameras is not
always well understood by military intelligence. The AARO is in the process of
characterizing some of these sensor systems so that more quantitative analysis
can be performed of any future UAP recordings.
Kirkpatrick says, as will many members of the committee later, that it is
unlikely that anyone will produce conclusive answers about UAP without data
collected by scientific instruments. The rarity of UAPs and limited emissions
mean that this will likely require "large scale, ground-based scientific
instruments" that collect over extended periods of time. Speaking directly to
the committee, he hopes that NASA will serve a role. The intelligence community
is limited, by law, in what data they can collect over US soil. They cannot use
intelligence remote sensing assets to perform long-term, wide-area observations
of the California coast. For IC sensors to produce any data on UAP, they will
probably need real-time tipping and cueing [1] from civilian systems.
Additionally, it is important to collect baseline data. Many UAP incidents
involve radar or optical observation of something that seems unusual, but there
isn't really long-term baseline data to say how unusual it actually is. Some
UAPs may actually be very routine events that are just rarely noticed, as has
happened historically with atmospheric phenomena. He suggests, for example,
that a ground-based sensor system observing the sky might operate 24x7 for
three months at a time in order to establish which events are normal, and
which are anomalous.
There is movement in the intelligence community: they receive 50 to 100 new
reports a month, he says, and have begun collaboration with the Five Eyes
community. The stigma in the military seems reduced, he says, but notes
that unfortunately AARO staff have also been subject to harassment and
threats online.
By way of closing remarks, he says that "NASA should lead the scientific
discourse." The intelligence community is not scientific in its goal, and
cannot fill a scientific function well because of the requirements of
operational secrecy. While AARO intends to collaborate with scientific
investigation, for there to be any truly scientific investigation at all
it must occur in a civilian organization.
The next speaker, Mike Freie, comes from the FAA to tell us a bit about what is
normal in the skies: aircraft. There are about 45,000 flights each day around
the world, he says, and at peak hours there are 5,400 aircraft in the sky at
the same time. He shows a map of the coverage of the FAA's radar network: for
primary (he uses the term non-cooperative) radar systems, coverage of the
United States at 10,000 feet AGL is nearly complete. At 1,000 feet AGL, the map
resembles T-Mobile coverage before they were popular. While ADS-B coverage of
the country is almost complete as low as 1,500 AGL, there are substantial areas
in which an object without a transponder can fly undetected as high as 5,000
AGL. These coverage maps are based on a 1-meter spherical target [2], he notes,
and while most aircraft are bigger than this most sUAS are far smaller.
Answering questions from the committee, he explains that the FAA does have a
standard mechanism to collect UAP reports from air traffic controllers and
receives 4-5 each month. While the FAA does operate a large radar network, he
explains that only data displayed to controllers is archived, and that
controllers have typically configured their radar displays to hide small
objects and objects moving at low speeds. In short, the radar network is
built and operated for tracking aircraft, not for detecting UAPs. If it is to
be used for UAP detection it will need modifications, and the FAA doesn't have
the money to pay for them.
Rounding out the meeting, we begin to hear from some of the panel members who
want to address specific topics. Nadia Drake, a journalist, offers a line that
is eminently quotable: "It is not our job to define nature, but to study it in
ways that lets nature reveal itself to us." She is explaining that "UAP" has
not been precisely defined, and probably can't be. Still, many members clearly
bristle at the change from "Aerial" to "Anomalous." The new meaning of UAP is
so broad that it is difficult to define the scope of UAP research, and that was
already a difficult problem when it was only concerned with aerial effects.
Federica Bianco, of the University of Delaware among other institutions, speaks
briefly on the role of data science in UAP research. The problem, she repeats,
is the lack of data and the lack of structure in the data that is available.
Understanding UAPs will require data collected by well-understood sensors under
documented conditions, and lots of it. That data needs to be well-organized and
easily retrievable. Eyewitness reports, she notes, are useful but cannot
provide the kind of numerical observations required for large-scale analysis.
What UAP research needs is persistent, multi-sensor systems.
She does have good news: some newer astronomical observatories, designed for
researching near-earth objects, are capable of detecting and observing moving
targets. There is also some potential in crowdsourcing, if technology can be
used to enable people to take observations with consistent collection of
metadata. I imagine a sort of TikTok for UFOs, that captures not only a short
video but characteristics of the phone camera and device sensor data.
Later speakers take more of a whirlwind pace as the meeting starts to fall
behind schedule. David Grinspoon of the Planetary Science Institute speaks briefly of
exobiology, biosignatures, and technosignatures. Exobiology has suggested
various observables to indicate the presence of life, he explains. Likely of
more interest to UAP research, though, is the field of technosignatures:
remotely observable signatures that suggest the presence of technology. The
solar system has never really been searched for technosignatures, he explains,
and largely because technosignatures have been marginalized with the rest of
UAP research. If researchers can develop possible technosignatures, it may be
possible to equip future NASA missions to detect them as a secondary function.
While unlikely to conclusively rule extraterrestrial origin of UAPs out, there
is a chance it might rule them in, and that seems worth pursuing.
Drawing an analogy to the FAA's primary and secondary radar systems, he
explains that "traditional SETI" research has focused only on finding
extraterrestrial intelligence that is trying to be found. They have been
listening for radio transmissions, but no meaningful SETI program has ever
observed for the mere presence of technology.
Karlin Toner of the FAA talks about reporting. Relatively few reports come in,
likely due to stigma, but there is also an issue of reporting paths not being
well-defined. She suggests that NASA study cultural and social barriers to
UAP reporting and develop ways to reduce them.
Joshua Semeter of Boston University talks a bit about photo and video evidence.
It comes almost exclusively from Navy aviators, he says, and specifically from
radar and infrared targeting sensors. He uses the "gofast" video as an example
to explain the strengths and limitations of this data. The "gofast" video, a
well known example of a UAP caught on video, looks like a very rapidly moving
object. By using the numerical data superimposed on the image, though, it is
possible to calculate the approximate position of the object relative to the
aircraft and the ground. Doing so reveals that the object in the "gofast" video
is only moving about 40mph---typical of the wind at altitude over the ocean.
It is most likely just something blowing in the wind, even though the parallax
motion against the ocean far below makes it appear to move with extreme speed.
The AARO's work to characterize these types of sensors should provide a much
better ability to perform this kind of analysis in the future.
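The trigonometry involved is not complicated. Here is a toy version in
Python, with made-up numbers rather than the values actually displayed in the
"gofast" video:

    # Toy "gofast" geometry: place the object relative to the aircraft
    # from the slant range and camera angles overlaid on the video, then
    # difference two fixes to get its velocity in the aircraft's frame.
    # All numbers are illustrative.
    import math

    def object_offset(slant_range_m, elevation_deg, azimuth_deg):
        el = math.radians(elevation_deg)  # negative: camera looks down
        az = math.radians(azimuth_deg)
        horiz = slant_range_m * math.cos(el)
        return (horiz * math.cos(az),
                horiz * math.sin(az),
                slant_range_m * math.sin(el))

    # Two fixes taken one second apart:
    p1 = object_offset(7200, -26, 4)
    p2 = object_offset(7150, -26, 5)

    # Speed relative to the aircraft; adding the aircraft's own velocity
    # vector back in yields the object's much slower speed over the ocean.
    rel_speed = math.dist(p1, p2)  # meters per second
    print(f"~{rel_speed * 2.237:.0f} mph relative to the aircraft")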
There's still a good hour to the meeting, with summarization of plans for the
final report and public questions, but all of the new material has now been
said. During questions, panel members once again emphasize NASA's intent to be
open and transparent in its work, the lack of data to analyze, and the need for
standardized, consistent, calibrated data collection.
There you have it: the current state of United States UAP research in a four
hour formal public meeting, the kind of thing that only NASA can bring us. The
meeting today might be a small step, but it really is a step towards a new era
of UAP research. NASA has made no commitments, and can't without congressional
authorization, but multiple panel members called for a permanent program of UAP
research within NASA and for the funding of sensor systems tailor-made to
detect and characterize UAPs.
We will have to wait for the committee's final report to know their exact
recommendations, and then there is (as ever) the question of funding. Still,
it's clear that Congress has taken an interest, and we can make a strong guess
from this meeting that the recommendations will include long-term observing
infrastructure. I think it's quite possible that within the next few years we
will see the beginning of a "UAP observatory." How do you think I get a job
there? Please refer to the entire months of undergraduate research work in
astronomical instrumentation which you will find on my resume, and yes I retain
a surprising amount of knowledge of reading and writing both FITS and HDF5. No,
I will not work in the "lightweight Java scripting" environment BeanShell, ever
again. This is on the advice of my therapist.
[1] Tip and cue is a common term of art in intelligence remote sensing. It
refers to the use of real-time communications to coordinate multiple sensor
systems. For example, if a ground-based radar system detects a possible missile
launch, it can generate a "tip" that will "cue" a satellite-based optical
sensor to observe the launch area. This is a very powerful idea that allows
multiple remote sensing systems to far exceed the sum of their abilities.
[2] This approximation of a sphere of given diameter is a common way to discuss
radar cross section. While fighter jets are all a lot bigger than one meter,
many are, for radar purposes, about equal to or quite a bit smaller than a
1-meter sphere due to the use of "stealth" or low-radar-profile techniques. The latest
F-22 variants have a radar cross section as small as 1cm^2, highlighting what
is possible when an aircraft is designed to be difficult to detect. Such an
aircraft may not be detected at all by the FAA's radar network, even at high
altitudes and speeds.
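For a sense of scale: in the optical region (a target much larger than the
radar's wavelength), a conducting sphere's radar cross section is simply its
geometric cross section, pi times the radius squared.

    # Optical-region RCS of a 1-meter-diameter conducting sphere.
    import math
    print(math.pi * 0.5 ** 2)  # ~0.785 m^2, versus 0.0001 m^2 for 1 cm^2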