Like many people in my generation, my memories of youth are heavily defined by
cable television. I was fortunate enough to have a premium cable package in
my childhood home, Comcast's early digital service based on Motorola equipment.
It included a perk that fascinated me but never made that much sense: Music
Choice. Music Choice was a block of around 20 channels, somewhere up in the
high numbers, each pairing a still image with music. It was really ad-free,
premium radio, but in the era
before widespread adoption of SiriusXM that wasn't an easy product to explain.
And SiriusXM, of course, has found its success selling services to moving
customers. Music Choice was stuck in your home. The vast majority of Music
Choice customers must have had it only as part of a cable package, and
probably barely even noticed it.
This kind of thing seems to happen a lot with consumer products: a
little-noticed value-add that, once you start pulling at it, opens a rabbit
hole into the history of consumer music technology. Music Choice is an odd
and, it seems, little-loved aspect of
premium cable packages, but with a history stretching back to 1987, it also
claims to be the first consumer digital music streaming technology... and I
think they're even right about that claim.
The '80s was an exciting time in consumer audio. The Compact Disc was becoming
the dominant form of music distribution, and CDs offered a huge improvement in
sound quality. Unlike all of the successful consumer audio formats before it,
CDs were digital. This meant no signal noise in the playback process and an
outstanding frequency response.
Now, some have expressed surprise at the fact that CDs were a digital audio
format and yet weren't recognized as a practical way to store computer data for
years after. There are a few reasons for this, but one detail worth remembering
is that audio playback is a fairly fault-tolerant application. Despite error
correction, CD players will sometimes fail to decode a specific audio sample.
They just skip it and move along, and the problem isn't all that noticeable to
listeners. Of course, this kind of failure is much more severe with computer
data, and so more robust error correction was needed.
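The concealment trick is simple enough to sketch. When a sample can't be
decoded, a player can interpolate from its neighbors rather than emit garbage
or halt. This toy illustration is my own sketch of the general idea, not the
actual CIRC decoder logic (real players operate on the error-corrected
bitstream and may mute rather than interpolate):

```python
def conceal_sample(samples, bad_index):
    """Replace one undecodable sample with the average of its neighbors.

    A toy sketch of error concealment: the listener hears a smoothly
    interpolated value instead of a click or dropout. Real CD players
    apply similar interpolation (or brief muting) after CIRC error
    correction fails on a sample.
    """
    out = list(samples)
    # Interpolate linearly between the adjacent good samples.
    out[bad_index] = (samples[bad_index - 1] + samples[bad_index + 1]) // 2
    return out

# A burst of noise at index 2 is replaced by a plausible in-between value.
print(conceal_sample([0, 10, 100, 30], 2))  # [0, 10, 20, 30]
```

Computer data affords no such luxury: there is no "plausible in-between value"
for a byte of a spreadsheet, which is why CD-ROM layered additional error
correction on top of the audio format.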
That's a bit beside the point except that it illustrates a very convenient
property of music as an application for digital storage and transmission: it's
inherently fault tolerant, and digital decoding errors in audio can come off
much the same way that noise and other playback faults did with analog formats.
Music is a fairly comfortable way to test the waters of digital distribution,
and the CD was a hugely successful experiment. Digital audio became an everyday
experience for many consumers, and suddenly analog distribution formats like
radio were noticeably inferior.
It was quite natural that various parts of the consumer electronics industry
started to investigate digital radio. Digital radio has a troublesome history
in the United States and has only really seen daylight in the form of the
in-band on-channel HD Radio protocol, which I have discussed previously.
HD Radio launched in 2002, so it was a latecomer to the radio scene (probably
a big part of its lackluster adoption). Satellite radio, also digital, didn't
launch until 2001. So there was a wide gap, basically all of the '90s, where
consumers were used to digital audio from CDs but had no way of receiving
digital broadcasts.
This was just the opportunity for Jerrold Communications.
Jerrold Communications is not likely to be a name you've heard before, despite
the company's huge role in the cable TV industry. Jerrold was a very early cable
television operator and developed a lot of their own equipment. Eventually,
equipment (head end transmitters and set-top boxes) became Jerrold's main
business, and most of the modern technological landscape of cable TV has
heritage in Jerrold designs. The reason you've never heard of them is because
of acquisitions: in 1967, Jerrold became part of General Instrument. In 1997,
General Instrument fractured into several companies, and the cable equipment
business was purchased by Motorola in 2000. In 2012, the Motorola business unit
that produced cable equipment became part of ARRIS. In 2019, ARRIS was acquired
by CommScope, ironically one of the other fragments that spun off of General
Instrument in '97.
What matters to us is that, for whatever reason, General Instrument continued
to use the Jerrold brand name on some of their cable TV products into the '90s
[1].
In 1987 Jerrold announced their new "Digital Cable Radio," which apparently had
pilot installations in DeLand, FL; Sacramento, CA; and Willow Grove, PA. They
expected expanded service in 1989.
In fact, Jerrold was not alone in this venture. At the same time, International
Cablecasting Technologies announced its similar service "CD-8" ("it's like
having eight CD players" seems to have been the explanation for the name,
which was later changed to CD-18 to reflect additional channels before the
scheme was dropped entirely). CD-8 launched in Las Vegas, and we will discuss it more
later, as it survived into the 21st century under a different name. Finally,
a company called Digital Radio launched "The Digital Radio Channel" in Los
Angeles.
All three of these operations were discussed together in a number of syndicated
newspaper pieces that ran in 1987 to present the future of radio. They reflect,
it seems, just about the entire digital radio industry of the '80s.
Digital Radio, the company, is a bit of a mystery. Perhaps mostly due to their
extremely generic name, it's hard to find much information about the company or
its fate. Los Angeles had a relatively strong tradition of conventional cable
radio (meaning analog radio delivered over cable TV lines), so it may have
helped The Digital Radio Channel gain adoption even without the multi-channel
variety of the competition. My best guess is that Digital Radio of California
did not survive long and failed to expand out of the LA market. I have so far
failed to find any advertisements or press mentions after 1987, and the press
coverage in '87 was extremely slim.
This left us with two late-'80s competitors for the new digital cable radio
market: Jerrold's "Digital Cable Radio" and ICT's "CD-8." Both of these
services worked on a very similar basis. A dedicated set-top box (STB) would be
connected to a consumer's cable line, either with a passive splitter or
daisy-chained with the television STB. The radio STB functioned like a tuner
for a component stereo system, allowing the listener to select a channel which
was then sent to their stereo amplifier (or hi-fi receiver, etc) as analog
audio. CD-8 went an impressive step beyond Digital Cable Radio, offering a
remote with a small LCD matrix display that showed the artist and track title
(this was apparently an added-cost upgrade).
I have seen mention that the STBs for these services cost around $100. That's
$270 in today's so-called money, not necessarily unreasonable for a hi-fi
component but still no doubt a barrier to adoption. On top of that, neither
service seems to have been bundled with cable plans. Instead, they were
separate subscriptions. Monthly subscriptions seem to have been in the range of
$6-8, reasonably comparable to SiriusXM subscriptions today. But once again we
have to ponder the customer persona.
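That "$270 in today's money" figure is just a CPI ratio. A quick sketch of the
arithmetic, where the CPI-U index values are approximate figures I've filled
in myself (roughly 1987 and 2023 annual averages), not numbers from the
original reporting:

```python
def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Scale a historical dollar amount by the ratio of CPI index values."""
    return amount * cpi_now / cpi_then

# Approximate annual CPI-U averages (my assumption, not from the article):
CPI_1987 = 113.6
CPI_2023 = 304.7

# A $100 set-top box in the late '80s, in early-2020s dollars:
print(round(adjust_for_inflation(100, CPI_1987, CPI_2023)))  # ~268
```

So the ~$270 figure checks out for a late-'80s purchase; a steep but not
absurd price for a hi-fi component.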
SiriusXM is a relatively obscure service but still turns a reasonable profit on
the back of new cars with bundled plans, long-haul truckers, and business jet
pilots (SiriusXM has a live weather data service that is popular with the
business aviation crowd, besides the ability to offer SiriusXM music to
passengers). In other words, satellite radio is attractive to people who are in
motion, especially since the same channels are available across different radio
markets and even in the middle of nowhere (except underpasses). I'm not sure
I'll renew my SiriusXM service once I get onto normal post-promotion rates, but
still, there is undeniably something magical about SiriusXM working fine in a
canyon in the Mojave desert when I have no phone service and Spotify has
mysteriously lost all of my downloaded tracks again.
I'm unconvinced that digital audio quality is really that much of a selling
point to most SiriusXM customers. Instead, the benefit is coverage: even "in
town" here in Albuquerque, SiriusXM offers more consistent coverage than many
of the commercial radio stations that have seen some serious cost-cutting in
their transmitter operations. But digital radio over cable television doesn't
move... it's only available in the home. I don't think a lot of people ever
signed up for it as a dedicated subscription.
Still, the industry marched on. By 1990, The Digital Radio Channel seems to
have disappeared. But there is some good news: Jerrold's Digital Cable Radio
is still a contender and now offers 17 channels. CD-8 has been rebranded as
CD-18 and then rebranded again as Digital Music Express, or DMX. And there is
a new contender, Digital Planet. It is actually possible that Digital Planet
was the same company as Digital Radio, although given the lack of any mention
of such a connection, I don't find it especially likely. Digital Planet also
operated exclusively in Los Angeles, but had an impressive 26 channels.
Let's dwell a little more on DMX, because there is something interesting here
that represents a broader fact about this digital cable radio industry. CD-8,
later CD-18 (or CD/18 depending on where you look), was launched by
International Cablecasting Technologies or ICT. Based on newspaper coverage in
the 1990s, it quickly became apparent that DMX's best customers were
businesses, not consumers. In 1993, DMX cost consumers $4.95 a month (plus $5 a
month in equipment rental if the customer did not buy the set-top box outright
for $100). Businesses, though, paid $50-75 a month for a DMX appliance that
would provide background music from specially programmed channels. DMX was a
direct competitor to Muzak, and by the late '90s one of the biggest companies
in the background music market.
Background music makes a whole lot more sense for this technology. There's a
long history of "alternative" broadcast audio formats, like leased telephone
lines and FM radio subcarriers, being used to deliver background music to
businesses. Muzak had a huge reputation in this industry, dating back to
dedicated distribution wiring in the 1930s, but by the 1980s was increasingly
perceived as stuffy and old-fashioned. Much of this related to Muzak's
programming choices: Muzak was still made up mostly of easy-listening covers of
popular tracks, hastily recorded by various contracted bands. DMX, though,
offered something fresh and new: the popular tracks, in their original form.
Even better, DMX focused from the start on offering multiple channels, so that
businesses could choose a genre that would appeal to their clients. There was
smooth jazz for dentists, and rock and roll for hip retailers. The end of
"elevator music" as a genre was directly brought about by DMX and its
contemporary background music competitor, AEI.
Several late-'90s newspaper pieces describe the overall competitive landscape
of background music as consisting of Muzak, DMX, and Audio Environments Inc
(AEI). Unsurprisingly, given the overall trajectory of American business, these
three erstwhile competitors would all unify into one wonderful monopoly. The
path there was indirect, though. Various cable carriers took stakes in DMX, and
by the late '90s it was being described as a subsidiary of Turner Cable and
AT&T. Somehow (the details are stubbornly unclear), DMX and AEI would join
forces in the late '90s. By 2000 they were no longer discussed as competitors.
I have really tried to figure out what exactly happened, but an afternoon with
newspaper archives has not revealed the truth to me. Here is my speculation:
AEI appears to have used satellite distribution for their background music from
the start, while DMX, born of the cable industry, relied on cable television.
In the late '90s, though, advertorials for DMX start to say that it is available
via cable or satellite. I believe that at some point in '98 or '99, DMX and
AEI merged. They unified their programming, but continued to operate both the
cable and satellite background music services under the DMX brand.
For about the next decade, the combined DMX/AEI Music would compete with Muzak.
In 2011-2012, Canadian background music (now usually called "multisensory
marketing") firm Mood Media bought both Muzak and DMX/AEI, combining them all
into the Mood Media brand. This behemoth would enjoy nearly complete control of
the background music industry, were it not for the cycle of technology bringing
in IP-based competitors like Pandora for Business. Haha, no, I am kidding,
Pandora for Business is also a Mood Media product. This is the result of
essentially a licensing agreement on the brand name; Pandora itself is a
SiriusXM subsidiary. Pandora for Business is a wholly different product sold by
Mood Media "in partnership with" Pandora, and seems to be little more than a
rebranding of the DMX service to match its transition to IP. Actually SiriusXM
and DMX used to have shared ownership as well (DMX/AEI, by merger with Liberty
Media, had half ownership of SiriusXM, as well as Live Nation concert
promotion, Formula One racing, etc.), although they don't seem to currently. The
American media industry is like this, it's all just one big company with an
aggressive market-segment brand strategy.
So what about those set-top boxes, though? Digital Cable Radio and DMX both
relied on special hardware, while the service of my youth did not. Well, the
problem seems to have been not so much the special hardware as the whole
concept of a separate subscription for digital cable radio. By the end of the
'90s, Jerrold and DMX were both transitioning to the more traditional structure
of the cable TV industry. They sold their product not to consumers but to cable
carriers, who then bundled it into cable subscriptions. This meant that shipping
users dedicated hardware was decidedly impractical, but the ATSC digital cable
standard offered a promising new approach.
This might be surprising in terms of timeline. ATSC wasn't all that common
over-the-air until the hard cutover event in 2009. This slow implementation was
a result of the TV tuners built into consumers' OTA televisions, though. Cable
companies, since the genesis of cable TV, had been in the habit of distributing
their own set-top boxes (STBs) even though many TVs had NTSC (and later ATSC)
tuners built-in. Carrier-provided STBs were a functional necessity due to
"scrambling" or encryption of cable channels, done first to prevent "cable
theft" (consumers reconnecting their cable drop to the distribution amplifier
even though they weren't paying a bill) and later to enable multiple cable rate
tiers.
The pattern of renting STBs meant that cable carriers had a much greater degree
of control over the equipment their customers would use to receive cable, and
that allowed the cable industry to "go digital" much earlier. The first ATSC
standard received regulatory approval in 1996 and spread relatively quickly
into the cable market after that. By the end of the '90s, major carriers like
Comcast had begun switching their customers over to digital ATSC STBs, mostly
manufactured by Motorola Mobility Home Solutions---the direct descendant of
Jerrold Communications.
Digital cable meant that everything was digital, including the audio. Suddenly
a "digital cable radio" station could just be a normal digital cable station.
And that's what they did: Jerrold and DMX both dropped their direct-to-consumer
services and instead signed deals to distribute their channels to entire cable
companies. Along with this came rebranding: Jerrold's Digital Cable Radio
adopted the name "Music Choice," while DMX kept the DMX name for some carriers
and adopted the brand "Sonic Tap" for at least DirecTV and possibly others.
As an aside, Sonic Tap's Twitter account
is one of those internet history gems that really makes me smile. Three tweets
ever, all in one day in 2013. Follows DirecTV and no one else. 33 followers, a
few of which even appear to be real. These are the artifacts of our
contemporary industrialists: profoundly sad Twitter profiles.
Music Choice had always enjoyed a close relationship with the cable industry.
It was born at General Instrument, the company that manufactured much of the
equipment in a typical cable network, and that ownership transitioned to
Motorola. As Music Choice expanded in the late '90s and '00s, it began to give
equity out to cable carriers and other partners in exchange for expanded
distribution. Today, Music Choice is owned by Comcast, Charter, Cox, EMI,
Microsoft, ARRIS (from Motorola), and Sony. Far from its '80s independent
identity, it's a consortium of the cable industry, maintained to provide a
service to the carriers that own it. Music Choice is carried today by Comcast
(Xfinity), Spectrum, Cox, Verizon, and DirecTV, among others. It is the
dominant cable music service, but not the only one!
A few cable companies have apparently opted to side with Stingray instead.
Stingray has so far not featured in this history at all. It's a Canadian
company, and originated as the Canadian Broadcasting Corporation's attempt at
digital cable radio, called Galaxie. I will spare you a full corporate history of
Stingray, in part because the details are sort of fuzzy, but it seems to be a
parallel story to what happened in the US. Galaxie eventually merged with
competing service Max Trax, and then the CBC seems to have divested Stingray
(which had operated Galaxie as a subsidiary of the CBC). In the late 2010s,
Stingray started an expansion into the US. Amusingly, Comcast apparently
delivered Stingray instead of Music Choice for several years (despite being
part owner of Music Choice!). Stingray does seem to still exist on a handful
of smaller US cable carriers, although the company seems invested in a switch
to internet streaming.
Cable is dying. Not just because of the increasing number of "cord cutters"
abandoning their $80 cable bill in favor of $90 worth of streaming subscription
services, but because the cable industry itself is slowly abandoning ATSC. In
the not-too-distant future, conventional cable broadcasting will disappear,
replaced by "over the top" (OTT) IPTV services like Xfinity Flex. This
transition will allow the cable carriers full freedom in bandwidth planning,
enabling DOCSIS cable internet to achieve the symmetric multi-Gbps speeds the
protocol is capable of [2].
Consumers today get virtually all of their music over IP. The biggest
competitor to Music Choice is Spotify, and the two are not especially
comparable businesses. The "linear broadcast" format seems mostly dead, and
while Music Choice does offer on-demand services, it will probably never get
ahead of the companies that started out with an on-demand model. That's sort of
funny, in a way. The cable industry, especially through advanced ATSC
features, introduced the on-demand content library concept, but it is now far
behind the companies that launched with the same idea a decade later... with
the benefit of the internet and agility.
It's sad, in a way. I love coaxial cable networks; they're a fascinating way to
distribute data. I am a tireless defender of DOCSIS, constantly explaining to
people that we don't need to eliminate cable internet---there's no reason to,
DOCSIS offers better real-world performance than common PON (optical)
internet distribution systems. What we need to get rid of is the cable
industry. While giants like Comcast do show some signs of catching up to the
21st century, they remain legacy companies with a deeply embedded rent-seeking
attitude. Major improvements to cable networks across the country are underway,
but they started many years too late and proceed too slowly now, a result of
severe under-investment in outside plant.
I support community internet, I'm just saying that maybe, just maybe, municipal
governments would achieve far more by ending cable franchises and purchasing
the existing cable plant than by installing new fiber. "Fiber" internet isn't
really about "fiber" at all. "Fiber" is used as a political euphemism for "not
a legacy utility" (somewhat ironic since one of the largest fiber internet
providers, Verizon FiOS, is now very much a legacy utility). In fact, good old
cable TV is a remarkably capable medium. It brought us the first digital music
broadcasting. It brought us the first on-demand media streaming. Cable is now
poised to deliver 5Gbps+ internet service over mostly existing infrastructure.
The problem with cable internet is not technical; it's political. Send me your
best picket signs for the cable revolution.
[1] The history here is a little confusing. It seems like GI mostly retired the
Jerrold name as GI-branded set-top boxes are far more common than Jerrold ones.
But for whatever reason, when GI launched their cable digital radio product in
1987, it was the Jerrold name that they put on the press releases.
[2] Existing speed limitations on DOCSIS internet service, such as the 35Mbps
upload limit on Xfinity internet service in most markets, are a result of
spectrum planning problems in the cable network rather than limitations in
DOCSIS. DOCSIS 3.1, the version currently in common use, is easily capable of
symmetric 1Gbps. DOCSIS 4.0, currently being introduced, is easily capable of
symmetric 5Gbps. The problem is that upstream capacity in particular is
currently limited by the amount of "free space" available outside of delivering
television channels, a problem that is made particularly acute by legacy STBs
(mostly Motorola branded, of Jerrold heritage) that have fixed channel
requirements for service data like the program guide. These conflict with
DOCSIS 3.0+ upstream channels, such that DOCSIS cannot achieve Gbps upstream
speed until these legacy Motorola STBs are replaced. Comcast has decided to
skip the ATSC STB upgrade entirely by switching customers over to the all-IP
Flex platform. I believe they will need to apply for regulatory approval to end
their ATSC service and go all-IP, so this is probably still at least a few
years out.
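The headline DOCSIS figures in [2] fall out of simple back-of-envelope math:
OFDM throughput is roughly occupied bandwidth times bits per symbol, derated
for FEC and protocol overhead. A rough sketch, where the constants (4096-QAM's
12 bits per symbol, an assumed ~85% efficiency factor, and the channel widths)
are illustrative assumptions; real capacity depends on the modulation profile
the plant conditions can sustain:

```python
def estimate_gbps(bandwidth_mhz, bits_per_symbol=12, efficiency=0.85):
    """Back-of-envelope OFDM channel capacity estimate.

    Assumes roughly one symbol per second per Hz of occupied bandwidth,
    4096-QAM (12 bits/symbol) by default, and a flat derating factor for
    FEC and framing overhead. Illustrative only.
    """
    return bandwidth_mhz * 1e6 * bits_per_symbol * efficiency / 1e9

# A 192 MHz DOCSIS 3.1 downstream OFDM channel: roughly 2 Gbps.
print(round(estimate_gbps(192), 2))

# A 96 MHz upstream OFDMA channel: roughly 1 Gbps, which is why upstream
# spectrum crowded out by legacy STB service data is the binding constraint.
print(round(estimate_gbps(96), 2))
```

The point of the sketch is that the protocol's ceiling is set almost entirely
by how much spectrum the operator can free up, which is exactly the planning
problem footnote [2] describes.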
First, a disclaimer of sorts: I am posting another article on UAPs, yet I am
not addressing the recent claims by David Grusch. This is for a couple of
reasons. First, I am skeptical of Grusch. He is not the first seemingly
well-positioned former intelligence official to make such claims, and I think
there's a real possibility that we are looking at the next Bob Lazar. Even
without impugning his character by comparison to Lazar, Grusch claims only
secondhand knowledge and some details make me think that there is a real
possibility that he is mistaken or excessively extrapolating. As we have seen
previously with the case of Luis Elizondo, job titles and responsibilities in
the intelligence community are often both secretive and bureaucratically
complex. It is very difficult to evaluate how credible a former member of the
IC is, and the media complicates this by overemphasizing weak signals.
Second, I am hesitant to state even my skepticism as Grusch's claims are very
much breaking news. It will take at least a month or two, I think, for there to
be enough information to really evaluate them. The state of media reporting on
UAP is extremely poor, and I already see Grusch's story "growing legs" and
getting more extreme in the retelling. The state of internet discourse on UAP
is also extremely poor, the conversation almost always being dominated by the
most extreme of both positions. It will be difficult to really form an opinion
on Grusch until I have been able to do a lot more reading and, more
importantly, an opportunity has been given for both the media and the
government to present additional information.
It is frustrating to say that we need to be patient, but our first impressions
of individuals like Grusch are often dominated by our biases. The history of
UFOlogy provides many cautionary tales: argumentation based on first
impressions has both led to clear hoaxes gaining enormous hold in the UFO
community (profoundly injuring the credibility of UFO research) and to UAP
encounters being ridiculed, creating the stigma that we are now struggling to
reverse. In politics, as in science, as in life, it takes time to understand a
situation. We have to keep an open mind as we work through that process.
Previously on Something Up There
I have previously written two parts in which
I present an opinionated history of our current era of UAP research. To present
it in tight summary form: a confluence of factors around the legacy of WWII,
the end of the Cold War, and American culture created a situation in which UFOs
were ridiculed. Neither the civilian government nor the military performed any
meaningful research on the topic, and throughout the military especially a
culture of suppression dominated. Sightings of apparent UAPs were almost
universally unreported, and those reports that existed were ignored.
This situation became untenable in the changing military context of the 21st
century. A resurgence of military R&D in Russia and the increasing capabilities
of the Chinese defense establishment have made it increasingly likely that
rival nations secretly possess advanced technology, much like the US fielded
several advanced military technologies, in secret, during the mid-20th century.
At the same time, the lack of any serious consideration of unusual aerial
phenomena meant that the US had near zero capability to detect these systems,
outside of traditional espionage methods which must be assumed to be limited
(remember that despite the Soviet Union's considerable intelligence
apparatus, the US managed to field significant advancements without their
knowledge).
As a result of this alarming situation, the DoD began to rethink its
traditional view of UFOs. Unfortunately, early DoD funding for UAP research was
essentially hijacked by Robert Bigelow, an eccentric millionaire and friend of
the powerful Senator Reid with a hobby interest in the paranormal (not just
UFOs but ghosts, etc). Bigelow has a history of similar larks, and his UAP
research program (called AATIP) ended the same way his previous paranormal
ventures have: with a lot of press coverage but no actual results. A
combination of typical DoD secrecy and, I suspect, embarrassment over the
misspent funds resulted in very little information on this program reaching the
public until Bigelow and surprise partner Tom DeLonge launched a publicity
campaign in an effort to raise money.
AATIP was replaced by the All-Domain Anomaly Resolution Office (AARO), a more
standard military intelligence program, which has only seriously started its
work in the last two years. The AARO has collected and analyzed over 800
reports of UAPs, unsurprisingly finding that the majority are uninteresting
(i.e. most likely a result of some well-known phenomenon), but finding that a
few have properties which cannot be explained by known aviation technology.
The NASA UAP advisory committee
The activities of the AARO have not been sufficient to satisfy political
pressure for the government to Do Something about UAPs. This was already true
after the wave of press generated by DeLonge's bizarre media ventures, but
became even more so as the Chinese spy balloon made the limitations of US
airspace sovereignty extremely apparent.
Moreover, many government personnel studying the UAP question agree that one of
the biggest problems facing UAP research right now is stigma. The military has
a decades-old tradition of suppressing any reports that might be classified as
"kookie," and the scientific community has not always been much more
open-minded. This is especially true in the defense industry, where Bigelow's
lark did a great deal of reputational damage to DoD UAP efforts. In short,
despite AARO's efforts, many were not taking AARO seriously.
Part of the problem with AARO is its slow start and minimal public work product
to date. Admittedly, most of this is a result of some funding issues and then
the secretive nature of work carried out within military intelligence
organizations. But that underscores the point: AARO is an intelligence
organization that works primarily with classified sources and thus produces
classified products. UAPs, though, have become a quite public issue. Over the
last two years it has become increasingly important to not only study UAPs but
to do so in a way that provides a higher degree of public assurance and public
information. That requires an investigation carried out by a non-intelligence
organization. The stigmatized nature of UAP research also demands that any
serious civilian investigation be carried out by an organization with credibility
in aerospace science.
The aerospace industry has faced a somewhat similar problem before: pilots not
reporting safety incidents for fear of negative impacts on their careers. It's
thought that a culture of suppressing safety incidents in aviation led to
delayed discovery of several aircraft design and manufacturing faults. The best
solution that was found to this problem of under-reporting was the
introduction of a neutral third party. The third party would need to have the
credibility to be considered a subject-matter expert in aerospace, but also
needed to be removed from the regulatory and certification process to reduce
reporters' fears of adverse action being taken in response to their reports.
The best fit was
NASA: a federal agency with an aerospace science mission and without direct
authority over civil aviation.
The result is the Aviation Safety Reporting System, which accepts reports of
aviation safety incidents while providing confidentiality and even a degree of
immunity to reporters. Beyond the policy protections around ASRS, it is widely
believed that NASA's brand reputation has been a key ingredient in its
success. NASA is fairly well regarded in both scientific and industry circles
as a research agency, and besides, NASA is cool. NASA operates ASRS to this
day.
I explain this little bit of history because I suspect it factored into the
decision to place a civilian, public UAP investigation in NASA. With funding
from a recent NDAA, NASA announced around this time last year that it would
commission a federal advisory committee to make recommendations on UAP research.
As a committee formed under the Federal Advisory Committee Act, the "UAP Study
Team" would work in public, with unclassified information, and produce a public
report as its final product.
It is important to understand that the scope of the UAP study team is limited.
Rather than addressing the entire UAP question, the study team was tasked with
a first step: examining the data sources and analytical processes available to
investigate UAPs, and making recommendations on how to advance UAP research.
Yes, an advisory committee to make recommendations on future research is an
intensely bureaucratic approach to such a large question, but this is NASA
we're talking about. This is how they work.
In October of last year, NASA announced the composition of the panel. Its
sixteen members consist of aerospace experts drawn primarily from universities,
although there are some members from think tanks and contractors. Most members
of the committee have a history of either work with NASA or work in aerospace
and astrophysical research. The members are drawn from fairly diverse fields,
ranging from astronaut Scott Kelly to oceanographer Paula Bontempi. Some
members of the committee are drawn from other federal agencies, for example
Karlin Toner, an FAA executive.
On May 31st, the UAP Study Team held its first public meeting. Prior to this
point the members of the committee had an opportunity to gather and study
information about data sources, government programs, and UAP reports. This
meeting, although it is the first public event, is actually relatively close
to the end of the committee's work: they are expecting to produce their final
report, which will be public, in July. This has the advantage that the meeting
is far enough along that the members have had the opportunity to collect a lot
of information and form initial positions, so there was plenty to discuss.
The meeting was four hours long if you include the lunch break (the NASA
livestream did!), so you might not want to watch all of it. Fear not, for I
did. And here are my thoughts.
The Public Meeting on Unidentified Anomalous Phenomena
The meeting began on an unfortunate note. First one NASA administrator, and
then another, gave a solemn speech: NASA stands by the members of the panel
despite the harassment and threats they have received.
UAPs have become sort of a cultural third rail. You can find almost any online
discussion related to UAPs and observe extreme opinions held by extreme people,
stated so loudly and frequently that they drown out anything else. If I could
make one strong statement to the collective world of people interested in UAP,
it would be this:
Calm the fuck down.
The extent to which any UAP discourse inevitably devolves into allegations of
treachery is completely unacceptable. Whether you believe that the government
is in long-term contact with aliens that it is covering up, or you believe that
the entire UAP phenomenon of the 21st century is fabrication for political
reasons, accusing anyone who dares speak aloud of UAPs of being a CIA plant or
an FSB plant or just a stooge of the New World Order is perpetuating the
situation that you fear.
The reason that so much of UAP research seems suspect, seems odd, is because
political and cultural forces have suppressed any meaningful UAP research since
approximately 1970. The reason for that is the tendency of people with an
opinion, one way or the other, to doubt not only the integrity or loyalty but
even the identity of anyone who ventures onto the topic. UFOlogy is a deeply
troubled field, and many of those troubles have emerged from within, but just
as many have been imposed by the outrageously over-the-top reactions that UFO
topics produce. This kind of thing is corrosive to any discourse whatsoever,
including the opinions you agree with.
I will now step down from my soapbox and return to my writing couch.
NASA administrator Daniel Evans, the assistant deputy associate administrator
(what a title!) responsible for the committee, provides a strong pull quote:
"NASA believes that the study of unidentified anomalous phenomena represents an
exciting step forwards in our quest to uncover the mysteries of the world
around us." While emphasizing the panel's purpose of "creating a roadmap" for
future research as well as NASA's intent to operate its research in the most
public and transparent way possible, he also explains an oddity of the
meeting's title.
UAP, as we have understood it, meant Unidentified Aerial Phenomena. The recent
NDAA changed the meaning of UAP, within numerous federal programs, to
Unidentified Anomalous Phenomena. The intent of this change seems to have been
to encompass phenomena observed on land or at (and even under) the sea, but in
the eyes of NASA, one member points out, it also becomes more inclusive of the
solar system and beyond. That said, the panel was formed before that change and
consists mostly (but not entirely) of aerospace experts, and so understandably
the panel's work focuses on aerial observations. Later in the meeting one panel
member points out that there are no known reports of undersea UAPs through
federal channels, although it is clear that some panel members are aware of the
traditional association between UFOs and water.
Our first featured speaker is David Spergel, chair of the committee. Spergel is
a highly respected researcher in astrophysics, and also, we learn, a strong
personality. He presents a series of points which will be echoed throughout
the meeting.
First, federal efforts to collect information on UAPs are scattered,
uncoordinated, and until recently often nonexistent. It is believed that many
UAP events are unreported. For example, there are indications that a strong
stigma remains which prevents commercial pilots from reporting UAP incidents through
formal channels. This results in an overall dearth of data.
Second, of the UAP data that does exist, the majority takes the form of
eyewitness reports. While eyewitness reports do have some value as broad trends
can be gleaned from them, they lack enough data (especially quantitative data)
to be useful for deeper analysis. Some reports do come with more solid data
such as photos and videos, but these are almost always collected with consumer
or military equipment that has not been well-studied for scientific use. As a
result, the data is uncalibrated---that is, the impacts of the sensor system on
the data are unknown. This makes it difficult to use these photos and videos
for any type of scientific analysis. This point is particularly important since
it is well known that many photos and videos of UAP are the result of defects
or edge-case behaviors of cameras. Without good information on the design and
performance of the sensor, it's hard to know if a photo reflects a UAP at all.
Finally, Spergel emphasizes the value of the topic. "Anomalies are so often the
engine of discovery," one of the other panel members says, to which Spergel
adds that "if it's something that's anomalous, that makes it interesting and
worthy of study." This might be somewhat familiar to you, if you have read my
oldest UFO writings, as it echoes a fundamental part of the "psychosocial
theory" of UFOs: whether UFOs are "real" or not, the fact that they are a
widely reported phenomenon makes them interesting. Even if nothing unusual has
ever graced the skies of this earth, the fact that people keep saying they saw
UFOs makes them real, in a way. That's what it means to be a phenomenon, and
much of science has involved studying phenomena in this sense.
Besides: while there's not a lot of evidence, there is a growing body of
modern evidence suggesting that there is something to some UAP sightings,
even if it's most likely to be of terrestrial origin. This is still
interesting! Even if you find the theory that UAPs represent extraterrestrial
presence to be utterly beyond reason (a feeling that I largely share), there is
good reason to believe that some people have seen something. One ought to
be reminded of sprites, an atmospheric phenomenon so rarely observed that its
existence was subject to a great deal of doubt until the first example was
photographed in 1989. What other rare atmospheric phenomena remain to be
characterized?
The next speaker is Sean Kirkpatrick, director of the AARO, the Pentagon's
All-domain Anomaly Resolution Office. He presents to the
committee the same slides that he recently presented to a congressional panel,
so while they are not new, the way he addresses them to this committee is
interesting. He explains that a current focus of the AARO is the use of
physical testing and modeling to determine what types of objects or phenomena
could produce the types of sightings AARO has received.
The AARO has received some reports that it struggles to explain, and has summed
up these reports to provide a very broad profile of a "typical" UAP: 1-4 meters
in size, moving between Mach 0 and 2, emitting short and medium-wave infrared,
intermittent X-band radar returns, and emitting RF radiation in the 1-3 GHz and
8-12 GHz ranges (the upper one is the X-band, very typical of compact radar
systems). He emphasizes that this data is based on a very limited set of
reports and is vague and preliminary. Of reported UAPs, the largest portion
(nearly half) are spherical. Reports come primarily from altitudes of 15-25k
feet and the coasts of the US, Europe, and East Asia, although he emphasizes
that these location patterns are almost certainly a result of observation bias.
They correlate with common altitudes for military aircraft and regions with
significant US military operations.
To make a point about the limitations of the available data, he shows a video.
There's a decent chance you've seen it: the recently released video of an
orb, taken by the weapons targeting infrared camera on a military aircraft. It
remains unexplained by the AARO and is considered one of the more anomalous
cases, he says, but the video---just a few seconds long---is all there is. We
can squint at the video, we can play it on repeat at 1/4 speed, but it is the
sum total of the evidence. Determining whether the object is a visitor from
the planet Mars or a stray balloon that has caught the sunlight just right will
require more data from better sensors.
The AARO has so far had a heavy emphasis on understanding the types of sensor
systems that collected the best-known UAP sightings. Military sensor systems,
Kirkpatrick explains, are very distinct from intelligence or scientific sensor
systems. They are designed exclusively for acquiring and tracking targets for
weapons, and so the data they produce is of poor resolution and limited
precision compared to the sensors used in the scientific and intelligence
communities. Moreover, they are wholly uncalibrated: for the most part, their
actual sensitivity, actual resolution, actual precision remains unstudied. Even
the target-tracking and stabilization behavior of gimbal-mounted cameras is not
always well understood by military intelligence. The AARO is in the process of
characterizing some of these sensor systems so that more quantitative analysis
can be performed on any future UAP recordings.
Kirkpatrick says, as will many members of the committee later, that it is
unlikely that anyone will produce conclusive answers about UAP without data
collected by scientific instruments. The rarity of UAPs and limited emissions
mean that this will likely require "large scale, ground-based scientific
instruments" that collect over extended periods of time. Speaking directly to
the committee, he hopes that NASA will serve a role. The intelligence community
is limited, by law, in what data they can collect over US soil. They cannot use
intelligence remote sensing assets to perform long-term, wide-area observations
of the California coast. For IC sensors to produce any data on UAP, they will
probably need real-time tipping and cueing[1] from civilian systems.
Additionally, it is important to collect baseline data. Many UAP incidents
involve radar or optical observation of something that seems unusual, but there
isn't really long-term baseline data to say how unusual it actually is. Some
UAPs may actually be very routine events that are just rarely noticed, as has
happened historically with atmospheric phenomena. He suggests, for example,
that a ground-based sensor system observing the sky might operate 24x7 for
three months at a time in order to establish which events are normal, and
which are anomalous.
There is movement in the intelligence community: they receive 50 to 100 new
reports a month, he says, and have begun collaboration with the Five Eyes
community. The stigma in the military seems reduced, he says, but notes
that unfortunately AARO staff have also been subject to harassment and
threats online.
By way of closing remarks, he says that "NASA should lead the scientific
discourse." The intelligence community is not scientific in its goal, and
cannot fill a scientific function well because of the requirements of
operational secrecy. While AARO intends to collaborate with scientific
investigation, for there to be any truly scientific investigation at all
it must occur in a civilian organization.
The next speaker, Mike Freie, comes from the FAA to tell us a bit about what is
normal in the skies: aircraft. There are about 45,000 flights each day around
the world, he says, and at peak hours there are 5,400 aircraft in the sky at
the same time. He shows a map of the coverage of the FAA's radar network: for
primary (he uses the term non-cooperative) radar systems, coverage of the
United States at 10,000 feet AGL is nearly complete. At 1,000 feet AGL, the map
resembles T-Mobile coverage before they were popular. While ADS-B coverage of
the country is almost complete as low as 1,500 feet AGL, there are substantial
areas in which an object without a transponder can fly undetected as high as
5,000 feet AGL. These coverage maps are based on a 1-meter spherical target
[2], he notes,
and while most aircraft are bigger than this most sUAS are far smaller.
Answering questions from the committee, he explains that the FAA does have a
standard mechanism to collect UAP reports from air traffic controllers and
receives 4-5 each month. While the FAA does operate a large radar network, he
explains that only data displayed to controllers is archived, and that
controllers have typically configured their radar displays to hide small
objects and objects moving at low speeds. In short, the radar network is
built and operated for tracking aircraft, not for detecting UAPs. If it is to
be used for UAP detection it will need modifications, and the FAA doesn't have
the money to pay for them.
Rounding out the meeting, we begin to hear from some of the panel members who
want to address specific topics. Nadia Drake, a journalist, offers a line that
is eminently quotable: "It is not our job to define nature, but to study it in
ways that lets nature reveal itself to us." She is explaining that "UAP" has
not been precisely defined, and probably can't be. Still, many members clearly
bristle at the change from "Aerial" to "Anomalous." The new meaning of UAP is
so broad that it is difficult to define the scope of UAP research, and that was
already a difficult problem when it was only concerned with aerial effects.
Federica Bianco, of the University of Delaware among other institutions, speaks
briefly on the role of data science in UAP research. The problem, she repeats,
is the lack of data and the lack of structure in the data that is available.
Understanding UAPs will require data collected by well-understood sensors under
documented conditions, and lots of it. That data needs to be well-organized and
easily retrievable. Eyewitness reports, she notes, are useful but cannot
provide the kind of numerical observations required for large-scale analysis.
What UAP research needs is persistent, multi-sensor systems.
She does have good news: some newer astronomical observatories, designed for
researching near-earth objects, are capable of detecting and observing moving
targets. There is also some potential in crowdsourcing, if technology can be
used to enable people to take observations with consistent collection of
metadata. I imagine a sort of TikTok for UFOs, that captures not only a short
video but characteristics of the phone camera and device sensor data.
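To make the idea concrete, here is a minimal sketch of the kind of structured observation record such an app might capture alongside each video. All of the field names here are my own invention for illustration, not part of any real or proposed system.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical record for a crowdsourced UAP observation: the video itself
# plus the metadata needed to later characterize the sensor and geometry.
@dataclass
class ObservationRecord:
    timestamp_utc: str          # ISO 8601 capture time
    latitude: float             # from device GPS
    longitude: float
    altitude_m: float
    compass_heading_deg: float  # direction the camera was pointed
    pitch_deg: float            # camera elevation angle, from the device IMU
    camera_model: str           # needed to characterize the sensor later
    focal_length_mm: float
    exposure_time_s: float
    iso: int
    video_uri: str

# Example record with made-up values
record = ObservationRecord(
    timestamp_utc="2023-05-31T18:40:00Z",
    latitude=35.08, longitude=-106.65, altitude_m=1620.0,
    compass_heading_deg=254.0, pitch_deg=31.5,
    camera_model="ExamplePhone 12", focal_length_mm=5.6,
    exposure_time_s=1 / 120, iso=100,
    video_uri="s3://example-bucket/obs/0001.mp4",
)
print(json.dumps(asdict(record), indent=2))
```

The point of the sketch is Bianco's: it's this kind of consistent, machine-readable metadata, not the video alone, that would make crowdsourced reports usable for large-scale analysis.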
Later speakers take more of a whirlwind pace as the meeting starts to fall
behind schedule. David Grinspoon of the Planetary Science Institute speaks briefly of
exobiology, biosignatures, and technosignatures. Exobiology has suggested
various observables to indicate the presence of life, he explains. Likely of
more interest to UAP research, though, is the field of technosignatures:
remotely observable signatures that suggest the presence of technology. The
solar system has never really been searched for technosignatures, he explains,
largely because technosignatures have been marginalized with the rest of
UAP research. If researchers can develop possible technosignatures, it may be
possible to equip future NASA missions to detect them as a secondary function.
While unlikely to conclusively rule extraterrestrial origin of UAPs out, there
is a chance it might rule them in, and that seems worth pursuing.
Drawing an analogy to the FAA's primary and secondary radar systems, he
explains that "traditional SETI" research has focused only on finding
extraterrestrial intelligence that is trying to be found. They have been
listening for radio transmissions, but no meaningful SETI program has ever
observed for the mere presence of technology.
Karlin Toner of the FAA talks about reporting. Relatively few reports come in,
likely due to stigma, but there is also an issue of reporting paths not being
well-defined. She suggests that NASA study cultural and social barriers to
UAP reporting and develop ways to reduce them.
Joshua Semeter of Boston University talks a bit about photo and video evidence.
It comes almost exclusively from Navy aviators, he says, and specifically from
radar and infrared targeting sensors. He uses the "gofast" video as an example
to explain the strengths and limitations of this data. The "gofast" video, a
well-known example of a UAP caught on video, looks like a very rapidly moving
object. By using the numerical data superimposed on the image, though, it is
possible to calculate the approximate position of the object relative to the
aircraft and the ground. Doing so reveals that the object in the "gofast" video
is only moving about 40 mph---typical of the wind at altitude over the ocean.
It is most likely just something blowing in the wind, even though the parallax
motion against the ocean far below makes it appear to move with extreme speed.
The AARO's work to characterize these types of sensors should provide a much
better ability to perform this kind of analysis in the future.
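The parallax effect Semeter describes can be made concrete with a little trigonometry. The sketch below uses made-up but plausible numbers (not the actual "gofast" display values): a camera on a moving aircraft tracks a stationary object, and a viewer who assumes the object skims the ocean surface reads the background drift as tremendous speed.

```python
import math

# Illustrative parallax calculation. All numbers are invented for the sake
# of the example, not taken from the "gofast" video itself.
altitude_ft = 25_000       # aircraft altitude
depression_deg = 26.0      # camera look-down angle below horizontal
r_obj_ft = 22_000          # slant range to the tracked object
v_perp_ftps = 300.0        # aircraft speed component across the line of sight

# Range to the ocean surface along the same line of sight
r_bg_ft = altitude_ft / math.sin(math.radians(depression_deg))

# With the camera tracking the object, the background drifts through the
# frame at the difference of the two angular rates (v/r at each range).
omega = v_perp_ftps * (1 / r_obj_ft - 1 / r_bg_ft)  # rad/s

# A viewer who assumes the object is at the background range interprets
# that drift as object motion at that range:
v_apparent_mph = omega * r_bg_ft / 1.46667  # ft/s to mph
print(f"apparent speed if object assumed at the surface: {v_apparent_mph:.0f} mph")
```

Under these assumptions a completely stationary object appears to cross the ocean at over 300 mph, which is the heart of the "gofast" illusion.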
There's still a good hour to the meeting, with summarization of plans for the
final report and public questions, but all of the new material has now been
said. During questions, panel members once again emphasize NASA's intent to be
open and transparent in its work, the lack of data to analyze, and the need for
standardized, consistent, calibrated data collection.
There you have it: the current state of United States UAP research in a
four-hour formal public meeting, the kind of thing that only NASA can bring us. The
meeting today might be a small step, but it really is a step towards a new era
of UAP research. NASA has made no commitments, and can't without congressional
authorization, but multiple panel members called for a permanent program of UAP
research within NASA and for the funding of sensor systems tailor-made to
detect and characterize UAPs.
We will have to wait for the committee's final report to know their exact
recommendations, and then there is (as ever) the question of funding. Still,
it's clear that Congress has taken an interest, and we can make a strong guess
from this meeting that the recommendations will include long-term observing
infrastructure. I think it's quite possible that within the next few years we
will see the beginning of a "UAP observatory." How do you think I get a job
there? Please refer to the entire months of undergraduate research work in
astronomical instrumentation which you will find on my resume, and yes I retain
a surprising amount of knowledge of reading and writing both FITS and HDF5. No,
I will not work in the "lightweight Java scripting" environment Beanshell, ever
again. This is on the advice of my therapist.
[1] Tip and cue is a common term of art in intelligence remote sensing. It
refers to the use of real-time communications to coordinate multiple sensor
systems. For example, if a ground-based radar system detects a possible missile
launch, it can generate a "tip" that will "cue" a satellite-based optical
sensor to observe the launch area. This is a very powerful idea that allows
multiple remote sensing systems to far exceed the sum of their abilities.
[2] This approximation of a sphere of given diameter is a common way to discuss
radar cross section. While fighter jets are all a lot bigger than one meter,
many are, for radar purposes, equivalent to something quite a bit smaller than
a 1-meter sphere due to the use of "stealth" or low-radar-profile techniques.
The latest F-22 variants have a radar cross section as small as 1 cm^2,
highlighting what
is possible when an aircraft is designed to be difficult to detect. Such an
aircraft may not be detected at all by the FAA's radar network, even at high
altitudes and speeds.
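A rough sketch shows why cross section matters so much: in the basic radar range equation, received power falls off as the fourth power of range, so maximum detection range scales only with the fourth root of RCS. The numbers below are illustrative, not parameters of the FAA's actual radar network.

```python
import math

# Sketch of how radar cross section (RCS) affects detection range, under the
# basic radar range equation: max detection range scales as RCS ** (1/4).

def relative_detection_range(rcs_m2: float, reference_rcs_m2: float) -> float:
    """Detection range relative to a reference target, all else equal."""
    return (rcs_m2 / reference_rcs_m2) ** 0.25

# A 1-meter-diameter sphere in the optical scattering region has an RCS of
# roughly its geometric cross section, pi * r^2:
sphere_rcs = math.pi * 0.5 ** 2   # ~0.785 m^2

stealth_rcs = 1e-4                # 1 cm^2, the stealth figure cited above

ratio = relative_detection_range(stealth_rcs, sphere_rcs)
print(f"detectable at about {ratio:.0%} of the range of the 1 m sphere")
```

The fourth-root scaling is why a four-orders-of-magnitude reduction in RCS "only" cuts detection range to about a tenth, and also why even that tenth can be enough to slip under a coverage map drawn for 1-meter targets.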
Programming note: In an effort to introduce an exciting new social aspect to
Computers Are Bad (a functional necessity to appease early-stage investor
demands for "engagement"), I am launching a Matrix room for CAB readers.
You can join it! Do whatever you do to join rooms in your client with #computer.rip:waffle.tech
A few months ago I found out (via the rare sort of mailing list I actually stay
subscribed to) that the Center for Land Use Interpretation
was holding a Memorial Day open house at their Swansea location. I always have
a hard time describing CLUI, but most people that are interested in domestic
military, telecom, or cold war history are probably at least peripherally aware
of them. Their Land Use Database functions as a less
SEO-driven (and somehow both more and less pretentious) version of Atlas
Obscura, cataloging a huge number of unusual and historic sites.
I had a four-day weekend off work and, as I think I have said here before, I
will drive twelve hours and several deserts over to the Mojave with the least
excuse. The opportunity to see CLUI's Swansea facility and its subject, Owens
Lake, certainly qualified. And so we set off, westbound into the sunset on
I-40.
The drive west is always a bit shocking for the sharp change in tone on
crossing the border into Arizona. I'm not sure why this happens, perhaps a
difference in the economic history of the two states or an artifact of the very
different politics today. It's obvious, though: as soon as you cross the
border, dubious "Indian Crafts" shops become a major form of roadside gas
station. "PP by the TP," one such stop advertises. The cultural connection
between the area's Navajo, Hopi, and Pueblo people and the tipi is not
especially strong, but that's not important. What is important here, or at
least used to be, is the connection between the tipi and the "Indian" in the
mind of a tourist from New York.
The crass commercial exploitation of these roadside attractions is fairly
unusual in New Mexico today, but ubiquitous in Arizona. While some of these
stops are Navajo-owned, they are a striking reminder of the region's history of
settler colonialism. Fortunately, there is a more optimistic direction in
freeway cultural relations. East of Flagstaff at Twin Arrows (a historic Route
66 "trading post"), the relatively new Navajo Blue travel plaza represents a
new direction by its parent organization Navajo Gaming, a government-owned
enterprise of the Navajo Nation. It sells a number of Navajo-made products
including new ventures in food and beverage, and is decorated with recountings
of Navajo culture (and signage in the Navajo language) rather than the "vaguely
Indian" pastiche that is usually called "authenticity" in the region.
The cross-border difference may reflect the different histories of eastern
Arizona and western New Mexico. Blessed with the Grand Canyon, Arizona has
always been a hotspot of highway tourism. The Navajo of New Mexico, though,
were more cursed with uranium. The mining and processing of uranium represented
the dominant industry in many parts of the Navajo Nation during the Cold War.
It has left towns like Gallup and Grants with a legacy of remediation sites,
tailings containments, and cancer clinics, all funded by the Department of
Energy. Just off the freeway in this area, New Mexico's customary billboards
for personal injury lawyers are displaced by those for law firms specializing
in the Radiation Exposure Compensation Act.
This region saw the largest radiation accident in United States history,
surpassing Three Mile Island for the quantity of radiation released. In Church
Rock, near Gallup, the dam containing a uranium mine's tailings pond failed.
Over 1,000 tons of solid tailings and 90 million gallons of liquid slurry, all
acidic and radioactive, flowed into the Rio Puerco. This was 1979, shortly
after Three Mile Island, and no one cared. No one really cares today, and the
government's reticence to pay out claims still keeps billboard lawyers busy.
Church Rock, you must understand, is a sacrifice zone. As a matter of federal
policy, both the land and the people were consumed by the uranium mills until
the Cold War cooled a bit further down. After that, the federal government
walked away. It took long court battles and legislation to bring the government
back to the uranium mines of the Colorado plateau, now a disaster that can only
be mitigated.
This is a tangent, yes, but it is critically important that both advocates and
critics of nuclear energy remember the plight of the Navajo Nation. While
controversy over nuclear energy continues to revolve around the safe storage of
nuclear waste, actually a problem of rather small scale, even the harshest
opponents of nuclear energy and nuclear weapons seldom remember the destruction
that uranium extraction wreaked on both a region and a people. The Department
of Energy, through several of its national laboratories, has conducted
exhaustive radiation surveys of the Navajo Nation both from aircraft and on the
ground. Each time the surveys become more sensitive, the findings become more
alarming. Large numbers of older structures in the Navajo Nation, many of them
homes, were built with materials contaminated by uranium tailings. Tailings
piles across the Four Corners region (largely coterminous with the Navajo
Nation) will require permanent monitoring and maintenance to protect the
engineered barriers, installed at great expense, that prevent wind and water
spreading uranium and its daughter products further. Each one of these tailings
piles easily exceeds the volume of fuel waste produced in the entire history of
civilian nuclear energy.
Yes, the tailings are far less active and so easier to manage, but the scale of
this easier problem is so much larger. Within sight of the gates of Arches
National Park, one of the most important recreational areas in the region, the
Moab Uranium Tailings Remedial Action Project (for some reason abbreviated
UMTRA) has its own rail freight facility to ship away the more active waste.
Primary cleanup activities started in 2001 and will continue until at least
2029. This site is not exceptional; there are many others like it, and yet it's
the federally-funded medical clinics in each former mine town that most vividly
tell this forgotten human story of the atomic age.
This has gotten a bit dark, hasn't it? But telephones, telephones are a lighter
topic. Well into California now, we stopped at Ludlow, right on I-40 a bit west
of the Mojave National Preserve. There's a fairly intact Whiting Brothers
service station here, an artifact of historic Route 66 that I always enjoy
seeing. What I am really hunting for, though, are indications of historic
open-wire telephone routes. Immediately east of the old Whiting Brothers, a
cluster of old "H-frame" telephone poles on both sides of the street strongly
suggest a convergence of open-wire toll leads, if not a station of some kind.
Open-wire telephone leads mostly ran on multi-arm telephone poles. You've seen
these before, for electrical distribution if nothing else: just an ordinary
utility pole with as many as five crossarms on it. At points where toll leads
ended, though, or at certain points when crossing rivers or where especially
long spans are needed for another reason, you'll see an H-frame. This is a set
of two poles, with the crossarms spanning between them, connected at both ends.
They're sturdier and provide more resistance to movement in the lateral
direction especially. Repeater stations usually have one or more H-frames
around them that serve as sturdy anchor points for the leads heading off in
each direction.
An abandoned building, in about the right position, has the vague shape of an
open-wire repeater station but was at least more recently a house. It is
possible that it was a repeater facility converted to a house by a later owner
(this isn't very unusual), but I doubt it in this case. The house seems to be
wood-framed. The analog electronics in open-wire repeater stations were
sensitive to temperature, and so AT&T employed thick brick or concrete block
walls as a way to minimize the diurnal temperature variation inside. It's
possible that there wasn't a repeater station there at all; one H-frame has a
perpendicular arm that makes me believe it could have just been a point where
one of the toll lead pairs was broken off to serve a local exchange or, most
likely, a toll station at the nearby railroad depot.
We spent the night on the edge of the Black Mountain Wilderness Area near
Barstow, where a mineshaft plunged dramatically downwards with only a loose
barbed-wire cattle fence around it. Almost everywhere in southern California
was an active mining district at some point. There was gold and silver in many
cases. Fortunately, there wasn't much in the way of uranium. The next morning,
we finished the drive into the Owens Valley.
As with much of the southern California desert, the Owens Valley is both
untamed by humans and yet forever changed by human development. Owens Lake, at
the center, was a large lake at the bottom of the valley fed by the Owens River
and various streams. It was until 1913, when the City of Los Angeles took it.
One of the great controversies of the West is water. It is difficult to
succinctly convey exactly how insane the history of water in the West is, and I
will have to refer you to the classic book "Cadillac Desert" by Marc Reisner.
Water was the subject of hostilities that ranged from intense political
intrigue to outright armed combat, and the City of Los Angeles was one of the
principal combatants. In a series of events now known as the California water
wars, the Los Angeles Department of Water and Power employed persuasion,
bribery, deception, and in some cases military force to claim every drop of
water in the Owens Valley.
There were farmers, ranchers, and industry in the Owens Valley that relied on
Owens River and Owens Lake, but they had no influence on the City of Los
Angeles. The LADWP built a pumping plant that diverted the entire flow of the
Owens River into an aqueduct, around the Sierras and into the city. The impact of
LADWP's diversion on Owens Lake was a dramatic one: Owens Lake ceased to exist,
and with it most of the industry of the Owens Valley. With all the water of the
Owens River diverted, there was no water left to fill the lake, and it slowly
evaporated away. The ranches and farms of Owens Lake evaporated too, leaving
only mineral extraction operations that didn't survive much longer.
The Natural Soda Products Company had their plant destroyed by impacts of
LADWP's engineering works. They sued, and the LADWP paid the cost of
rebuilding. Twenty-five years later, the LADWP destroyed the plant again. Once again, it
was forced to pay to rebuild. Natural Soda Products went out of business not
long after. Quickly changing water levels and compositions in the lake, a
result of LADWP's scattershot management of the diversion, made it unprofitable
to continue on the site. In 1941, LADWP was forced by court order to build a
flood control dam higher up on the Owens River to prevent Owens Lake from
unexpectedly flooding whenever there was a wet year. This provided enough
assurance for the Pittsburgh Plate Glass Company to stand up their own soda
plant. In the 1960s, it too closed, another victim of LADWP. Today the Natural
Soda Products Company has been built over by an LADWP warehouse and equipment
yard. The Pittsburgh Plate Glass plant remains abandoned on the west shore,
behind a beautiful midcentury office building that can't have been built that
long before the end.
Drying out a lake is, in general, a bad idea. As Owens Lake evaporated, its
mineral content settled on the former lakebed. The Owens Valley is rich in salt and
several minerals that include heavy metals. These constituents form a dry crust
across the lakebed, and whenever the wind kicks up, the top layer begins to
blow away. Salt, heavy metals, and small particles combine with archaea,
single-celled organisms that thrive in the brine, to create an environmental
disaster of tremendous scale. Everywhere downwind from Owens Lake has
accumulated a fine layer of dust, carcinogenic to humans and animals because of
the heavy metal content. The massive particulate loading of the air becomes
downright dangerous, causing widespread respiratory illness in the valley's few
remaining residents.
California contains another notable dry lakebed I have written about before,
the Salton Sea. The situation is somewhat different there as the Salton Sea
started out dry and was accidentally wetted. Owens Lake started out wet and was
rather intentionally dried, but the adverse consequences of that drying were
mostly ignored since the City of Los Angeles was making the decisions and it
was on the other side of a mountain range. Private land owners in the Owens
Valley had already been decimated by the LADWP's actions anyway, and for most
of the 20th century the Owens Valley was abandoned as a sacrifice zone. We
might chide the LADWP for their lack of care for the residents and businesses
there, but that seems too generous. For a long time, some would say still
today, the LADWP has been actively hostile to the valley's residents. They are,
after all, the insurgent force left over after the LADWP won the water wars.
Even so, the valley had its advocates. A few residents remained around the
lake, the Alabama Hills sustained a tourist operation at the nearby town of
Lone Pine, and by the '90s environmental organizations and government agencies
outside the control of Los Angeles gained power.
Owens Lake remains property of the LADWP to this day, but the times have
changed. By the 1930s courts were already starting to side with land owners in
the Owens Valley area rather than allowing Los Angeles carte blanche to do as
it pleased. In practice, this has less to do with courts changing opinions than
it does with the City of Los Angeles losing the above-the-law status it had
held in the early 20th century through sheer power of will (and the general
difficulty of administering the law in frontier California). Now, in the 21st
century, Owens Lake has decidedly become property of the LADWP in the kind of
way the LADWP had long tried to avoid: it is now Los Angeles's problem. And a
problem it is.
The tide turned against the LADWP as the new millennium arrived. The Great
Basin Unified Air Pollution Control District, a joint powers agreement between
the region's three counties to administer air quality regulations, employed
politics and lawsuits to force the LADWP into an agreement to address the dust.
A 1999 memorandum of agreement between the LADWP and the Pollution Control
District started a monumental industrial project that continues today, and will
continue indefinitely into the future. LADWP has to stop the dust.
This partnership has not been an especially happy one. The LADWP has decidedly
dragged its feet, seeking to extend timelines and reduce the area over which they
are required to implement dust control measures. The Pollution Control District
has sued the LADWP and won several times since, leading to a series of
settlements and injunctions that define the LADWP's obligations today.
The requirement to tame the dust has evolved into a complex set of numerical
performance standards, and a regime of air sampling stations and modeling to
evaluate compliance.
There are multiple approaches to reducing the dust emitted by the dry lakebed.
Collectively, the methods accepted under agreements with the Pollution Control
District are called Best Available Control Measures, or BACM. They include
shallow flooding, managed vegetation, and gravel. There
are also three alternative forms of shallow flooding, referred to as tillage,
brine, and dynamic water management, which basically entail shallow flooding
with optimizations to reduce water consumption. We'll discuss each of these in
more detail, but this is the first important thing to know about Owens Lake.
There are multiple dust control methods, and each comes with advantages and
drawbacks. A key part of the dust control project is the selection of different
control measures for different parts of the lake.
Shallow flooding is the most widely used, and as of 2019 about 36 square miles
are managed by shallow flooding. The concept is simple: areas of lake bed are
flooded with just enough water to keep mineral deposits wet. Much of the
central, low-elevation part of the lake is controlled by shallow flooding,
which surprisingly gives it a somewhat normal appearance from a distance. The
most obvious part of the lake is a large water surface. Only on closer
inspection do you realize that, first, this area is far smaller than the total
size of the lakebed, and second, it is extremely shallow. It's more of a
reflecting pool than a lake.
Shallow flooding is almost completely effective in preventing windblown dust,
but it consumes water... 2-3 feet per year for conventional shallow flooding,
although some of the alternate methods like tillage reduce this. The whole
problem was created by LA's need for water, and the LADWP considers water use
for dust control undesirable since it reduces the portion of Owens River water
that can be sent on to the city. The water also has to be pumped and other
control works have to be built and maintained. Conventional shallow flooding
costs around $30 million per square mile to install and a third of a million
per square mile annually to maintain.
Another option is managed vegetation, used for 5.4 square miles in 2019. It's
pretty much the gardening option. In areas of the lake with good soil,
irrigation is installed to support plant cover. Apparently in the interest of
keeping costs low, irrigation is kept to the absolute minimum, and so round
green areas form around each water jet. Managed vegetation is similar to
shallow flooding in terms of cost and can use just as much water, but it
adds valuable animal habitat to the Owens valley.
Finally, gravel is the most expensive option, but also completely effective and
fairly easy to maintain. Another 5.4 square miles of dry lakebed are simply
covered in a layer of gravel, preventing fast-moving air from passing directly
over the mineral deposits. Gravel comes out to $37 million per square mile to install,
but it's effective immediately unlike managed vegetation that takes some time
to grow in.
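Those figures invite a crude long-horizon comparison. Here is a minimal sketch in Python; note that the gravel maintenance cost is not given in the text, so the value below is purely an assumed placeholder, and managed vegetation is treated as matching shallow flooding per the "similar in terms of cost" description.

```python
# Rough 20-year cost comparison per square mile, using the install and
# upkeep figures quoted above. Gravel's maintenance figure is NOT given
# in the text; the value below is an assumed placeholder.
YEARS = 20

methods = {
    # name: (install cost $M/sq mi, annual maintenance $M/sq mi)
    "shallow flooding":   (30.0, 0.33),
    "managed vegetation": (30.0, 0.33),  # "similar ... in terms of cost"
    "gravel":             (37.0, 0.05),  # maintenance cost assumed

}

def lifetime_cost(install: float, annual: float, years: int = YEARS) -> float:
    """Total cost over the horizon, ignoring discounting and water use."""
    return install + annual * years

for name, (install, annual) in methods.items():
    print(f"{name}: ${lifetime_cost(install, annual):.1f}M per square mile")
```

With these partly assumed numbers, gravel's higher installation cost narrows against shallow flooding's recurring upkeep over a couple of decades, one more reason there is no single right answer.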
A number of other methods are being investigated. For example, precision
surface wetting relies on the same basic concept as shallow flooding (keeping
the lake bed wet) but uses sprinkler heads to distribute the water instead of
flooding. This can be more efficient, but also requires more complex
infrastructure.
Perhaps the most interesting experimental techniques fall into the broad
category of "artificial roughness." Part of the dust problem is the very flat nature of the
lakebed, which allows for very fast wind at low altitudes. By making the
terrain of the lakebed more complex, the wind is slowed and turbulence is
introduced that makes it harder for dust particles to travel outside of the
lake area. The question is how to practically introduce such roughness. The
largest experiment so far has made use of hay bales scattered in a somewhat
regular grid. The hay bales demonstrated as much as 92% efficacy in reducing
dust, which is less than that of the three primary techniques but still quite
high, especially considering the low water consumption of artificial surface
roughness.
Smaller scale roughness experiments have used weighted tubs and frames covered
in snow fencing to slow and disrupt wind. The efficacy of these methods is not
well established, since a fairly large area is required to see the full
benefits of surface roughness.
The scale of these efforts is hard to comprehend. After learning about the
effort at CLUI's small installation, we headed out on one of the many service
roads into the lake proper. One wouldn't usually drive into a lake, but Owens
Lake today feels more like a quarry or salt operation. In order to separate
areas for different dust control measures and to make shallow flooding more
manageable, much of the lake near shore is divided into rectangular areas by
high berms topped by gravel roads.
Access to the lake from the west side involves first driving past an
interpretive kiosk, apparently installed in an effort to explain the strange
landscape that is locally being called a lake. Driving from the historic shore
into the lakebed proper, you are struck first by how incredibly flat it is, and
second by how the entire surface ahead of you is punctuated not just by the
separating berms but also by electrical enclosures and valve boxes. We left our
car by a small pumping plant with a maze of insulated pipes, and walked past
one of the many tank filling stations found around the lake to support the
water trucks used for surface wetting during construction.
Within sight were several different control measures. To one side, two large
rectangles were being flooded by standpipes. Near a corner of the rectangular
cell, at an intersection of the access roads, a plastic pipe of perhaps 6"
diameter sticks a short distance above the shallow water. Each of
these standpipes emits many gallons per minute from one of the pumping plants,
and the water coming out of them has a somewhat unsettling pink tinge. This is
apparently a result of both the salt content and the archaea that feed on it.
Depending on the salinity and soil conditions of the individual area, some
flooding is done with salt brine recirculated from other parts of the lakebed.
Other areas must be flooded with the LADWP's precious freshwater to avoid
worsening the already severe salinity problems on the lake.
We stopped for a moment next to one of these standpipes, taking a few photos,
and were amused at its apparent shyness: shortly after we approached, the flow
stopped. The complexity of shallow flooding is not just in the piping, but in
the control. Electrical cabinets all over the lake have log periodic antennas
pointed back towards the LADWP operations building, typical of radio SCADA
communications. Flooded areas receive water based on several factors, and water
is pumped to different areas throughout the day. Sensor input, weather
conditions, and a schedule all factor into control logic that starts and stops
water delivery to different cells of the lake.
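As a rough illustration of how such control logic combines a schedule with sensor and forecast inputs, here is a minimal Python sketch. Every name, threshold, and schedule below is invented for the example; none of it reflects LADWP's actual control system.

```python
from dataclasses import dataclass

@dataclass
class CellState:
    wetness_pct: float       # surface coverage reported by sensors, 0-100
    wind_forecast_mph: float # forecast peak wind speed
    hour: int                # hour of day, 0-23

# Invented setpoints for illustration only.
TARGET_COVERAGE = 75.0   # aim for a mid-band coverage target
HIGH_WIND = 25.0         # pre-wet the bed ahead of forecast wind events

def pump_should_run(cell: CellState, schedule=(6, 18)) -> bool:
    """Decide whether to deliver water to one flooding cell."""
    in_window = schedule[0] <= cell.hour < schedule[1]
    below_target = cell.wetness_pct < TARGET_COVERAGE
    wind_event = cell.wind_forecast_mph >= HIGH_WIND
    # Run during the scheduled window when coverage is low, or at any
    # hour when high wind is forecast and the bed is drying out.
    return (in_window and below_target) or (wind_event and below_target)
```

The real system cycles water among many cells throughout the day, so a decision like this would feed a scheduler rather than drive a pump directly.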
On the other side of the berm, there is another area of shallow flooding
demonstrating a different approach to delivery. Smaller plastic pipes stick up
in a grid across the cell, each ending in a T-fitting with a small stream of
water pouring out of each side. This type of water application can be more
efficient since less water depth is required to achieve full coverage. It also
involves a lot more piping, and so higher installation cost.
Further away, past one of the flooded areas, we can see some managed
vegetation. Clumped grasses and small plants surround each of the irrigation
heads. It's far from natural, but it's also perhaps the most lush greenery we
have seen in the valley. This vegetation forms a key part of Owens lake's bird
habitat, along with areas of shallow flooding planted to function as wetlands.
The entire time we spend walking around, the quiet of the lakebed is
periodically interrupted by an LADWP dump truck making trips back and forth on
a nearby service road. Even on a weekend, construction is ongoing. Rocks are
being moved to the north end of the lake for whatever reason, probably to build
up rock-covered berms in a small-scale dust control measure called cobbling.
Workers in pickup trucks are seen elsewhere around the lake, and when the wind
blows the right direction there's a faint sound of heavy equipment, somewhere
out there.
Standing somewhere in what is technically a lake and taking in the view, it is
hard not to think of some of the land art I have visited. The smell of salt and
overall atmosphere of an evaporating (or here, evaporated) lake make an obvious
connection to the Great Salt Lake, which increasingly looks to be headed for a
similar fate. The Spiral Jetty, a 1970 work of land art found on the Salt
Lake's shores, would fit right in at Owens Lake.
This similarity between dust control and land art is a developing part of the
Owens lake management strategy. During my visit I spoke with Alex Robinson, a
professor of landscape architecture and principal of the Landscape Morphology
Lab. Robinson's work includes the "Rapid Landscape
Prototyping Machine," a sort of 3D printer that uses a robotic arm to perform
small-scale earth-moving on a tray of sand. The resulting model landscapes fit
into his machine "Greetings from Owens Lake," which allows visitors to explore
the artificial landscape as it would appear under the real conditions of Owens
Lake---including under different dust control measures.
In an essay on Owens
Lake, Robinson writes that
"the lesson of Owens Lake is that, increasingly, there is no such thing as an
environmental fix. There is only reinvention." This is what brought me here.
Environmental remediation, particularly in the public eye, focuses mostly on
"fixing," putting things back the way they were. A fix, though, may not be
feasible, or even possible. I often think of my visit to the Asarco copper mine
south of Tucson where our remarkable tour guide proclaimed that, after mining
was complete, they would "put sheep in it." Another member of the tour asked
about the future form of the enormous hole. Would they fill it in?
No, our tour guide explained. That would be incredibly costly, create its own
environmental problems, and besides, the stepped sides look a bit like Table
Mountain (she was referring, I think, to the one in California) and people seem
to like that. "The sheep, they love it," she told us, although I have replayed
this in my head so many times that I can't honestly remember if she
actually delivered this line with the cadence of Donald Trump.
Environmental changes made on the scale of an entire valley lake or a huge
copper mine cannot simply be put back the way they were. Explaining the concept
behind "Greetings from Owens Lake," which not only renders the user's future
lakebed landscape but prints the predicted vista onto a postcard, Robinson
emphasized that "we have to choose." There is no "natural" in the Owens Valley
any more, not since 1913. The dust control project is less restoration than it
is remaking. Robinson writes:
Whether the setting is California's rapidly shrinking Salton Sea or the
wave-lapped shoreline of the Eastern Seaboard, global warming and the needs of
civilization dictate that there is no going back, only futures we might choose
to design. To reinvent landscapes to rival the ones we have lost will require
broader, more synthetic and imaginative forms of authorship than
problem-solving paradigms can provide.
In this view, perhaps the best possible future for Owens Lake is as a
monumental work of land art, on the scale (if hopefully not the timeline) of
Michael Heizer's "City." This approach has received some official endorsement.
In the mid-2010s, a large section of the east side of the lake was reworked by
landscape architecture firm Nuvis. The goal was to create an outdoor
recreational area and wildlife habitat, not at all unlike the nation's many
wildlife refuges except that the land itself had to be designed anew. The
topography is treated as sculpture, with a series of wedge-shaped berms built up
to provide aesthetic interest---along with surface roughness, required to meet
the land art area's strict dust control requirements.
As we drove further north through Owens lake and past the land art area, we
neared the north edge where the Owens River is diverted towards LA. The
snowpack was high this year and so there is a much appreciated excess of water;
the portion of Owens river beyond that needed for LA's water supply is allowed
to flow over a low flood wall into the lakebed, supplementing the shallow
flooding operations. The main road around the lake simply fords the river over
this flood wall; in a typical year there is hardly any water flowing there at
all. The rarity of a steady flow is underscored by the ford's depth gauge: a
standard measuring tape, pulled out a couple of feet and taped to a bollard by
some worker.
Continuing around the north side of the lake towards its west edge, we are
joined on the access road by a half dozen side-by-sides. One flies a Trump
flag, another a "Let's Go Brandon" flag. We had seen them earlier in the day as
well, traveling in a pack up and down the access roads on the east side. It's
hard for me to interpret Owens lake as an OHV attraction, given its near
perfect flatness and the fact that nearly every part of it that isn't flooded
has been graveled to resemble some of the region's better roads. As we reach
the west edge, though, they all turn into the lake's sole remaining RV park.
Surrounded by (presumably irrigated) trees, it looks far more inviting than the
lake around it. The RZR seems to be the golf cart of our day, and so I suppose
it's a fitting mode of transportation in this desolate imitation of a lakeside
resort.
At CLUI's installation on the east shore, besides "Greetings from Owens Lake"
and a clever interpretive activity in which the visitor attempts to understand
the various legal jurisdictions by assembling them as a puzzle, we took in
Robinson's interpretive art installation "The Fountains of Owens
Lake." The
darkened room presents the standpipes and sprinklers that wet the flooded
sections as art objects, neatly framed in videos. With perhaps eight of these
videos playing around you, the sound from the speakers above each screen
combines into something that sounds like an actual river flowing into a lake.
It's all constructed, though, and spread so far apart in the lakebed as to be
more of an ambient phenomenon than the confluence of two bodies of water. One
of the videos loops back to its beginning, when the pump turned on:
a plastic pipe jutting out of a salt-covered embankment gurgles and then lets
out a spurt of water, frightening off the birds that had been stalking insects
around it. As it settles into a steady flow, the birds seem to settle back as
well.
Here is an industrial project that, for reasons mostly of history, we call a
lake. Somewhere on its shores, a Modbus command goes into an IP packet into an
Ethernet frame into a 900MHz FHSS data radio. It passes by a cabinet on the
side of a gravel road on one of the berms that make up the lake's organized
structure. Decoded, it reaches a PLC, which pulls in a contactor, or these
days perhaps sends a few bytes to a VFD. An electric motor starts, and water
flows into the lake once again... at least until 70-85% coverage of the dry bed
is achieved and the motor shuts off again. This is an ecosystem, or at least
part of one; it is the water cycle of the Owens valley. It is not a temporary
solution, not a remediation project, but the permanent engineered environment.
The birds have had to get used to it.
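That Modbus command really is just a dozen bytes on the wire. As a sketch, here is how a Modbus/TCP "write single coil" request, the kind of message that might pull in a pump contactor, is assembled; the unit ID and coil address are invented for the example.

```python
import struct

def write_single_coil(transaction_id: int, unit_id: int,
                      coil_addr: int, on: bool) -> bytes:
    """Build a Modbus/TCP 'Write Single Coil' (function 0x05) request.

    The MBAP header carries a transaction ID, a protocol ID (always 0),
    the count of bytes that follow the length field, and a unit ID; the
    PDU is the function code, the coil address, and 0xFF00 for on or
    0x0000 for off.
    """
    pdu = struct.pack(">BHH", 0x05, coil_addr, 0xFF00 if on else 0x0000)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# A hypothetical "start pump" command: unit 1, coil 10.
frame = write_single_coil(transaction_id=1, unit_id=1, coil_addr=10, on=True)
```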
"We have to choose," a remark that Robinson made several times. This captures,
I think, the most frightening part of today's major environmental projects. We
have destroyed things so completely that we cannot "fix" them, we cannot
"restore." Instead, we have to choose what we want them to be.
"Greetings from Owens Lake" has, as just part of its logic, a bit of an
optimization game. As the user changes the dust control method applied to the
synthetic landscape, the display shows a row of bar graphs indicating how well
the landscape performs against various traditional purposes of a lake.
Aesthetics are hard to state objectively, but the quality of the lake as bird
habitat can be quantified by wildlife biology. One can push the "Breeding shorebird" bar
nearly to the top, but then the huge water consumption and cost required to
maintain so much habitat looms on the other side of the screen. There are
numerous options for dust control because there is no one correct choice.
Depending on the specific conditions of each part of the lake, some methods are
more viable than others. Depending on the type of bird and season, some methods
present better habitat than others. Some look better than others, and provide
better recreational opportunities. Others are much cheaper.
We like to let nature solve these problems. But in the Owens valley, we tore
nature out about a hundred years ago. Now we have to choose.
We only had a long weekend for this trip, and so we turned around for
Albuquerque the next day. On the way back we visited the Salton Sea State
Recreational Area, a formerly popular state park now consisting mostly of dusty
campsites on the dustier shore. The size of the visitor center's parking lot is
incomprehensible given the number of actual visitors, even on this holiday
weekend. The boat ramp has long been out of the water. That it exists at all
now seems like a bit of a joke.
We pass through Trona, a town dominated both physically and culturally by the
plant of Searles Valley Minerals. We visit the Trona Pinnacles, an
unusual set of jutting rocks that give some alien landscape appeal to the dry
lake bed of the Searles valley---this one natural. Passing through Joshua Tree
National Park, I chuckle at a sign warning tourists that cacti are pointy and
take a group photo for some visitors from India.
We stop by the old GWEN station near Essex. It was in use for Coast Guard DGPS
into the 2000s, but has since been demolished thoroughly. There are some wires
sticking out of the ground that I think might be remaining evidence of the
ground screen. As much as 100' of coaxial cable is wrapped around a bush, and
I spend a while inspecting a perforated pipe bolted to a nicely machined piece
of metal. I can't even guess at its purpose. It's hard to say if either of
these are artifacts of the GWEN equipment or have blown over from the adjacent
junkyard. The GWEN station was built on the site of a CAA intermediary
airfield. The junkyard's building looks more like an abandoned gas station but
may have been an airport office. A structure right next to it was clearly the
generator shed from an airmail route beacon; you can still just make out the
orange paint on the roof. I didn't see any signs of the beacon's foundation, so
I suspect the generator shed had been moved for use as storage.
Remarkably close by, to the north, is another landing strip. This one, labeled
"Fenner air strip" on maps, seems to have been two parallel runways with six
pads at each end, separated from the runways by individual 500' taxiways. I'm
guessing Fenner airstrip was a military training field, and these pads might
have been for loading or arm/dearm. They tend to separate those pads onto
independent taxiways like this so that one accidental detonation should only
kill one airman. It's all in surprisingly good shape considering it has
apparently been abandoned since WWII. Not that that's very good shape, but
still, I think a light aircraft with a backcountry kit and an adventurous pilot
could probably manage landing and takeoff. We drive around the site for a
while, but I can't find any artifacts more substantial than a 55 gallon drum.
There is a desert tortoise sunning on a taxiway, a reminder that something
lives on. Driving back out to the highway, we realize that we had missed the
actual road to the landing strip and driven up an old wash instead.
OpenStreetMap has made the same mistake, the actual road is in better condition
but somehow harder to see.
Hours further east on I-40, the freeway has been put right over one of the
runways of what I assumed to be a former Army Air Station. Research shows that
it was an auxiliary field of Kingman AAF, but what I assumed to be old base
housing was actually built by the El Paso Natural Gas Company as a work camp
for their pipeline. The airfield was disused by the '50s, and the work camp,
along with the freeway, was built on top of it. Today the site is still used
by EPNGCO to support the nearby pipeline compressor station, and part of it is
an Arizona DOT yard. In the DOT yard is a row of three houses, mostly
abandoned, and foundations for more. These presumably date back to when Arizona
DOT provided housing for its field crews, a practice that seems to have been
particularly common in Arizona compared to other states (most Arizona rest
stops have a caretaker's house on site, for example). One of the three, looking
only a bit less abandoned, is signed as an office of the Arizona Department of
Public Safety.
Hours further east on I-40, we venture into some ranchland to find the Devil's
Hole, also called Dante's Descent. This dramatic sinkhole in an otherwise
unremarkable desert is property of the Arizona State Land Trust, which has put
up a fence around it and posted it as no trespassing. I am sure they have
liability concerns about people falling into the hole; my husband and I found
ourselves unable to get close enough to see the bottom. But still, it seems
like a sad lack of effort to display this rather unusual natural feature.
Arizona tends to be like that, with many fine historic and natural attractions
that can be found only by driving up pipeline access roads and climbing over
fences.
Hours further east on I-40, I pull off to visit an AT&T microwave station.
Sometimes this feels more like paying respects, as these are invariably in poor
condition in the Southwest. This one, a bit east of Flagstaff, is better than
most. The outhouse has been moved to accommodate a modern concrete modular
shelter in the fenced back lot. Two KS-15676 antennas remain on a tower that
once held four. There are ring mounts left over from conical horns, and a
number of modern microwave antennas, apparently backhaul for what look like two
or three cellular base stations sharing this tower. An FAA GATR is just a short
distance behind it, and another microwave site with the look of MCI, but it's
hard to say for sure. AT&T had a unique verve for dramatic microwave
installations (or at least the dual needs of tube-based equipment and nuclear
hardening forced them into one). Most other microwave sites are unremarkable
and indistinguishable, just a bolt-together lattice tower and a portable
shelter they pushed off the back of a flatbed truck.
Finally, we make it back to New Mexico, crossing the border at which the Indian
Crafts billboards largely end. Owens Lake is a sacrifice zone. LADWP knowingly
destroyed the lake, its surrounding valley, and the economy of the region in
order to serve what they believed to be the greater good. The Navajo Nation has
been sacrificed as well, to many causes, but among them to the nation's (more
perceived than actual) need for vast stockpiles of uranium. I'm not sure how
much we can learn from this comparison, but I have always seen the stabilized
tailings as land art. Some future archaeologist could easily wonder at a
ceremonial purpose for these huge black forms, geometrically precise in their
measurements but randomly placed wherever uranium was found.
Many years ago I saw the Cahokia mounds, built by indigenous people centuries
ago. They are an ancient work of landscape architecture, although the people
that built them may not yet have understood them that way. We are still
building mounds today: in the four corners to contain dangerous tailings, in
Owens valley to contain dangerous dust. We are only now beginning to understand
them as landscape architecture. The idea that they are art---that we might
build them not just out of necessity but also according to our desires---is one
that we are being made to come to terms with. There is, after all, no natural
disposition to the problems we've created. There is only a future we might
choose to design.
Let's take a break from our boring topic of regional history to focus instead
on an even more boring topic: implementation details of telephone lines.
The conventional "copper pair" analog telephone line is fading away. The FCC
has begun to authorize abandonment of copper outside plant in major markets,
and telcos are applying to perform such abandonment in more and more areas. The
replacement is IP, part of the overall trend of "over the top" delivery,
meaning that all communications utilities can be delivered using IP as a common
denominator.
This type of service is usually branded as "digital voice." Historically this
seems to have come about to evade VoIP's bad reputation; in the early days of
Vonage and the charmingly sketchy, back-of-magazine-ad Magic Jack, VoIP
products often delivered subpar service. Today, I think "digital voice" has
mostly just become part of price differentiation for carrier-offered VoIP,
since independent VoIP services tend to cost considerably less. Still, there is
some logic to differentiating digital voice and VoIP: because digital voice
service is offered by the operator of the underlying IP network, it benefits
from QoS measures that general internet traffic doesn't. On consumer internet
connections, especially slower ones, digital voice is still likely to be more
reliable than VoIP due to QoS policy.
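That QoS advantage usually comes down to packet marking and queueing policy. As a rough illustration, not a description of any particular carrier's gear, a voice endpoint typically marks its media traffic with the DiffServ "Expedited Forwarding" code point (DSCP 46) so that routers which honor the marking can prioritize it:

```python
import socket

# DSCP EF (Expedited Forwarding, value 46) is the conventional marking
# for voice media. The IP TOS byte carries the DSCP in its upper six
# bits, so the byte value on the wire is 46 << 2 = 0xB8.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2

# Sketch: mark a UDP socket (as an ATA or softphone might mark its RTP
# socket) so that routers honoring DiffServ can queue it preferentially.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
```

Whether anything honors that marking is the whole game: a carrier can enforce it end to end on its own network, while ordinary internet traffic gets no such promise, which is exactly the digital voice versus VoIP distinction above.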
Ultimately the move to digital voice is probably a good thing, as the
abandonment of copper plant will kill off DSL in urban markets and make way
for faster offerings---from telcos, usually PON. I'll take the opportunity
to eulogize the conventional copper pair, though, by going into a bit of
detail about how it actually worked.
To start, a large disclaimer: the details of the telephone network have varied
over time as technology and the industry evolved. Details often varied from
manufacturer to manufacturer, and because Western Electric had a practical
monopoly on the manufacturing of telephone instruments for many decades, it's
pretty much the case that the "standards" for telephone lines in the US were
"whatever Western Electric did," which varied over time. There were some
independent organizations that promulgated telephone standards (such as the
railroads which had their own extensive telephone plants), but they were almost
always completely deferential to the Bell System. Independent telephone
companies initially had to use different conventions than Bell because much
of the Bell telephone system was under patent; after the expiration of these
patents they mostly shifted to doing whatever Western Electric did to benefit
from the ready availability of compatible equipment.
After divestiture, Western Electric's de facto standards-making power was vested
in Bellcore, later Telcordia, today iconectiv, which after the end of the AT&T
monopoly was owned by defense contractor SAIC and is owned today by AT&T's
erstwhile competitor Ericsson. iconectiv continues to promulgate some of the
standards used in the telephone industry today, through contracts awarded by
the FCC.
This is all to explain that the telephone system is actually surprisingly
poorly standardized in the United States. You might expect some hefty ISO
specification for analog telephone lines, but there isn't really one outside of
equipment specifications published by manufacturers. Many international markets
have much more detailed engineering specifications from independent bodies, but
they're usually based directly on Western Electric's practices. To make things
more confusing, it's not unusual for international telephone standards to
either be based on older US practices that are now rare in the US, or to have
standardized on "in practice" properties of the US system instead of nominal
values, or to have mixed conventions from Western Electric with conventions
from European telephone manufacturers like Ericsson. All of these standards end
up being mostly the same, but with a dizzying number of slight differences.
Today, the FCC imposes requirements on telephone lines as part of its
regulatory oversight of telcos. The FCC's requirements are basically to "keep
doing whatever Western Electric did," and are often surprisingly loose. Phones
are really very robust, and the basic design of the system is over 100 years
old. Local loops are routinely in poor condition which throws things out of
spec anyway, and then subscribers use all kinds of weird phones that are not
always that well designed (the history of regulation of telephone instruments
could fill its own post). It's actually fairly intentional that the electrical
specifications in the system are all soft targets.
For the purpose of this article I am mostly going to describe the state of a
fairly modern local loop, such as one connected to a 5ESS or DMS-100 digital
switch. But I will definitely not describe either of these switches totally
accurately. I'm trying to give the right general idea without getting too
bogged down in the details like the last few paragraphs. I'll take the topic
of electrical specifications (potential and current on telephone lines) as a
chance to give some examples of the variation you see in practice.
First, let's talk about the very general architecture of an analog local loop.
Somewhere in town there is a telephone exchange, and somewhere in your house
there is a telephone. The local loop is fundamentally two long copper wires
that go directly from your phone to the exchange. This distance varies greatly.
It's advantageous to keep it under a few miles (mostly for DSL), but in rural
areas especially it can be far longer.
There's a little more detail to what goes on at the two ends of the line. In
your house, you have one or more telephones that you use to make and receive
calls. In the parlance of the industry, these are often referred to as
"instruments" or "subscriber terminals" depending on the era and organization.
Historically, instruments were considered part of the telephone system proper
and were property of your telco. Today, you are allowed to purchase and use
your own telephones. This comes with some downsides. Along with the phone being
your property (and thus your problem), the telephone wiring inside of your home
is your property (/problem).
The telephone wiring in your house runs from jack to jack. In the United
States, all of the telephone jacks in a home are connected in parallel. This is
one of the differences you will find if you look in other countries: because of
exact details of the electrical design of the exchange and the phones, and
where different components are placed, some countries such as the UK require
slightly more complex household wiring than just putting all jacks in parallel.
But in the US, that's all we do.
If you crack open a wall and look at your household telephone wiring, you will
almost certainly find a surprising number of wires. Not two, but four. This, of
course, corresponds with the four pins on a modular telephone jack. Your
telephone only uses two wires (one pair), but dating back to the '60s it has
been a widespread convention to wire homes for two separate telephone lines.
It doesn't cost much more, but it gives the telco the opportunity to upsell you
to a second line. This convention is reflected not only in the wiring but, as I
mentioned, the connector.
The connector used for modern telephones is often called RJ-11, although that
term is not exactly correct in a pedantic way that rarely matters. It's a
little more correct, when speaking of the connector itself, to call it a 6P4C
modular connector. 6P4C means six positions, four contacts---it could have six
pins, but only four of the pins are actually populated. Two are for one phone
line, two are for the other. If you actually have two phone lines fitted to
your house, you will find that the single-line phones common in homes always
use the same pair, so you'll either need an adapter cable or some jacks wired
the other way around to use two lines. If you've ever lived in a house with two
phone jacks right next to each other, it's likely that one of them is wired with
the pairs reversed (or only has one pair at all) so that a standard single line
phone can be used on the second line.
The phone wiring in your house joins your phones in parallel with a device
formally called a Network Interface Device (NID), but often referred to as the
demarc or demarcation point. This is because the NID is not just a technical
object but an administrative concept: it is the border between your property
and the telco's property. Inside of the NID there is usually a short phone
cable (connected to your house wiring) plugged directly into a jack (connected
to the telco's wiring). If your phone ever malfunctions, the telco will likely
ask you to take it directly to the NID, unplug your household wiring, and plug
your phone straight into the jack. If the problem goes away, it is somewhere in
your household wiring, and therefore not the telephone company's problem. This
is their preferred outcome, and you will be told to use your non-functioning
phone to call an electrician.
The NID may be located in different places depending on the details of your
house, when it was built, and when telephone service was installed. Most
commonly it is somewhere outside of the house mounted on an exterior wall
or skirting, often somewhere near the electrical service entry. There are
plenty of exceptions, and especially in older houses the NID may be in the
basement or crawl space. In some cases, mostly mobile and manufactured homes,
the NID may actually be mounted to the telephone pole at the street or your
property line, and the overhead or underground connection to your house is
also your problem.
From the NID, the telephone line makes its way to the exchange. In most cases today
this will be as part of a telephone cable. A telephone cable is an assembly of
many telephone pairs bundled into one sleeve, and they're either run along
utility poles (lower than the electrical lines for isolation) or underground.
Either way, your telephone line will be connected to the cable inside of a
splice closure. For overhead wiring, splice closures are usually black plastic
cylinders hung alongside the cable. For underground wiring, they're usually
gray-green pedestals sticking out of the ground. Either one provides space for
the many pairs in a cable to be spliced together, some to another cable but
some to a drop line that runs to your house.
Most of the time, the cable does not run directly to the exchange. This is not
exactly modern practice, but a common convention is to have two levels of
"feeder" cables. The F1 cable is a very large cable that runs from the
telephone exchange to a neighborhood. There, the F1 cable is spliced onto
multiple smaller F2 cables that run along neighborhood streets.
The splice between F1 and F2 cables, and in general any splice between multiple
cables, is usually done in a larger splice cabinet. Historically, splice
cabinets were sometimes mounted up utility poles, but this made them more
difficult to work on safely, so that arrangement has largely passed into history.
Instead, modern splice cabinets are larger ground-level pedestals, usually a
good 4-8 feet wide with large double doors.
There are several advantages to these splice points. First, they are obviously
necessary for the original installation of the telephone infrastructure.
Second, splice closures and cabinets are intentionally made for easy
modification. This gives the telco a lot of flexibility in fixing problems.
The world of local telephone loops is a dirty one, full of dirt and rain.
Despite precautions, water has a way of working its way into telephone cables
and can cause corrosion which makes pairs unreliable. When you complain, and
the NID test shows the problem is on the telco side, they will likely just
resplice your home telephone service onto a different pair back to the
exchange. This swap from one pair to the other avoids the problem, which is a
whole lot easier than fixing it. Actually fixing problems inside of telephone
cables is a whole lot of work, and with subscriber numbers dwindling in cities
there are usually lots of unused pairs so it's easy to swap them out.
In some areas, your local loop may not actually go directly to an exchange. It
might go to something like a remote line concentrator, or a serving area
cabinet, or a loop extender. These are all variations on the idea of putting
some of the exchange-side equipment in a big curb cabinet, closer to your
house. These arrangements are most common in suburban areas where local loop
lengths are long and subscriber density is fairly high. I'll mostly ignore
this, but know that some of the parts of the telephone switch may actually be
in a curb cabinet in your case. These curb cabinets usually function as remote
components of the switch and connect back by ISDN or fiber.
Once your telephone loop makes it from your phone, through your house wiring,
down a drop cable, through an F2 cable, and then through an F1 cable, it
arrives at the telephone exchange. There it often passes through an area called
the cable vault, usually an underground space in or adjacent to the basement
where cables enter the building, seeping water is drained, and pairs come out
of the armored cable jacket. Everything before this point has been "outside
plant," and is the realm of outside plant engineers. Now, we have entered the
sanctum of the inside plant, and a different department of the company.
From the basement, pairs go to the main frame, basically a really big splice
cabinet inside of the telephone exchange. The main frame allows exchange
technicians to connect pairs to the switch as they please. If an outside plant
technician has fixed your telephone problem by resplicing your house to a
different pair, they will submit a ticket (originally a paper slip) to have the
exchange technicians perform the same remapping on the main frame. Originally,
if you stopped paying your bill, a ticket would be generated for an exchange
technician to un-splice your phone line at the main frame. Today, both of these
are often done digitally instead by leaving pairs connected to the switch's
line cards and reconfiguring the line card mapping.
Which takes us to your local loop's next stop: the actual switch. The many
local loops that a class-5 or exchange switch serves terminate (in the case of
a modern electronic switch) at what are called "line cards." The line card is
responsible for managing all of the electrical aspects of a small set of local
loops connected to it. Depending on the type of switch, the line card may
perform ADC and DAC to convert your analog local loop to digital signaling for
further handling by digital means. Or, it may connect your local loop to a
device called a hybrid transformer that separates your call into two pairs (one
for audio each direction) for further handling in analog form.
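To make the ADC step a little more concrete: North American digital switches sample the loop audio 8,000 times per second and compand each sample with G.711 μ-law, which spends more of the digital range on quiet signals. Here is a minimal sketch using the continuous μ-law formula (real codecs use a segmented table approximation of it, and the function names here are mine):

```python
import math

MU = 255  # North American G.711 uses mu = 255

def mulaw_encode(sample: float) -> float:
    """Compand a sample in [-1.0, 1.0], giving finer resolution near
    zero so quiet sounds keep more detail after quantization."""
    sign = -1.0 if sample < 0 else 1.0
    return sign * math.log1p(MU * abs(sample)) / math.log1p(MU)

def mulaw_decode(value: float) -> float:
    """Invert the companding."""
    sign = -1.0 if value < 0 else 1.0
    return sign * ((1 + MU) ** abs(value) - 1) / MU

# Quiet signals get expanded: a 1% amplitude sample lands at about
# 23% of the companded range.
print(round(mulaw_encode(0.01), 3))
```

The point of the curve is that an 8-bit companded sample sounds roughly as good as a 12-bit linear one for speech, which is why the scheme stuck.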
And that is your local loop. It's called a loop because the two wires,
connected at one end by the switch and at the other end by your phone, allow
current to flow all the way around. "Current loop" is the term in electrical
engineering for this type of arrangement, and it's such a common means of
conveying information by electrical signals that it's often only implied.
For there to be current through the loop, though, someone has to put some
potential onto the line. Historically, phones were expected to provide power,
but this practice had become obsolete by the end of WWII. In modern phone
systems the loop power is provided by the switch. In the case of phones
providing power, the phone contained a battery which was occasionally replaced
by the telco. In the case of switch-provided power, AC-DC rectification was an
imperfect art and there was a need for a backup capability in any case, and so
the telephone switch would get its loop power from a very large battery.
Because of this history, the normal potential on your phone line is known as
battery power. People will sometimes shorten this to say that the switch
"provides battery," especially in situations like test equipment or military
field phones where it isn't always obvious which end battery power will come
from. As another bit of telephone terminology, a telephone line with battery
applied is sometimes called "wet," while one without battery applied is called
"dry."
Battery power in the United States nominally comes from a series of lead-acid
batteries producing a nominal 48v. In practice, there is some considerable
variation. In older phone switches, a float charger was continually connected
to the batteries and so held the battery voltage higher (around 52-54v)
whenever the exchange had utility power available. Likely because of this, some
countries such as Japan actually standardized 50v or 52v as the nominal
on-hook potential. In newer equipment, battery voltage often comes not from
batteries at all but from a regulated switch-mode power supply (that runs
either off of external AC power or a battery bank of a potentially different
voltage). It may therefore be exactly 48v, but some of these power supplies are
actually regulated to 50v to match the typical behavior of older equipment. It
really just depends on the device, and most telephones will function
acceptably with a battery voltage well below the nominal 48v.
For historic reasons, telephone switches are mostly frame-positive. This means
that battery voltage is often described as -48v. The difference doesn't really
matter that much on the telephone end, but can be confusing if you are looking
at documents that use different conventions. Owing to the 1/4" jacks originally
used for telephone exchanges, the two wires in a telephone pair are called
"tip" and "ring." Compared to ground, "tip" should be about 0v while "ring"
should be about -48v. This "ring" is not to be confused with ringing power, to
be discussed later. It's just the naming convention for the wires. Some phones,
especially rotary types, will function fine with the polarity backwards. Most
newer phones won't.
This talk of voltages, so far, has all been assuming that the phone is on hook.
When a phone is on hook, the "hookswitch" disconnects the two sides of the
local loop from each other, leaving an open circuit. The voltage across the
phone thus settles at whatever voltage is applied by the switch. There are two
conditions under which this voltage changes significantly.
First, when the phone is on-hook and you receive a call, the exchange needs
some way to ring your phone. An age-old method of ringing phones is a magnetic
solenoid (arranged so that its core will strike bells when it moves) with the
line passing through it. The coil provides a very high resistance and so passes
little current under normal on-hook conditions. When the switch wants your
phone to ring, it applies a much higher voltage, and alternates it. In fact,
this AC voltage is superimposed on the normal DC battery voltage, and is known
as "ringing power" or "ringing voltage." Ringing voltage is AC at 20Hz by wide
agreement (this works well for the simple mechanical solenoid ringers), but the
potential has varied over time and by manufacturer. It also just tended to vary
based on equipment condition, since it was originally produced by
electromechanical means such as a motor-generator. 90v is fairly typical, but
up to 110v AC is acceptable.
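To make the superposition concrete, here's a minimal sketch of the line voltage while ringing is applied. Note that the 90v figure is RMS, so the instantaneous peaks swing considerably further (the constants are the typical values discussed above, not any particular switch's spec):

```python
import math

V_BATTERY = -48.0   # DC battery potential, volts
V_RING_RMS = 90.0   # ringing voltage, volts RMS
F_RING = 20.0       # ringing frequency, Hz

def line_voltage(t: float) -> float:
    """Instantaneous line potential during ringing: the 20 Hz AC
    ringing waveform riding on top of the DC battery voltage."""
    return V_BATTERY + V_RING_RMS * math.sqrt(2) * math.sin(
        2 * math.pi * F_RING * t)

# Peak excursions relative to ground: roughly -175 V and +79 V.
peak = V_RING_RMS * math.sqrt(2)
print(round(V_BATTERY - peak, 1), round(V_BATTERY + peak, 1))
```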
The fact that a telephone line can carry 110v AC is a surprise to many, and
should discourage licking loose phone wiring, but from a fire safety
perspective it's reassuring to know that the current of the ringing power is
rather limited. The details of how it's limited, and to what level, depend on
the telephone switch in use, but the current-limiting at the switch allows
telephone lines to be handled as class 2 or power-limited for purposes of the
electrical code.
The astute reader might notice an interesting problem here. Multiple telephones
may be connected in parallel, but ringing voltage provides only limited power.
If you have enough phones, will there not be enough ringing power?
Yes.
The solution to this problem is sort of an odd one. Federal standards
created the concept of a "ringer equivalence number" or REN. When the REN was
developed, the standard telephone instrument from Western Electric was the
model 500, and other Western Electric phones for PSTN use were made to match
the 500. So, a REN is defined as the impedance and resistance imposed by a
single Western Electric model 500. Local loops are expected to provide enough
ringing power for up to 5 REN. In practice, this is rarely a concern today.
Few modern phones use the power-intensive electromechanical ringer of the 500,
and if you look at the literature that came with a digital phone like a DECT
cordless set you will likely find that it is specified as 0.1 REN. You can have
a lot of modern phones before you run into problems.
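The REN budget is simple arithmetic, essentially the check an installer does mentally before daisy-chaining phones. A sketch, with an entirely hypothetical household's device list:

```python
# Each device's REN is printed in its documentation; the local loop
# is expected to supply ringing power for a total of 5 REN.
REN_LIMIT = 5.0

devices = {  # hypothetical household
    "WE model 500": 1.0,   # electromechanical ringer: by definition, 1 REN
    "DECT base": 0.1,
    "answering machine": 0.2,
    "fax modem": 0.3,
}

total = sum(devices.values())
verdict = "OK" if total <= REN_LIMIT else "too many ringers"
print(f"total {total:.1f} REN, {verdict}")
```

With mostly electronic ringers, even a phone-happy household stays far under the limit; it takes five model 500s to exhaust it.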
Ringing current only applies when there is an incoming call and the phone is on
hook. When you take the phone off hook, the hookswitch connects the two sides
of your local loop together via the phone's "voice circuit." This prompts the
switch to stop applying ringing current. Once the voice circuit is connected,
the voltage across your phone drops considerably. Power is now flowing and so
Ohm's law has taken control. Local loops are long and made of small-gauge wire;
just the resistance of the telephone line itself is often over a thousand ohms.
Besides the wiring itself, there are two other notable components the loop
current must pass through. First, there is some resistance imposed by the
switch. The details of this resistance depend on the exact switch in use but
the most common situation is that the phone line passes through a coil in a
"line relay" assembly as long as your phone is off hook. This relay informs the
switch logic of the state of your telephone line. The resistance imposed by the
line relay varies by switch, and on many switches it's adjustable, at least in
two or three steps. This allows an exchange technician to make an adjustment if
your loop is very short or very long, to keep the loop current more in a normal
range. 600 ohms is fairly typical for line relays.
The second of these components is the voice circuit of your phone itself, which
also varies by phone but is typically around 200 ohms. Because the phone
is in fact powered by the loop current, there is a certain requirement for
enough power for the phone to operate. AT&T specified that the maximum local
loop resistance should be 2400 ohm. At 2400 ohm and 48v battery power, the loop
current will be 20mA. 20mA was about the lower bound at which the voice circuit
in typical Western Electric models (such as the 500) performed acceptably.
Modern electronic phones are often workable at lower currents (and thus higher
loop resistance), but audio quality will become worse.
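The arithmetic behind AT&T's figures is just Ohm's law over the whole series loop. A sketch, assuming the 2400 ohm limit covers the total series path (line relay, outside plant, and the phone's voice circuit, using the typical component values mentioned above):

```python
def loop_current_ma(v_battery: float, wire_ohms: float,
                    relay_ohms: float = 600.0,
                    phone_ohms: float = 200.0) -> float:
    """DC loop current in mA: battery voltage across the series sum of
    the line relay, the outside-plant wiring, and the phone."""
    return 1000.0 * v_battery / (relay_ohms + wire_ohms + phone_ohms)

# AT&T's worst case: 2400 ohms total at 48 V gives the 20 mA floor.
print(round(1000 * 48 / 2400, 1))           # headline figure
print(round(loop_current_ma(48, 1600), 1))  # same total, by component
```

Shortening the loop (less wire resistance) pushes the current up, which is exactly the problem the adjustable line relay resistance is there to correct.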
Too high a loop current can also be a problem. There isn't a widely accepted
upper limit that I know of, but it was known by the '60s at least that high
loop currents due to low resistance loops (usually found when the subscriber
lived close to the telephone exchange) caused higher temperatures in the line
relay which could result in early failure of the line card. This is one of the
reasons line cards often provide an adjustment, so that short loops can have a
higher line relay resistance to reduce the current. Modern line cards often
monitor the current on a loop and raise a trouble ticket if it is abnormally
low or high. Either one tends to indicate that there is a problem somewhere in
the line. Modern line cards also provide better overcurrent protection and will
usually automatically disconnect battery power (and raise a trouble ticket) if
the current goes significantly above normal. This is why you can't get greedy
if you are trying to charge your laptop on Ma Bell's dollar. Older equipment
may have just used a fuse, so newer systems at least have the forgiveness of
an automatic reset a few seconds after laptop-charging mistakes.
Under off-hook conditions, with loop current flowing, the voltage across your
telephone will be much lower. 3-5v is pretty typical, but the value will vary
with voice modulation as you use the phone and isn't very important.
It's important to emphasize that nothing in this system is very well regulated.
Well, with modern equipment the battery voltage is often precisely regulated,
but that's more a natural consequence of switch-mode power supplies than any
aspect of the design of the telephone system. Battery voltage is 48v in the
same sense that automotive electronics are 12 or 24v: in practice it's often
higher. Loop current is limited, but fairly loosely. The acceptable range is
pretty wide in either case.
To put some more solid numbers on it, here are some specs taken from the
manual for a PABX. They seem pretty typical, except the upper limit for
off-hook voltage, which seems unusually low; many sources say 4-9v. Note
the two different ringing voltage specifications: these are to accommodate
two different national standards.
On-hook voltage: 40-50 V DC
Off-hook voltage: 4 to 6 V DC
Ringing: 90-100 V AC 20Hz or 60-90 V AC 25Hz.
Loop current: 25-40 mA (up to 80 mA in "unusual circumstances")
And for flavor, numbers I took from the spec sheet of a different business
telephony product:
On-hook voltage: -54v to -40v DC
Off-hook voltage: -20v to -5v DC
Current: 23-35 mA
Note the different voltage convention, and that all the numbers are, well,
similar, but different. That's just how it is!
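Loose as the numbers are, they're consistent enough that you can make a rough guess at line state from a single DC voltage reading, which is essentially what a lineman's meter check does. A sketch using the ranges quoted above, slightly widened (real equipment tolerates broader bands still):

```python
def classify_line(v_dc: float) -> str:
    """Rough line-state guess from a DC voltage reading, using the
    approximate ranges from the specs above."""
    v = abs(v_dc)  # tolerate either sign convention
    if 40 <= v <= 54:
        return "on-hook (battery present, open loop)"
    if 3 <= v <= 9:
        return "off-hook (loop current flowing)"
    if v < 1:
        return "dead (no battery)"
    return "out of spec"

print(classify_line(-48))  # battery sitting on an open loop
print(classify_line(5.2))  # a phone is off hook somewhere
```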
Where we left off, Los Alamos had become a county, but the town itself
continued to be directly administered by the Atomic Energy Commission (AEC).
The Atomic Energy Communities Act (AECA) mandated the AEC to dispose of the
towns it owned by transferring the property to private owners or government
agencies. This included not just the individual houses (which had all been
rented by AEC) but utilities, parks, and other municipal property.
In 1963, shortly after the addition of Los Alamos to the AECA, the AEC started
the process of transferring public resources. The schools were a relatively
simple case, as a county school board had existed since the creation of the
county in 1949, but it still took until 1966 for the AEC to give the school
board title to the real estate the schools occupied. A case that seemed
more complex, but also moved faster, was the electrical supply.
It's likely that the AEC had their own motivations. Electricity was fairly
expensive and limited in Los Alamos, and the lab had an almost perpetual need
for more power. Transferring the electrical utility to an outside agency might
have benefited the AEC by moving system expansions into state and municipal
financing and out of the federal budget. In any case the electrical system was
actually reasonably cut-and-dried since New Mexico had a well-established system
of electrical franchise agreements and there were plenty of utility operators
with experience in power distribution. As mandated by the AECA, the AEC put
the electrical system out to bid.
One of the bidders was entirely unsurprising: The Public Service Company of New
Mexico, today known as PNM [1]. The politics of Los Alamos utilities turn out to
be one of the most complex issues surrounding the transition from AEC control,
and PNM's attempt to purchase the electrical system is a great example of the
debate.
First, it should be understood that electrical service in Los Alamos was
expensive to offer due to the remote location and difficult terrain around the
town, but Los Alamos residents had been largely insulated from this issue. It
was the AEC that really footed the bill for electrical power, and residents
were charged a rate that was designed to be similar to other electrical rates
in the area---not to recoup the AEC's cost. There was a clear risk that
electrical rates in Los Alamos would increase significantly after transfer away
from the AEC, and much of the politics of mid-'60s Los Alamos was animated by
this concern.
The debate over ownership of utilities played out in a series of delightfully
mid-century newspaper ads taken out not just by PNM but also by organizations
with names like "People's Committee for Private and Professional Utilities" and
"Los Alamos County Citizens for Utilities." The debate was something like this:
PNM argued that their large size and experience in the electrical industry
would allow them to sustain low rates in Los Alamos. Some Los Alamos residents
favored this argument, taking the view that PNM was an established, experienced
organization that would operate the utility based on sound analysis and
long-term planning. This kind of thinking seems to have been particularly
popular in Los Alamos, where nearly all the residents worked for another huge
bureaucracy that considered technical expertise its defining identity. Further,
it was thought that if the county owned the utilities, the financial shortfall
would likely be made up by increasing county property taxes---effectively
increasing electrical cost, even if the rates stayed the same.
On the other hand, advocates of county ownership felt that a public operator,
without the motive of returning profit to its shareholders, could ultimately
keep costs lower. In their support, consulting engineers hired by both the AEC
(to evaluate bids) and the county (to support their bid) concluded that it
would be possible to operate the system profitably without exorbitant rates.
County ownership was often seen as the more conservative option anyway, since
it would allow for the privatization of the utility in the future by franchise
agreement. To further assuage concerns about the county's competence to run the
utilities, the county board advertised in the newspaper that it intended to
directly hire staff from the Zia Company, the private contractor the AEC had
used for maintenance and operations of the utilities.
Finally, the county had a rather unusual motivation to assume control of
utilities. Because the laboratory was essentially the only industry in Los
Alamos and was exempt from property tax due to federal ownership, the county
had only a very limited ability to raise revenue. The county felt that the
profit it could make from utility operations would become a key revenue source,
particularly since the lab itself would be a customer. In the words of state
senator Sterling Black, county-owned utilities would become "a club we can wave
over the AEC." Considering the AEC's tendency to completely steamroll county
decisions when they conflicted with federal policy, this kind of political
power was extremely important.
The idea of informing AEC decisions by referendum was not a new one. Just a bit
earlier in 1963, Los Alamos residents had gone to the polls to choose who would
receive the Los Alamos Medical Center. Their choice was between the Lutheran
Hospitals and Homes Society (an out-of-state concern but known to operate the
hospital in Clayton at the time), the Los Alamos Medical Center Inc. (an
organization formed by Los Alamos residents expressly to operate the hospital
locally), or "neither." This issue had been substantially less contentious as,
it turns out, the Lutheran organization had submitted their proposal to the AEC
before they knew about the local organization. Once they learned of it, they
apparently deferred to the local initiative, telling the Santa Fe New
Mexican that they supported local operation of the hospital and would pursue
ownership only if the county invited them to do so.
In the case of the hospital, though, we can already see the formation of the
movement against local organizations and towards larger, more experienced
organizations that were likely to offer a more predictable and stable---even if
less locally driven---operation. The Lutheran organization won the referendum,
and controlled the hospital until 2002, when another attempt at establishing a
local hospital operator failed.
The story of the Los Alamos Medical Center illustrates some other challenges of
the military-to-municipal transition as well. When the AEC built the Los Alamos
Medical Center, it included housing for staff as part of the complex. As a
result, the hospital had two 24-unit apartment buildings directly attached to
it. This seems to have established a tradition in which another condo building,
owned by a group of physicians, was attached to the hospital in 1982. The odd
mix of hospital and housing, and the resulting complex lease agreements,
complicate ownership transfers of the hospital to this day.
The hospital referendum provided a clear example for the utility referendum,
but the utility question attracted significantly more debate. This is perhaps
ironic considering that the utility referendum was also less significant in the
actual selection process. Because of the precise wording of the AECA, the AEC
had determined that it would follow the outcome of the hospital referendum
exactly. In the case of utilities, though, the AECA mandated that the AEC
consider a set of technical and economic factors as well. The referendum would
not be absolute, it would only be one of the factors in the AEC's decision.
A further complication in the decision was the exact nature of the county's
legal authority. New Mexico counties are not normally authorized to operate
utilities (although several do today under various provisions, such as
Bernalillo County's partnership with the City of Albuquerque to operate the
Albuquerque-Bernalillo County Water Utility Authority). In the legislative
session at the beginning of 1963, the New Mexico legislature passed a
constitutional amendment intended to facilitate the Los Alamos transfer,
without further fanning local debates by requiring the City of Los Alamos to
incorporate [2].
Although the amendment didn't take effect until 1964 following its approval in
a statewide referendum, it was very much on the minds of the county government
as they considered utilities. We will revisit this amendment in more detail
shortly, but it had an important impact on the utility debate. In the 1960s,
counties in New Mexico granted utility franchises under state regulations. This
meant that such utilities had their rates reviewed by the state Public Service
Commission. Because of the unusual status of Los Alamos county as the only
"category 7" county (by the '60s now called an "H-class" county), the situation
was somewhat more complex, and Los Alamos County probably had the opportunity
to function as a municipal government instead, which allowed for complete home
rule of utilities. In other words, it seemed that Los Alamos utilities could
be regulated by the Public Service Commission, but they didn't have to be,
depending on exactly how the county wrote up the paperwork. The upcoming
constitutional amendment in fact solidified this situation.
One argument for county ownership was then that it would give the public better
control of the utility, since it would not necessarily be subject to state rate
regulation. One argument the other way was that a private utility operator
could still be state regulated, if so desired. The regulation issue really only
served to confuse the debate, as did a number of other events of 1963,
including minor scandals of the county withholding documents and accusations of
harassment and illegal campaigning by various individuals and political
organizations in Los Alamos. Notably, the League of Women Voters, long the most
important and highly regarded non-partisan political organization in New
Mexico, was widely accused of compromising its impartial, non-partisan values
by openly advocating for county control. In some ways this was actually not a
very partisan position, as in a rather unusual political turn Los Alamos's
Democratic Committee and Republican Committee joined together in endorsement of
county ownership. The Santa Fe New Mexican quipped that the situation had
become David vs. Goliath, except that somehow David was the mighty PNM (and
Southern Union Gas, the company that had previously bought out PNM's gas
operations) and Goliath was tiny Los Alamos.
A full accounting of the politics around the utility acquisition could quite
possibly fill a book, as very nearly every aspect of Los Alamos life was
somehow a factor in the debate. It only adds color to the situation that the
federal officials involved, being AEC personnel, included well-known scientists
like Glenn Seaborg. Just the newspaper editorials written in support of either
position would make for a hefty binder. I will try not to get stuck on this
forever and jump to the conclusion: David did not fell Goliath, and Los Alamos
voters favored county control of utilities by 71%. Although the full transfer
process would take several years to complete, the decision was made and the
electrical and gas infrastructure were transferred to Los Alamos County, where
they remain today.
There were yet more utilities to consider. Telephone service in Los Alamos had
been provided by the AEC, with all service ultimately connected to the exchange
at the laboratory. This, too, needed to be transferred, and in 1964 the AEC put
out a request for bids. There was little precedent for government operation of
telephone systems, and it was viewed as legally difficult, so it was clear that
some private operator would step in. Two made an offer: Universal Telephone
and Mountain States Telephone and Telegraph.
Universal Telephone was born of Milwaukee, Wisconsin. Wisconsin was known in
the middle part of the 20th century for its particularly skeptical stance on
telephone monopolization under AT&T. Wisconsin was among the first states to
introduce pro-competition telephone regulation, namely the requirement that
competitive telephone carriers interconnect so that their customers could call
each other. Universal Telephone was thus among the most ambitious of AT&T's
competitors, and no doubt saw Los Alamos as an opportunity to capture a brand
new telephone market and demonstrate the capabilities of those outside the
Bell system.
Mountain States, though, was an AT&T company, and brought with it the full
force of AT&T's vast empire. The complex landscape of telephone regulation
immediately became the key point of this fight, with AT&T arguing that the
state telephone regulation agreements gave Mountain States the exclusive legal
authority to operate telephone service in Los Alamos county. Far from a
referendum debated in the newspapers, the future of Los Alamos telephony was
mostly debated by lawyers behind closed doors. The AEC's lawyers sided with
Mountain States, disqualifying Universal Telephone as a bidder. By the end of
1964 the writing was on the wall: Mountain States would be the new telephone
company (besides, AT&T rarely lost a fight), although legal arguments would
continue into the next year. In early 1966, the deal was complete and Mountain
States began to staff its Los Alamos operation. They had a lot of work to do.
Even into the '60s the telephone exchange in Los Alamos supported only 5-digit
dialing. While 5-digit dialing is not unusual in some large private phone
systems (and indeed is still in use by LANL today), the use of 5-digit dialing
across a town was exceptional and meant that Los Alamos telephone users could
not directly dial outside of the town. In 1968, Mountain States began
construction on a new telephone building with a distinctive microwave tower for
direct connectivity to the broader AT&T network. The new switch installed
there, a 1ESS, offered for the first time touch-tone dialing, direct-dial long
distance, and even three-way calling. At the same time, numerous new lines were
installed to take advantage of the higher capacity of the new exchange.
Telephone service became far easier to order in Los Alamos and on Barranca Mesa
(not yet always considered part of the town), although White Rock would have to
wait almost another decade for improved service availability.
Electronic telephone switching wasn't the only new idea in Los Alamos. The
entire "H-class" county designation of Los Alamos was extremely confusing, and
more and more the nature of the county was considered a mistake going back to
its start in 1949. State legislators had tried to make Los Alamos a special
kind of county that had the authority to do certain things normally reserved to
municipal governments, but the piecemeal way they had done this created a lot
of problems and required some sort of state legislation to address one problem
or another nearly every year from 1949 to 1969. This included two cases of
significantly reworking the county's enabling legislation, which is part of why
it went from "7th category" to "class H" even though H isn't the 7th letter.
Los Alamos was created as a whole new kind of county in 1949, made yet another
whole new kind of county in 1955, and substantially reworked in 1963---and it
was still a mess. Something had to change, and the people of Los Alamos
along with the state legislature were increasingly leaning towards the idea of
incorporating like a city---but as a county.
In 1963, the state legislature passed Constitutional Amendment #4. In 1964,
it was approved by state voters and took effect. This amendment, now article
X section 5 of the New Mexico Constitution, created yet another brand new
kind of county: the incorporated county.
The applicability of X § 5 was intentionally limited to Los Alamos, although
the realities of legislation discouraged writing it that way. Instead, it was
made applicable to any county with an area under 140 square miles and a population
greater than 10,000 (this latter rule mostly just to agree with other rules on
incorporating municipalities). You can no doubt guess the number of counties
under 140 square miles in New Mexico, a state known for its vast counties. Even
tiny Bernalillo County, sized for its dense urban population, measures over
1,000 square miles. Despite its ostensibly general criteria, X § 5 applies
uniquely to Los Alamos. It's even written such that any new counties that
small would be excluded: only counties that existed at the time of its passage
are eligible.
The amendment allowed Los Alamos county to incorporate, just like a
municipality:
An incorporated county may exercise all powers and shall be subject to all
limitations granted to municipalities by Article 9, Section 12 of the
constitution of New Mexico and all powers granted to municipalities by
statute.
In other words, it would resolve the problem of Los Alamos County's confusing
city-county nature by making it, well, a real city-county. This concept is not
unique nationally, and some states, such as Virginia, rely on it extensively
with "independent cities." San Francisco is a well-known example of a city-county
in the West. Los Alamos has much the same status today, but due to the history
of the city being closely tied to AEC administration, the city-county title is
rarely used. The Los Alamos government today identifies itself only as Los
Alamos County (slogan: "Where Discoveries Are Made"), and its nature as a
municipal government is relegated to the legal details.
Los Alamos's administrative situation is actually oddly obscure today. Wikipedia
refers to Los Alamos proper only as a census-designated place and county seat,
making no mention that the county it is seat of is actually a municipality as
well. If I could mount one odd political campaign in Los Alamos it would be to
restyle the government as The City and County of Los Alamos for aesthetic
reasons, but then the strict limitations in X § 5 make it unlikely that the
idea will catch on more broadly in New Mexico.
But what about the actual incorporation process? An incorporated municipality,
under state law, requires a charter. The constitutional amendment provided
that such a charter could be developed by a charter committee and then approved
by a majority of the voters. This detail of the incorporation process would
become a very tricky one.
The first major charter effort gained momentum in 1966, when a charter
committee completed a draft county charter and began the process of gaining
electoral support. In doing so, they brought the debate over public utilities
back to life.
Incorporation would normalize the county's authority to franchise, operate, and
regulate utilities, but changed the legal situation enough that the county
would need to rebuild its administrative bureaucracy for utility oversight. The
1963 concern that utilities would become a "political football" (in the words of
one county commissioner, at least) was back in the spotlight, as the new charter
proposed a utility advisory committee but gave it little actual authority and
mostly deferred utility matters to the elected county commission.
Utility operations might seem like a small detail of municipal government but,
as the original 1963 referendum shows, it was a very big detail in Los Alamos.
The problem of the 1966 charter and its utility advisory committee led to many
newspaper articles and opinion columns, and a group calling itself the
"Citizens for Representative Government" formed a major opposition to the '66
charter. While their name makes it sound like a broader issue, the
representative government they sought was mostly the ability to put utility
rates to referendum, denied by the draft charter. Another, less decisive
problem was the charter's general increase in the powers of the county manager.
Several county positions which were previously elected would be appointed by
the manager, mostly positions related to finance and tax administration.
In a cunning political move, the Citizens for Representative Government seem to
have made themselves the principal source of information on the draft charter.
They mailed a copy of the proposed charter to every Los Alamos resident,
including their phone number for questions. A nearly full-page newspaper ad
featured the charter represented as arrows shot towards the hapless Los Alamos
public, captioned "There's still time - duck now! Vote no February 8th."
Major political points made in the papers included limited recall powers,
the possible appointment of judges by the county council, and the lack of a
requirement for competitive bidding on county contracts... but the opposition
still focused on utility management and utility rates as the single largest
issue.
Whether through good political organizing or the genuine public fear of
politicization of town utilities, the '66 charter failed its referendum vote.
The general consensus, reflected in the papers, is that the charter was mostly
acceptable but there was widespread opposition to a few specific sections.
Given the situation, there was not much to do except to start over again, and
that they did. At the end of 1967, a new charter committee was formed. As a
model, they worked from the new City of Albuquerque charter, one of several
that rotated Albuquerque through various forms of city government in the
middle of the century.
The second charter committee had been asked to work quickly, and they delivered
on a tight timeline. In early 1968 a draft had been published and was the
subject of much debate once again, and as the end of that year approached it
had reached the form of a final draft. In many ways, this second charter was
more conservative, leaving much of the county government to function the same
way it had before. Like the City of Albuquerque at that time, it was based on a
manager-commission model without an elected executive.
This time around, proponents of the charter leaned heavily on its amendment
mechanism as a selling point. Being such a new municipality, and with an
unusual administrative situation, it seemed clear that Los Alamos government
would require flexibility. In an effort to address the specific complaints
about the '66 charter, this new charter also formed an independent utility
board with members appointed by the county council on a staggered basis... and
direct authority over utilities, alongside a professional utilities director
hired by the commission.
This new charter gained public support much more easily, apparently benefiting
greatly from the lessons learned during the last attempt. In September it was
approved by the county commission, and in December a referendum was held for
public approval. This time, it passed---with lower turnout than the last effort,
suggesting some mix of greater support with, well, less opposition. Still, not
making enemies is a kind of success, and at the beginning of 1969 Los Alamos
took on its identity as New Mexico's only "incorporated county."
It took a lot of attempts for Los Alamos to make the transition, perhaps five
depending on how you count, but the townsite eventually did convert from AEC
property to a municipality. We've even talked about the transition of the
school district and electrical, gas, and telephone service. But what of water
service? That story is complex enough I could go on about it for a while,
and I probably will, in a part III.
[1] This "public service" naming is extremely common in older, central US
electrical utilities, but should not be taken as implying public ownership.
Nearly all of the "public service companies" are private corporations with no
history of public ownership.
[2] It may be important here to understand that New Mexico is one of the states
that achieves most major legislation through constitutional amendments. The
New Mexico Secretary of State publishes a convenient, pocket-sized copy of
the constitution, which is 218 pages long.