One of the more common arguments you hear in contemporary computer security
circles is about hardware trust and embedded controllers. Technologies like
Intel ME, the ARM Secure Element, TPMs, etc., remind us that the architecture
of the modern computer is complex and not entirely under our control. A typical
modern computer contains a multitude of different independent processors, some
running relatively complete software stacks. They come from different vendors,
serve different purposes, and are often unified only by opacity: for several
different reasons, the vendors of these controllers don't like to discuss the
details of the implementation.
It's tragic how the modern PC has put us into this situation, where we no
longer have control or even visibility into the working of core, privileged
components of our computers---components running software that could
potentially be malicious. By the modern PC I do, of course, mean the IBM PC
of 1981.
I don't want to belabor this post with much background, but if you are quite
new to the world of computer history I will briefly state one of the field's
best-known facts: for reasons that are ultimately more chance than logic, the
original IBM PC established many de facto standards that are still used in
computers today. "PC compatibles," in the 1980s meaning computers that could
run software targeted originally at the IBM PC, had to duplicate its
architecture rather exactly. The majority of modern computers, with Apple
products as a partial exception, are directly descended from these PC
compatibles and are thus strongly influenced by them.
Let's talk a little bit about the architecture of the IBM PC, although I'm
going to pull a slight trick and switch gears to the 1984 PC/AT, which has more
in common with modern computers. By architecture here I don't mean the ISA or
even really anything about the processor, but rather the architecture of the
mainboard: the arrangement of the data and address busses, and the peripheral
controllers attached to them. The 80286 CPU at the core of the PC/AT had a
16-bit data bus and a 24-bit address bus. Together, these were called the
system bus.
The system bus connected, on the mainboard, to a variety of peripheral devices.
The RAM and ROM sat on the bus, as well as the 8254 timer, the 8259 interrupt
controller (actually two of them), the 8237 DMA controller (once again, two of
them), and the 8042 keyboard controller.
We are going to talk about the keyboard controller. See, there's something sort
of interesting about these peripheral controllers. Most of them are
purpose-built ICs, like the 8237 which was designed bottom to top as a DMA
controller (actually by AMD, and used under license by Intel). The 8042,
though, is not really a keyboard controller. It's a general-purpose
microcontroller from the same product line used as a CPU in some early video
game consoles. The 8042 on the PC/AT mainboard was simply programmed with
software that made it function as a keyboard controller, reading scancodes from
the keyboard, interrupting the CPU (via the 8259), and reporting the scancode
read on the system bus.
The actual software on the 8042 is poorly known, and seemed to vary in its
details from one model to the next (this is one of the things that could create
subtle compatibility issues between ostensibly PC-compatible computers). In
fact, the OS/2 museum
reports
that the software of the 8042 on the PC/AT itself wasn't dumped and analyzed
until 2009-2010. And IBM, of course, was not the only vendor of 8042-based
keyboard controllers. For decades following, various manufacturers offered
keyboard controllers intended to replicate the function of IBM's own software.
These keyboard controllers were known to have a number of undocumented
commands, the details of which depended on the mainboard vendor. And most
interestingly, since they were general-purpose microcontrollers and had spare
capacity after handling the keyboard, the 8042 in practice served as more of a
general-purpose embedded controller. The term "keyboard controller" is a
misnomer in this way, and it would probably be more accurate to call it an
"embedded controller," but that term wasn't yet in use in the mid '80s.
The sort of miscellaneous functions of the 8042 are well-illustrated by the A20
mask in the PC/AT. Explaining this requires brief IBM PC backstory: the
original PC used the 8088 processor (a variant of the 8086), which had a 20-bit address bus. The 80286
in the PC/AT offered a 24-bit address bus, allowing it to address appreciably
more memory (16MB as compared to 1MB). The problem is that the 8086 had a
mostly accidental behavior in which memory addresses beyond the 20-bit limit
(the details of this relate to the unusual addressing mode used by the 8086)
would wrap back to zero and instead address early sections of memory. Some
software written in the 8086 period actively exploited this behavior as an
optimization, and some of that software was quite popular. Suddenly, on the
PC/AT, these just-past-1MB addresses actually referred to the new upper section of
memory. Software that assumed it could address early memory by wrapping past
the 1MB limit was in for a surprise---typically one that halted execution.
The 80386 would later introduce "virtual 8086" mode, which addressed this problem by
emulating the 8086 in a bug-compatible way, but the 80286 lacked this feature.
To make sure that existing IBM PC software would be usable on the PC/AT, IBM
introduced a trick to the architecture: the A20 gate. An extra logic gate on
the motherboard would force the A20 line (the 21st bit of the address bus, due
to zero-indexing) to zero, restoring the original "wraparound" behavior. For
older software to run, the A20 gate should be "closed" for compatibility. To
use memory beyond 1MB, though, the A20 gate needed to be "open" so that the
second MB of memory could be addressed.
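To make the addressing arithmetic concrete, here's a small C sketch of the
wraparound case. FFFF:0010 is just an illustrative segment:offset pair;
real-mode addresses are formed as segment times 16 plus offset.

    /* Illustration of the 8086/8088 wraparound that the A20 gate preserves.
     * Real-mode addresses are computed as segment * 16 + offset, which can
     * exceed the 20-bit address bus of the original PC. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t segment = 0xFFFF, offset = 0x0010;
        uint32_t linear = (segment << 4) + offset;   /* 0x100000, just past 1MB */

        uint32_t wrapped = linear & 0xFFFFF;   /* A20 forced low: wraps to 0x00000 */
        uint32_t full    = linear & 0xFFFFFF;  /* A20 open on the 286: 0x100000    */

        printf("FFFF:0010 -> linear address %06X\n", linear);
        printf("  A20 gate closed: %06X (wraps to the bottom of memory)\n", wrapped);
        printf("  A20 gate open:   %06X (start of the second megabyte)\n", full);
        return 0;
    }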
Anyone who has studied the boot process of modern computers knows that it is
littered with the detritus of many generations of awkward additions. The PC/AT
was sort of the genesis of this situation, as the 80286 introduced the concept
of "real" vs. "protected" mode and the need for the operating system or
bootloader to switch between them. Along with this came a less widely known
need, to also manage the A20 gate. On boot, the A20 gate was closed. Software
(such as an operating system) that wanted to use memory past 1MB had to open it.
Given that the A20 gate was just a random bit of extra logic in the mainboard,
though, how would software interact with it?
The answer, of course, is the keyboard controller. The 8042 could be sent a
command that would cause it to open the A20 gate. In fact, this wasn't the
only role the 8042 served in the basic management of the processor. The 80286
couldn't switch from protected mode back to real mode without a reset, so for
the computer to switch between real mode and protected mode dynamically (such
as for multitasking with legacy software) something had to "poke" the reset
line on demand. Once again, the PC/AT placed the 8042 keyboard controller in
this role.
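For a sense of what "sending the 8042 a command" looked like in practice, here
is a sketch of the sequence that BIOSes and bootloaders conventionally used.
Treat it as illustrative only: it has to run in ring 0, the exact command set
varied between 8042 firmware revisions (as noted above), and the port and
command numbers here are the conventional ones---0x64 for command/status, 0x60
for data, 0xD1 to write the controller's output port, 0xFE to pulse reset.

    /* Sketch of the classic "enable A20 / reset CPU via the keyboard
     * controller" sequences. x86 only, GCC/Clang inline assembly. */
    #include <stdint.h>

    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t val;
        __asm__ volatile("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    /* Wait until the controller can accept another byte
     * (bit 1 of the status register = input buffer full). */
    static void kbc_wait(void) {
        while (inb(0x64) & 0x02)
            ;
    }

    void enable_a20_via_kbc(void) {
        kbc_wait();
        outb(0x64, 0xD1);   /* command: write the 8042 output port      */
        kbc_wait();
        outb(0x60, 0xDF);   /* output port value with the A20 bit set   */
        kbc_wait();
    }

    void reset_cpu_via_kbc(void) {
        kbc_wait();
        outb(0x64, 0xFE);   /* command: pulse the CPU reset line        */
    }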
Over time, the 8042 tended to accumulate more roles. It handled the mouse as
well in later computers with PS/2-type mice. It drove the PC speaker on some
motherboards. Some early laptops used the 8042 for power management or display
functions. None of these things was ever really standardized, and they often
become a headache today.
Over time, the number of discrete components on motherboards has plummeted.
The 8042 fell victim to this process fairly early on. On most PC-compatible
computers, the 8042 gave way to the "Super I/O controller." This larger
general-purpose microcontroller combined even more functions, bringing the
serial, parallel, floppy, and keyboard/mouse controllers into one device
that also handled auxiliary functions like fan speed control, MIDI/game port
(different functions but for practical reasons usually packaged together),
power management, and yes, the A20 gate and processor reset. The Super I/O
is still more or less present in computers today, but it's usually referred to
as the Embedded Controller or EC in modern architectures. The EC is often
connected to modern processors over the LPC or "low pin count" bus, which
amusingly is a somewhat modernized version of the original ISA expansion
architecture on the PC/AT... long ago replaced by AGP, PCI, PCIe, etc. for
most purposes.
That said, all of these terms are becoming somewhat irrelevant today. Modern
processor architectures are pushing perpetually further into unified chipset
designs in which progressively larger controllers of various names integrate
more and more functions into one device. I learned about computer architecture
in the days in which the "northbridge" and "southbridge" were the major active
components of the system bus... but few computers today have discrete north and
south bridges (or front-side and back-side buses for that matter), the
functions of the two having been integrated into various combinations of the
processor die and a large, monolithic motherboard chipset like the platform
controller hub (PCH), with the functions of essentially all motherboard
components put into one IC.
The 8042 thing all feels like sort of a hack, and indeed it was, but the PC
and PC/AT (and PS/2 for that matter) architectures were full of such hacks.
The IBM PC had a device we've mentioned, the PIC or programmable interrupt
controller, that served to multiplex eight different interrupt lines onto the
processor's single IRQ pin. By the release of the PC/XT, all eight of those
interrupts were in use on many machines, so IBM decided to get some more...
by daisy chaining a second PIC onto the interrupt 2 line of the first one.
In other words, interrupts 8-15 on a PC/AT (and subsequent PC compatibles)
are in fact interrupts 0-7 on the second PIC. When the second PIC receives
an interrupt, it forwards it to the first PIC by triggering interrupt 2,
which then interrupts the processor.
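A trivial way to picture the cascade is as a numbering trick: the "IRQ number"
that software sees decomposes into which PIC and which input line, roughly like
this little sketch.

    /* Decomposing a PC/AT IRQ number into its PIC and input line. IRQs 8-15
     * live on the second 8259, which reaches the CPU through line 2 of the
     * first. */
    #include <stdio.h>

    int main(void) {
        for (int irq = 0; irq <= 15; irq++) {
            int line = irq & 7;
            if (irq < 8)
                printf("IRQ %2d: PIC 1, line %d\n", irq, line);
            else
                printf("IRQ %2d: PIC 2, line %d (cascaded through PIC 1 line 2)\n",
                       irq, line);
        }
        return 0;
    }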
Conventionally the keyboard used interrupt 1, so the 8042 was wired up to the
interrupt 1 line of the PIC. The mouse, though, assuming a PS/2 mouse, used
interrupt 12... reflecting the fact that the interrupt-capable mouse interface
introduced on the IBM PS/2 was adopted by PC compatibles only after the second
PIC had been added with the PC/AT. In other words, the mouse is one of the
things that the original design had run out of interrupts for. Of course today,
we use keyboards and mice
primarily via a polling strategy over USB.
This kind of hardware-level hackjob is not at all a thing of the '80s, and most
modern systems still have plenty of it going on. Two of the most common cases
are the need to reset the processor for various reasons and low-power modes,
where something needs to wake up the processor... ideally something that
consumes less power than keeping the processor awake. I am dealing with a
microcontroller device right now, for example, where the embedded capacitive
touch controller does double-duty to wake up the processor from deep sleep
state on certain events. This allows the processor to shut off its own power, a
huge savings considering the tiny draw of keeping the capacitive touch
controller powered. It's oddly like the original 8042, taking some input
controller that's a little more powerful than it needs to be and giving it some
miscellaneous extra tasks.
While I am making few to no promises, I am once again attempting to fill
the hole that social media left in my heart by using Cohost. If you are
one of its ten other users please follow me: jbcrawford
After something like two years of saying I would, I finally got around to
implementing my own software for the email newsletter. I have of course
moved over existing subscribers but if you notice any issues with the email
version please let me know. If you aren't an email subscriber, now is a good
time to make my self-esteem number go up and subscribe.
Most OTH radars have faded into obscurity since the end of their era of
relevance (something we will get to in the chronology soon). One, though, looms
above all the others. This is mostly a figurative expression, but it's also
literally true to some degree, as it's a pretty tall one. I am referring of
course to Duga, or as some might know it, the brain scorcher.
As I have previously mentioned, development of OTH radar proceeded in the USSR
more or less in synchronization with developments on this side of the iron
curtain. It's a bit difficult to nail down the Soviet history exactly due to a
combination of lingering Cold War secrecy and the language barrier, but it
appears that the USSR may have developed their first functional OTH radar
somewhat before the US, at least as a pilot project. Soviet OTH radar did not
become well known to the western world, though, until the commissioning of their
first full-scale ballistic missile surveillance radar in 1972. This would
become known as Duga-1.
To discuss Duga we first need to return briefly to the fundamentals of radar.
A monostatic radar system transmits and receives at the same site, observing
for reflections. A bistatic radar system transmits and receives from separate
sites. The earliest bistatic radar systems, and many later on, actually looked
less for reflections than variations in the signal received directly from the
transmit site. We can understand this concept by considering the simplest
possible type of bistatic radar: if you transmit a signal at one antenna, and
receive it at another, then any reduction in the strength of the received
signal might indicate the presence of an object in between.
This naive scheme is actually surprisingly practical and sees real use mostly
in intrusion-detection applications, where microwave or millimeter wave radar
acts as an RF analog to a laser or infrared beam gate---which has the advantage
of being more tolerant of rain and fog. A notable example is the AN/TPS-39
microwave perimeter protection radar used to protect the entrance points to
Atlas missile silos, but a similar system (made both more reliable and more
compact by digital technology) is sometimes used today, particularly around the
perimeters of prisons.
Bistatic radar can also be appreciably more complex. For example, the "Pinetree
Line" radar system deployed across southern Canada for detection of Soviet
bombers employed the simple bistatic radar principle enhanced with the use of
Doppler detection. If you imagine the field between a radar transmitter pointed
at a remote radar receiver, you might realize that objects not exactly between
the antennas, but near the line between them, might reflect RF radiation at a
glancing angle and cause a return to the receiver delayed in proportion to the
extra path length via the object. The Pinetree Line exploited this effect
as well, allowing it to detect aircraft before they reached the line proper.
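The geometry is easy to put numbers on: an object off the direct
transmitter-receiver line adds path length, so its glancing reflection arrives
after the direct signal by the extra distance divided by the speed of light.
The positions in this C sketch are invented purely to show the arithmetic.

    /* Excess delay of a forward-scattered return in a simple bistatic radar:
     * (path via the object - direct baseline) / c. */
    #include <math.h>
    #include <stdio.h>

    static double dist(double x1, double y1, double x2, double y2) {
        return sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
    }

    int main(void) {
        const double c = 3.0e8;
        /* transmitter at the origin, receiver 200 km east, aircraft 20 km
         * north of the midpoint (all made-up numbers) */
        double txx = 0, txy = 0, rxx = 200e3, rxy = 0;
        double ax = 100e3, ay = 20e3;

        double baseline = dist(txx, txy, rxx, rxy);
        double via_obj  = dist(txx, txy, ax, ay) + dist(ax, ay, rxx, rxy);
        double delay_us = (via_obj - baseline) / c * 1e6;

        printf("direct %.1f km, via aircraft %.1f km, excess delay %.1f us\n",
               baseline / 1e3, via_obj / 1e3, delay_us);
        return 0;
    }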
At the extreme, a bistatic radar system detecting objects not directly between
the sites appears more like a monostatic system, and the separation of the
transmit and receive sites becomes more a detail of the antenna construction
than the operating principle of the radar. This blurring of the line between
monostatic and bistatic radar is apparent in OTH radar, very much as a design
detail of the antennas.
There is a general challenge in radio: transmit and receive can be difficult
functions to combine. There are several reasons for this. Considering the
practicalities of antenna design, different antennas are sometimes more optimal
for transmit vs. receive. Moreover, it is very difficult to receive at the same
time as transmitting, because the transmit power will easily overwhelm the
front-end filtering of the receiver, causing extremely poor reception, if not
outright damage (this is the basic reason most conventional radio applications
are "half duplex," where you cannot hear when you are talking).
For radar, this problem is typically addressed by transmitting a pulse and then
switching to receive operation immediately afterwards. The same problem rears
its head again when considering the relationship between range and pulse
repetition frequency. When a radar detects an object 1500km away (slant range
for simplicity), it takes about 0.01s for the RF pulse to make the round trip.
This means that if the radar is avoiding duplex operation, it can transmit
pulses a maximum of 100 times per second. In reality the problem is much worse,
because of the need for buffer time past the longest desired range and more
significantly the time required to switch between transmit and receive, which
in the case of radar was typically being done by "RF contactors..." huge relays
designed to tolerate RF power [1].
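The arithmetic behind that example is worth writing out, since it's the
fundamental constraint on pulse radar; the numbers here are the same ones used
in the text.

    /* A pulse radar cannot send the next pulse until echoes from the longest
     * range of interest have returned, which caps the pulse repetition
     * frequency (PRF). */
    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;            /* speed of light, m/s           */
        double slant_range_m = 1500e3;     /* example range from the text   */

        double round_trip_s = 2.0 * slant_range_m / c;   /* ~0.01 s         */
        double max_prf_hz   = 1.0 / round_trip_s;        /* ~100 pulses/s   */

        printf("round trip: %.4f s, max PRF: %.0f Hz\n", round_trip_s, max_prf_hz);
        return 0;
    }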
OTH radar systems are intended to operate at extremely long ranges (3000km
being moderate in this context) while also maintaining high PRFs, since high PRFs are very
useful in identifying and tracking fast-moving objects. For this reason, most
practical OTH radars actually transmit and receive at the same time. There is
only one practical way to achieve this: geographic separation, or in other
words, bistatic radar. Most of the OTH radar systems we will discuss from this
point use this method, with transmitter and receiver separated by at least 50km
or so and usually placed on opposite sides of a mountain range or other ridge
to provide additional geographic attenuation of the direct transmit power at
the receive site.
That was all a long preamble to say where Duga is, which is of course the
reason that it's so well known. The Duga-1 transmit site was located very
near Chernobyl in present-day Ukraine, and the receive site about 50km to the
northeast. The imposing Duga-1 antenna, which might be 150m tall but it's hard
to find that number for sure, is visible from many parts of the Chernobyl
exclusion zone and is a typical stop when visiting the area. The other antenna
of Duga-1 no longer survives, having been dismantled decades ago.
It's hard to find much detailed information about Duga-1. Its transmit power is
often cited as 10MW, but given the general history of OTH radar in the United
States I find it likely that that was a target peak power that may never have
been achieved. The practical average output power was probably more in the
range of 1MW or lower, similar to US systems. There are actually quite clearly
two separate antennas at the Chernobyl transmit site, something that is oddly
rarely mentioned. I haven't been able to find details on why, but I will
speculate: first, it is not at all unusual for radio transmit sites to have two
antennas, primary and auxiliary. The auxiliary antenna serves as a backup and
can be switched into use when the primary antenna is damaged or being serviced.
The auxiliary antenna is often smaller, both to save money on the antenna and
because it's often used with an auxiliary transmitter, smaller for the same
reasons.
It's also possible that the two antennas have different radiation patterns,
allowing basic steering functionality such as seen with Cobra Mist. It's most
likely that these patterns would be two altitude angles, intended for longer
and shorter range use. Considering the different sizes of the antennas it might
make sense for the smaller antenna to have a higher altitude angle, since less
transmit power would be required at shorter ranges (remember that the radar
signals reflect off of the upper atmosphere, so aiming higher moves the target area
closer).
Duga-1 pointed north, towards the United States by great circle distance. It
operated from 7-19 MHz (a lower minimum frequency than Cobra Mist's, which would
improve performance at night), and tended to operate on a given frequency for a
short time of up to ten minutes before switching to a different frequency.
This is a common operating pattern for HF radars because of the need for human
operators to find clear frequencies and because different frequencies will tend
to propagate better to certain regions. Cobra Mist tended to operate in a
similar fashion, with operators selecting a likely-looking frequency and
operating the radar on it for long enough to receive and analyze a fair amount
of data before moving on.
Duga is best known in the western world for its transmissions. There is an
aspect of OTH radar that I have not so far discussed, but is important: HF
radio can propagate over extremely long distances, and OTH radars operate at
very high power levels. The pulses transmitted by OTH radar can thus often be
received over a large portion of the planet. Duga became famous for its widely
heard chirps, earning it the nickname "Russian woodpecker." It was particularly
well-known because it operated without regard for typical European and American
band plans (but presumably according to USSR spectrum management conventions),
so it was often heard on broadcast and amateur bands where it became irritating
interference.
There was a famous fight back against the interference: some amateur radio
operators began actively jamming Duga, transmitting recorded pulses back on the
same frequency. These false returns apparently did cause frustration to the
Duga operators, because shortly after jamming started Duga would switch to a
different frequency. This casual electronic warfare went on for years.
Duga-1 apparently performed well enough in practice, as the USSR built another
one. Duga-3, also located in Ukraine, faced east instead to observe for
launches from Asia. It appears to have been a substantially similar design and
was likely located in the same region to allow for shared engineering efforts
and parts stockpiles.
Duga, at least its general design, is not unique to the USSR, and might be
considered a latecomer depending on how exactly you place the dates. Despite
the mysterious failure of Cobra Mist, the United States had nearly
simultaneously begun work on a full-scale OTH radar for defense against inbound
ICBMs: OTH-B.
The history of OTH-B is fuzzier than Cobra Mist, although not quite as fuzzy as
that of Duga. Although OTH-B is no longer operational, it formally left
operation recently enough that it's likely some of the documentation is still
classified. Moreover, OTH-B became a bureaucratic embarrassment to the Air
Force for much the same reason as other late Cold War bureaucratic
embarrassments: it took so long to finish that by the time it was ready, the war
was over. There are still documents available from DTIC, though, enough that
we can put together the general details.
Work on OTH-B began in 1970, before completion of Cobra Mist, but progress was
slow for the kind of reasons that are later detailed in a GAO report. Cost and
schedule issues delayed the first working prototype to 1977, years after the
failure of Cobra Mist. OTH-B faced similar challenges: the prototype system
apparently did not perform to specification, but it was functional enough that
the Air Force made the decision to proceed with full-scale implementation.
Sometime in the late '70s, construction began on the first full OTH-B site.
Like Duga, OTH-B employed a bistatic design. The transmitter was located at
Moscow, Maine (a fitting town name), and the receiver near Columbia Falls,
Maine. While the timeline of OTH-B is often confusing, multiple sources agree
that the Moscow and Columbia Falls sites entered service in 1990---an
astoundingly late date considering the program's 1970 start (with limited
experimental validation of antenna and radio designs in the first year!). The
reasons for the tremendously slow development of OTH-B seem lost to history, or
at least deeply buried in Air Force files. We do know that over the course of
its development the GAO issued multiple reports questioning not only the
extremely long schedule and large budget of the program, but the necessity of
the program itself. In 1983 the GAO estimated a final cost of about one billion
dollars per site, and given that the program continued for another seven years
before achieving its first commissioning, it seems more likely that this was an
underestimate than an overestimate.
The same 1983 GAO report suggests that OTH-B suffered much the same fate as many
ambitious military acquisition programs: the full production system was
contracted before experimental validation was complete, resulting in parallel
construction and R&D activities. This situation nearly always results in
changes being made to the design after relevant components were assembled,
creating a work-rework cycle that can easily drag on for years and hundreds of
millions.
For construction and operations purposes, OTH-B was divided into sectors. The
word sector here can be understood in the same way as "sector antenna." A
sector of OTH-B comprised a complete antenna and transmitter chain that
provided coverage of an area of roughly 60 degrees. The East Coast system
installed in Maine included three sectors to provide 180 degree coverage off of
the coast, and the minimum practical takeoff angle resulted in a range of a bit
over 3000km, placing a surprisingly large chunk of South America and a sliver
of Europe within the detection area. Each sector operated at 1MW of power,
although it's not clear to me if this was peak or average---more likely, peak.
OTH-B's late commissioning date provided the benefit of much more sophisticated
computer technology. OTH-B target discrimination appears to have been entirely
digital rather than a hybrid digital-analog system as seen in Cobra Mist. The
system implemented beam steering, and used a basic digital beam steering design
in which the transmit site emitted a 7.5 degree beam and the receive site
processed it as three separate 2.5 degree slices simultaneously based on a
simple phased-array technique.
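The "simple phased-array technique" amounts to applying a different phase ramp
across the receive elements for each of the three slices; the same element
signals can be summed three ways at once. Here's a toy calculation of the
per-element phase steps. The frequency and element spacing are assumptions for
illustration, not OTH-B's actual parameters.

    /* Per-element phase steps to steer a uniform linear array to the three
     * receive-beam directions described above. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979;
        const double c  = 3.0e8;
        double freq_hz   = 15e6;           /* representative HF frequency */
        double lambda    = c / freq_hz;    /* 20 m                        */
        double spacing_m = lambda / 2.0;   /* assumed half-wave spacing   */
        double beams_deg[3] = { -2.5, 0.0, 2.5 };

        for (int b = 0; b < 3; b++) {
            double theta = beams_deg[b] * PI / 180.0;
            double dphi  = 2.0 * PI * spacing_m * sin(theta) / lambda;
            printf("beam at %+4.1f deg: phase step between elements %+7.4f rad\n",
                   beams_deg[b], dphi);
        }
        return 0;
    }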
OTH-B's basic radio design varied appreciably from earlier systems like Cobra
Mist or Duga. Rather than a pulsed radar, it was a continuous wave radar. The
transmitter continuously emitted a signal that was frequency modulated with a
"chirp" signal that could be used to "align" received reflections with the
original transmitted signal. The CW design of the radar gave it significantly
improved sensitivity, at the cost that the "duplex problem" (of the transmitter
overwhelming the receiver) was much greater. As a result, OTH-B transmit and
receive sites required an even greater separation of 100-200km.
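Roughly how the "alignment" works: mix the received signal with the chirp
currently being transmitted, and the difference ("beat") frequency is
proportional to the round-trip delay, and therefore to range. The sweep
parameters in this sketch are made up for illustration; OTH-B's actual values
aren't something I have.

    /* Linear-FM continuous wave ranging: beat frequency = sweep slope times
     * round-trip delay. */
    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;        /* m/s */
        double bandwidth_hz = 20e3;    /* assumed chirp bandwidth */
        double sweep_s      = 0.1;     /* assumed sweep duration  */
        double range_m      = 2000e3;  /* a 2000 km target        */

        double delay_s = 2.0 * range_m / c;
        double beat_hz = (bandwidth_hz / sweep_s) * delay_s;

        printf("delay %.4f s -> beat frequency %.1f Hz\n", delay_s, beat_hz);
        return 0;
    }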
OTH-B worked, apparently well although details on performance are hard to come
by. It worked, though, in 1990: the year before the fall of the Soviet Union.
By 1990 the Cold War was already effectively over, having been replaced by a
period of both warming relations and Soviet economic collapse. OTH-B seemed
a system without a purpose.
The DoD seldom lets lack of a purpose come between it and a large contract, and
despite the clearly decreasing relevancy of the system construction went ahead
on the second and third OTH-B sites. In 1988, the Air Force activated two West
Coast stations for construction of three additional OTH-B sectors: Christmas
Valley, Oregon (transmit) and Tulelake, California (receive). Christmas Valley
is an unusual place, a failed real estate development in the pattern set by
Horizon and Amrep in the Southwest. Platted to be a large city, it is now a
tiny town centered on an artificial lake. A short distance from what might be
called city center, the 300+ acre OTH-B site once loomed on the horizon. Its
three sector antennas were metal frames reaching 135' high for the lowest
bands, supporting insulated radiating elements projected in front of the "rack"
at a jaunty angle. Each of the three antennas, nearly a mile wide, sat behind a
4000' buried ground screen. In total, each sector antenna occupied
nearly a square mile. Behind each antenna, a transmitter building of 14,000
square feet housed the radio equipment, shielded behind the antenna back
screen.
Although construction of the Christmas Valley and Tulelake sites was
substantially complete, they were turned down in 1991 a few months before the
Soviet Union itself was deactivated. The radar never operated.
Meanwhile, construction had begun on the first sector of the north system, in
Alaska. The transmitter would be located at Gakona, and the receiver at Tok.
Little is known about this northern site. Radomes Inc. (an association for
veterans of air defense radar operations) reports that construction on the
first sector transmitter at Gakona had begun but was never completed, and the
Tok site was never started.
As of 1991, the constructed and activated east installation of OTH-B formed
sectors 1-3, the constructed but never activated west installation sectors 4-6,
and the partially constructed Alaska system sectors 7-8. The final planned
sector, 9, would have been located somewhere in the central United States
facing south. It is not clear to me if a final site was ever selected for the
south system, as it was canceled before construction was contracted.
The east system, fully functional when it was deactivated, won a second life.
From 1991 to 1997 it was operated in partnership with the Border Patrol on a
drug interdiction mission, detecting aircraft involved in smuggling. It was
apparently useful enough in this role that its antenna was modified in 1992
to expand coverage over a portion of the US-Mexico border.
The western site, never activated, was kept in an idle state ready for
potential reactivation until 1997. In 1997, the eastern system was deactivated
and both east and west sites were transitioned to a storage status with only a
few staff members to maintain site security and integrity against the elements.
In 2002, east and west were reduced to "cold storage" and the radio equipment
was removed. In 2007, after some 17 years of disuse following their near-completion, the
Christmas Valley and Tulelake sites were partially demolished. Antennas were
removed and sold for scrap and some site remediation was performed. The
equipment buildings and some infrastructure remain, their ownership having
reverted to the respective states who have unclear plans for their future use.
The Maine sites appear to be in a similar state, with antennas removed,
although I haven't been able to find documentation on this action.
The partially constructed site at Gakona, Alaska met a more interesting fate.
In 1993, it was reused by the Air Force in partnership with other organizations
for the construction of HAARP. HAARP is a research instrument that uses RF
radiation to ionize the upper atmosphere and observes the results. While HAARP
is now operated by the University of Alaska for public research purposes, its
original lifespan under Air Force management was secretive enough to make it
the center of many conspiracy theories. While the Air Force is cagey about
identifying its research function, it seems likely that it was used for at
least two disciplines.
First, to develop an improved understanding of the radio reflection properties
of the upper atmosphere. The lack of a clear model of this behavior was
repeatedly identified as a limitation in the performance of OTH radar systems,
and so this research would have directly contributed to the Air Force's
interest in long-range radar. Second, by this time the NSA had a
well-established interest in ionospheric radio propagation. Unusual propagation
patterns sometimes allowed radio signals originating within the USSR and China
to be received in nearby US-allied territories or even within the US proper.
The NSA operated several research facilities that both performed this type of
interception and basic research into ionospheric propagation with an eye
towards making this signals intelligence method more effective and predictable.
While not proven to my knowledge, it seems likely that HAARP served as a part
of this effort, and that active ionization of the atmosphere may have been at
least evaluated as a possibility to improve the chances of interception.
Amusingly, a 2015 paper from a HAARP researcher suggests that the University of
Alaska is once again investigating use of the site for an OTH radar directed at
observing vessel traffic through the Arctic Ocean. The Arctic Ocean north of
Canada is becoming increasingly navigable due to climate change, and so is
thought likely to become a strategically important area for naval combat and
enforcement of Canadian sovereignty. Indeed, confirming a connection between
HAARP's research goals and OTH radar, the paper notes that HAARP has
successfully demonstrated the creation of an "artificial ionosphere" at lower
altitudes which can be used to perform OTH radar at shorter ranges than
previously practical---filling in the "beyond visual horizon but before
atmospheric skip" range gap in existing radar methods.
What of OTH radar as a broader art? I have framed the decline of OTH radar
systems mostly in terms of the fall of the Soviet Union. This was indeed the
killing blow for both Duga and OTH-B, but by 1983 the GAO was recommending that
OTH-B be canceled. OTH radar faced a challenge even greater than the end of
the war: satellites. By the '90s, satellite remote sensing had advanced to the
point that it could perform the functions of OTH radar, and usually better.
The MIDAS (Missile Defense Alarm System) satellites, launched in the '60s, were
the first working space-based ICBM launch detection system. MIDAS was quickly
replaced by the more covertly named Defense Support Program or DSP. These
satellite systems, based mostly on infrared imaging, remain the United States'
primary missile defense warning system today. By the '80s, they provided both
better sensitivity and wider coverage than HF OTH radar.
Today, HF OTH radar has faded nearly into obscurity. Some military OTH radar
systems still operate. The Navy, for example, built a relatively small OTH
radar system in 1987 directed south for surveillance of the Caribbean. It
remains in operation today, mostly for counter-narcotics surveillance similar
to the east OTH-B site's second chance. China constructed and operates a system
very similar to OTH-B for airspace surveillance. China's OTH-B is thought to
use interferometry methods for significantly improved resolution. The antenna
design looks extremely similar to OTH-B, although I suspect this is more a case
of common constraints producing a common design than China having based their
design on the US one.
Interestingly, one of OTH-B's most enduring contributions is not military but
scientific: NOAA, through a partnership with the Air Force, discovered that the
"ground clutter" received by OTH-B's east site from the ocean could be analyzed
to collect information on wave heights and current directions. Today, a system
of small coastal HF radars on both coasts of the United States collects
information useful for marine weather forecasting. Data from this system is
also used for improved tidal predictions and by the Coast Guard to model
surface currents, allowing improved predictions of where a drifting vessel in
distress (or its lifeboats or survivors) might be found.
Ionospheric propagation remains an area of scientific interest. For some time,
the Arecibo observatory operated an ionospheric modification observatory using
transmitters repurposed from OTH-B's eastern sectors. HAARP operates to this
day. The NSA is mum as always, but seems to still be busy at their propagation
research facilities. But large OTH radar, on the scale of OTH-B, seems unlikely
to return to the United States. It is costly, complex, and struggles to provide
anything that satellites can't. As with terrestrial radio-based navigation
systems, perhaps the specter of ASAT warfare will motivate renewed interest in
OTH methods. But I wouldn't hold my breath. For now, all we have is a set of
large trapezoids in the Oregon desert.
The end of OTH radar is, of course, far from the end of military radar.
Improved radar methods, especially phased-array systems, made it far more
practical to detect ICBMs at their extremely high, orbital approach altitudes.
Terrestrial missile warning systems re-concentrated on this method, and many are
in operation today. I'll get to them eventually.
[1] Some readers might be familiar with duplexers, filtering devices that
employ dark magic RF trickery to effectively "split" transmit and receive
power. There are two reasons that filtering-based approaches are not very
practical for radar: first, radar must receive the same frequency it transmits,
preventing the use of common cavity filters for separation. Second, RF filters
become larger and dissipate more heat as they handle higher powers, and most
radar operates at very high power levels. Even the simplest RF filters become
impractical to build when considering megawatt power levels.
Previously on Deep Space Nine, we discussed the MUSIC and MADRE
over-the-horizon radar (OTH) programs. MUSIC and especially MADRE validated the
concept of an HF radar using ionospheric (often called "skywave" in the radio
art) propagation, with a novel hybrid digital-analog computerized signal
processing system. MADRE was a tremendous success, ultimately able to detect
ICBM launches, aircraft, and ship traffic in the North Atlantic region. What
was needed next seemed simple: a very similar radar, perhaps more powerful,
aimed at the Soviet Union.
In 1964, final planning began on the full-scale OTH radar for missile defense.
Code-named "Cobra Mist," the AN/FPS-95 [1] radar was initially slated for
Turkey. Design proceeded for some time (a year or so) for a site in Turkey, but
ultimately no site could be obtained. The documents I have are somewhat vague
on the reason for this, but it seems likely that Turkey was hesitant to host an
installation that would seem a direct affront to the USSR's nuclear power.
Another host would have to be found, some time already into the design process.
Finally, a site was selected that was not quite ideal but workable: Orford
Ness, England.
Orford Ness (alternately, according to the typical complexity of English place
names, Orfordness) is a sort of long, sandy semi-island, of a type prone to
forming on the coast of Great Britain but not seen much here in the western
United States. Orford Ness has been under the control of the Ministry of
Defence since before WWII, and by the '60s was well in use as a research range
for the Atomic Weapons Establishment (AWE). As is often the case with odd bits
of land in military use, it contains a number of interesting historic sites
which include a very early marine radionavigation system, a lighthouse notable
for its involvement in the Rendlesham Forest UFO incident which occurred
nearby, and several curious instrumentation bunkers left from AWE
high-explosives research.
It also contains a lot of land, and that land would be needed for Cobra Mist.
Construction proceeded from 1966 to 1972. Due to the experimental nature of OTH
radar and the large scale of the system, during construction a set of
experiments was designed under the name Design Verification System Testing
(DVST). This was essentially field acceptance testing, but with the added
complication that Cobra Mist was advancing the state of the art in OTH radar so
much that the expected performance of the system was largely unknown. Cobra
Mist fell into sort of an uncomfortable in-between: in part a production
defense system, in part an experimental apparatus.
The Cobra Mist antenna, constructed by RCA (then a major defense contractor),
consisted of 18 strings, each 620 meters long, arranged like the spokes of a
fan radiating outward from a common center. A buried mesh ground plane extended forward from the antenna
strings to provide vertical shaping of the beam, for a total built length on
each string of about 900m.
A complex switching network connected six of these strings at a time to the
transmitter and receiver, applying phase shifts to four of the strings (those
not in the center of the active array) to maintain the phase of the emitted
signal despite the varying distances of the strings from the target area (due
to the arc shape of the array). This is an early version of the phased-array
technology which is heavily used in military radar today... but a very early
version indeed. The radar could be "steered" or aimed by selecting different
sets of strings, but this was a fairly lengthy process and was done while the
radar was offline. Nonetheless, it allowed the radar to cover a total azimuth of about 90
degrees with a smaller target area, about seven degrees, selected from within
that range.
The altitude of the antenna was also somewhat steerable: each string contained
two sets of radiating elements in phase relationships that would produce two
different vertical angles. Further, varying the transmit frequency across its
6 to 40 MHz range resulted in different ranges as the ionospheric propagation
shifted closer and further.
The minimum effective range was 500nmi, because the antenna could only emit
radiation upwards (being located on the ground) and the maximum elevation angle
produced a reflection from the ionosphere centered at about that distance. The
maximum range is a somewhat more theoretical matter, but was placed at about
2000 nmi assuming simple single-hop propagation at the antenna's lowest
achievable radiating angle. In practice, the vagaries of ionospheric
propagation would make this range overoptimistic in some cases but an
underestimate in favorable conditions where multi-hop or ducting patterns
occur.
Behind the antenna, a large squat building housed a transmitter specified for
600 kW average power and 10 MW peak, although a report on the project from the
Mitre Corporation (from which much of the information here is derived [2])
states that only 3.5 MW was achieved in practice. A specially designed receiver
with a very large dynamic range (140dB specified) was connected to a set of RF
filters and then an analog-to-digital converter system that provided the
received signal to the computer.
Computers are ostensibly what I write about here, and this story has a good
one. The Signal Analysis Subsystem (SAS) was a hybrid digital-analog computer
driven by a Xerox Sigma V. The Sigma V was actually a fairly low-end computer,
32-bit but without the virtual memory support found on higher-end Sigma models.
The inexpensive (relatively) computer was enabled by the fact that most actual
signal processing was performed in the SAS. The SAS was constructed by
Interstate Electric Corporation (IEC), under contract with the Army Security
Agency, the Army's signals intelligence arm. Like MADRE, the SAS functioned by
converting the digitally stored signals back to analog---but at a much
accelerated speed. This higher-frequency signal was then fed through a series
of analog filters including a matched filter to discriminate reflections of the
radar's distinctive pulse shape. The results of the analog filter process were
then redigitized and returned to the computer, which allowed operators to
display the results in graphical form. Cobra Mist's designers also provided a
useful secondary mechanism: received signals could be written out to magnetic
tape, allowing them to be analyzed off-line later by other computer systems.
This would prove an important capability during DVST, when multiple other
analysis systems were developed to interpret these tapes.
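A matched filter sounds exotic but is conceptually just correlation against the
known pulse shape: where the correlation peaks, an echo of that pulse is buried
in the signal. Here's a toy digital version of what the SAS did in accelerated
analog; the sample values are made up.

    /* Matched filtering as correlation against a known pulse shape. The
     * correlation peaks at the lag where the echo arrives. */
    #include <stdio.h>

    int main(void) {
        double pulse[4]   = { 1.0, -1.0, 1.0, 1.0 };       /* known pulse shape */
        double signal[12] = { 0.1, -0.2, 0.0, 0.1,
                              1.1, -0.9, 1.0, 0.9,         /* echo starts here  */
                              0.0, 0.2, -0.1, 0.1 };

        int best_lag = 0;
        double best = -1e9;
        for (int lag = 0; lag <= 12 - 4; lag++) {
            double corr = 0.0;
            for (int i = 0; i < 4; i++)
                corr += signal[lag + i] * pulse[i];
            if (corr > best) { best = corr; best_lag = lag; }
            printf("lag %2d: correlation %+5.2f\n", lag, corr);
        }
        printf("strongest match at lag %d (the echo's arrival time)\n", best_lag);
        return 0;
    }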
DVST began in 1972 and almost immediately hit a serious snag. Cobra Mist, an
implementation of a technology reasonably well proven by MUSIC and MADRE,
turned out to be pretty much useless. It struggled to detect targets in even
the most favorable conditions, much less past Moscow.
Cobra Mist was never expected to be especially precise. Its huge antenna
produced wide beams that reflected off of the ionosphere at an uncertain point.
It would never be able to locate targets to more than general regions. But,
like the eyes of insects, Cobra Mist was expected to make up for its blurry
vision with a very fine sensitivity to motion. Its narrow Doppler filters
should have been able to discriminate any object moving at more than 1.5 knots
towards or away from the radar. The ability to discriminate target speeds very
finely should have allowed Cobra Mist to not only differentiate moving targets
from the ground, but to determine the number of moving targets based on their
varying speeds.
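To appreciate how narrow those Doppler filters had to be: the Doppler shift of
a radar return is 2v divided by the wavelength, and at HF wavelengths 1.5 knots
corresponds to a small fraction of a hertz. A quick calculation, using 10 MHz
simply as a representative operating frequency:

    /* Doppler shift of a 1.5 knot target at HF. */
    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;
        double freq_hz = 10e6;               /* representative HF frequency */
        double v_ms    = 1.5 * 0.514444;     /* 1.5 knots in m/s            */

        double wavelength = c / freq_hz;     /* 30 m                        */
        double doppler_hz = 2.0 * v_ms / wavelength;

        printf("Doppler shift at 1.5 kt, 10 MHz: %.3f Hz\n", doppler_hz);
        return 0;
    }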
The Mitre Corporation had devised a series of twelve DVST experiments to
demonstrate the radar's ability to detect and track aircraft. Despite the
radar's excellent sensitivity to motion, only three of these were even performed. The
radar's poor performance on the initial three tests made the remainder
hopeless, and efforts shifted to identifying the cause of the radar's
unexpected blindness.
To make a rather long story short, DVST researchers identified a large,
consistent problem with the radar: in every configuration, a great deal of
radio noise was received at roughly the same range as the ground. This noise
was of such great magnitude, not previously observed with OTH radar, that it
drowned out the actual radar returns, preventing discrimination of the pulse
reflections.
There were other problems as well. The 6MHz frequency floor proved a challenge,
the computer display system was difficult to use to detect faint targets, and
the antenna somewhat under-performed expectations. Various adaptations were made
on most of these issues, including a significant overhaul of parts of the
antenna system to improve gain, offline analysis of the recorded tapes, and
later an overhaul of the computer system to provide a more flexible display
capability. But none of these improvements could overcome the most basic
problem, the surprising noise, which researchers labeled "range-related
noise" due to its appearance at the same range and Doppler gates as the
typical ground clutter.
I will spare a multi-page discussion of efforts to characterize and eliminate
this noise, which you can read in the original paper linked at the bottom of
this article. Suffice to say that numerous experiments showed that the noise
appeared at around the same range as the ground in nearly all operating
conditions, and that it was far greater in magnitude than would be explained by
ground reflection (the usual clutter) or typical atmospheric, galactic, or
man-made radio noise. The noise did not originate in the receiver or antenna
system, it did not appear on signals transmitted from sites near the antenna, and
the same noise could even be detected by a completely separate test antenna
system installed for this purpose.
DVST engineers felt that the problem could likely be identified, but that for
the time being Cobra Mist could not be effective in its mission. An Air Force
panel reviewed the problem, and the result was a bit surprising. Defense money
was clearly tighter in the '70s than it is in the era of the F-35, and perhaps
the recently negotiated ABM treaty's restrictions on large radars overseas were
a factor (Cobra Mist was an overseas radar, since although placed in the UK it
was built and operated by the US). In any case, the Air Force pulled the plug.
Cobra Mist shut down in 1973, without ever being operational and less than 18
months after the beginning of DVST. The project had cost about a billion
dollars in today's money, and its short life left more questions than answers.
Or, perhaps more accurately, it left one very big question: what was the noise?
History has borne out the design of Cobra Mist. Multiple similar OTH radars
have been constructed and provided good performance, including the Jindalee
radar in Australia (designed with the participation of the same NRL researchers
as Cobra Mist) and the Duga radar in the USSR, present-day Ukraine (which I
will discuss more in part III). The OTH-B system, based on similar principles
and first designed at around the same time, was greatly expanded over following
decades and remained in service into the 21st century (this too will be
discussed more in the next part). The point is that Cobra Mist should have
worked; it was all correct in principle, but no one could explain that noise.
Given the passage of a great deal of time since then, the most likely
explanation is probably some subtle defect in the receiver design. The Mitre
report, written mostly to argue against the radar's premature shutdown, admits
the possibility of a problem in the analog-to-digital conversion stage that
possibly contributed to the noise but was not thoroughly investigated before
the shutdown. The authors also considered the possibility that the noise was a
result of one or more of several speculated effects: perhaps there are so many
aircraft in the air as to create a broad set of Doppler-shifted reflections
that cannot be discriminated from one another. Or perhaps there are some sort
of objects on the ground that vibrate or rotate in such a way to produce a set
of strong and broadly Doppler-shifted reflections. Subsequent OTH radars have
not suffered from these problems, though, making it unlikely that the cause of
the noise was some incidental property of Europe at the time.
The reason I find this story so fascinating is that the Mitre authors also
suggest another possibility, one that they had intended to evaluate with
additional tests that were never performed due to the 1973 shutdown. They
thought it possible that Cobra Mist was being jammed.
The basic idea is this: if the USSR became aware of Cobra Mist (not at all a
far stretch considering its large physical size) and managed to obtain some
detailed information on its operation (via espionage presumably), it would have
been quite possible to build an active countermeasure. A system of relatively
small transmitters distributed across various locations in the USSR, the Mitre
authors estimated 15 sites would do, could use computer technology to detect
Cobra Mist's emitted pulses and then transmit the same pulse back at a wide
variety of Doppler offsets. It would be difficult or (at least at the time)
impossible to differentiate these transmitted "distractors" from actual
returns, and they would appear to the operators as random noise centered around
the target range. The slight latency the computer system would introduce even
provides an explanation for one of the observed properties of the noise, that
it occurred mostly at ranges slightly further than the peak ground reflection.
Because of the extreme sensitivity of the radar, these active countermeasures
would only require a few watts of RF power to overwhelm the radar.
The DVST team was able to perform one experiment that bolstered the theory that
the noise was a result of something on the ground, whether an intentional
countermeasure or an incidental property of something widely used in Europe.
When targeting the radar at azimuths and ranges that consisted primarily of
open sea, they did not observe the noise. The noise only seemed to occur on
land, and more generally in their target region. The Mitre paper states near
its end:
We are forced to conclude that the jamming technique is quite feasible,
and it is not clear that the experiments conducted at the AN/FPS-95
would have discovered the jamming had it occurred. If experiments
confirming or denying the possibility had been conducted, they would
have perhaps resolved the issue. They were not conducted.
With fifty years of hindsight it seems unlikely that there was any meaningful
scale electronic countermeasures effort in the USSR that has still not come to
light, but the possibility is certainly interesting. It is, at least in my
mind, within the realm of reason: the USSR was working on its own OTH radar
efforts at the time along much the same technical lines and so would have been
familiar with the principles that Cobra Mist relied on. The USSR's own premier
OTH radar, Duga, suffered occasional jamming and the operators had to develop
countermeasures. In other words, both the means and the motivation were in
place. Considering that the scale of the jamming effort would have been
relatively small (the Mitre authors mention 15 sites would have been sufficient
and that Mitre had constructed similar antennas at the cost of about $25,000,
low for high-end RF equipment at the time), perhaps there is something to it
and the Soviet program simply faded into obscurity, little documented and known
to few. This could be a minor front of early electronic warfare that has been
lost to history.
The failure of Cobra Mist, frustrating as it was, did little to dissuade the
United States or other countries from the broader concept of OTH radar. In
fact, by the time Cobra Mist DVST was in progress, the US had already begun
major work on an OTH radar system in the United States (and thus compliant with
the ABM treaty), called OTH-B. By the end of the Cold War, OTH-B reached almost
10 MW in combined operating power across multiple sites, and was well on the
way to complete 360 degree coverage from the US extending over a large portion
of the planet.
As with many things late in the Cold War, it also suffered an ignominious fate:
repeatedly replanned at the whims of politics, partially constructed, partially
dismantled, repurposed for the war on drugs, and ultimately forgotten. We'll
talk more about OTH-B, and its Soviet contemporary Duga made incidentally
famous by the Chernobyl disaster, in Part III. As with most of my interests, it
involves secret sites in barren remote areas near failed cities... not
California City or the Rio Estates this time, but Christmas Valley, Oregon.
[1] I think I've used these part numbers several times without explaining them.
Dating back to WWII, many military technology systems have been assigned
article numbers according to this system. Initially, it was called the
Army-Navy Nomenclature System, from which the "AN" prefix is derived (for
Army-Navy). The part after the slash is a type code, for which we most often
discuss FPS: fixed (F) radar (P) search (S). P is used for "radar" because R
had already been claimed for "radio." When researching radar, you will also
often see AN/TPS - transportable search radar. The number is just, well, a
number, assigned to each project sequentially.
[2] This report, titled "The Enigma of the AN/FPS-95 OTH Radar," was helpfully
retrieved via FOIA and preserved by the Computer UFO Network---not so much due
to a latent interest in radar history but because Cobra Mist's nature as a
secret facility in close proximity to the Rendlesham Forest has suggested to
some a connection to the prominent UFO incident there. You can read it at
CUFON
One of the most interesting things about studying history is noting the
technologies that did not shape the present. We tend to think of new
inventions as permanent fixtures, but of course the past is littered with
innovations that became obsolete and fell out of production. Most of these
at least get the time to become well-understood, but there are cases where
it's possible that even the short-term potential of new technologies was
never reached because of the pace at which they were replaced.
And of course there are examples to be found in the Cold War.
Today we're going to talk about Over-the-Horizon Radar, or OTH; a key
innovation of the Cold War that is still found in places today but mostly
lacks relevance in the modern world. OTH's short life is a bit of a
disappointment: the most basic successes in OTH were hard-won, and the
state of the art advanced rapidly until hitting a standstill around the
'90s.
But let's start with the basics.
Radar systems can be described as either monostatic or bistatic, terms which
will be important when I write more about air defense radar. Of interest to us
now is monostatic radar, which is generally what you think of when someone just
says "radar." Monostatic radars emit RF radiation and then observe for a
reflection, as opposed to bistatic radars which emit RF radiation from one site
and then receive it at another site, observing for changes. Actually, we'll see
that OTH radar sometimes had characteristics of both, but the most important
thing is to understand the basic principle of monostatic radar, of emitting
radiation and looking for what bounces back.
Radar can operate in a variety of parts of the RF spectrum, but for the most
part is found in UHF and SHF - UHF (Ultra-High Frequency) and SHF (Super High
Frequency) being the conventional terms for the spectrum from 300MHz-3GHz and
3GHz-30GHz. Why these powers of ten multiplied by three? Convention and
history, as with most terminology. Short wavelengths are advantageous to radar,
because RF radiation reflects better from objects that are a large fraction of,
or better yet a multiple of, the wavelength. A shorter wavelength thus means that
you can detect smaller objects. There are other advantages of these high
frequencies as well, such as allowing for smaller antennas (for much the same
reason, the gain of an antenna is maximized when its dimensions are multiples of
the wavelength, or at least simple fractions of it like a half or a quarter wave).
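For a sense of scale, wavelength is just the speed of light divided by
frequency, so the bands involved span from tens of meters down to about a
centimeter:

    /* Wavelengths for the bands discussed above. */
    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;
        double freqs_hz[]  = { 10e6, 300e6, 3e9, 30e9 };
        const char *names[] = { "10 MHz (HF)", "300 MHz (UHF floor)",
                                "3 GHz (SHF floor)", "30 GHz (SHF top)" };
        for (int i = 0; i < 4; i++)
            printf("%-20s -> wavelength %8.3f m\n", names[i], c / freqs_hz[i]);
        return 0;
    }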
UHF and SHF have a disadvantage for radar though, and that is range. As a rule
of thumb, the higher the frequency (and the shorter the wavelength), the
shorter the distance it will travel. There are various reasons for this; a big
one is that shorter wavelengths more readily interact with materials in the
path, losing energy as they do so. This has been a big topic of discussion in
5G telephony, since some 5G bands are in upper UHF and lower SHF where they
will not pass through most building materials. The atmosphere actually poses
the same problem, and as wavelengths get shorter the molecules in the
atmosphere begin to absorb more energy. This problem gets very bad at around
60GHz and is one of the reasons that the RF spectrum must be considered finite
(even more so than suggested by the fact that, well, eventually you get visible
light).
There's another reason, though, and it's the more important one for our
purposes. It's also the atmosphere, but in a very different way.
Most of the time that we talk about RF we are talking about line-of-sight
operations. For high-band VHF and above [1], it's a good rule of thumb that RF
behaves like light. If you can see from one antenna to the other you will have
a solid path, but if you can't, things get questionable. This is of course not
entirely true: VHF and UHF can penetrate most building materials well, and
especially for VHF, reflections tend to help you out. But it's the right general
idea, and it's very much true for radar. In most cases the useful range of a
monostatic radar is limited to the "radio horizon," which is a little further
away than the visible horizon due to atmospheric refraction, but not that much
further. This is one of the reasons we tend to put antennas on towers. Because
of the low curvature of the earth's surface, a higher vantage point can push
the horizon quite a bit further away.
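To put some numbers on this, a common rule of thumb (the 4/3-earth-radius model,
which accounts for typical refraction) is that the radio horizon in kilometers
is about 4.12 times the square root of the antenna height in meters. A quick
sketch, with heights chosen purely for illustration:

    import math

    def radio_horizon_km(height_m: float) -> float:
        # Standard 4/3-earth approximation for the radio horizon, which sits
        # a bit beyond the visible horizon due to refraction.
        return 4.12 * math.sqrt(height_m)

    print(radio_horizon_km(30))      # a 30 m tower sees out to roughly 23 km
    print(radio_horizon_km(10_000))  # a target at 10,000 m is "visible" from ~410 km
    # The maximum line-of-sight detection range is roughly the sum of the two
    # figures, which is why coverage of high-flying aircraft is so much better
    # than coverage near the ground.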
For air-defense radar applications, though, the type I tend to talk about, the
situation is a little different. Most air-defense radar antennas are quite low
to the ground, and are elevated on towers only to minimize ground clutter
(reflections off of terrain and structures near the antenna) and terrain shadow
(due to hills for example). A common airport surveillance radar might be
elevated only a few feet, since airfields tend to be flat and pretty clear of
obstructions to begin with. There's a reason we don't bother to put them up on
big towers: air-defense radars are pointed up. The aircraft they are trying to
detect are quite high in the air, which gives a significant range advantage,
sort of the opposite situation of putting the radar in the air to get better
range on the ground. For the same reason, though, aircraft low to the ground
are more likely to be outside of radar coverage. This is a tactical problem in
wartime when pilots are trained to fly "nap of the earth" so that the reverse
radar range, from their perspective, is very small. It's also a practical
problem in air traffic control and airspace surveillance, as a Skyhawk at 2000'
above ground level (a pretty typical altitude here in the mountain west where
the ground is at 6k already) will pass through many blind spots in the Air
Force-FAA Joint Surveillance System.
This is all a somewhat longwinded explanation of a difficult problem in the
early Cold War. Before the era of ICBMs, Soviet nuclear weapons would arrive by
airplane. Airplanes are, fortunately, fairly slow... especially bombers large
enough for bulky nuclear munitions. The problem is that we would not be able to
detect inbound aircraft until they were quite close to our coasts, allowing a
much shorter warning (and interception) time than you would expect. There are a
few ways to solve this problem, and we put great effort into pursuing the most
obvious: placing the radar sets closer to the USSR. NORAD (North American Air
Defense Command) is a joint US-Canadian venture largely because Canada is,
conveniently for this purpose, in between the USSR and the US by the shortest
route. A series of radar "lines" were constructed across Alaska, Canada, and
into Greenland, culminating with the DEW (Distant Early Warning) Line in Arctic
northern Canada.
This approach was never quite complete, and there was always a possibility that
Soviet bombers would take the long route, flying south over the Pacific or
Atlantic to stay clear of the range of North American radar until they neared
the coasts of the US. This is a particularly troubling possibility since even
today the population of the US is quite concentrated on the coasts, and early
in the Cold War it was even more the case that the East Coast was the United
States for most purposes. Some creative solutions were imagined to this
problem, including most notably the Texas Towers, radar stations built on
concrete platforms far into the ocean. The Texas Towers never really worked
well; the program was canceled before all five were built and then one of them
collapsed, killing all 28 crew. There was an even bigger problem with this
model, though: the threat landscape had changed.
During the 1960s, bombers became far less of a concern as both the US and the
USSR fielded intercontinental ballistic missiles (ICBMs). ICBMs are basically
rockets that launch into space, arc over to the other side of the planet on a
ballistic trajectory, and then plunge back toward it at tremendous speed. ICBMs
are fast: a famous
mural painted on a blast door by crew of a Minuteman ICBM silo, now Minuteman
Missile National Historic Site, parodies the Domino's Pizza logo with the
slogan "Delivered worldwide in 30 minutes or less, or your next one is free."
This timeline is only a little optimistic: ICBM travel time between Russia and
the US really is about a half hour.
Moreover, ICBMs are hard to detect. At launch time they are very large, but
like rockets (they are, after all, rockets, and several space launch systems
still in use today are directly derived from ICBMs) they shed stages as they
reach the apex of their trip. By the time an ICBM begins its descent to target
it is only a re-entry vehicle or RV, and some RVs are only about the size of a
person. To achieve both a high probability of detection and a warning time of
better than a few minutes, ICBMs needed to be detected during their ascent.
This is tricky: Soviet ICBMs had a tendency of launching from the USSR, which
was a long ways away.
From the middle of the US to the middle of Russia is around 9000km, great
circle distance. That's orders of magnitude larger than the range of the best
extant radar technology. And there are few ways to cheat on range: the USSR was
physically vast, with the nearest allied territory still being far from ICBM
fields. In order to detect the launch of ICBMs, we would need a radar that
could not only see past the horizon, but see far past the horizon.
Let's go back, now, to what I was saying about radio bands and the atmosphere.
Below VHF is HF, High Frequency, which by irony of history is now rather low
frequency relative to most applications. HF has an intriguing property: some
layers of the atmosphere, some of the time, will actually reflect HF radiation.
In fact, complex propagation patterns can form based on multiple reflections
and refraction phenomena that allow lucky HF signals to make it clear around
the planet. Ionospheric propagation of HF has been well known for just about as
long as the art of radio has, and was (and still is) regularly used by ships at
sea to reach each other and coast stations. HF is cantankerous, though. This
is not exactly a technical term but I think it gets the idea across. Which HF
frequencies will propagate in which ways depends on multiple weather and
astronomical factors. More than the complexity of early radio equipment
(although this was a factor), the tricky nature of HF operation is the reason
that ships carried a radio officer. Establishing long-distance connections by
HF required experimentation, skill, and no small amount of luck.
Luck is hard to automate, and in general there weren't really any automated HF
communications systems until the computer age. The long range of HF made it
very appealing for radar, but the complexity of HF made it very challenging.
An HF radar could, conceptually, transmit pulses via ionospheric propagation
well past the horizon and then receive the reflections by the same path. The
problem is how to actually interpret the reflections.
First, you must consider the view angle. HF radar energy reflects off of the
high ionosphere back towards the earth, and so arrives at its target from
above, at a glancing angle. This means of course that reflections are very
weak, but more problematically it means that the biggest reflection is from the
ground... and the targets, not far above the ground, are difficult to
discriminate from the earth behind them. Radar usually solves this problem
based on time-of-flight. Airplanes or recently launched ICBMs, thousands of
feet or more in the air, will be a little bit closer to the ionosphere and thus
to the radar site than the ground, and so the reflections will arrive a bit
earlier. Here's the complication: in ionospheric propagation, "multipath" is
almost guaranteed. RF energy leaves the radar site at a range of angles
(constrained by the directional gain of the antenna), hits a large swath of the
ionosphere, and reflects off of that swath at variable angles. The whole thing
is sort of a smearing effect... every point on earth is reached by a number of
different paths through the atmosphere at once, all with somewhat different
lengths. The result is that time-of-flight discrimination is difficult or even
impossible.
There are other complexities. Achieving long ranges by ionospheric propagation
requires emitting RF energy at a very shallow angle with respect to the
horizon, a few degrees. To be efficient (the high path loss and faint
reflections mean that OTH radar requires enormous power levels), the antenna
must exhibit a very high gain and be very directional. Directional antennas are
typically built by placing radiating and reflecting elements some distance to
either side of the primary axis, but for an antenna pointed just a few degrees
above the horizon, one side of the primary axis is very quickly in the ground.
HF OTH radar antennas thus must be formidably large, typically using a
ground-plane design with some combination of a tall, large radiating system
and a long groundplane extending in the target direction. When I say "large"
here I mean on the scale of kilometers. Just the design and construction of the
antennas was a major challenge in the development of OTH radar.
Let's switch to more of a chronological perspective, and examine the
development of OTH. First, I must make the obligatory disclaimer on any cold
war technology history: the Soviet Union built and operated multiple OTH
radars, and likely arrived at a working design earlier than the US.
Unfortunately, few resources on this history escaped Soviet secrecy, and even
fewer have been translated to English. I know very little about the history of
OTH radar in the USSR, although I will, of course, discuss the most famous
example.
In the US, OTH radar was pioneered at the Naval Research Laboratory. Two early
prototypes were built in the northeastern United States: MUSIC, and MADRE.
Historical details on MUSIC are somewhat scarce, but it seems to have been of
a very similar design to MADRE but not intended for permanent operation. MADRE
was built in 1961, located at an existing NRL research site on Chesapeake
Bay near Washington. Facing east towards the Atlantic, it transmitted pulses
on variable frequencies at up to 100kW of power. MADRE's large antenna is
still conspicuous today, about 300 feet wide and perhaps 100 feet tall---but
this would be quite small compared to later systems.
What is most interesting about MADRE is not so much the radio gear as the
signal processing required to overcome the challenges I've discussed. MADRE,
like most military program names, is a tortured acronym. It stands for Magnetic-Drum
Radar Equipment, and that name reveals the most interesting aspect. MADRE, like
OTH radars to come, relied on computer processing to extract target returns.
In the early '60s, radar systems were almost entirely analog, particularly in
the discrimination process. Common radar systems cleared clutter from the
display (to show only moving targets) using methods like mercury acoustic delay
lines, a basic form of electronic storage that sent a signal as a mechanical
pulse through a tube of mercury. By controlling the length of the tube, the
signal could be delayed for whatever period was useful---say one rotational
period of the radar antenna. For OTH radar, though, data needed to be stored
in multiple dimensions and then processed in a time-compressed form.
Let's explain that a bit further. When I mentioned that it was difficult to
separate target returns from the reflection of the earth, if you have much
interest in radar you may have immediately thought of Doppler methods. Indeed,
ionospheric OTH radars are necessarily Doppler radars, measuring not just the
reflected signal but the frequency shift it has undergone. Due to multipath
effects, though, the simple use of Doppler shifts is insufficient. Atmospheric
effects produce returns at a variety of shifts. To discriminate targets, it's
necessary to compare target positions between pulses... and thus to store a
history of recent pulses with the ability to consider more than one pulse at a
time. Perhaps this could be implemented using a large number of delay lines,
but this was impractical, and fortunately in 1961 the magnetic drum computer
was coming into use.
The magnetic drum computer is a slightly odd part of computer history, a
computer fundamentally architected around its storage medium (often not only
logically, but also physically). The core of the computer is a drum, often a
fairly large one, spinning at a high speed. A row of magnetic heads read and
write data from its magnetically coercible surface. Like delay tubes, drum
computers have a fundamental time basis in their design: the revolution speed
of the drum, which dictates when the same drum position will arrive back at the
heads. But, they are two-dimensional, with many compact multi-track heads used
to simultaneously read and write many bits at each drum position.
Signals received by MADRE were recorded in terms of Doppler shifts onto a drum
spinning at 180 revolutions per second. The radar similarly transmitted 180
pulses per second (its PRF, or pulse repetition frequency), so that each
revolution of the drum matched a radar
pulse. With each rotation of the drum, the computer switched to writing the
new samples to a new track, allowing the drum to store a history of the recent
pulses---20 seconds worth.
For each pulse, the computer wrote 23 analog samples. Each of these samples
was "range gated," meaning time limited to a specific time range and thus
distance range. Specifically, in MADRE, each sample corresponded to a 455 nmi
distance from the radar. The 23 samples thus covered a total of 10,465 nmi in
theory, about half of the way around the earth. The area around 0Hz Doppler
shift was removed from the returned signal via analog filtering, since it
always contained the strong earth reflection and it was important to preserve
as much dynamic range as possible for the Doppler shifted component of the
return.
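As a rough illustration of the storage scheme: the numbers below come from the
description above, but the structure of the sketch is my own guesswork, not a
description of the actual drum hardware. You can think of the drum as a ring
buffer with one track per pulse and 23 range-gated samples per track.

    from collections import deque

    C_KM_PER_S = 299_792        # speed of light
    NMI_PER_GATE = 455          # each sample covers 455 nmi of range
    KM_PER_NMI = 1.852
    N_GATES = 23
    PRF = 180                   # pulses (and drum revolutions) per second
    HISTORY_S = 20              # seconds of pulse history retained

    def range_gate(echo_delay_s: float) -> int:
        # Convert a round-trip echo delay into one of the 23 range gates.
        one_way_km = C_KM_PER_S * echo_delay_s / 2
        return int(one_way_km / (NMI_PER_GATE * KM_PER_NMI))

    # The "drum": one track per pulse, oldest track overwritten when full.
    drum = deque(maxlen=PRF * HISTORY_S)

    def record_pulse(samples):
        # Write one pulse's 23 range-gated amplitudes to a fresh track.
        assert len(samples) == N_GATES
        drum.append(list(samples))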
As the drum rotated, the computer examined the history of pulses in each range
gate to find consistent returns with a similar Doppler shift. To do this,
though, it was first necessary to discriminate reflections of the original
transmitted pulse from various random noise received by the radar. The signal
processing algorithm used for this purpose is referred to as "matched
filtering" or "matched Doppler filtering" and I don't really understand it very
well, but I do understand a rather intriguing aspect of the MADRE design: the
computer was not actually capable of performing the matched filtering at a high
enough rate, and so an independent analog device was built to perform the
filtering step. As an early step in processing returns, the computer actually
played them back to the analog filtering processor at a greatly accelerated
speed. This allowed the computer to complete the comparative analysis of
multiple pulses in the time that one pulse was recorded.
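I can't speak to how the matched filter itself was realized, but the general
shape of the analysis---comparing many pulses from the same range gate to find
energy at a consistent Doppler shift---looks a lot like what a modern
pulse-Doppler radar does digitally with an FFT across "slow time." A very loose
sketch of that modern equivalent, not of MADRE's actual processor:

    import numpy as np

    def doppler_bins(pulse_history, prf):
        # pulse_history: complex returns from one range gate over N pulses.
        # An FFT across the pulse axis sorts the energy into Doppler bins;
        # the strong earth return piles up near 0 Hz.
        n = len(pulse_history)
        window = np.hanning(n)
        spectrum = np.fft.fftshift(np.fft.fft(pulse_history * window))
        freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf))
        return freqs, np.abs(spectrum)

    def candidate_targets(freqs, mags, clutter_hz=2.0, snr=5.0):
        # Flag bins well away from zero Doppler that stand out from the
        # noise floor. The thresholds here are invented for illustration.
        noise = np.median(mags)
        mask = (np.abs(freqs) > clutter_hz) & (mags > snr * noise)
        return freqs[mask]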
MADRE worked: in its first version, it was able to track aircraft flying over
the Atlantic ocean. Later, the computer system was replaced with one that used
magnetic core memory. Core memory was random access and so could be read faster
than the drum, but moreover GE was able to design core memory for the computer
which stored analog samples with a greater dynamic range than the original
drum. These enhancements allowed MADRE to successfully track much slower
targets, including ships at sea.
The MUSIC and MADRE programs produced a working OTH radar capable of surveiling
the North Atlantic, and their operation led to several useful discoveries.
Perhaps the most interesting is that the radar could readily detect the
ionospheric distortions caused by nuclear detonations, and MADRE regularly
detected atmospheric tests at the NNSS despite pointing the wrong direction.
More importantly, it was discovered that ICBM launches caused similar but much
smaller distortions of the ionosphere which could also be detected by HF radar.
This further improved the probability of HF radar detecting an ICBM launch.
AND THAT'S PART ONE. I'm going to call this a multi-part piece instead of just
saying I'll return to it later so that, well, I'll return to it later. Because
here's the thing: on the tails of MADRE's success, the US launched a program
to build a second OTH radar of similar design but bigger. This one would be
aimed directly at the Soviet Union.
It didn't work.
But it didn't work in a weird way, that leaves some interesting questions to
this day.
[1] VHF is 30-300MHz, which is actually a pretty huge range in terms of
characteristics and propagation. For this reason, land-mobile radio technicians
especially have a tendency to subdivide VHF into low and high band, and
sometimes mid-band, according to mostly informal rules.
One of my chief interests is lighting. This manifests primarily as no end of
tinkering with inexpensive consumer IoT devices, because I am cheap and running
new cabling is time consuming. I did nearly end up using DMX for my
under-cabinet lighting but ultimately saw sense and stuck to a protocol that is
even more unfamiliar to the average consumer, Z-Wave.
I worked in theater (at a university conference center) only briefly but the
fact that it was a very small operation gave me a great deal of exposure to the
cutting edge of theatrical control last time a major capital expenditure had
been authorized, in the '90s. This was an ETC Sensor dimmer system with an ETC
Express 48/96 console for which we had to maintain a small stash of 3.5"
diskettes. The ETC Express is still, in my mind, pretty much the pinnacle of
user interface design: it had delightfully tactile mechanical buttons that you
pressed according to a scheme that was somehow simultaneously intuitive and
utterly inscrutable. Mastery of the "Thru," "And," "Except," "Rel" buttons made
you feel like a wizard even though you were essentially typing very elementary
sentences. It ran some type of non-Microsoft commercial DOS, maybe DR-DOS if I
remember correctly, and drove the attached 1080p LCD display at 1024x768.
The integration with the Lutron architectural lighting control system had never
actually worked properly, necessitating a somewhat complex pattern of
button-mashing to turn on the lobby lights that sometimes turned into sending a
runner upstairs to mash other buttons on a different panel. There was an
accessory, the Remote Focus Unit, that was a much smaller version of the
console that was even more inscrutable to use, and that you would carry around
with you trailing a thick cable as you navigated the catwalks. This was one of
two XLR cables that clattered along the steel grating behind you, the other
being for the wired intercom system.
My brief career in theater was very influential on me: it was a sort of
Battlestar Galactica-esque world in which every piece of technology was from
the late '90s or early '00s, and nothing was wireless. You unplugged your
intercom pack, climbed the spiral staircase (which claimed many a shin) to an
alarmingly high point in the flyloft, and plugged your intercom pack into the
wall socket up there. Then you fiddled around for a moment and had to walk back
to the wall socket, because the toggle switch that changed the socket between
buses was always set wrong, and you never thought to check it in the first
place. Truly a wonderful era of technology.
The spiral staircase exists in a strange liminal space in the building: the
large open area, behind the flyloft, which primarily contained the air handlers
for the passive solar heating system installed as a pilot project in the '80s.
It had apparently never worked well. The water tubing was prone to leaking, and
the storage closets under the solar array had to be treated as if they were
outdoors. Many of the counterweights and older fixtures were rusty for this
reason. It would rain indoors, in the back of the Macey Center: not because of
some microclimate phenomena, but by the simple logic of a university that
occasionally received generous grants for new technology, but never had the
money for maintenance. Of course today, the passive solar array has been
removed and replaced by a pointless bank of multicolored architectural panels
curiously aimed at the sun. Progress marches on.
Well that's enough nostalgia. Here's the point: I think lighting control is
interesting, chiefly because it involves a whole lot of low-speed digital
protocols that are all more or less related to RS-422. But also, there is
history!
I am going to sort of mix theatrical and architectural lighting control here,
but it is useful to know the difference. Theatrical lighting control is
generally the older field. Early theaters had used chemical light sources
(literal limelight) to light the stage; later theaters used primitive
electrical lights like carbon-arc spotlights. Just about as soon as electrical
lights were introduced, electrical light dimming came about. Theatrical
lighting controls have certain properties that are different from other
lighting systems. They are usually designed for flexibility, to allow light
fixtures (sometimes called luminaires, though more so in architectural lighting)
to be moved around and swapped out between shows. They are designed with the
expectation that relatively complex scenes will be composed, with a single
show containing a large number of lighting cues that will be changed from
show to show. Theatrical lighting is largely confined to the theater, mostly
in very traditional forms of footlights, side lights, and numbered catwalks
or bridges extending both directions from the proscenium (upstage and into
the house).
Architectural lighting systems, on the other hand, are intended to make
buildings both more dramatic and practical. There are similarities in that
architectural lighting control systems mostly involve channels which are
dimmed. But there are significant differences: architectural lighting is mostly
permanently installed and unmovable. There is a relatively limited number of
channels, and more significantly there is a relatively limited number of
scenes: maybe a half dozen in total. Control is sometimes automated (based on a
solar calendar, sunset and sunrise) and when manual is intended to be operated
by untrained persons, and so usually limited to a row of buttons that call up
different scenes. You, the reader, probably encounter architectural lighting
control most often in the completely automated, scheduled systems used by large
buildings, and second in the wall panel scene control systems often used in
conference rooms and lecture halls.
There exists, of course, an uncomfortable in between: the corporate or
university auditorium, which has some elements of both. Theaters are also often
found inside of buildings with architectural lighting controls, leading to a
need to make the two interoperate. Because of both the similarities and the
need for interoperability there are some common protocols between theatrical
and architectural systems, but for the most part they are still fairly separate
from each other.
So how does a light control system actually work?
The primitive element of a light control system was, for a long time, the
dimmer. Early theaters used saltwater dimmers and later variac dimmers arranged
into banks and operated by levers, which could be mechanically linked to
each other to effect scene changes. Architectural systems are much the same, but
instead of backstage or in a patch bay, the dimmers are located in a closet.
Architectural systems have always been more automated and required remote
control, which of course means that they came about later.
Let's start with a very basic but very common scheme for lighting control: the
0-10V dimmer. Widely used in architectural lighting for many decades, this is
perhaps the simplest viable system. For each dimmer there is a knob which
adjusts an output voltage between 0 and 10v, and this is routed by low voltage
wiring to either a central dimmer or (more common in later systems) a
distributed system of dimmable lighting ballasts incorporated into the
fixtures. The main appeal of 0-10v analog dimming is its simplicity, but this
simplicity betrays the basic complexity of dimming.
Some lights are very easy to dim, mostly incandescent bulbs which are capable
of a very wide range of brightnesses corresponding more or less linearly to the
power they consume (which can be moderated by the voltage applied or by other
means). Arc and discharge lights introduce a complication; they produce no
light at all until they reach a "striking power" at which point they can be
dimmed back down to a lower power level. Incandescent light bulbs can actually
behave the same way, although it tends to be less obvious. The issue is a
bigger one in architectural lighting than in theatrical lighting [1], because
architectural lighting of the early era of central control relied heavily on
fluorescent fixtures. These have a particularly dramatic difference between
striking power and minimum power, and in general are difficult to dim [2].
This has led to a few different variations on the 0-10v scheme, the most
common of which is 1-10v fluorescent control. In this variant, 0v means "off"
while 1v means "minimum brightness." This difference is purely semantic in the
case of incandescent bulbs, but for fluorescent ballasts indicates whether or
not the bulb should be struck. The clear differentiation between "off" and
"very dim" was important for simpler, non-microcontroller ballasts, but then
became less important over time as most fluorescent ballasts switched to
computerization which could more intelligently make a threshold decision
about whether or not the bulb should be struck near 0v.
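A computerized ballast's decision logic might look something like the sketch
below. The thresholds and the minimum level are made up for illustration; real
ballasts vary.

    def ballast_level(control_v: float) -> float:
        # Interpret a 1-10V dimming input: near 0V means "off" (don't strike
        # the lamp at all), 1V means minimum brightness, 10V means full.
        MIN_LEVEL = 0.10                    # lowest stable output once struck
        if control_v < 0.5:                 # threshold decision: off vs. struck
            return 0.0
        v = min(max(control_v, 1.0), 10.0)  # clamp into the 1-10V range
        return MIN_LEVEL + (1.0 - MIN_LEVEL) * (v - 1.0) / 9.0

    print(ballast_level(0.2))   # 0.0  -> lamp stays off
    print(ballast_level(1.0))   # 0.1  -> struck, minimum brightness
    print(ballast_level(10.0))  # 1.0  -> full brightness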
The 0-10v scheme is simple and easy to work with, so it was widely installed.
It has a major downside, though: the need to run separate control wiring to
every "zone" or set of dimmable lights. In typical architectural installations
this is pretty manageable, because in the era of 0-10v analog dimming (as in
the era before of direct dimming of the power supply wiring) it was typical to
have perhaps six distinct zones. In theatrical lighting, where a modest
configuration is more like 16 dimming channels and wiring is more often
expected to be portable and reconfigurable, it was a much bigger nuisance.
Fortunately, improving electronic technology coming mostly out of the telecom
industry offered a promising innovation: multiplexing.
If you are not familiar with the term at its most general, multiplexing
describes basically any method of encoding more than one logical channel over a
single physical channel. On this blog I have spoken about various multiplexing
methods since it has an extensive history in telecommunications, most obviously
for the purpose of putting multiple telephone calls over one set of wires. If
you, like me, have an academic education in computing you might remember the
high level principles from a data communications or networking class. The most
common forms of multiplexing are FDM and TDM, or frequency division muxing and
time division muxing. While I'm omitting perhaps a bit of nuance, it is mostly
safe to say that "muxing" is an abbreviation for "multiplexing" which is the
kind of word that you quickly get tired of typing.
While there are forms of muxing other than FDM and TDM, if you understand FDM
and TDM you can interpret most other methods that exist as being some sort of
variation on one, the other, or both at the same time. FDM, frequency division,
is best explained (at least I think) in the example of analog telephone muxing.
Humans can hear roughly from 20Hz to 20kHz, and speech occurs mostly at the bottom
end of this range, from 80Hz to 8kHz (these rough ranges tend to keep to multiples
of ten like this because, well, it's convenient, and also tends to reflect
reality well since humans interpret sound mostly on a logarithmic basis). A
well-conditioned telephone pair can carry frequencies up to a couple hundred
kHz, which means that when you're carrying a single voice conversation there's
a lot of "wasted" headroom in the high frequencies, higher than audible to
humans. You can take advantage of this by mixing a speech channel with a higher
frequency "carrier", say 40kHz, and mixing the result with an unmodified voice
channel. You now have two voice conversations on the same wire: one at 0-20kHz
(often called "AF" or "audio frequency" since it's what we can directly hear)
and another at 40-60kHz. Of course the higher frequency conversation needs to
be shifted back down at the other end, but you can see the idea: we can take
advantage of the wide bandwidth of the physical channel to stuff two different
logical channels onto it at the same time. And this is, of course,
fundamentally how radio and all other RF communications media work.
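Here's a toy numpy version of that frequency-shifting trick, just to make the
idea concrete (the carrier and sample rate are arbitrary choices for the
example):

    import numpy as np

    FS = 200_000                        # sample rate, Hz
    t = np.arange(0, 0.01, 1 / FS)      # 10 ms of signal

    voice_a = np.sin(2 * np.pi * 1_000 * t)   # stand-in for one conversation
    voice_b = np.sin(2 * np.pi * 1_500 * t)   # stand-in for another

    carrier = np.cos(2 * np.pi * 40_000 * t)
    shifted_b = voice_b * carrier       # mixing moves voice_b up around 40kHz

    line_signal = voice_a + shifted_b   # both conversations on one "wire"
    # At the far end, multiplying by the same carrier and low-pass filtering
    # recovers voice_b; voice_a just needs a low-pass filter. (Real carrier
    # telephone systems also filter off one sideband to save bandwidth.)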
TDM, time division, took longer to show up in the telecom industry because it is
harder to do. This is actually a little counterintuitive to me because in many
ways TDM is easier to understand than FDM, but FDM can be implemented with
all-analog electronics fairly easily while TDM is hard to do without digital
electronics and, ultimately, computers. The basic idea of TDM is that the
logical channels "take turns." The medium is divided into "time slots" and each
logical channel is assigned a time slot; it gets to "speak" only during that
time slot. TDM is very widely used today because most types of communication
media can move data faster than the "realtime" rate of that data. For example,
human speech can be digitized and then transmitted in a shorter period of time
than the speech originally took. This means that you can take multiple realtime
conversations and pack them onto the same wire by buffering each one to
temporary memory and then sending them much faster than they originally
occurred during rotating timeslots. TDM is basically old-fashioned sharing and
can be visualized (and sometimes implemented) as something like passing a
talking stick between logical channels.
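In code, the round-robin idea is almost trivial, which is part of why TDM became
so dominant once cheap digital logic was available. A minimal sketch:

    def tdm_mux(channels):
        # Interleave one sample from each logical channel per "frame":
        # each channel speaks only during its recurring time slot.
        for frame in zip(*channels):
            yield from frame

    def tdm_demux(stream, n_channels):
        # The receiver recovers each channel just by counting slots.
        outputs = [[] for _ in range(n_channels)]
        for i, sample in enumerate(stream):
            outputs[i % n_channels].append(sample)
        return outputs

    muxed = list(tdm_mux([[1, 2, 3], [10, 20, 30]]))
    print(muxed)                  # [1, 10, 2, 20, 3, 30]
    print(tdm_demux(muxed, 2))    # [[1, 2, 3], [10, 20, 30]]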
Why am I explaining the concepts of FDM and TDM in such depth here? Well,
mostly because I am at heart a rambler and once I start on something I can't
stop. But also because I think lighting control systems are an interesting
opportunity to look at the practicalities of muxing in real systems that are
expected to be low-cost, high-reliability, and operate over cabling that isn't
too demanding to install.
And also because I think it will be helpful in explaining a historically
important lighting control scheme: analog multiplexing, or AMX.
AMX192, the most significant form of AMX, was introduced in 1975 (or so,
sources are a little vague on this) by Strand Lighting. Strand is a
historically very important manufacturer of theatrical lighting, and later
became part of Philips where it was influential on architectural lighting as
well (along with the rest of Philips lighting, Strand is now part of Signify).
In this way, one can argue that there is a direct through-line from Strand's
AMX to today's Hue smart bulbs. AMX192 supports 192 channels on a single cable,
and uses twisted-pair wiring with two pairs terminated in 4-pin XLR connectors.
This will all sound very, very familiar to anyone familiar with theatrical
lighting today even if they are too young to have ever dealt with AMX, but
we'll get to that in a bit.
What makes AMX192 (and its broader generation of control protocols) very
interesting to me is that it employs analog signaling and TDM. Fundamentally,
AMX192 is the same as the 0-10v control scheme (although it actually employs
0-5v), but the analog control signal is sent alongside a clock signal, and with
every clock pulse it changes to the value for the next channel. On the demultiplexing
or demuxing side, receivers need to pick out the right channel by counting
clock pulses and then "freeze" the analog value of the signal pair to hold it
over while the control wiring cycles through the other channels.
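In software terms, the receiver side is just a counter and a sample-and-hold.
The sketch below illustrates the idea, not any real AMX192 dimmer's
implementation:

    def amx_receiver(levels, my_channel, n_channels=192):
        # 'levels' is the analog value seen on the signal pair at each clock
        # pulse, cycling through channels 0..191 over and over. We count
        # pulses, grab the value that lines up with our channel, and hold it
        # until the next pass---what a real dimmer does with a counter and a
        # sample-and-hold circuit.
        held = 0.0
        history = []
        for pulse, level in enumerate(levels):
            if pulse % n_channels == my_channel:
                held = level              # "freeze" our channel's value
            history.append(held)          # the dimmer keeps driving this
        return history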
One of the sort of neat things about AMX192 is that you can hook up your
control wiring to an oscilloscope and, once you've got the triggering set up
right, see a very neat visualization of all 192 control channels across your
scope going up and down like the faders on your control board. It's a neat and
simple system, but was still fairly cutting edge in the '70s due to the
complexity of the electronics used to track the clock pulses.
We'll take a moment here too to discuss the physical wiring topology of AMX192:
as you might guess, AMX192 is set up as a "bus" system with each dimmer
connected to the same two twisted pairs. In the '70s, dimmers were still fairly
large devices and so theaters almost exclusively used traditional dimmer rack
systems, with all dimmers installed in one central location. So while there was
a multidrop bus wiring arrangement, it was mostly contained to the rack
backplanes and not really something that users interacted with.
This idea of multi-drop bus wiring, though, might sound familiar if you have
read my other posts. It's largely the same electrical scheme as used by RS-485,
a pretty ubiquitous standard for low-speed serial buses. AMX192 is analog, but
could RS-485 be applied to use digital signaling on a similar wiring topology?
This is not a hypothetical question; the answer is obviously yes, and about ten
years after AMX192 Strand introduced a new digital protocol called DMX512. This
stands for Digital Multiplexing, 512 channels, and it employs the RS-485 wiring
scheme of one twisted pair in a shielded cable terminated with 5-pin XLR
connectors. Now, on the 5-pin XLR connector we have two data pins and one
shield/common pin. Of course there are two more pins, and this hints at the
curiously complicated landscape of DMX512 cabling.
The DMX512 specification requires that 5-pin cables include two twisted pairs,
much like AMX192. You have no doubt determined by now that DMX512 is directly
based on AMX192 and carries over the same two-twisted-pair cabling, but with
the addition of an extra pin for a grounded shield/signal reference common as
required by RS-485, which is the physical layer for DMX512. RS-485 uses
embedded clocking though, so it does not require a dedicated pair for clock
like AMX192 did. This creates the curious situation that a whole twisted pair
is required by the spec but has no specified use. Various off-label
applications of the second pair exist, often to carry a second "universe" of an
additional 512 channels, but by far the most common alternative use of the second pair
is to omit it entirely... resulting in 3 pins, and of course this is a rather
attractive option since the 3-pin XLR connector is widely used in live
production for balanced audio (e.g. from microphones).
You can run DMX512 over microphone cables, in fact, and it will largely work. A
lot of cheaper DMX512 equipment comes fitted with 3-pin XLR connectors for this
purpose. The problem is that microphone cables don't actually meet the
electrical specifications for DMX512/RS-485 (particularly in characteristic
impedance), but on the other hand RS-485 is an intentionally very robust physical
protocol and so it tends to work fine in a variety of improper environments. So
perhaps a short way to put it is that DMX512 over 3-pin XLR is probably okay
for shorter ranges and if you apply some moral flexibility to standards.
Let's talk a bit about the logical protocol employed by DMX512, because it's
interesting. DMX512 is a continuous broadcast protocol. That is, despite being
digital and packetized it operates exactly like AMX192. The lighting controller
continuously transmits the values of every channel in a loop. The only real
concession to the power of digital networks in the basic DMX512 protocol is
variable slot count. That is, not all 512 channels have to be transmitted if
they aren't all in use. The controller can send an arbitrary number of channels
up to 512. Extensions to the DMX protocol employ a flag byte at the beginning
of the frame to support types of messages other than the values for sequential
channels starting at 1, but these extensions aren't as widely used and tend to
be a little more manufacturer-specific.
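To make the framing concrete: as I understand the standard, the controller
repeatedly sends a break, then a start code byte (0 for ordinary dimmer data),
then one byte per channel, over a 250kbaud 8N2 serial link. A rough sketch of
the logical frame, ignoring the line-level timing details:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DmxFrame:
        # One DMX512 "packet": a start code plus up to 512 channel levels.
        # On the wire each frame is preceded by a break and sent endlessly.
        start_code: int = 0x00              # 0x00 = standard dimmer data
        levels: List[int] = field(default_factory=list)

    def build_frame(levels):
        assert 1 <= len(levels) <= 512
        assert all(0 <= v <= 255 for v in levels)
        return DmxFrame(levels=list(levels))

    # A controller with only four channels patched just sends a short frame,
    # over and over, whether or not anything has changed.
    frame = build_frame([255, 128, 0, 64])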
DMX512 has no error correction or even detection; instead it relies on the fact
that all values are repeatedly transmitted so any transient bit error should
only be in effect for a short period of time. Of course running DMX512 over
non-twisted 3-pin XLR cable will increase the number of such transient errors,
and in the modern world of more complex fixtures these errors can become much
more noticeable as fixtures "stutter" in movement.
Let's talk a bit about the fixtures. AMX192 was designed as a solution for the
controller to send channel values to the dimmer rack. DMX512 was designed for
the same application. The same digital technology that enabled DMX512, though,
has enabled a number of innovations in theatrical lighting that could all be
summed up as distributed, rather than centralized, dimming. Instead of having a
dimmer rack backstage or in a side room, where the dimmers are patched to
line-voltage electrical wiring to fixtures, compact digital dimmers (called
dimmer packs) can be placed just about anywhere. DMX512 cabling is then
"daisy-chained" in the simplest configurations or active repeaters are used to
distribute the DMX512 frames onto multiple wiring runs.
The next logical step from the dimmer pack is building dimming directly into
fixtures, and far more than that has happened. A modern "moving head" fixture,
even a relatively low-end one, can have two axes of movement (altitude-azimuth
polar coordinates), four channels of dimming (red, green, blue, white), a
multi-position filter or gobo wheel, and even one or two effect drive motors.
Higher-end fixtures can have more features like motorized zoom and focus,
additional filter wheels and motorized effects, cool white/warm white and UV
color channels, etc. The point is that one physical fixture can require direct
connection to the DMX bus on which it consumes 8 or more channels. That 512
channel limit can sneak up on you real fast, leading to "multi-universe"
configurations where multiple separate DMX512 networks are used to increase
channel count.
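The arithmetic is unforgiving: at 16 channels for a modest moving head, a single
universe tops out at 32 fixtures. A little sketch of the bookkeeping (fixture
footprints invented for the example):

    def universes_needed(fixtures):
        # fixtures: list of (name, channel_footprint) pairs, patched
        # sequentially without splitting a fixture across universes.
        used, universes = 0, 1
        for name, footprint in fixtures:
            if used + footprint > 512:
                universes += 1
                used = 0
            used += footprint
        return universes

    rig = [("moving head", 16)] * 40 + [("RGBW par", 8)] * 24
    print(universes_needed(rig))   # 832 channels patched -> 2 universes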
DMX, then, while cutting-edge in 1986, is a bit lacking today. Strand basically
took AMX192 and shoved it into RS-485 to develop DMX512. Could you take DMX512
and shove it into IP? Consider that a cliffhanger! There's a lot more to this
topic, particularly because I haven't even started on digital architectural
control. While DMX512 can be used for architectural lighting control it's not
really all that common and there's a universe of interesting protocols on "the
other side of the fence."
[1] Nonetheless, the effect can be noticeable in theatrical lighting even at
its small magnitude with halogen bulbs. As a result many theatrical light
controllers have a "bulb warmer" feature where they keep all fixtures at a very
low power level instead of turning them off. You can imagine that when mixing
incandescent and LED fixtures with much more noticeable minimum brightness,
making sure this is disabled for the LED fixtures can become a headache.
[2] Some may be familiar with the issue of dimmable vs. non-dimmable
fluorescent fixtures, and the similar issue that exists with LEDs to a lesser
extent. The difference here is actually less in the light than in the power
supply, which for fluorescent fixtures and sometimes LED fixtures is usually
called the ballast (whether or not it is actually a ballast in the electrical
engineering sense, which newer ballasts are usually not). In LED fixtures it is
becoming more common to refer to it as a driver, since the prototypical form of
an LED light power supply is a constant-current driver... although once again,
many "LED drivers" are actually more complex devices than simple CC drivers,
and the term should be viewed as an imprecise one. Dimmable fluorescent and LED
drivers mostly use PWM, meaning that they rapidly switch the output on and off
to achieve a desired duty cycle. This is slightly more complicated for
fluorescent bulbs due to the need to get them warm enough to strike before they
emit light, which means that modern dimmable fluorescent ballasts usually
include "programmed start." This basically means that they're running software
that detects the state of the lamp based on current consumption and provides
striking power if necessary. This is all sort of complicated which is why the
dimmable vs. non-dimmable issue is a big one for CFLs and cheaper LED bulbs: in
these types of light bulbs the power supply is a large portion of the total
cost and simpler non-dimmable ballasts and drivers keep the product price down.
It's a much smaller issue in architectural lighting where the type of ballast
is specified up front and the ballast is a separate component from the bulb,
meaning that its price is a little less important of an issue.
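For what it's worth, the PWM part of this is simple enough to sketch; the
programmed-start logic is the part that takes real engineering. The numbers
below are purely illustrative:

    def pwm_times(level, period_us=500.0):
        # A PWM dimmer switches the output fully on for a fraction of each
        # period equal to the requested level (the duty cycle).
        level = min(max(level, 0.0), 1.0)
        on_us = level * period_us
        return on_us, period_us - on_us

    print(pwm_times(0.25))   # (125.0, 375.0): on a quarter of the time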