_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss
COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.

I have an M. S. in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.

You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.

--------------------------------------------------------------------------------

>>> 2022-11-27 over the horizon radar pt I

One of the most interesting things about studying history is noting the technologies that did not shape the present. We tend to think of new inventions as permanent fixtures, but of course the past is littered with innovations that became obsolete and fell out of production. Most of these at least get the time to become well-understood, but there are cases where it's possible that even the short-term potential of new technologies was never reached because of the pace at which they were replaced.

And of course there are examples to be found in the Cold War.

Today we're going to talk about Over-the-Horizon Radar, or OTH; a key innovation of the Cold War that is still found in places today but mostly lacks relevance in the modern world. OTH's short life is a bit of a disappointment: the most basic successes in OTH were hard-won, and the state of the art advanced rapidly until hitting a standstill around the '90s.

But let's start with the basics.

Radar systems can be described as either monostatic or bistatic, terms which will be important when I write more about air defense radar. Of interest to us now is monostatic radar, which is generally what you think of when someone just says "radar." Monostatic radars emit RF radiation and then observe for a reflection, as opposed to bistatic radars which emit RF radiation from one site and then receive it at another site, observing for changes. Actually, we'll see that OTH radar sometimes had characteristics of both, but the most important thing is to understand the basic principle of monostatic radar, of emitting radiation and looking for what bounces back.

Radar can operate in a variety of parts of the RF spectrum, but for the most part is found in UHF and SHF - UHF (Ultra-High Frequency) and SHF (Super High Frequency) being the conventional terms for the spectrum from 300MHz-3GHz and 3GHz-30GHz. Why these powers of ten multiplied by three? Convention and history, as with most terminology. Short wavelengths are advantageous to radar, because RF radiation reflects better from objects that are a large fraction of, or better yet a multiple of, the wavelength. A shorter wavelength thus means that you can detect smaller objects. There are other advantages of these high frequencies as well, such as allowing for smaller antennas (for much the same reason, the gain of an antenna is maximized at multiples of the wavelength, or at least at the wavelength divided by small powers of two).

UHF and SHF have a disadvantage for radar though, and that is range. As a rule of thumb, the higher the frequency (and the shorter the wavelength), the shorter the distance it will travel. There are various reasons for this; a big one is that shorter wavelengths more readily interact with materials in the path, losing energy as they do so. This has been a big topic of discussion in 5G telephony, since some 5G bands are in upper UHF and lower SHF where they will not pass through most building materials. The atmosphere actually poses the same problem, and as wavelengths get shorter the molecules in the atmosphere begin to absorb more energy. This problem gets very bad at around 60GHz and is one of the reasons that the RF spectrum must be considered finite (even more so than suggested by the fact that, well, eventually you get visible light).

There's another reason, though, and it's the more important one for our purposes. It's also the atmosphere, but in a very different way.

Most of the time that we talk about RF we are talking about line-of-sight operations. For high-band VHF and above [1], it's a good rule of thumb that RF behaves like light. If you can see from one antenna to the other you will have a solid path, but if you can't, things get questionable. This is of course not entirely true; VHF and UHF can penetrate most building materials well, and especially for VHF, reflections tend to help you out. But it's the right general idea, and it's very much true for radar. In most cases the useful range of a monostatic radar is limited to the "radio horizon," which is a little further away than the visible horizon due to atmospheric refraction, but not that much further. This is one of the reasons we tend to put antennas on towers. Because of the low curvature of the earth's surface, a higher vantage point can push the horizon quite a bit further away.

For air-defense radar applications, though, the type I tend to talk about, the situation is a little different. Most air-defense radar antennas are quite low to the ground, and are elevated on towers only to minimize ground clutter (reflections off of terrain and structures near the antenna) and terrain shadow (due to hills for example). A common airport surveillance radar might be elevated only a few feet, since airfields tend to be flat and pretty clear of obstructions to begin with. There's a reason we don't bother to put them up on big towers: air-defense radars are pointed up. The aircraft they are trying to detect are quite high in the air, which gives a significant range advantage, sort of the opposite situation of putting the radar in the air to get better range on the ground. For the same reason, though, aircraft low to the ground are more likely to be outside of radar coverage. This is a tactical problem in wartime when pilots are trained to fly "nap of the earth" so that the reverse radar range, from their perspective, is very small. It's also a practical problem in air traffic control and airspace surveillance, as a Skyhawk at 2000' above ground level (a pretty typical altitude here in the mountain west where the ground is at 6k already) will pass through many blind spots in the Air Force-FAA Joint Surveillance System.

This is all a somewhat longwinded explanation of a difficult problem in the early Cold War. Before the era of ICBMs, Soviet nuclear weapons would arrive by airplane. Airplanes are, fortunately, fairly slow... especially bombers large enough for bulky nuclear munitions. The problem is that we would not be able to detect inbound aircraft until they were quite close to our coasts, allowing a much shorter warning (and interception) time than you would expect. There are a few ways to solve this problem, and we put great effort into pursuing the most obvious: placing the radar sets closer to the USSR. NORAD (North American Air Defense Command) is a joint US-Canadian venture largely because Canada is, conveniently for this purpose, in between the USSR and the US by the shortest route. A series of radar "lines" were constructed across Alaska, Canada, and into Greenland, culminating with the DEW (Distant Early Warning) Line in arctic northern Canada.

This approach was never quite complete, and there was always a possibility that Soviet bombers would take the long route, flying south over the Pacific or Atlantic to stay clear of the range of North American radar until they neared the coasts of the US. This is a particularly troubling possibility since even today the population of the US is quite concentrated on the coasts, and early in the Cold War it was even more the case that the East Coast was the United States for most purposes. Some creative solutions were imagined to this problem, including most notably the Texas Towers, radar stations built on concrete platforms far into the ocean. The Texas Towers never really worked well; the program was canceled before all five were built and then one of them collapsed, killing all 28 crew. There was an even bigger problem with this model, though: the threat landscape had changed.

During the 1960s, bombers became far less of a concern as both the US and the USSR fielded intercontinental ballistic missiles (ICBMs). ICBMs are basically rockets that launch into space, arc over to the other side of the planet, and then plunge back down at enormous speed. ICBMs are fast: a famous mural painted on a blast door by crew of a Minuteman ICBM silo, now Minuteman Missile National Historic Site, parodies the Domino's Pizza logo with the slogan "Delivered worldwide in 30 minutes or less, or your next one is free." This timeline is only a little optimistic: ICBM travel time between Russia and the US really is about a half hour.

Moreover, ICBMs are hard to detect. At launch time they are very large, but like rockets (they are, after all, rockets, and several space launch systems still in use today are directly derived from ICBMs) they shed stages as they reach the apex of their trip. By the time an ICBM begins its descent to target it is only a re-entry vehicle or RV, and some RVs are only about the size of a person. To achieve both a high probability of detection and a warning time of better than a few minutes, ICBMs needed to be detected during their ascent. This is tricky: Soviet ICBMs had a tendency to launch from the USSR, which was a long ways away.

From the middle of the US to the middle of Russia is around 9000km, great circle distance. That's orders of magnitude larger than the range of the best extant radar technology. And there are few ways to cheat on range: the USSR was physically vast, with the nearest allied territory still being far from ICBM fields. In order to detect the launch of ICBMs, we would need a radar that could not only see past the horizon, but see far past the horizon.

Let's go back, now, to what I was saying about radio bands and the atmosphere. Below VHF is HF, High Frequency, which by irony of history is now rather low frequency relative to most applications. HF has an intriguing property: some layers of the atmosphere, some of the time, will actually reflect HF radiation. In fact, complex propagation patterns can form based on multiple reflections and refraction phenomena that allow lucky HF signals to make it clear around the planet. Ionospheric propagation of HF has been well known for just about as long as the art of radio has, and was (and still is) regularly used by ships at sea to reach each other and coast stations. HF is cantankerous, though. This is not exactly a technical term but I think it gets the idea across. Which HF frequencies will propagate in which ways depends on multiple weather and astronomical factors. More than the complexity of early radio equipment (although this was a factor), the tricky nature of HF operation is the reason that ships carried a radio officer. Establishing long-distance connections by HF required experimentation, skill, and no small amount of luck.

Luck is hard to automate, and in general there weren't really any automated HF communications systems until the computer age. The long range of HF made it very appealing for radar, but the complexity of HF made it very challenging. An HF radar could, conceptually, transmit pulses via ionospheric propagation well past the horizon and then receive the reflections by the same path. The problem is how to actually interpret the reflections.

First, you must consider the view angle. HF radar energy reflects off of the high ionosphere back towards the earth, and so arrives at its target from above, at a glancing angle. This means of course that reflections are very weak, but more problematically it means that the biggest reflection is from the ground... and the targets, not far above the ground, are difficult to discriminate from the earth behind them. Radar usually solves this problem based on time-of-flight. Airplanes or recently launched ICBMs, thousands of feet or more in the air, will be a little bit closer to the ionosphere and thus to the radar site than the ground, and so the reflections will arrive a bit earlier. Here's the complication: in ionospheric propagation, "multipath" is almost guaranteed. RF energy leaves the radar site at a range of angles (constrained by the directional gain of the antenna), hits a large swath of the ionosphere, and reflects off of that swath at variable angles. The whole thing is sort of a smearing effect... every point on earth is reached by a number of different paths through the atmosphere at once, all with somewhat different lengths. The result is that time-of-flight discrimination is difficult or even impossible.

There are other complexities. Achieving long ranges by ionospheric propagation requires emitting RF energy at a very shallow angle with respect to the horizon, a few degrees. To be efficient (the high path loss and faint reflections mean that OTH radar requires enormous power levels), the antenna must exhibit a very high gain and be very directional. Directional antennas are typically built by placing radiating and reflecting elements some distance to either side of the primary axis, but for an antenna pointed just a few degrees above the horizon, one side of the primary axis is very quickly in the ground. HF OTH radar antennas thus must be formidably large, typically using a ground-plane design with some combination of a tall, large radiating system and a long groundplane extending in the target direction. When I say "large" here I mean on the scale of kilometers. Just the design and construction of the antennas was a major challenge in the development of OTH radar.

Let's switch to more of a chronological perspective, and examine the development of OTH. First, I must make the obligatory disclaimer on any cold war technology history: the Soviet Union built and operated multiple OTH radars, and likely arrived at a working design earlier than the US. Unfortunately, few resources on this history escaped Soviet secrecy, and even fewer have been translated to English. I know very little about the history of OTH radar in the USSR, although I will, of course, discuss the most famous example.

In the US, OTH radar was pioneered at the Naval Research Laboratory. Two early prototypes were built in the northeastern United States: MUSIC, and MADRE. Historical details on MUSIC are somewhat scarce, but it seems to have been of a very similar design to MADRE but not intended for permanent operation. MADRE was built in 1961, located at an existing NRL research site on Chesapeake Bay near Washington. Facing east towards the Atlantic, it transmitted pulses on variable frequencies at up to 100kW of power. MADRE's large antenna is still conspicuous today, about 300 feet wide and perhaps 100 feet tall---but this would be quite small compared to later systems.

What is most interesting about MADRE is not so much the radio gear as the signal processing required to overcome the challenges I've discussed. MADRE, like most military programs, is a tortured acronym. It stands for Magnetic-Drum Radar Equipment, and that name reveals the most interesting aspect. MADRE, like OTH radars to come, relied on computer processing to extract target returns.

In the early '60s, radar systems were almost entirely analog, particularly in the discrimination process. Common radar systems cleared clutter from the display (to show only moving targets) using methods like mercury acoustic delay lines, a basic form of electronic storage that sent a signal as a mechanical pulse through a tube of mercury. By controlling the length of the tube, the signal could be delayed for whatever period was useful---say one rotational period of the radar antenna. For OTH radar, though, data needed to be stored on multiple dimensions and then processed in a time-compressed form.

Let's explain that a bit further. When I mentioned that it was difficult to separate target returns from the reflection of the earth, you may (if you have much interest in radar) have immediately thought of Doppler methods. Indeed, ionospheric OTH radars are necessarily Doppler radars, measuring not just the reflected signal but the frequency shift it has undergone. Due to multipath effects, though, the simple use of Doppler shifts is insufficient. Atmospheric effects produce returns at a variety of shifts. To discriminate targets, it's necessary to compare target positions between pulses... and thus to store a history of recent pulses with the ability to consider more than one pulse at a time. Perhaps this could be implemented using a large number of delay lines, but this was impractical, and fortunately in 1961 the magnetic drum computer was coming into use.
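
To put rough numbers on the Doppler side of this: a monostatic radar sees a frequency shift of about 2 * v * f0 / c, where v is the target's velocity along the line of sight. A quick back-of-the-envelope sketch in Python (the 15 MHz operating frequency and the target speeds here are illustrative assumptions, not MADRE's actual parameters):

    # Rough Doppler shifts for an HF radar.  Frequency and speeds are
    # illustrative assumptions, not the actual MADRE parameters.
    C = 3.0e8      # speed of light, m/s
    F0 = 15.0e6    # assumed HF operating frequency, Hz

    def doppler_shift(radial_velocity_ms):
        """Approximate two-way Doppler shift seen by a monostatic radar."""
        return 2 * radial_velocity_ms * F0 / C

    for label, v in [("jet aircraft, ~500 kt", 257.0),
                     ("ship at sea, ~20 kt", 10.3),
                     ("ascending ICBM, ~3 km/s", 3000.0)]:
        print(f"{label}: about {doppler_shift(v):.0f} Hz")

The spread of these numbers hints at why a long pulse history matters: pulling a ship's roughly 1 Hz shift apart from the ground return sitting at 0 Hz takes several seconds of integration, while a jet's tens of hertz is a much easier catch.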

The magnetic drum computer is a slightly odd part of computer history, a computer fundamentally architected around its storage medium (often not only logically, but also physically). The core of the computer is a drum, often a fairly large one, spinning at a high speed. A row of magnetic heads read and write data from its magnetically coercible surface. Like delay tubes, drum computers have a fundamental time basis in their design: the revolution speed of the drum, which dictates when the same drum position will arrive back at the heads. But, they are two-dimensional, with many compact multi-track heads used to simultaneously read and write many bits at each drum position.

Signals received by MADRE were recorded in terms of Doppler shifts onto a drum spinning at 180 revolutions per second. The radar similarly transmitted 180 pulses per second (PRF), so that each revolution of the drum matched a radar pulse. With each rotation of the drum, the computer switched to writing the new samples to a new track, allowing the drum to store a history of the recent pulses---20 seconds worth.

For each pulse, the computer wrote 23 analog samples. Each of these samples was "range gated," meaning time limited to a specific time range and thus distance range. Specifically, in MADRE, each sample corresponded to a 455 nmi distance from the radar. The 23 samples thus covered a total of 10,465 nmi in theory, about half of the way around the earth. The area around 0Hz Doppler shift was removed from the returned signal via analog filtering, since it always contained the strong earth reflection and it was important to preserve as much dynamic range as possible for the Doppler shifted component of the return.
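
As a sketch of what that bookkeeping amounts to, here's the gate arithmetic and the shape of the drum's storage in Python. This treats the echo delay as a simple straight-line round trip and assumes gate 0 starts right at the radar, both of which are simplifications of the real ionospheric geometry; the 23-gate, 455 nmi, 180-pulse figures are from the description above.

    # A toy model of MADRE-style range gating and drum storage.  The
    # straight-line range math is a simplification; real OTH returns take
    # a longer, refracted path through the ionosphere.
    C_NMI_PER_S = 161875.0    # speed of light, nautical miles per second
    GATE_NMI = 455            # width of each range gate
    NUM_GATES = 23            # samples (gates) recorded per pulse
    PRF = 180                 # pulses per second, matching drum rotation
    HISTORY_S = 20            # seconds of pulse history kept on the drum

    def range_gate(echo_delay_s):
        """Map an echo's round-trip delay to a range gate index (or None)."""
        one_way_nmi = C_NMI_PER_S * echo_delay_s / 2
        gate = int(one_way_nmi // GATE_NMI)
        return gate if gate < NUM_GATES else None

    # The drum, modeled as a circular buffer: one row of 23 samples per pulse.
    drum = [[0.0] * NUM_GATES for _ in range(PRF * HISTORY_S)]   # 3600 x 23

    print(range_gate(0.012))   # ~970 nmi one way -> gate 2
    print(len(drum), "pulses of history,", NUM_GATES, "range gates each")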

As the drum rotated, the computer examined the history of pulses in each range gate to find consistent returns with a similar Doppler shift. To do this, though, it was first necessary to discriminate reflections of the original transmitted pulse from various random noise received by the radar. The signal processing algorithm used for this purpose is referred to as "matched filtering" or "matched Doppler filtering" and I don't really understand it very well, but I do understand a rather intriguing aspect of the MADRE design: the computer was not actually capable of performing the matched filtering at a high enough rate, and so an independent analog device was built to perform the filtering step. As an early step in processing returns, the computer actually played them back to the analog filtering processor at a greatly accelerated speed. This allowed the computer to complete the comparative analysis of multiple pulses in the time that one pulse was recorded.
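
I won't claim to do the original signal processing justice, but the general idea of a matched filter is easy to sketch in modern terms: correlate the received signal against a copy of the transmitted pulse, so that anything shaped like the pulse stands out above noise that isn't. Here's a minimal digital sketch of that idea using numpy; MADRE's analog, time-compressed implementation was of course nothing like this in mechanism, only in effect.

    import numpy as np

    # Minimal digital sketch of matched filtering: correlate the received
    # signal against the known transmitted pulse so echoes of that pulse
    # shape rise above uncorrelated noise.
    rng = np.random.default_rng(0)

    pulse = np.sin(2 * np.pi * np.arange(128) / 8)   # the transmitted pulse
    received = rng.normal(0.0, 1.0, 4096)            # mostly noise...
    received[700:828] += pulse                       # ...plus one buried echo

    matched = np.correlate(received, pulse, mode="valid")
    print("strongest return near sample", int(np.argmax(np.abs(matched))))
    # should land at (or very near) 700, where the echo was inserted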

MADRE worked: in its first version, it was able to track aircraft flying over the Atlantic ocean. Later, the computer system was replaced with one that used magnetic core memory. Core memory was random access and so could be read faster than the drum, but moreover GE was able to design core memory for the computer which stored analog samples with a greater dynamic range than the original drum. These enhancements allowed MADRE to successfully track much slower targets, including ships at sea.

The MUSIC and MADRE programs produced a working OTH radar capable of surveilling the North Atlantic, and their operation led to several useful discoveries. Perhaps the most interesting is that the radar could readily detect the ionospheric distortions caused by nuclear detonations, and MADRE regularly detected atmospheric tests at the NNSS despite pointing the wrong direction. More importantly, it was discovered that ICBM launches caused similar but much smaller distortions of the ionosphere which could also be detected by HF radar. This further improved the probability of HF radar detecting an ICBM launch.

AND THAT'S PART ONE. I'm going to call this a multi-part piece instead of just saying I'll return to it later so that, well, I'll return to it later. Because here's the thing: on the tails of MADRE's success, the US launched a program to build a second OTH radar of similar design but bigger. This one would be aimed directly at the Soviet Union.

It didn't work.

But it didn't work in a weird way, that leaves some interesting questions to this day.

[1] VHF is 30-300MHz, which is actually a pretty huge range in terms of characteristics and propagation. For this reason, land-mobile radio technicians especially have a tendency to subdivide VHF into low and high band, and sometimes mid-band, according to mostly informal rules.

--------------------------------------------------------------------------------

>>> 2022-11-23 enlightenment and lighting controls

One of my chief interests is lighting. This manifests primarily as no end of tinkering with inexpensive consumer IoT devices, because I am cheap and running new cabling is time consuming. I did nearly end up using DMX for my under-cabinet lighting but ultimately saw sense and stuck to a protocol that is even more unfamiliar to the average consumer, Z-Wave.

I worked in theater (at a university conference center) only briefly but the fact that it was a very small operation gave me a great deal of exposure to the cutting edge of theatrical control last time a major capital expenditure had been authorized, in the '90s. This was an ETC Sensor dimmer system with an ETC Express 48/96 console for which we had to maintain a small stash of 3.5" diskettes. The ETC Express is still, in my mind, pretty much the pinnacle of user interface design: it had delightfully tactile mechanical buttons that you pressed according to a scheme that was somehow simultaneously intuitive and utterly inscrutable. Mastery of the "Thru," "And," "Except," "Rel" buttons made you feel like a wizard even though you were essentially typing very elementary sentences. It ran some type of non-Microsoft commercial DOS, maybe DR-DOS if I remember correctly, and drove the attached 1080p LCD display at 1024x768.

The integration with the Lutron architectural lighting control system had never actually worked properly, necessitating a somewhat complex pattern of button-mashing to turn on the lobby lights that sometimes turned into sending a runner upstairs to mash other buttons on a different panel. There was an accessory, the Remote Focus Unit, that was a much smaller version of the console that was even more inscrutable to use, and that you would carry around with you trailing a thick cable as you navigated the catwalks. This was one of two XLR cables that clattered along the steel grating behind you, the other being for the wired intercom system.

My brief career in theater was very influential on me: it was a sort of Battlestar Galactica-esque world in which every piece of technology was from the late '90s or early '00s, and nothing was wireless. You unplugged your intercom pack, climbed the spiral staircase (which claimed many a shin) to an alarmingly high point in the flyloft, and plugged your intercom pack into the wall socket up there. Then you fiddled around for a moment and had to walk back to the wall socket, because the toggle switch that changed the socket between buses was always set wrong, and you never thought to check it in the first place. Truly a wonderful era of technology.

The spiral staircase exists in a strange liminal space in the building: the large open area, behind the flyloft, which primarily contained the air handlers for the passive solar heating system installed as a pilot project in the '80s. It had apparently never worked well. The water tubing was prone to leaking, and the storage closets under the solar array had to be treated as if they were outdoors. Many of the counterweights and older fixtures were rusty for this reason. It would rain indoors, in the back of the Macey Center: not because of some microclimate phenomena, but by the simple logic of a university that occasionally received generous grants for new technology, but never had the money for maintenance. Of course today, the passive solar array has been removed and replaced by a pointless bank of multicolored architectural panels curiously aimed at the sun. Progress marches on.

Well that's enough nostalgia. Here's the point: I think lighting control is interesting, chiefly because it involves a whole lot of low-speed digital protocols that are all more or less related to RS-422. But also, there is history!

I am going to sort of mix theatrical and architectural lighting control here, but it is useful to know the difference. Theatrical lighting control is generally the older field. Early theaters had used chemical light sources (literal limelight) to light the stage; later theaters used primitive electrical lights like carbon-arc spotlights. Just about as soon as electrical lights were introduced, electrical light dimming came about. Theatrical lighting controls have certain properties that are different from other lighting systems. They are usually designed for flexibility, to allow light fixtures (sometimes called luminaires, a term more common in architectural lighting) to be moved around and swapped out between shows. They are designed with the expectation that relatively complex scenes will be composed, with a single show containing a large number of lighting cues that will be changed from show to show. Theatrical lighting is largely confined to the theater, mostly in very traditional forms of footlights, side lights, and numbered catwalks or bridges extending both directions from the proscenium (upstage and into the house).

Architectural lighting systems, on the other hand, are intended to make buildings both more dramatic and practical. There are similarities in that architectural lighting control systems mostly involve channels which are dimmed. But there are significant differences: architectural lighting is mostly permanently installed and unmovable. There is a relatively limited number of channels, and more significantly there is a relatively limited number of scenes: maybe a half dozen in total. Control is sometimes automated (based on a solar calendar, sunset and sunrise) and when manual is intended to be operated by untrained persons, and so usually limited to a row of buttons that call up different scenes. You, the reader, probably encounter architectural lighting control most often in the completely automated, scheduled systems used by large buildings, and second in the wall panel scene control systems often used in conference rooms and lecture halls.

There exists, of course, an uncomfortable in between: the corporate or university auditorium, which has some elements of both. Theaters are also often found inside of buildings with architectural lighting controls, leading to a need to make the two interoperate. Because of both the similarities and the need for interoperability there are some common protocols between theatrical and architectural systems, but for the most part they are still fairly separate from each other.

So how does a light control system actually work?

The primitive element of a light control system was, for a long time, the dimmer. Early theaters used saltwater dimmers and later variac dimmers arranged into banks and operated by levers, which could be mechanically linked to each other to effect scene changes. Architectural systems are much the same, but instead of backstage or in a patch bay, the dimmers are located in a closet. Architectural systems have always been more automated and required remote control, which of course means that they came about later.

Let's start with a very basic but very common scheme for lighting control: the 0-10V dimmer. Widely used in architectural lighting for many decades, this is perhaps the simplest viable system. For each dimmer there is a knob which adjusts an output voltage between 0 and 10v, and this is routed by low voltage wiring to either a central dimmer or (more common in later systems) a distributed system of dimmable lighting ballasts incorporated into the fixtures. The main appeal of 0-10v analog dimming is its simplicity, but this simplicity betrays the basic complexity of dimming.

Some lights are very easy to dim, mostly incandescent bulbs which are capable of a very wide range of brightnesses corresponding more or less linearly to the power they consume (which can be moderated by the voltage applied or by other means). Arc and discharge lights introduce a complication; they produce no light at all until they reach a "striking power" at which point they can be dimmed back down to a lower power level. Incandescent light bulbs can actually behave the same way, although it tends to be less obvious. The issue is a bigger one in architectural lighting than in theatrical lighting [1], because architectural lighting of the early era of central control relied heavily on fluorescent fixtures. These have a particularly dramatic difference between striking power and minimum power, and in general are difficult to dim [2].

This has led to a few different variations on the 0-10v scheme, the most common of which is 1-10v fluorescent control. In this variant, 0v means "off" while 1v means "minimum brightness." This difference is purely semantic in the case of incandescent bulbs, but for fluorescent ballasts indicates whether or not the bulb should be struck. The clear differentiation between "off" and "very dim" was important for simpler, non-microcontroller ballasts, but then became less important over time as most fluorescent ballasts switched to computerization which could more intelligently make a threshold decision about whether or not the bulb should be struck near 0v.
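
A sketch of the threshold decision a computerized ballast ends up making, with made-up numbers (the 0.5v cutoff and the 10% minimum output here are purely illustrative; real ballasts vary):

    # Hypothetical 1-10v dimming logic for a fluorescent ballast.  The
    # thresholds and the minimum-output figure are illustrative only.
    def ballast_output(control_volts):
        """Map a 1-10v control voltage to a lamp power fraction."""
        if control_volts < 0.5:        # treat anything near 0v as "off"
            return 0.0                 # lamp not struck
        # Lamp is struck; 1v is minimum brightness, 10v is full output.
        level = max(1.0, min(10.0, control_volts))
        return 0.1 + 0.9 * (level - 1.0) / 9.0   # 10% floor up to 100%

    for v in (0.0, 0.4, 1.0, 5.5, 10.0):
        print(f"{v:4.1f} v -> {ballast_output(v):5.1%}")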

The 0-10v scheme is simple and easy to work with, so it was widely installed. It has a major downside, though: the need to run separate control wiring to every "zone" or set of dimmable lights. In typical architectural installations this is pretty manageable, because in the era of 0-10v analog dimming (as in the era before of direct dimming of the power supply wiring) it was typical to have perhaps six distinct zones. In theatrical lighting, where a modest configuration is more like 16 dimming channels and wiring is more often expected to be portable and reconfigurable, it was a much bigger nuisance. Fortunately, improving electronic technology coming mostly out of the telecom industry offered a promising innovation: multiplexing.

If you are not familiar with the term at its most general, multiplexing describes basically any method of encoding more than one logical channel over a single physical channel. On this blog I have spoken about various multiplexing methods since it has an extensive history in telecommunications, most obviously for the purpose of putting multiple telephone calls over one set of wires. If you, like me, have an academic education in computing you might remember the high level principles from a data communications or networking class. The most common forms of multiplexing are FDM and TDM, or frequency division muxing and time division muxing. While I'm omitting perhaps a bit of nuance, it is mostly safe to say that "muxing" is an abbreviation for "multiplexing" which is the kind of word that you quickly get tired of typing.

While there are forms of muxing other than FDM and TDM, if you understand FDM and TDM you can interpret most other methods that exist as being some sort of variation on one, the other, or both at the same time. FDM, frequency division, is best explained (at least I think) in the example of analog telephone muxing. Humans can hear roughly from 20Hz-20kHz, and speech occurs mostly at the bottom end of this range, from 80Hz-8kHz (these rough ranges tend to keep to multiples of ten like this because, well, it's convenient, and also tends to reflect reality well since humans interpret sound mostly on a logarithmic basis). A well-conditioned telephone pair can carry frequencies up to a couple hundred kHz, which means that when you're carrying a single voice conversation there's a lot of "wasted" headroom in the high frequencies, higher than audible to humans. You can take advantage of this by mixing a speech channel with a higher frequency "carrier", say 40kHz, and mixing the result with an unmodified voice channel. You now have two voice conversations on the same wire: one at 0-20kHz (often called "AF" or "audio frequency" since it's what we can directly hear) and another at 40-60kHz. Of course the higher frequency conversation needs to be shifted back down at the other end, but you can see the idea: we can take advantage of the wide bandwidth of the physical channel to stuff two different logical channels onto it at the same time. And this is, of course, fundamentally how radio and all other RF communications media work.
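
Here's a toy version of that frequency-shifting trick in Python. It's heavily idealized: real carrier telephony used single-sideband modulation, guard bands, and proper filtering, none of which appear here.

    import numpy as np

    # Toy FDM: shift one voice channel up around a 40 kHz carrier and add
    # it to a baseband channel on the same "wire".
    FS = 200_000                            # sample rate, Hz
    t = np.arange(FS) / FS                  # one second of signal

    voice_a = np.sin(2 * np.pi * 1000 * t)  # stand-ins for two conversations
    voice_b = np.sin(2 * np.pi * 1500 * t)

    carrier = np.cos(2 * np.pi * 40_000 * t)
    line_signal = voice_a + voice_b * carrier   # A at baseband, B around 40 kHz

    # At the far end, mixing with the same carrier shifts B back down to
    # baseband (plus higher-frequency images a low-pass filter would remove).
    recovered_b = line_signal * carrier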

TDM, time division, took longer to show up in the telecom industry because it is harder to do. This is actually a little counterintuitive to me because in many ways TDM is easier to understand than FDM, but FDM can be implemented with all-analog electronics fairly easily while TDM is hard to do without digital electronics and, ultimately, computers. The basic idea of TDM is that the logical channels "take turns." The medium is divided into "time slots" and each logical channel is assigned a time slot, it gets to "speak" only during that time slot. TDM is very widely used today because most types of communication media can move data faster than the "realtime" rate of that data. For example, human speech can be digitized and then transmitted in a shorter period of time than the speech originally took. This means that you can take multiple realtime conversations and pack them onto the same wire by buffering each one to temporary memory and then sending them much faster than they originally occurred during rotating timeslots. TDM is basically old-fashioned sharing and can be visualized (and sometimes implemented) as something like passing a talking stick between logical channels.
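
And a similarly toy sketch of TDM, where the channels just take turns in fixed slots. Real systems spend a lot of effort on framing so the receiver knows where slot 0 is; here I just assume the two ends are magically aligned.

    # Toy TDM: interleave samples from several channels into fixed time
    # slots, then split them back apart at the far end.
    def tdm_mux(channels):
        """Interleave equal-length channels sample by sample."""
        return [sample for frame in zip(*channels) for sample in frame]

    def tdm_demux(line, num_channels):
        """Undo the interleaving, given the channel count and alignment."""
        return [list(line[i::num_channels]) for i in range(num_channels)]

    calls = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
    line = tdm_mux(calls)                 # [1, 10, 100, 2, 20, 200, 3, 30, 300]
    print(tdm_demux(line, 3) == calls)    # True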

Why am I explaining the concepts of FDM and TDM in such depth here? Well, mostly because I am at heart a rambler and once I start on something I can't stop. But also because I think lighting control systems are an interesting opportunity to look at the practicalities of muxing in real systems that are expected to be low-cost, high-reliability, and operate over cabling that isn't too demanding to install.

And also because I think it will be helpful in explaining a historically important lighting control scheme: analog multiplexing, or AMX.

AMX192, the most significant form of AMX, was introduced in 1975 (or so, sources are a little vague on this) by Strand Lighting. Strand is a historically very important manufacturer of theatrical lighting, and later became part of Philips where it was influential on architectural lighting as well (along with the rest of Philips lighting, Strand is now part of Signify). In this way, one can argue that there is a direct through-line from Strand's AMX to today's Hue smart bulbs. AMX192 supports 192 channels on a single cable, and uses twisted-pair wiring with two pairs terminated in 4-pin XLR connectors. This will all sound very, very familiar to anyone familiar with theatrical lighting today even if they are too young to have ever dealt with AMX, but we'll get to that in a bit.

What makes AMX192 (and its broader generation of control protocols) very interesting to me is that it employs analog signaling and TDM. Fundamentally, AMX192 is the same as the 0-10v control scheme (although it actually employs 0-5v), but the analog control signal is sent alongside a clock signal, and on every clock pulse it steps to the value for the next channel. On the demultiplexing or demuxing side, receivers need to pick out the right channel by counting clock pulses and then "freeze" the analog value of the signal pair to hold it over while the control wiring cycles through the other channels.
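
In software terms, an AMX192-style receiver is little more than a counter and a sample-and-hold. Here's an idealized sketch of that logic for a single dimmer channel; the real thing was discrete hardware, and I'm hand-waving the detection of the sync condition that marks the start of a frame.

    # Idealized AMX192-style receiver for one dimmer channel: count clock
    # pulses from the start of each frame, and sample-and-hold the analog
    # level when our channel's slot comes around.
    class AnalogDemux:
        def __init__(self, my_channel, num_channels=192):
            self.my_channel = my_channel
            self.num_channels = num_channels
            self.counter = 0
            self.held_level = 0.0          # the "sample and hold" output

        def on_frame_sync(self):
            self.counter = 0               # back to channel 1

        def on_clock_pulse(self, analog_level):
            self.counter += 1
            if self.counter == self.my_channel:
                self.held_level = analog_level   # freeze our channel's value

    dimmer_7 = AnalogDemux(my_channel=7)
    dimmer_7.on_frame_sync()
    for ch in range(1, 193):
        dimmer_7.on_clock_pulse(analog_level=ch / 100)  # fake ramp of levels
    print(dimmer_7.held_level)   # 0.07 -- channel 7's level, held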

One of the sort of neat things about AMX192 is that you can hook up your control wiring to an oscilloscope and, once you've got the triggering set up right, see a very neat visualization of all 192 control channels across your scope going up and down like the faders on your control board. It's a neat and simple system, but was still fairly cutting edge in the '70s due to the complexity of the electronics used to track the clock pulses.

We'll take a moment here too to discuss the physical wiring topology of AMX192: as you might guess, AMX192 is set up as a "bus" system with each dimmer connected to the same two twisted pairs. In the '70s, dimmers were still fairly large devices and so theaters almost exclusively used traditional dimmer rack systems, with all dimmers installed in one central location. So while there was a multidrop bus wiring arrangement, it was mostly contained to the rack backplanes and not really something that users interacted with.

This idea of multi-drop bus wiring, though, might sound familiar if you have read my other posts. It's largely the same electrical scheme as used by RS-485, a pretty ubiquitous standard for low-speed serial buses. AMX192 is analog, but could RS-485 be applied to use digital signaling on a similar wiring topology?

This is not a hypothetical question; the answer is obviously yes, and about ten years after AMX192, Strand introduced a new digital protocol called DMX512. This stands for Digital Multiplexing, 512 channels, and it employs the RS-485 wiring scheme of one twisted pair in a shielded cable terminated with 5-pin XLR connectors. Now, on the 5-pin XLR connector we have two data pins and one shield/common pin. Of course there are two more pins, and this hints at the curiously complicated landscape of DMX512 cabling.

The DMX512 specification requires that 5-pin cables include two twisted pairs, much like AMX192. You have no doubt determined by now that DMX512 is directly based on AMX192 and carries over the same two-twisted-pair cabling, but with the addition of an extra pin for a grounded shield/signal reference common as required by RS-485, which is the physical layer for DMX512. RS-485 uses embedded clocking though, so it does not require a dedicated pair for clock like AMX192 did. This creates the curious situation that a whole twisted pair is required by the spec but has no specified use. Various off-label applications of the second pair exist, often to carry a second "universe" of an additional 512 channels, but by far the most common alternative use of the second pair is to omit it entirely... resulting in 3 pins, and of course this is a rather attractive option since the 3-pin XLR connector is widely used in live production for balanced audio (e.g. from microphones).

You can run DMX512 over microphone cables, in fact, and it will largely work. A lot of cheaper DMX512 equipment comes fitted with 3-pin XLR connectors for this purpose. The problem is that microphone cables don't actually meet the electrical specifications for DMX512/RS-485 (particularly in that they are not twisted), but on the other hand RS-485 is an intentionally very robust physical protocol and so it tends to work fine in a variety of improper environments. So perhaps a short way to put it is that DMX512 over 3-pin XLR is probably okay for shorter ranges and if you apply some moral flexibility to standards.

Let's talk a bit about the logical protocol employed by DMX512, because it's interesting. DMX512 is a continuous broadcast protocol. That is, despite being digital and packetized it operates exactly like AMX192. The lighting controller continuously transmits the values of every channel in a loop. The only real concession to the power of digital networks in the basic DMX512 protocol is variable slot count. That is, not all 512 channels have to be transmitted if they aren't all in use. The controller can send an arbitrary number of channels up to 512. Extensions to the DMX protocol employ a flag byte at the beginning of the frame to support types of messages other than the values for sequential channels starting at 1, but these extensions aren't as widely used and tend to be a little more manufacturer-specific.
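
To make that concrete, here's roughly what the data portion of one DMX512 packet looks like. On the wire each packet is preceded by a break and a "mark after break" and sent as 8N2 serial at 250 kbaud, which needs interface hardware capable of generating the break, so this sketch only builds the byte sequence.

    # Sketch of one DMX512 packet's data slots: a start code of 0x00
    # followed by up to 512 channel values.  The controller just sends
    # packets like this over and over, in a loop.
    def dmx_packet(levels):
        """Start code 0x00 followed by up to 512 channel values (0-255)."""
        if len(levels) > 512:
            raise ValueError("a DMX512 universe carries at most 512 slots")
        return bytes([0x00]) + bytes(levels)

    # Channels 1-8: a hypothetical moving-head fixture patched at address 1.
    levels = [255, 128, 0, 0, 64, 200, 10, 0]
    packet = dmx_packet(levels)
    print(packet.hex())   # 00ff80000040c80a00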

DMX512 has no error correction or even detection; instead it relies on the fact that all values are repeatedly transmitted so any transient bit error should only be in effect for a short period of time. Of course running DMX512 over non-twisted 3-pin XLR cable will increase the number of such transient errors, and in the modern world of more complex fixtures these errors can become much more noticeable as fixtures "stutter" in movement.

Let's talk a bit about the fixtures. AMX192 was designed as a solution for the controller to send channel values to the dimmer rack. DMX512 was designed for the same application. The same digital technology that enabled DMX512, though, has enabled a number of innovations in theatrical lighting that could all be summed up as distributed, rather than centralized, dimming. Instead of having a dimmer rack backstage or in a side room, where the dimmers are patched to line-level electrical wiring to fixtures, compact digital dimmers (called dimmer packs) can be placed just about anywhere. DMX512 cabling is then "daisy-chained" in the simplest configurations or active repeaters are used to distribute the DMX512 frames onto multiple wiring runs.

The next logical step from the dimmer pack is building dimming directly into fixtures, and far more than that has happened. A modern "moving head" fixture, even a relatively low-end one, can have two axes of movement (altitude-azimuth polar coordinates), four channels of dimming (red, green, blue, white), a multi-position filter or gobo wheel, and even one or two effect drive motors. Higher-end fixtures can have more features like motorized zoom and focus, additional filter wheels and motorized effects, cool white/warm white and UV color channels, etc. The point is that one physical fixture can require direct connection to the DMX bus on which it consumes 8 or more channels. That 512 channel limit can sneak up on you real fast, leading to "multi-universe" configurations where multiple separate DMX512 networks are used to increase channel count.
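
A quick illustration of how fast that happens, using a made-up but representative 12-channel moving-head profile:

    # How quickly 512 channels disappear.  The 12-channel profile is
    # hypothetical: pan, tilt, R, G, B, W, dimmer, strobe, gobo wheel,
    # color wheel, zoom, focus.
    CHANNELS_PER_FIXTURE = 12
    UNIVERSE_SIZE = 512

    fixtures = 48
    needed = fixtures * CHANNELS_PER_FIXTURE
    universes = -(-needed // UNIVERSE_SIZE)        # ceiling division
    print(f"{fixtures} fixtures need {needed} channels -> {universes} universes")
    # 48 fixtures need 576 channels -> 2 universes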

DMX, then, while cutting-edge in 1986, is a bit lacking today. Strand basically took AMX192 and shoved it into RS-485 to develop DMX512. Could you take DMX512 and shove it into IP? Consider that a cliffhanger! There's a lot more to this topic, particularly because I haven't even started on digital architectural control. While DMX512 can be used for architectural lighting control it's not really all that common and there's a universe of interesting protocols on "the other side of the fence."

[1] Nonetheless, the effect can be noticeable in theatrical lighting even at its small magnitude with halogen bulbs. As a result many theatrical light controllers have a "bulb warmer" feature where they keep all fixtures at a very low power level instead of turning them off. You can imagine that when mixing incandescent and LED fixtures with much more noticeable minimum brightness, making sure this is disabled for the LED fixtures can become a headache.

[2] Some may be familiar with the issue of dimmable vs. non-dimmable fluorescent fixtures, and the similar issue that exists with LEDs to a lesser extent. The difference here is actually less in the light than in the power supply, which for fluorescent fixtures and sometimes LED fixtures is usually called the ballast (whether or not it is actually a ballast in the electrical engineering sense, which newer ballasts are usually not). In LED fixtures it is becoming more common to refer to it as a driver, since the prototypical form of an LED light power supply is a constant-current driver... although once again, many "LED drivers" are actually more complex devices than simple CC drivers, and the term should be viewed as an imprecise one. Dimmable fluorescent and LED drivers mostly use PWM, meaning that they rapidly switch the output on and off to achieve a desired duty cycle. This is slightly more complicated for fluorescent bulbs due to the need to get them warm enough to strike before they emit light, which means that modern dimmable fluorescent ballasts usually include "programmed start." This basically means that they're running software that detects the state of the lamp based on current consumption and provides striking power if necessary. This is all sort of complicated, which is why the dimmable vs. non-dimmable issue is a big one for CFLs and cheaper LED bulbs: in these types of light bulbs the power supply is a large portion of the total cost and simpler non-dimmable ballasts and drivers keep the product price down. It's a much smaller issue in architectural lighting where the type of ballast is specified up front and the ballast is a separate component from the bulb, meaning that its price is a bit less of an issue.

--------------------------------------------------------------------------------

>>> 2022-11-16 local newspaper

And now for something completely different.

Today in the mail I received the latest issue of the New Mexico Sun, a lovely local newspaper that I have never heard of, nor received, before. An oddity of the addressing strongly suggests that it was sent based on the same address list used for a lot of the political advertising I've received, and the contents are... well, we'll go over that in detail in a moment, but I immediately got the impression that this "newspaper" was actually a piece of political advertising. The odd thing is when it arrived: at about noon on election day. In some ways this seems like a smart strategy because it will be so salient in the mind of its recipients when they go to the polls, except for the problem that I would imagine a lot of people wouldn't receive their mail until after they had voted (doubly so since early voting and mail-in voting are both pretty popular here). In any case, the election day timing seemed either intentional or like it had just arrived one or two days late of the target date.

I have a vague recollection that there is some sort of Postal Service regulation requiring that periodicals distributed by mail provide some standard information about the publisher, editor, etc., so I flipped through this paper in search of a masthead. There's none to be found. In fact, the only information the paper gives about its origin is a domain name, NewMexicoSun.com. A quick search of postal regulations suggests that my memory is not entirely incorrect but also not very applicable here: Domestic mail manual, section 207, requires that an "identification statement" appear somewhere in the first five pages or on the editorial page, and that the identification statement include the address of the publisher.

But... section 207 gives the rules for periodical mail, which is a specific postage rate for items like magazines and newspapers. The address block on this item includes "ECRWSH" in the Optional Endorsement Line or OEL, the first line of the address on commercial mail that often has a lot of asterisks. The OEL serves mostly to speed up handling of bulk mail by providing some sorting information in a standard numeric format, and for many bulk mail services contains some type of abbreviation that identifies the type of bulk postage paid. Domestic mail manual section 240 tells us that ECRWSH indicates USPS Marketing Mail, high density rate. Marketing mail meaning that this was mailed at reduced rates for advertising, and high density that an additional discount was provided in exchange for the mail piece being sent to at least 125 addresses on each route (this high count per route simplifies sorting).

I still wonder if sending something that so much resembles a newspaper under Marketing Mail might run afoul of some postal regulations, but a qualified opinion on that would probably require a postal lawyer, which I imagine as being somewhat like a maritime lawyer as depicted in Arrested Development.

Let's consider the contents of the paper. The front page features a vertically stretched portrait of the incumbent governor above the fold with the headline "Career criminals found new victims after early release," and page 6 consists only of mugshots of individuals released from New Mexico prisons as a result of COVID protective orders. This forms an odd contrast with page 8, which is laid out identically but instead features mugshots of local high school athletes who have been recruited to college teams. The descriptions on this page are very oddly formatted and show a lack of local knowledge, e.g. the caption "Joah Flores played high school football at New Mexico." New Mexico what? where?

I strongly suspect that this page was automatically generated using data from a sports scouting service, and probably minimally reviewed by a human if at all. Like the Governor on the front page, many of the portraits have had their aspect ratios awkwardly changed. This repeated problem is, I suspect, indicative that this paper was mostly generated by pasting into an Adobe InDesign or QuarkXPress template. The newspaper also includes several bits of awkward blank space, something that newspapers habitually avoid (column inches are money) and that also points to this being a fixed layout with locally relevant text pasted into it.

In fact the only material in the paper that doesn't have the whiff of a political hit job (i.e. consists primarily of criticism of an incumbent Democratic elected official) is the aforementioned "Athletes in Action" section and an events directory, which I suspect is also software-generated due to some telltales like blank template fields and a very odd selection of events to cover (the headline item is the band Agent Orange playing at Sister Bar, but no other items from Sister Bar's busy music schedule make it to this page).

Most articles have the byline "George Willis," although one opinion piece by Pete Dinelli stands out. Dinelli is a former city councilor and writes a somewhat prominent blog on local politics. He is also, as far as I can tell, the only byline that is definitely a real person. Apparent lead reporter "George Willis" has a fairly generic name but seems most likely to be a freelance sports journalist.

The Dinelli piece is interesting. It closely parallels, but does not match, an article on Dinelli's blog. I reached out to Dinelli to ask how he came to contribute to the New Mexico Sun, but I didn't hear anything back.

Let's turn to the website, newmexicosun.com. Its contents are very similar to the printed paper, although it looks appreciably more polished and has a lot more general news content. A somewhat buried "About" page indicates that it is published by "PIPELINE Advisors LLC" and is part of their "family of Metro News Sites." Bradley Cameron is named as CEO and managing editor. No such entity, or foreign registration, exists in New Mexico, but it does exist in Texas, where the secretary of state indeed lists Bradley Cameron. It's a bit unusual to be the CEO of an LLC; Texas records actually give his title as managing member, along with Brian Timpone. The address given is a single family home in downtown Austin, which I always find a bit odd given the ready availability of virtual offices.

Some readers probably know exactly where this is going by now, and the names Cameron and Timpone might have been just a bit familiar to them. Cameron and Timpone run Metric Media, Locality Labs (formerly Local Labs), and the Local Government Information Service (LGIS), several organizations accused in the press of operating large numbers of websites that appear to be local news sources but actually operate as advertising for conservative political interests. While the line between news, opinion, and advertising can be somewhat thin in the world of politics, the most damning aspect of this operation is its volume. It's no coincidence that this newspaper seems hastily prepared, and probably mostly by the use of freelancers and automation. Cameron and Timpone operate over a thousand such websites according to an article in CJR, each of which is superficially a local operation but is in fact run out of Austin. The Guardian has reported on this group as well.

Indeed, printed versions of these papers are apparently not unique, as an article details that some printed copies were produced at the printing plant of the Des Moines Register. While it's common for newspapers to run commercial print jobs for smaller publications and marketing, this situation certainly has a bit of a smell to it.

So none of this is really new, and the New Mexico Sun as a website dates back to 2020 at least. What has changed is its unprompted appearance in my mailbox. Whether this is a new strategy on the part of Metric Media/Locality Labs/Pipeline Advisors, or just a decision to prioritize New Mexico this year due to the apparently close governor's race, is hard to say. It sure is a weird piece of mail, though.

But I don't write about journalism, do I? Let's take a look at the CYBER INTELLIGENCE.

newmexicosun.com is, unsurprisingly, registered with "domain privacy" although I find it somewhat interesting that it's registered through the fairly small registrar Epik. Like most of the internet these days, the domain name points at AWS and MX records indicate the use of G-Suite. Passive DNS information for AWS IP addresses can be questionable since they may change hands relatively frequently, but SecurityTrails' free lookup shows about a half dozen "local news" websites all being served up by the same IP. As with the shell companies behind these websites, they seem to organize their infrastructure regionally: the New Mexico Sun runs alongside the Austin Journal, the Houston Daily, and the Midland Times. One standout is the Suburban Marquee, apparently of Chicago, but an odder one is the Globe Banner.

The only Globe I know of is Globe, Arizona, which meets the regional theme but is a town of under 10,000. Perhaps this explains why I am having such a hard time determining just where the Globe Banner is supposed to be local to: there's absolutely nothing on that website that isn't low-effort international business news right off of PR Newswire. Whatever's going on in Globe, wherever, the front-page headliner is "HappyCo confirmed plans to acquire Toronto-based rental lifecycle management platform Yuhu, according to a press release." How regionally appropriate! The #1 most read article, according to the sidebar, is "Okonjo-Iweala: ‘We cannot afford to leave trade and WTO behind’ when tackling climate change."

So I think the Globe Banner might actually just be a mistake, one that might live on for years to come. It's sort of the liminal space of newspapers: endless empty hallways of acquisition announcements and industry association lobbying. The headline "Rubio and Gallagher call for national TikTok ban" seemed like it might be a politically-motivated insertion but is actually just "syndicated" straight out of a press release from those congresspeople. The Globe Banner's ambitious Europe section is here to tell us that a German company has introduced a new line of two-post car lifts. "According to the automotive equipment company, the products will help customers reduce installation times by up to 20%, feature three adjustable width positions allowing flexible installation of lifts and offer an efficient and safe obstacle-free working area."

Most of these websites are identical, although the Suburban Marquee also stands out with a different theme. They have something of the feeling of WordPress templates, but I think these websites are actually running on homegrown software. It seems a little sloppy: the single minified Javascript file used for most functionality appears to be written by StackOverflow. The API endpoint used by the newsletter sign-up form returns {"message":"Page Not Found"} to a GET request, and the same response to a POST without the correct form fields. It's got that fun CSS grid framework thing going on where the CSS class list on every div is just inline styling in a less readable syntax and not at all semantic.
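For the record, the "probing" involved here is about as unsophisticated as it gets. Something like the following reproduces both observations; the endpoint path below is a placeholder rather than the site's real URL.

    # Two requests against the newsletter sign-up endpoint. The URL is a
    # placeholder, not the real path.
    import requests

    ENDPOINT = "https://example.com/api/newsletter-signup"  # hypothetical path

    r = requests.get(ENDPOINT, timeout=10)
    print(r.status_code, r.text)      # the generic {"message":"Page Not Found"}

    # A POST without the form fields the frontend sends gets the same generic
    # response, per the behavior described above.
    r = requests.post(ENDPOINT, data={}, timeout=10)
    print(r.status_code, r.text)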

One tell in the markup of an article indicates that "Froala WYSIWYG Editor" was used to create its summary text. The developer doesn't seem to have kept the different properties very well separated: the New Mexico Sun loads media from Cloudfront with "houstondaily" in the path (I briefly got excited and thought this might be an S3 bucket name, but it seems Cloudfront wisely leaves those out).

Oh, I found an S3 bucket: jnswire, used for a few background images, seemingly only for articles that are genuinely about New Mexico - and thus perhaps those actually composed in this software and not syndicated automatically. Google has indexed a healthy range of court filings in that bucket, but they're of little use. Looking for this term more broadly, a research service tells me that jnswire.com was registered via GoDaddy by Brian Timpone, one of the founders of this local news collective, in 2012. Indeed, this use of the JNS acronym refers to Journatic News Wire, a connection made by a Twitter profile that no longer exists.

And the New York Times has more on that: Journatic was apparently a service Brian Timpone developed that generated articles automatically based on data feeds. It would seem that this software probably backs most of these "local news websites," which the NYT suggests as well. The software is, by most accounts, not very good. The Chicago Tribune was a customer and backed off of the service after it found it was distributing plagiarized articles.

How does an automated system produce a plagiarized article? Well, one way is by lying about the "automated" part. While I don't know that there is completely solid evidence, several sources online (including the NYT) make the accusation that Journatic was mechanical turking the problem, paying writers in the Philippines on a gig-work basis to write articles that were then distributed under either a wire byline or a fake one. Some suspect that the local news network that rose from Journatic's ashes does the same, but honestly from looking at the content I'm skeptical... it's not even really good enough to have been written for under a cent a word. Most of the articles stray so little from the press release that they were less written than somewhat selectively copied and pasted.

So, where does this leave us? About ten years ago, a journalist started a service that would automate news by (supposedly) generating articles via software. Today, it seems that they've done just that, but it's not really news anyone wants to read... it's window dressing, sort of the journalistic equivalent of those five books at Ikea, providing a generally newspaper-like environment for the "payload" articles of the Locality/Metric/LGIS network. The wonders of the internet.

--------------------------------------------------------------------------------

>>> 2022-10-22 wireless burglary

Long time no see! The great thing about Computers Are Bad is that you get exactly what you pay for, and there's a reason I'm not on Substack. Rest assured I am still alive, just very occupied with clients' AWS problems and the pleasantly changing weather here in New Mexico.

Speaking of pleasantly changing weather, it's the time of year when returning diurnal temperature swings start causing the shock sensors to fall off my windows. I could provide a lengthy discourse on which adhesive products hold up to this abuse better (transparent 3M VHB seems like the best so far), but instead it's a good opportunity to return to a topic that I introduced right about a year ago: burglar alarms.

While there is a lot to talk about with commercial burglar alarms and their history, I want to start out our more in-depth discussion with something practical: security analysis of the modern home alarm market, which consists mostly of "consumerized" systems like Amazon Ring (the burglar alarm company they acquired, not the doorbell), SimpliSafe, etc. You've almost certainly seen or heard advertising for these products, and they have the major advantage of being surprisingly inexpensive. A complete Amazon Ring burglar alarm installation can sometimes cost less than just buying and mounting the cabinet for a conventional alarm system.

Just to be clear on the differentiation I'm making here, a "conventional" alarm would be one made by a long-established company like DSC, Interlogix, or Honeywell. It can be a bit hard to trace due to a lot of M&A history, but these companies have been manufacturing burglar alarms for 50+ years. Most ADT alarms, for example, are rebranded Honeywell (often from the era when it was Ademco, later acquired by Honeywell). These systems are mostly the "traditional" architecture of a control cabinet and wired sensors and panels, although all of these manufacturers offer "hybrid" solutions with various types of wireless sensors. Perhaps most importantly, these systems are mostly the same as commercial alarm systems. Sometimes the same model is certified and sold for both home and commercial use, sometimes the home version is feature-limited, but they use the same basic concepts.

When I talk about "consumerized" systems, I am referring to Simplisafe, Ring, Abode, etc. This whole new generation of alarm systems is typically cheap, completely wireless, and sometimes even "controller-free." They're catching on because of their low cost but perhaps more so because of the different sales model. Conventional alarm systems were distributed entirely through a dealer network. A home alarm system was typically installed, monitored, and maintained by a local dealer, who might even set a programming password... ostensibly to protect the consumer from compromising their own alarm, but often in practice to complicate "adoption" of the alarm by a different dealer. Dealers usually made most of their money off of monitoring contracts and viewed the actual alarm as a loss leader, which is why you might remember television advertising for "free alarms" from ADT. ADT's dealer network will indeed install a very basic alarm system for free, but they will lock you into multiple years of monitoring at a relatively high rate. Classic razor-and-blades behavior.

The more consumerized systems, though, are today usually sold direct to consumer with low-cost monitoring from their vendor (or in practice, a UL certified monitoring center contracted by their vendor). Monitoring is often contract-free and they usually have a pretty rich "self-monitoring" feature set based around a mobile app [1]. In general, they feel a lot more like a consumer tech product and a lot less like a home utility advertised mostly on the side of aged Econoline vans.

Like most consumer tech products, they are also heavily cost-engineered, emphasize the appearance of features over good design, and are sometimes downright poorly thought out. Obviously I am rather critical of this generation of products, but I should make it clear that the news is not all bad: they are very cheap. There is an important trade-off here between cost and performance, and a low-cost option isn't necessarily bad. To twist a common expression, the best burglar alarm is the one you have, and the high installation price of conventional systems has long been a deterrent.

Let's take a look at some of the design decisions of these consumerized alarm systems and how they relate to security properties.

Controller vs. Controllerless

The most prominent brands of consumerized alarm systems make use of a controller which is separate from the panel(s). However, there are several vendors that sell "controllerless" systems where the "main panel" also contains the controller. To be clear, when I say "panel" I am talking about the thing that you usually mount next to an exterior door and use to arm and disarm. The user interface of the alarm. The controller is traditionally a metal cabinet with some PCBs in it but for these consumerized systems often looks more like an outdated Apple AirPort, and goes... somewhere. Traditionally a bedroom closet, but you could put it just about anywhere, preferably out of sight.

A different but related issue is whether the siren is built into the controller or is a separate device. UL certification requires a siren and so there will always be one somewhere, but many systems reduce cost and installation complexity by building it into the controller... something that is exceptionally rare with conventional systems.

The main consideration when comparing a system with a controller to one without a controller is "smash-proofing." When a burglar violates the alarm by entering a secured house, they are typically free to roam for the duration of the entry delay period, which is usually 30-60 seconds [2]. Conventional alarm systems intentionally place the controller in an out-of-the-way location and preferably one protected by immediate zones (or at the minimum a key lock and tamper switch on the cabinet door), to avoid a burglar tampering with the controller during the entry delay period. The reason for this may be obvious: if the burglar can destroy the controller before the entry delay period ends, the alarm may never be reported. Of course if the controller is built into the panel next to the door this is very easy to do. It's also easy to do if the siren is built into the controller, since it provides a convenient homing signal [3].

Of course there is a technical method to avoid this problem. If the alarm reports to the monitoring service that the entry delay period has started, the monitoring station can independently time the entry delay. If an alarm disarmed message is not received within the entry delay period, the monitoring station can assume that the controller was destroyed or communications prevented, and treat the situation as an alarm. Different vendors have different names for this concept, which range from "smash-proof monitoring" to "asset protection logic" (???). Older conventional home alarms usually didn't do this, because they had to intercept the phone line, dial a call, report the message, and release the phone line for every monitoring event. This took long enough that it was to be avoided, and so it was common to not even report disarm events. Newer conventional and consumerized alarms report mostly by IP, and packets are cheap, so they're more likely to report every imaginable event.
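For the curious, the station-side logic is simple enough to sketch out. This is just an illustration of the concept, assuming the panel reports an "entry delay started" event the moment a delayed zone opens; it doesn't reflect any particular vendor's implementation, and the timing constants are made up.

    # A toy sketch of "smash-proof" logic on the monitoring station side.
    # Constants are assumptions for illustration, not real vendor defaults.
    import threading

    ENTRY_DELAY_SECONDS = 45    # assumed panel entry delay
    GRACE_SECONDS = 15          # assumed allowance for reporting latency

    class SmashProofMonitor:
        def __init__(self, dispatch):
            self.dispatch = dispatch    # called when we decide to treat it as an alarm
            self.pending = {}           # account id -> running timer

        def entry_delay_started(self, account_id):
            # Start an independent timer as soon as the panel reports the delay began.
            t = threading.Timer(ENTRY_DELAY_SECONDS + GRACE_SECONDS,
                                self._timed_out, args=(account_id,))
            self.pending[account_id] = t
            t.start()

        def disarmed(self, account_id):
            # A disarm (or a normal alarm report) within the window cancels the timer.
            t = self.pending.pop(account_id, None)
            if t:
                t.cancel()

        def _timed_out(self, account_id):
            # No disarm received: assume the controller was smashed or comms were cut.
            self.pending.pop(account_id, None)
            self.dispatch(account_id, "entry delay expired without disarm")

The entire trick is that the countdown happens somewhere the burglar can't reach, which is why it mitigates, but doesn't fully replace, hiding the controller.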

One of the pain points here, though, is lack of clear communication. It can be hard to tell whether or not a given alarm system is smash-proof based on the marketing. As a matter of principle, I tend to mistrust controllerless designs since they are missing the first ring of the defense-in-depth tamper protection strategy (making it difficult to locate the controller). That said, a well-designed smash-proof monitoring strategy can mitigate this issue. It ultimately comes down to how much you trust the vendor's monitoring implementation.

Reporting paths

Another consideration in the design of alarm systems is the reporting path. If a burglar can prevent the alarm from reporting in, they can roam the house undetected until a neighbor or passerby hears the siren and calls the police. In practice people mostly ignore sirens, so this could take a very long time. It's very important that the reporting mechanism of the alarm be reliable.

Conventional alarm systems perform reporting using a "communicator," typically a module installed in the cabinet. Communicators can be swapped out and there are many options available for most alarms, giving you a good degree of flexibility. Consumerized systems typically have one communication strategy built into the controller, but often enough it's the good one anyway.

Older alarm systems reported by telephone, leading to the trope (and reasonably common practice) of burglars cutting the phone line before entering. Modern alarm systems are much more likely to report over the internet. The great thing is that this makes "dual-path" reporting much easier: most consumerized alarm systems have either a standard or optional cellular data modem that allows them to report either via your home internet service or via the cellular network.

Given how low-cost it has become, dual-path reporting ought to be the minimum standard today. Fortunately basically all alarm systems offer it, conventional and consumerized. Consumerized systems typically ship with a fully integrated cellular feature with service sold as part of a monitoring plan. Modern communicators for conventional alarms are often LTE Cat-1 based but, unfortunately, it's not unusual for them to be locked down to a specific message broker.
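Conceptually, dual-path reporting is nothing fancier than trying one path and falling back to the other; the real engineering is in supervising both paths so you know the backup actually works when called upon. Here's a minimal sketch of just the failover part, with the transport objects left as stand-ins for whatever the communicator actually uses.

    # A minimal sketch of dual-path failover. "broadband" and "cellular" are
    # stand-in transport objects with a send() method and a name; real
    # communicators also supervise each path continuously, which isn't shown.
    def report_event(event, broadband, cellular, attempts=3):
        for path in (broadband, cellular):
            for _ in range(attempts):
                try:
                    path.send(event)
                    return path.name    # delivered; note which path worked
                except ConnectionError:
                    continue            # retry, then fall back to the next path
        raise RuntimeError("event could not be delivered on any path")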

One pro of conventional communicators, though, is that all the major models are available in both AT&T and Verizon variants. This might be an important consideration for homes with poor cellular service from one or the other network. With consumerized systems it can be hard to know what network they use, and I haven't so far seen one with a solid site survey feature to establish that the cellular connection will be reliable. This is the kind of thing that feels like an artifact of the product having been designed and tested only in an urban area, where it's safer to assume that any given cellular provider will be workable from inside a closet.

Wireless Everything

And here we reach the elephant in the room: wireless sensors. The largest cost in installing an alarm system is usually the labor to run wiring, and it often ends up being installed in visible ways to hold down this cost. Homeowners hate visible wiring and they hate the high labor cost of installing wiring in walls, which can be hard to do without leaving visible artifacts anyway.

A variety of different protocols are in use for wireless alarm systems. Conventional alarm systems introduced "433MHz" sensors decades ago, and these are still in fairly common use. These sensors use a very simple PSK modulation to send typically around a dozen bits, including an address and some payload. There is no encryption or authentication. On the upside, the lack of any sort of cryptography makes it very easy to implement receivers for these sensors, and most conventional alarm radio modules can support 433MHz sensors from any manufacturer. This is viewed as an enduring feature of 433MHz sensors, since alarm dealers often perceive proprietary encrypted schemes as being mostly motivated by vendor lock-in.
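To make it concrete why the lack of cryptography matters, here's what one of these frames more or less amounts to. The bit layout below is invented for illustration (real products differ in the details), but the security property is the same: the frame is a pure function of static state, so anything that can receive one transmission can replay or forge one.

    # An invented frame layout in the spirit of simple 433MHz alarm sensors:
    # a fixed address plus a few status bits, with no nonce, no MAC, and no
    # encryption. Real layouts differ; the security property doesn't.

    def encode_frame(address, tamper, opened, low_battery):
        """Pack a 20-bit address and a few status bits into a 24-bit frame."""
        status = (int(tamper) << 2) | (int(opened) << 1) | int(low_battery)
        return (address << 4) | status

    def decode_frame(frame):
        return {
            "address": frame >> 4,
            "tamper": bool(frame & 0b100),
            "opened": bool(frame & 0b010),
            "low_battery": bool(frame & 0b001),
        }

    # Because nothing in the frame changes between transmissions, a captured
    # "door closed" frame replays forever, and a forged frame for the same
    # sensor is a single bit flip away.
    captured = encode_frame(address=0x5A5A5, tamper=False, opened=False, low_battery=False)
    forged = captured | 0b010
    print(hex(captured), decode_frame(forged))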

So, about those proprietary protocols. The major alarm vendors have introduced various newer wireless protocols that are both more sophisticated (in terms of payload size, architecture, etc) and more secure (through use of cryptography). An example would be DSC's PowerG, which uses AES encryption, spread spectrum, and dynamic transmit power selection to deliver more reliable performance and resist jamming and replay attacks. PowerG is a far better design than traditional 433MHz, but of course you will have to buy all of your sensors from DSC... and they aren't cheap.

Consumerized alarms are more likely to use an industry standard protocol, and some conventional alarms do as well, at least as a secondary feature aimed at home automation. Z-Wave is probably the most popular (Amazon Ring, for example), although Zigbee is also in play and we can probably expect 6LoWPAN on some upcoming product. While this might seem to eliminate lock-in, well, it's typical for consumerized alarms to have a whitelist of approved sensors. Some of this is of course profit maximization, but there is also a real contradiction between open ecosystems and UL certification (which is typically required by insurers in order to offer a discount).

Use of open standards is not universal in these newer alarms. Abode, for example, uses a proprietary protocol in the 433MHz band for alarm sensors (although the controller also has Z-Wave connectivity for automation) [4]. All in all the whole thing is kind of a mess, and it's not necessarily easy to determine what radio protocol a given system uses. This is especially true for conventional alarms where the radio interfaces are often multi-protocol and there may be a mix of different devices in use, either to reduce cost or due to expansion over time.

All of this talk of radio raises a very obvious question: are these alarm systems adequately resistant to malicious interference?

Well, it's sort of hard to say. "Jamming" of WiFi-connected surveillance cameras has been observed in home robberies, so at least the more sophisticated thieves are clearly aware of the possibility. That said, I put "jamming" in scare quotes because as best I can tell the attack in question is actually deauth spamming. This is not proper radio jamming but rather exploits a weakness in the availability design of the WiFi protocol; as a result it's significantly more practical, and devices to perform the attack can be purchased online.

Actual jamming tends to be more difficult because it requires higher transmit power than legally manufactured (and thus readily available) radio ICs are capable of, and spread-spectrum or frequency-hopping designs are inherently resistant to interference. While there are academic papers describing practical jammers for protocols like Z-Wave and Zigbee, I have not been able to find any evidence that these devices are in use by burglars, or that they are even obtainable without the electronics and software engineering knowledge to build one from the research reports. That said, with the magic of international ecommerce there can sometimes be an extremely rapid tipping point from "attack not practical" to "device available on AliExpress for $20" [5], so it's important to stay abreast of the developments here.

It should be said, too, that "classic" 433 MHz devices are so prone to jamming that it's not unusual for things like garage door openers to cause intermittent supervision troubles. Unfortunately these types of sensors really shouldn't be used, which is part of why major manufacturers don't really want to sell them. Plenty of alarm installers still go out of their way for the lower cost of these obsolete sensors, though, and, like HID Classic, far too many vulnerable examples can be found in the wild.

Manufacturers of wireless alarm systems sometimes contend that jamming is a non-issue because sensors are supervised, by sending regular pings to each sensor and confirming a response. Ignoring the obvious issue that most of these alarm systems seem to treat an unreachable sensor as a "trouble" rather than an alarm even in the armed state, supervision of wireless devices is truthfully only really designed to handle low batteries or hardware failure, not tampering. Consumerized alarm systems usually don't allow configuration of the supervision interval and don't even tell you what the supervision interval is. Conventional alarm systems often have a configurable supervision interval but it's usually still quite long as a minimum. One hour, four hours, and even 24 hours are all fairly common supervision intervals for RF sensors. In practice, a burglar would be able to jam radio contact with sensors for quite a while before even a trouble was raised.
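To put that in concrete terms, here's a toy model of how supervision typically behaves, under the assumption of a fixed check-in interval and a "trouble" (not alarm) response to a missed check-in. The interval is a typical value, not taken from any specific product.

    # Toy model of wireless sensor supervision: the controller just tracks the
    # last time it heard from each sensor and flags any that have gone quiet
    # for longer than the supervision interval.
    import time

    SUPERVISION_INTERVAL = 4 * 60 * 60   # four hours, in seconds; typical, not any product's default

    class SupervisedSensor:
        def __init__(self, name):
            self.name = name
            self.last_heard = time.monotonic()

        def heard_from(self):
            self.last_heard = time.monotonic()

    def supervision_check(sensors):
        """Run periodically by the controller; returns names of sensors in 'trouble'."""
        now = time.monotonic()
        return [s.name for s in sensors
                if now - s.last_heard > SUPERVISION_INTERVAL]

    # A jammer switched on right after a sensor's last check-in gets nearly a
    # full supervision interval before it even shows up here.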

So is this all an argument that wireless alarm systems are bad? Well, once again, the best alarm system is the one you have. That said, there is a distinct advantage to conventional wired zones, if nothing else in that it eliminates batteries as a trouble point. A big upside to conventional alarm systems is that they virtually always support wired zones alongside wireless ones, allowing you to hardwire where practical and use radio where it would just be too laborious.

Sensor Selection

One of the benefits of wired alarm zones is that their simplicity means that you have almost complete interoperability of all sensors (the exception here is those alarms that use addressable zones, but these are still quite uncommon for security systems). A dizzying array of different types of sensors are available, from the conventional (magnetic door contacts) to the exotic (capacitive proximity sensors). Even for proprietary wireless sensors, conventional alarm manufacturers offer a far wider range than the best consumerized systems.

For consumerized alarm systems, the selection of available sensors is often quite slim. This isn't necessarily a problem for home installations, but windows can be an issue. Windows are one of the easiest ways to break into a house, and yet some consumerized alarm vendors offer almost nothing to detect broken windows. Acoustic glass-break sensors are fairly widely available but somewhat notorious for false positives. Direct-contact impact sensors are one of the best options available for windows but aren't available for most consumerized alarms, and for others are obscenely expensive (considering that you ideally need one for each pane of glass) due to the placement of a complete radio module in each unit.

The sensor selection for consumerized systems can honestly be somewhat limiting, considering that outdoor PIR sensors for roof coverage and IR fences for outdoor perimeter protection are reasonably common on commercial alarms (and required by insurers for some types of businesses). These sensors are hard to find in any consumerized system, but widely available otherwise. In the past I've desoldered the reed switches from door closures to connect conventional alarm sensors to a proprietary wireless system. This works well but it's a hassle and requires a certain level of skill and equipment.

Monitoring Stations

Whenever possible, burglar alarms should be monitored by a central station. Traditionally many alarm dealers operated their own small central station, and many cities still have one or two of these small independents. Many conventional alarm vendors and all consumerized alarm vendors, though, contract with large nationwide central stations.

There is a certain degree of commonality between central stations because they all work to comply with the same UL (and sometimes FM) certification requirements. That said, consumerized alarm systems seem to choose their central stations based strictly on the lowest bid, and in my experience it shows. There can be a very obvious disconnect between the contracted central station and the alarm vendor that results in confusing information mismatches, and some cheaper central stations seem to use a live operator only to the minimum extent they think will pass UL muster... with most of the phone calls made by text-to-speech, making it difficult to get clarification or cancel false alarms.

Many municipalities and counties also have permitting requirements for alarm systems due to the burden false alarms can pose on police and fire departments. Some consumerized alarm vendors seem good at helping their customers take care of permitting, while others are not. A local company will pretty much always help you through this process and make sure your permits stay current... an important consideration since many municipalities will deprioritize calls from non-permitted alarms and then charge a fine for false alarms on non-permitted systems.

To be clear, I am depicting central stations in the most positive possible light. Large alarm dealers like ADT became notorious for their 2005-cellular-carrier behaviors of locking customers into long contracts at high rates. One of the advantages of having a conventional system that you own outright, though, is the ability to pick and choose central stations and change when you aren't happy. Consumerized systems tend to offer cheaper pricing on monitoring to start, but you are locked in to monitoring through the vendor until you replace the system. This is unusual with conventional systems where competitive monitoring services are usually able to "adopt" existing alarms, even if it requires sending a technician to reset the programming password.

Video Verification

Perhaps the greatest innovation in burglar alarms in some time is "video verification," in which a monitoring center dispatcher receives either a recorded video clip or live access to surveillance cameras during an alarm condition. It's thought that police departments will prioritize calls more highly when the monitoring center operator is able to confirm that there is an actual break-in. Unfortunately, while there are common standards for video verification established by the Security Industry Association (SIA), in practice it's usually only workable when the burglar alarm and video surveillance were purchased from the same vendor. Since consumer video surveillance systems tend to be uniformly terrible, there's a lot of downside here. Still, if you are willing to stay entirely in one vendor's cloud-based ecosystem, look for one that offers video verification.

Conclusions

This is sort of a grab-bag of thoughts on how conventional and consumer systems compare. Generally speaking, conventional alarm systems are superior on pretty much every measure except for installation cost, where even in fully-wireless applications they tend to run a lot more. Compare, for example, a basic Amazon Ring startup kit at $249 vs. a basic DSC PowerSeries startup kit at $580 from one vendor that sells direct-to-DIY.

As one mitigation to the higher cost of conventional systems, they routinely stay in service for a good decade or more, which seems unlikely for products like Ring. Conventional alarm communicators are mostly monitoring-station-agnostic, allowing you to shop around for a monitoring contract and potentially save quite a bit of money. As an added advantage, in most cities it's possible to get a monitoring contract with a local company that is either also a private security firm or has a relationship with one, resulting in a private security response that's typically quite a bit faster than police. You can usually make this kind of arrangement with consumerized systems by including a local private security dispatch as a contact phone number, but the contract monitoring centers aren't really set up for it and you lose the benefit of the responding private guard having direct contact with the alarm monitoring station so that they can receive regular updates.

[1] This is not to say that mobile apps are exclusive to consumerized alarms. Most conventional systems are now sold with mobile app integration, based on an IP communicator module and a message broker like alarm.com.

[2] Actually this is not the case with well-installed conventional alarms, which usually have interior motion detectors not near the entry doors set up as "immediate zones," meaning that they will trigger the alarm immediately if violated during the entry delay. This avoids a burglar wandering the house during the entry delay period, when motion is expected only near the door. Some systems even correlate motion detectors to the zone that triggered the entry delay and will alarm immediately if motion is detected near an exterior door other than the one initially opened. But this also assumes that the alarm dealer did a good job designing and installing the system, which cannot be assumed. Consumerized systems usually aren't even capable of this kind of configuration, though, as it's more complicated to implement and even more complicated to explain to consumers who are designing and configuring their own system.

[3] This might seem like less of a concern since the siren doesn't sound until the alarm goes into the alarm state. There are two problems: first, some consumerized alarm systems play their entry delay warning sound from the controller as well. Second, to minimize nuisance false alarm dispatches most alarm systems don't actually report an alarm event until the siren has been sounding for a certain period of time, sometimes as long as 2 minutes.

[4] Most Abode hardware, and the protocol itself, seem to be white-labeled from a manufacturer called Climax. The fact that they mostly don't design their own hardware only makes it more embarrassing that their software is such a mess. This is quite common among these consumerized alarm systems, though. Look at enough of them and you will start to notice suspicious similarities: almost all of them use white-label hardware for at least the sensors, and some are little more than a new logo on a product that's been around for a while in the EMEA market.

[5] This is basically what happened with WiFi deauth attacks, which went from the topic of security conference talks to a 32 EUR portable device rather suddenly... and that price is from a reputable manufacturer, there are many cheaper options. Similarly the programming interface defect in Onity locks was viewed as being a mostly theoretical problem in the hospitality industry until suddenly people were going through hotels with pocket-sized implementations of the attack. The point I am trying to make here is that "the attack is possible but not practical" is usually sort of a gamble with the future.

--------------------------------------------------------------------------------

>>> 2022-09-25 the nevada national security site pt 4

part 1 | part 2 | part 3 | you are here

And now, the conclusion.

Lunch at the Nevada National Security Site is a strange experience of its own. Our coach dropped us off at the Bistro, a second, smaller cafeteria located further into the site and thus more convenient to many of the workers in the field. The chef of this small operation, who was equipped with quite a bit of personality, made an admirable effort to keep up with the rush of tour guests but we still had to wait for some time. This allowed the opportunity to take in the posters, one of which advised that cash would soon no longer be accepted... starting 2018. There was plentiful evidence that the workplace posters of the NNSS enjoy far longer lives than anyone anticipated.

Our guide took us to eat in a nearby conference room. One long wall of the room was covered in butcher paper and delineated into sections, with printed pages (apparently work orders) pinned up in the various columns. Kanban, originally a method of scheduling manufacturing operations, has found wide adoption in federal contracting. Whether this is attributable to its efficacy or to the needs of the defense industry's large contingent of Agile consultants is unclear.

After lunch, we re-boarded our coach to head back out into the range. Along the way, our guide pointed out the yard where drilling equipment was stored, ready to be put back to use if the order ever came. There was a reprise here of an interesting point our guide had mentioned previously: in his opinion, at least, the methods used to drill the large, straight boreholes used for the tests had been largely lost to history.

This is something of a recurring theme with nuclear weapons, and one of the more troubling challenges to stockpile stewardship. Some readers may be familiar with the widely reported case of FOGBANK. FOGBANK is a classified material used for a classified (although disclosed by a previous Undersecretary of Energy) purpose in several nuclear weapons. Originally manufactured in the '70s and '80s, FOGBANK had become more of a secret than ever intended by the time a need arose to produce more, in the '00s. Little documentation had been kept on the manufacturing process, the facility had been decommissioned, and few people involved in the '70s were still around. It took nearly a decade and over $100 million to reverse-engineer the process that the same organization had run successfully less than 50 years before.

The deeper context of FOGBANK provides a good example of the challenges of stockpile stewardship. FOGBANK is a component of the W76 nuclear warhead, in use to this day on the Trident II submarine-launched ballistic missile (SLBM). The W76 was designed at Los Alamos and manufactured across various facilities from '78 to '87. The W76 was originally designed, though, for the Trident I, and the Trident II was to carry its successor, the W88.

Because of the abrupt shutdown of the Rocky Flats Plant near Denver, where plutonium pits were manufactured, the W88 project was effectively canceled very early in its production run. The rapid and unexpected shutdown of Rocky Flats has had a pervasive impact on the modern state of the stockpile, as it was one of the most sophisticated facilities for manufacturing with special nuclear material and had been the planned site of several different major manufacturing runs. This, of course, raises the question of why exactly Rocky Flats abruptly closed its doors in the early '90s. The answer is one of the foremost embarrassments of the weapons program, and a story that (almost certainly intentionally) is not widely known.

To make a long story short (the Rocky Flats saga could easily make up multiple posts on its own), Rocky Flats was plagued throughout the '80s by extensive complaints and demonstrations related to the plant's environmental impact and alleged releases of contamination. These might have been dismissed as typical objections to a weapons facility near a major city and Denver in particular. Certainly most other similar facilities had also been the site of such demonstrations. Something was different at Rocky Flats, though, and the root of that difference was the 1980 passage of CERCLA, the Comprehensive Environmental Response, Compensation, and Liability Act.

Some readers may know that besides Cold War history, the history of CERCLA and broader efforts to address environmental contamination is one of my key interests. CERCLA is a landmark piece of legislation, spurred by incidents like the Valley of the Drums (a massive, unmanaged open chemical dump in Kentucky) and broadly increasing public concern about environmental contamination. CERCLA is expansive, but is best known for creating the National Priorities List or NPL, more often referred to as the Superfund program. More generally, CERCLA established in federal law an important basic principle: that organizations which cause environmental contamination are liable to remediate it, and that the federal government is empowered to force them to do so.

And so, unlike the nuclear protests on environmental grounds of the '60s and '70s, the issue of Rocky Flats became a series of complaints to federal regulators. Contrary to the insistence of the Department of Energy and Rocky Flats operating contractor Rockwell, regulators found these complaints quite credible.

On June 6, 1989, the FBI arranged a meeting at Rocky Flats to discuss a threat of terrorism against the plant. The threat wasn't real, or at least it wasn't a threat of terrorism. It was a search warrant. Following this unprecedented raid on a nuclear weapons facility by federal law enforcement, a long and fraught series of proceedings led to criminal and civil fines against Rockwell and drafted indictments against both Rockwell and Department of Energy officials. A grand jury alleged pervasive, systematic practices of violating the plant's EPA and state environmental permits and then covering it up. Although Rockwell did pay what was then the largest fine ever for environmental contamination, the criminal proceedings were ultimately dropped as a result of a settlement agreement. This settlement agreement remains controversial to this day, with many alleging that the Department of Energy's cover-up effort extended into the Department of Justice, which agreed to "quietly resolve" the scandal and seal the court proceedings. Although substantial documents including the Grand Jury's report were leaked to Westword (one of my favorite papers), large portions of the Rocky Flats scandal remain sealed to this day.

As a result of the Rocky Flats raid, the DoE replaced Rockwell with EG&G and launched a massive environmental remediation program at the site. Despite DoE's defensive reaction to the federal (and later congressional) investigation, the environmental problems at Rocky Flats were at the time already known to be incredibly, almost intractably, severe. The writing was on the wall for Rocky Flats, and in '92 the W88 program was canceled and shutdown of the Rocky Flats plant began. Both cleanup efforts and lawsuits continue today, but most of the Rocky Flats site is now Rocky Flats National Wildlife Refuge. Most of the cleanup is complete, although often to modified, more relaxed requirements under the agreement that the site will remain under federal management and limited use.

With the W88 canceled just about as soon as production began, something needed to be placed on top of the Trident II, and that was the already thirty-year-old W76. No serious efforts were made to replace the W76 after the embarrassing failure of the W88 program, and so in 2000 a Lifetime Extension Program was initiated to refurbish the aging W76s. This program was delayed for years by the inability to reproduce FOGBANK. The program eventually designed the "W76 Mod 1," essentially a "minor revision" of the design, and original W76s were modified to the Mod 1 design. Starting in 2019, some W76 Mod 1s were further converted to a Mod 2 design.

The W76 is now solidly 50 years old, and we are still tinkering with it to both keep it working and modernize it with current fuzing and firing systems. To a very real extent, the nuclear weapons complex created this problem for itself through its long-running lack of concern for environmental stewardship and frequent inability to play well with other federal government priorities. The maintenance and modernization of weapons is made slow and expensive by the steady atrophy of expertise and experience in the weapons program, resulting from the "boom and bust" nature of nuclear weapons where the program tends to alternate between full tilt and near dormancy depending on the political climate.

And throughout all of these problems is the pervasive issue of secrecy. The weapons program is widely accused of actively avoiding oversight. It operates under such secrecy that it's difficult to tell where this accusation has merit, although it's hard to imagine there are many places where it doesn't have at least some.

This is all a wide tangent from the NNSS, but it's the kind of thing you think about as you watch miles of mostly undistinguished desert go by. It was a bit of a drive to our next destination, the Sedan crater.

The Sedan crater is probably one of the best-known artifacts at the NNSS. It is certainly the most visually striking. Sedan is one of several experiments from the short-lived Plowshares program, which aimed to find peaceful civilian uses for nuclear weapons. While many of the more practical-seeming civilian uses of nuclear weapons were evaluated under other programs (e.g. nuclear rocket propulsion), the Plowshares project focused mostly on uses of a civil engineering nature. Major Plowshares efforts included excavation, blasting of mountain passes for road construction, and oil and gas stimulation.

The Sedan crater at the NNSS is the result of an experiment in excavation by nuclear weapons. A nuclear device was buried at a depth that was calculated to produce the largest possible crater, and then detonated. The 104 KT device moved 12 million tons of earth, creating a 320 foot deep crater measuring 1,280 feet from ridge to ridge. These numbers do not quite convey the size of the crater, which rises abruptly from the level desert and feels from the edge like its own small world. A viewpoint has been erected at a low spot in the crater ridge, and so from the platform the ridge blocks the view out of the crater. The resulting effect is very much like the alien planets of Star Trek, seeming both like a pedestrian bit of California desert and like nothing here on Earth.

By the viewing platform is a metal frame that was once used to winch a wheeled "moon buggy" down into the bottom of the crater, where workers drilled an exploratory hole to collect soil samples. The bottom of the crater shows a few remains of this drilling operation but mostly old tires, which our guide assumes are the result of occasional bored workers trying to get them to roll the whole way down. Radiation levels at the crater are quite low at this point, although travel to the bottom of the crater is not permitted for safety reasons.

The Plowshares project was not particularly successful or well-received by the public, for all the reasons you would think. The Sedan test did involve the release of an appreciable amount of radioactive contamination, and it would have been unwise to use the site for some time after. Plowshares-like efforts to both excavate and stimulate oil and gas production were more successful in the Soviet Union, but the USSR suffered a correspondingly higher number of serious accidents and some of the resulting contamination still poses a danger today.

As we left the Sedan crater, we went from the (attempted) civilian use of nuclear weapons to their impacts on civilians in wartime: the Apple-2 site.

Most readers have probably seen the films of anonymous suburban houses decimated by nuclear shockwaves. These well-known newsreels and propaganda films come from a series of atmospheric nuclear detonations at the NNSS that evaluated the survivability of civilian infrastructure. For the most part these civilian infrastructure tests were a secondary purpose of the detonations, which were primarily designed to accommodate military survivability tests out of Desert Rock. Apple-2 was one such test: it was prepared mostly to test methods of protecting military records from nuclear attack. As a secondary purpose, a set of structures were built using different construction methods. Somewhat like the mythical three little pigs (down to involving houses of wood and brick, although no house of straw is evident), the different construction methods were expected to produce a direct comparison of how well common structures could be hardened against nuclear attack.

Some of the Apple-2 structures were destroyed entirely, but two of the houses survive to this day, and we visited one of them. The two-story wood house loomed over us in a dramatic state of decay. While it had survived the blast largely intact, a decision has been made not to actively maintain it despite its historic value---in part to preserve its historic integrity. The heat of the blast burned away most of the paint, and the years have removed the rest. Entry into the house is no longer permitted as its structural integrity has become suspect. Almost no indications of the house's original domestic appearance remain, except a set of sculptural iron details that quite conspicuously survive near the second floor windows. Our guide thinks they were latches for storm shutters, although it's hard to tell now.

At the time of the blast the house was fitted with furniture and mannequins, mostly for the purposes of the film that was made of the test. It was also fitted with instruments: pressure and temperature sensors were placed throughout the structure, and today the experimental nature of the house is illustrated by the coaxial cables and hoses that hang out of the walls and ceilings. Near the house some remains of metal posts can be seen, the mounts for the film cameras used during the test. And some distance away, across a road that is unfortunately in too poor of condition for our coach, is the brick house in a similar state of disrepair.

Tests of this type were extensively conducted during the atmospheric testing era. Everything from subdivision houses to electrical substations and telephone lines was constructed in the NNSS to see how it would hold up to the next test. The resulting information was important in the development of civil defense administration plans, but the idea of hardening residential construction against nuclear attack (for example by the use of 2x6 studs instead of 2x4 and other framing techniques) never really caught on. It wasn't entirely a bad idea; the improved construction techniques really do work. But they also add cost, and the Cold War hysteria of the '60s just didn't persist long enough to see nuclear hardening as a common real estate selling point.

The Apple-2 houses might be the tour stop that has the strongest relationship to the public perception of nuclear testing. The very house we walked around is depicted in the film "Operation Cue," a fairly well known piece of Cold War nostalgia. Yet, to be honest, it's not all that interesting to me. I am more fascinated by the two concrete-framed but open towers nearby, maybe six stories tall. Our guide tells me that they were built long after the Apple-2 test as part of a research project into earthquake-resistant construction. They wanted to build full-scale structures that could be repeatedly subjected to earthquakes, and no place could offer "earthquakes on demand" like the NNSS during active underground testing.

Leaving the Apple-2 houses, we pass the T-1 Training Area. This unique facility offers training courses to first responders on radiological detection equipment. These courses are made more engaging by the location: ground zero of an atmospheric test. First responders practice handling various radiological emergencies ranging from terrorist attacks to transportation accidents in the background radiation environment of a real nuclear detonation.

To support these training programs, a whimsical array of scenarios are found improbably close together in the barren desert. A derailed train, a crashed airplane, and an abandoned mine are all reconstructed in the space of a few acres. Our guide points out the footings of the original test tower, and tells an anecdote about a satellite imagery enthusiast having once reported the staged plane crash as a real emergency. This is quite improbable, he points out, because the various broken-apart sections of the plane don't even belong to the same model of aircraft. Similarly the derailed boxcars have somehow neatly lost their trucks, and the urban environment is oddly heavy on stacked sea containers. There are concessions to the budget.

Still, this is probably the most complete and realistic radiological emergency training facility available anywhere and the NNSS advertises that over 120,000 people, mostly firefighters, have visited the site since it began operation in 1999.

Emergency response to radiological emergencies is a tricky issue, because the actual scenarios happen quite rarely. We were told that the Department of Energy had been involved in the distribution of a huge amount of radiation survey equipment to fire departments after the Oklahoma City bombing. Shortly after, it was found that most departments had put the boxes they received in a closet and forgotten about them. No one knew how to actually conduct a radiation survey, and so there was little interest in fielding Geiger-Muller counters. The T-1 training complex is one of several facilities the DoE now operates to offer practical experience in locating and assessing radioactive contamination, and this type of training has become much more common for emergency departments nationwide.

On the topic of nuclear counter-terrorism, we also pass by the Radiological Countermeasures Test and Evaluation Complex. At this facility once operated by the Department of Homeland Security, a variety of radiation sources are available to test detection methods used to prevent nuclear terrorism. As a key example, the Advanced Spectroscopic Portal or ASP system installed at land border crossings was tested for its ability to detect cat litter (naturally slightly radioactive due to its mineral content) in semi-trailers. The main goal of this operation was to determine whether or not the ASP actually lived up to its manufacturer's claims for sensitivity.

Of course, our guide notes, the facility was defunded by congress not long after opening and so has been in a mothballed state for some time. One of the key values of American politics is the inability to commit to any program that will independently validate the claims of defense contractors.

As the end of our tour approaches, we head towards Frenchman Flat itself. Frenchman Flat is a large dry lake bed that hosted the first generation of tests at the NNSS. Because it is large, flat, and quite remote, it is attractive for many different types of dangerous or secret tests, and so it is littered with the remains of many generations of projects.

As we drive onto the flat, our guide points out a number of items in quick succession. Footings of a tower that supported the device for an atmospheric test. A bank vault, installed by its manufacturer to validate their claim that it could survive a nuclear attack. A concrete-frame building with no walls on one side referred to as the hotel, built to test how different types of building wall systems (installed on the open side) would hold up to nuclear war.

One of the more grim aspects of the nuclear weapons program can be seen here: animal testing. Our modern understanding of the effects of radiation on living things (and thus of radiation safety) comes largely from animal testing performed under the auspices of the Department of Energy. At the "EPA Farm" in the NNSS, cows, horses, pigs, goats, and chickens were all dosed with radiological material to monitor uptake and resulting injury. In Frenchman Flat, there are the remains of pens and cages where farm animals were subjected directly to nuclear detonations.

Some of the cages once held pigs with their eyes sewn open. The eyes of these pigs were later dissected to determine how the initial flash damaged their eyes, and how susceptible humans were to the same blinding. Other pigs were dressed up in various types of clothing and subjected to blasts, to compare how natural and synthetic fabrics affected flash burns.

Animals were not the only form of life that met their dramatic end in Frenchman Flat. At one time, a small forest of Ponderosa Pines was installed in the flat and then destroyed. Modular steel buildings, railroad trusses, metal cylinders, and prototype fallout shelters were all built in Frenchman Flat and many of them survive (in heavily damaged form) to this day.

After the end of atmospheric testing, Frenchman Flat lost much of its utility since surface structures could no longer be directly tested. Instead, a new use was found for the flat: the hazmat spill center. A tank farm was built on the flat to store a variety of dangerous chemicals, and those chemicals can be pumped through a series of pipes to spill out into the open desert. The facility is used to test cleanup and containment methods and protective equipment for hazmat responders. A good portion of the modern methods for management of chlorine spills, for example, were developed or verified here. A wind tunnel and other controlled-environment facilities allow for more complex tests on dispersion behavior. This facility is still in operation today, although it has been renamed the Nonproliferation Test and Evaluation Complex and focuses more and more on counter-terrorism testing.

Frenchman Flat is the last real stop on our tour, and so we head from there back to Mercury and then on to Las Vegas. On the drive from the flat to Mercury, though, our guide points out the collapsed remains of Gravel Gerties. Named after a Dick Tracy character, these structures were designed at Sandia in the '50s to address a difficult but critical problem: how to safely assemble and disassemble nuclear weapons, when the high explosive material and nuclear material were in close proximity. Even though a weapon should never produce a nuclear yield unless properly triggered, there is a great quantity of high explosive in a nuclear weapon and any accidental detonation would scatter the accompanying nuclear material over a wide area.

This was a big problem at the time. In the early days of nuclear weapons, they were stored near Air Force bases with the pits removed. While this was thought to make storage much safer, it necessitated an assembly facility close to each Strategic Air Command base where the pits could be installed and removed when weapons were placed in and out of service [1]. As it happens, the gravel gertie design wasn't used at all weapons storage sites. Instead, early assembly facilities were deep underground (the first two stockpile sites, Manzano Base/Albuquerque and West Fort Hood/Killeen TX). Gravel Gerties did see widespread construction, though, at later SAC installations in the US as well as Atomic Weapons Establishment facilities in the UK. Several remain in use at the Pantex Plant in Texas for assembly and disassembly of weapons.

The design of the gravel gertie is simple but clever: a cylindrical concrete bunker is built with a fabric roof supported by steel cables. Above and around the bunker, a huge amount of gravel (about 7 meters thick by one report) is piled up. In the event of an accidental high explosive detonation, the cables should fail, causing many tons of gravel to collapse into the bunker and cover the radioactive material. This design was thought to be able to contain up to a 1 KT explosion, and at the NNSS at least two were built and tested. Today, that leaves a conspicuous pile of gravel with a few utility poles and chunks of concrete sticking haphazardly out of it. It is reassuring that this ought to be all that remains from a real major accident, although a bit disconcerting that the bodies of the unlucky technicians would probably never be retrieved.

There's nothing left on the tour now except for the hour or so drive back to Las Vegas. This might be something of a return to normalcy, to the "real world," but the Las Vegas Strip is not exactly normal. This series started with the strange contrast between Las Vegas, a globally-known tourist destination, and the NNSS, a little-known site with a history of secrecy. The National Atomic Testing Museum displays this same contrast today, with artifacts of the nuclear testing program displayed alongside posters of a showgirl made up as "Miss Atom Bomb."

Las Vegas often feels like it runs mostly on nostalgia for a bygone era. The NNSS has much the same feeling. Posters proclaim that "we are national security" to a staff involved mostly in containing the legacy of national security work. The "Welcome to Las Vegas" sign provides a grand entrance to almost no one, now positioned much too far south on the strip to see traffic between the airport and the resorts. Our tour guide fondly recounts the era of underground testing while speaking of atmospheric testing like a lost art. At the Flamingo, the showgirls working the casino floor are so sparse that they do more to highlight the disappearance of the form than preserve it.

I've been playing a lot of Fallout lately. Fallout: New Vegas, despite its spin-off status, is often considered the best of the series. Perhaps this is simply due to Bethesda's reduced involvement in the development, but I think it's at least partially because New Vegas is just more deeply rooted in reality than other entrants in the franchise. The nickname "Atomic City" is no longer common, but Las Vegas itself still feels like a relic of the Cold War, in ways both good and bad.

There's some sort of allegory between gambling and nuclear weapons, here. Both destructive, but both oddly seductive. The desert is full of strange and wonderful things.

part 1 | part 2 | part 3 | you are here

[1] I say "near" each AFB as the weapon repositories were usually technically independent installations, usually built and operated by the Army. This reflects the unusual civilian-military divide in the nuclear weapons complex: while the Air Force used the bombs, they were stored and handled under the auspices of the Atomic Energy Commission, which relied mostly on the Army. Initially these stockpile sites were all managed by civilians, and the Air Force had to request nuclear ordinance on loan. This divide has broken down over the years but is still influential on nuclear bureaucracy.

--------------------------------------------------------------------------------