   _____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2023-02-17 something up there pt II

As we discussed previously, the search for UAP is often contextualized in terms of the events of 2017: the public revelation of the AATIP and alien-hunting efforts by Robert Bigelow and Tom DeLonge. While widely publicized, these programs seem to have led to very little. I believe the termination of the AATIP (which led to the creation of TTSA) to be a result of the AATIP's failure to address the DoD's actual concern: that UAP represented a threat to airspace sovereignty.

I just used a lot of four- and five-letter acronyms without explaining them. These topics were all discussed in the previous post and if you are not familiar with them I would encourage you to read it. Still, I will try to knock it off. Besides, now there is a new set of four- and five-letter acronyms. The end of the AATIP was not the end of the DoD's efforts to investigate UAP. Instead, military UAP research was reorganized, first into Naval intelligence as the UAP Task Force, and later into the cross-branch military intelligence All-Domain Anomaly Resolution Office, or AARO.

It is unclear exactly what the AARO has accomplished. Because AARO is a military intelligence organization, the DoD will not comment on it. Most of what we know comes from legislators briefed on the program, like Sen. Gillibrand and Sen. Rubio. In various interviews and statements, they have said that AARO's work is underway but hampered by underfunding---underfunding that is, embarrassingly, a result of some kind of technical error in defense appropriation.

Administratively confused as they may be, the DoD's UAP efforts have led to the creation of a series of reports. Issued by the Director of National Intelligence (DNI) at the behest of Congress, the June 2021 unclassified report appeared to be mostly a review of the same data analyzed by AATIP. The report was short---9 pages---but contained enough information to produce a lot of reporting. One of the most important takeaways is that, up to around 2020, the military had no standardized way of collecting reports of UAP. Later reporting would show that even after 2020 efforts to collect UAP reports were uneven and often ineffective.

Much of the reason for this is essentially stigma: advocates of UAP research have often complained that through the late 20th century the military developed a widespread attitude of suppressing UAP incidents to avoid embarrassment. As a result, it's likely that there are many more UAP encounters than are known. This is particularly important since analysis (including that in the 2021 report) repeatedly finds that the majority of UAP reports are probably explainable, while a few are more likely to result from some type of unknown object such as an adversarial aircraft. In other words, the signal to noise ratio in UAP reports is low. Taken one way this might discourage reporting and analysis, since any individual report is unlikely to amount to anything. The opposite is true as well, though: if most UAP encounters are not reported and analyzed, it's likely that the genuinely troubling incidents will never be discovered. The 2021 report broadly suggests that this is exactly what was happening for many years: so few UAP incidents were seriously considered that no one noticed that some of them posed real danger.

The 2021 report briefly mentions that some UAP incidents were particularly compelling. For example, in 18 incidents the UAP demonstrated maneuvering. This doesn't mean "shot into the sky as if by antigravity," but rather that the objects appeared to be navigating towards targets, turning with intention, or stationkeeping against the wind. In other words, they are incidents in which the UAP appears to have been a powered craft under some type of control. Even more importantly, the report notes that in a few cases there were indications of RF activity. The military will never go into much detail on this topic because it quickly becomes classified, but many military aircraft are equipped with "electronic warfare" systems that use SDR and other radio technology to detect and classify RF signals. Historically the main purpose of these systems was to detect and locate anti-aircraft radar systems, but they have also been extended to general ELINT use.

ELINT is an intelligence community term for "electronic intelligence." Readers are more likely to be familiar with the term SIGINT, for signals intelligence, and the difference between the two can be initially confusing. The key is that the "electronic" in ELINT is the same as in "electronic warfare." SIGINT is about receiving signals in order to analyze their payloads, for example by cryptologic means. ELINT is about receiving signals for the sake of the signals themselves. For example, to recognize the chirp patterns used by specific adversarial radar systems, or to identify digital transmission modes used by different types of communications systems, thus indicating the presence of that communications system and its user. A simple and classic example of ELINT would be to determine that an adversarial force uses a certain type of encrypted digital radio system, and then monitor for transmissions matching that system to locate adversarial forces in the field. The contents don't matter and, for an encrypted system, may not be feasible to recover anyway. The mere presence of the signal provides useful intelligence.

The concept of ELINT becomes important in several different ways when discussing UAP. First, the 2021 DNI report's mention that several UAP were associated with RF emissions almost certainly refers to ELINT information collected by intelligence or electronic warfare equipment. These RF emissions likely indicate some combination of remote control and real-time data reporting, although a less likely possibility (in my opinion) is that it reflects electronic warfare equipment on the UAP engaged in some type of active countermeasure.

It's meaningful to contrast this view of the matter with the one widespread in the media in 2017. A UAP that maneuvers and communicates by radio is not exactly X-Files material, and almost by definition can be assumed to be an sUAS---small unmanned aerial system, commonly referred to as a drone. Far from the outlandish claims made by characters like Tom DeLonge, such a craft is hardly paranormal in that we know such devices exist and are in use. What is a startling discovery is that sUAS are being spotted operating near defense installations and military maneuvers and cannot be identified. This poses a very serious threat not only to airspace sovereignty as a general principle but also to the operational security of the military.

Perhaps the component of the report that generated the most media interest is its analysis of the nature of the reported UAP. In the vast majority of cases, in fact all but one, the DNI report states that it was not possible to definitively determine the nature of the UAP. This was almost always because of the limited information available, often just one or two eyewitness accounts and perhaps a poor photo and radar tracks. Most of these incidents presumably do have explanations within the realm of the known that simply could not be determined without additional evidence. On the other hand, the report does state that there are some cases which "may require additional scientific knowledge" to identify.

It is not entirely clear how dramatically this statement should be taken. It's possible, even likely, that the phrase mostly refers to the possibility that new methods of evidence collection will need to be developed, such as the new generation of radar systems currently emerging to collect more accurate information on sUAS with very low radar cross section due to their small size. It's also possible that the phrase reflects the fact that some reported UAP incidents involve the UAP behaving in ways that no known aerial system is capable of, such as high speeds and maneuvers requiring extreme performance. Once again, there is a temptation to take this possibility and run in the direction of extraterrestrial technology. Occam's razor at the very least suggests that it's more likely that some adversarial nation has made appreciable advancements in aviation technology and kept them secret. While perhaps unlikely this is not, in my mind, beyond reason. We know, for example, that both Russia and China have now made more progress towards fielding a practical hypersonic weapons system than the United States. This reinforces the possibility that their extensive research efforts have yielded some interesting results.

Following the 2021 UAP report, Congress ordered the DNI to produce annual updates on the state of UAP research. The first such update, the 2022 report, was released a few months ago. The unclassified version is quite short, but it is accompanied by a significantly longer and more detailed classified version which has been presented to some members of Congress. The unclassified document states that the number of known UAP incidents has increased appreciably, largely due to the substantial effort the military has made to encourage reporting. To provide a sense of the scale, 247 new reports were received in the roughly 1.5 years between the preliminary and 2022 reports. A number of additional incidents occurring prior to the 2021 report also came to the attention of military intelligence during the same period, and these were analyzed as well.

Perhaps the most important part of the 2022 report is its statement that, of the newly analyzed incidents, more than half were determined to be "unremarkable." In most cases, it was judged that the incident was probably caused by a balloon. While these are still of possible interest, they are less interesting than the remainder which are more difficult to explain. Intriguingly, the report states that some UAP "demonstrated unusual flight characteristics or performance capabilities." This supports the more dramatic interpretation of the 2021 report, that it is possible that some incidents cannot be explained without the assumption that some adversary possesses a previously unknown advanced technology.

While it already attracted a great deal of media attention, this entire matter of DNI reports was only the opening act to the spy balloon. The airspace sovereignty aspect of the UAP reports is not something that attracted much discussion in the media, but it has become much more front of mind as a UAP of the first kind drifted across the United States. This UAP was not unidentified for long, with the military publicly attributing it to China---an attribution that China has both formally and informally acknowledged.

Balloons are not new in warfare. Indeed, as the oldest form of aviation, the balloon is also the oldest form of military aviation. The first practical flying machine was the hot air balloon. While the technology originated in France, the first regular, large-scale use of military aviation is usually placed at the US Civil War. Hot air balloons were routinely used for reconnaissance during the Civil War, and the slow movement and long dwell times of balloons still make them attractive as reconnaissance platforms.

Military ballooning in the United States is not limited to the far past. During World War II, the Japanese launched nearly 10,000 balloons equipped with incendiaries. The hope was that these balloons would drift into the United States and start fires---which some of them did, although a concerted press censorship program largely prevented not only the Japanese but also Americans from learning of the campaign. Ultimately the impact of the balloon bombs was very limited, but they are still often considered the first intercontinental weapon system. They might also be viewed as the first profound challenge to US air sovereignty, as the balloons required no nearby support (as aircraft of the era did) and the technology of the time provided no effective means of protection. Indeed, this was the calculus behind the press censorship: since there was no good way to stop the balloon bombs, the hope was that if the US carefully avoided any word of them being published, the Japanese might assume they were all being lost at sea and stop sending them.

While the Cold War presented Soviet bombers and then missiles as top concerns, it could be said that balloons have always been one of the greatest practical threats to airspace sovereignty. Despite their slow travel and poor maneuverability, balloons are hard to stop.

Balloons remain surprisingly relevant today. First, modern balloons can operate at extremely high altitudes, similar to those achieved by the U-2 spy plane. This provides an advantage both in terms of observation range and secrecy. Second, balloons are notoriously difficult to detect. While the envelope is large, the material is largely transparent to RF, resulting in a very low radar cross section. Careful design of the suspended payload can give it a very low radar cross section as well... often easier than it sounds, since the payload is kept very lightweight. The sum result of these two factors is that even large balloons are difficult to detect. They are most obvious visually, but the United States and Canada have never had that substantial of a ground observer program and the idea has not been on the public mind for many decades. Many people might see a balloon before any word reaches air defense.

On January 28th, a large balloon operated by China entered US airspace over Alaska. During the following week, it drifted across the country until leaving the east coast near South Carolina, where it was shot down with a Sidewinder missile. Circumstances suggest that both the Chinese and US administrations may have intended to downplay the situation to avoid ratcheting tensions, as the US government did not announce the balloon to the public until about a day after it had initially been detected entering US airspace. Publicly, China claimed it to be a weather balloon which had unintentionally drifted off course. The New York Times reports that, privately, Chinese officials told US counterparts that they had not intended for the balloon to become such a public incident and would remove it from US airspace as quickly as possible.

Modern balloons of this type are capable of a limited but surprisingly flexible form of navigation by adjusting their buoyancy, and thus altitude, to drift in different winds. Perhaps the balloon spent a week crossing the US by intention, perhaps an unfortunate coincidence of weather created a situation where they were not able to navigate it out more quickly, or perhaps some equipment failure had rendered the balloon unable to change its altitude. I tend to suspect one of the latter two since it is hard to think of China's motivation to leave the balloon so publicly over the United States. In any case, that's what happened.
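
To make the navigation concept concrete: a buoyancy-controlled balloon has exactly one control input, its altitude, and with it whichever wind happens to blow at that level. Here is a toy sketch of the selection logic in Python; the wind table is invented for illustration, and real systems work from full wind-field forecasts rather than three discrete levels.

    # Toy altitude-selection navigation for a buoyancy-controlled balloon.
    # The wind table below is invented for illustration.
    winds = [
        # (altitude in meters, wind direction in degrees, speed in m/s)
        (16000,  90, 12),
        (20000, 135,  8),
        (24000, 270, 15),
    ]

    def best_altitude(desired_heading_deg):
        """Pick the altitude whose wind carries the balloon closest to the
        desired heading; climbing or descending is the only control input."""
        def heading_error(entry):
            _, direction, _ = entry
            # smallest angular difference between wind and desired heading
            return abs((direction - desired_heading_deg + 180) % 360 - 180)
        return min(winds, key=heading_error)

    print(best_altitude(120))  # -> (20000, 135, 8): move to the 20 km level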

We now know more about the balloon, not so much because of analysis of the wreckage (although that is occurring) but more because the military and administration have begun to share more information collected by means including a U-2 spy plane (one of the few aircraft capable of reaching the balloon's altitude) and other military reconnaissance equipment. The balloon had large solar arrays to power its equipment, it reportedly had small propellers (almost certainly to control orientation of the payload frame rather than for navigation), and it bristled with antennas.

This is an important point. One of the popular reactions to the balloon was mystery at why China would employ balloons when they have a substantial satellite capability. At least for anyone with a background in remote sensing the reason is quite obvious: balloons are just a lot closer to the ground than satellites, and that means that just about every form of sensing can be performed with much lower gain and thus better sensitivity. This is true of optical systems where balloons are capable of much better spatial resolution than satellites, but also true of RF where atmospheric attenuation and distortion both become very difficult problems when observing from orbit. Further, balloons are faster and cheaper to build and launch than satellites, allowing for much more frequent reconfigurations and earlier fielding of new observation equipment. The cost and timeline on satellites is such that newly developed intelligence technology takes years to make it from the lab to the sky... Chinese intelligence balloons, on the other hand, can likely be fabricated pretty quickly.
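
Some rough numbers illustrate the point. Assuming a 20 km float altitude for the balloon against a 500 km low earth orbit (both figures are assumptions for illustration, not details of the incident), the balloon is 25 times closer to what it's looking at:

    # Back-of-the-envelope comparison of sensing from a balloon vs. a
    # satellite. Altitudes are assumed for illustration.
    import math

    balloon_alt_m = 20_000     # assumed high-altitude balloon float altitude
    satellite_alt_m = 500_000  # assumed low earth orbit altitude

    ratio = satellite_alt_m / balloon_alt_m
    # Free-space path loss scales with 20*log10(distance), so being 25x
    # closer is worth ~28 dB of link budget to an RF sensor; diffraction-
    # limited optical resolution scales linearly with distance, so the same
    # 25x shows up directly as finer ground detail.
    print(f"{ratio:.0f}x closer, {20 * math.log10(ratio):.1f} dB advantage")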

It's useful here to return to the topic of ELINT. First, it's very likely that ELINT was a major mission of this balloon. Sensing RF emissions from military equipment at close range is invaluable in creating ELINT signatures for equipment like radar and encrypted communications systems, which directly translates into a better capability to mount an offensive from the air. SIGINT was likely also a mission. One of the advantages of ELINT collection is that the data acquired for ELINT purposes can typically be processed to glean SIGINT information, and even provides valuable material for cryptologists attempting to break codes.

ELINT is also relevant in the detection of the balloon. While the spy balloon in the recent incident was detected by conventional means, the DoD has reported that they are now able to assert that this is at least the fifth such balloon to enter US airspace. For those not familiar with ELINT methods this might be surprising, but it makes a great deal of sense. The fact that this balloon was tracked by the military for days provided ample opportunities to collect good quality ELINT signatures of the communications equipment used by the balloon. The military possesses a number of aircraft dedicated to the purpose of ELINT and SIGINT collection, such as the RC-135---a modified C-135 Stratolifter equipped with specialized antennas and hundreds of pounds of electronic equipment. This type of aircraft could orbit the balloon for hours and collect extensive recordings of raw RF emissions.

ELINT information is also collected by ground-based and orbital (satellite) assets, including a family of satellites that deploy large parabolic reflectors to collect RF signals with extremely high gain. The data collected by these platforms is likely retained in raw form, allowing for retrospective analysis. Information collected by similar means has been publicly used in the past. And this is most likely how the first four balloons were discovered: by searching historic data collected by various platforms for matching ELINT signatures. The presence of the same digital data modem as in the recent spy balloon, in US airspace, almost certainly indicates a similar Chinese asset operating in the past.

It's important to understand that the RF environment is extremely busy, with a great deal of noise originating from the many radio devices we use every day. It's simply not feasible for someone in some military facility to carefully review waterfall displays of the RF data collected by numerous ELINT assets. What is much more feasible is to develop signatures and then use automation to search for instances of similar traffic. It's the practical reality of intelligence at scale.
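
As a sketch of what that automation might look like: record a signature once, then slide it across archived captures and flag wherever the correlation spikes. This is purely illustrative; real ELINT matching keys on much richer features (chirp rate, pulse timing, hop patterns) than raw waveform correlation.

    # Illustrative signature search: normalized sliding correlation of a
    # known emitter signature against an archived capture.
    import numpy as np

    def signature_hits(capture, signature, threshold=0.8):
        """Return sample offsets where the signature likely appears."""
        sig = (signature - signature.mean()) / (signature.std() * signature.size)
        cap = (capture - capture.mean()) / capture.std()
        corr = np.correlate(cap, sig, mode="valid")  # slide sig over cap
        return np.nonzero(corr > threshold)[0]

    rng = np.random.default_rng(0)
    sig = rng.standard_normal(256)         # stand-in for a recorded signature
    capture = rng.standard_normal(100_000) # stand-in for an archived capture
    capture[40_000:40_256] += sig          # bury one occurrence in the noise
    print(signature_hits(capture, sig))    # -> [40000]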

The discovery of the recent spy balloon has had an incredible effect on air defense. I am of the general opinion, and have occasionally argued in the past, that the US government has significantly under-invested in air defense since the end of the Cold War. While we do need to move on from the hysteria of the 1970s, the lack of investment in air surveillance and defense over the last fifty years or so has led to an embarrassing situation: our ability to detect intrusions into our airspace is fairly poor, and when we do, it can take well over an hour to get a fighter in the air to investigate. The balloon brought this problem to the attention of not only the government but the public, and so some action had to be taken.

Primary radar [1] is quite complex. Even decades into radar technology it remains a fairly difficult problem to pick objects of interest, such as aircraft, out of "clutter"---the many objects, ranging from the ground to wind-blown dust, that can produce primary radar returns. One of the simplest approaches is to ignore objects that are not large and moving fast. This type of filtering is usually adequate for detection of aircraft, but fails entirely for some objects like balloons and sUAS that may be small and slow moving.
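
A toy version of that filter makes the failure mode obvious. The thresholds here are invented for illustration; no real system is this crude, but the shape of the problem is the same:

    # Toy clutter filter: keep contacts that are big and fast. Thresholds
    # are invented. Note what falls through: balloons and small sUAS.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        rcs_m2: float     # estimated radar cross section
        speed_m_s: float  # ground speed from track history

    def of_interest(c):
        return c.rcs_m2 > 1.0 and c.speed_m_s > 50.0

    contacts = [
        Contact(rcs_m2=20.0, speed_m_s=240.0),  # airliner: kept
        Contact(rcs_m2=2.0,  speed_m_s=3.0),    # drifting balloon: dropped
        Contact(rcs_m2=0.05, speed_m_s=15.0),   # small sUAS: dropped
    ]
    print([of_interest(c) for c in contacts])   # [True, False, False]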

Further, the US and Canada are very large. Integrating data from the many radar surveillance sites and presenting it in a way that allows an air defense controller to identify suspicious objects in the sea of normal air traffic is a difficult problem, and a problem that the US has not seriously invested in for decades. The information systems used by both the FAA and NORAD for processing of radar data are almost notoriously poor. In the wake of the spy balloon, officials have admitted to the press that the military is struggling to process the data from radar systems and identify notable objects.

Air defense is one of the oldest problems in computing as an industry. One of the first (perhaps the first, depending on who you ask) networked computer systems was SAGE: an air defense radar processing system. These problems are still difficult today, but we are no longer mounting cutting-edge research and development projects to face them. Instead, we are trapped in a morass of defense contractors and acquisition projects that take decades to deliver nothing.

In response to the discovery of the spy balloon, NORAD has changed the parameters used to process radar data to exclude fewer objects. They have also made a policy change to take action on more unknown objects than they had before. This led directly to NORAD action to intercept several balloons over the past two weeks. There are now indications that at least some of these balloons may have been ordinary amateur radio balloons, not presenting a threat to air sovereignty at all. Some will view this as an embarrassment or indictment of NORAD's now more aggressive approach, but NORAD faces an untenable problem. If China or some other adversary is sending small balloons into our airspace, we need to make an effort to identify such balloons. But currently, no organized system or method exists to identify balloons and other miscellaneous aerial equipment.

One could argue (indeed, here I am) that up to about two weeks ago NORAD was still looking for Soviet bombers, with a minor side project of light aircraft smuggling drugs. Air defense largely ignored anything that wasn't large and actively crossing a border (or more to the point an ADIZ). And that's how about four large intelligence platforms apparently wandered in unnoticed... with UAP reports suggesting that there may be many more.

My suspicion is that the coming year will involve many changes and challenges in the way that we surveil our airspace. I think that we will likely become more restrictive in airspace management, requiring more aircraft than before to have filed flight plans. Otherwise it is very difficult to differentiate a normal but untracked object from an adversarial intelligence asset.

And indications are that adversarial intelligence assets are a very real problem. China's spy balloon program is apparently both long-running and widespread, with similar balloons observed for years in other countries as well. This shouldn't be surprising---after all, reconnaissance balloons are the oldest form of military aviation. The US and allies made enormous use of reconnaissance balloons during the Cold War, sending many thousands into the USSR. It's likely the case that we only really slowed down because our modern reconnaissance balloon projects have all become notorious defense contracting failures. We're still trying, but projects like TARS have run far over budget and still perform poorly in operational contexts.

It might feel like this situation is new, and in terms of press reporting it is. But we should have seen it coming. In an interview following a classified briefing, Senator John Kennedy said that "These objects have been flying over us for years, many years. We've known about those objects for many years."

Robert Bigelow got into UAP research because he was searching for aliens. Maybe aliens are out there, maybe they aren't, but there is one thing we know for sure: our adversaries are out there, and they possess aviation technology at least as advanced as ours. For decades we ignored UFOs as folly, and for decades we ignored potential aviation advancements by our adversaries along with them. Now those advancements are floating across the northern United States and perhaps worse---the DNI is hoping they'll find out, if they can just get people to report what they see.

[1] Radar that operates by detecting reflections or attenuation of an RF field by an object. This is as opposed to secondary radar, more common in air traffic control, that works by "interrogating" a cooperative transponder installed on the aircraft.

--------------------------------------------------------------------------------

>>> 2023-02-14 something up there pt I

Over the last few weeks, there has been an astounding increase in the number of objects shot down by North American air defense. Little is yet known about some of these objects, but it is clearly one of the more dramatic UFO turns in recent memory. Some of the mystery is simply the fog of war, and the time it takes for defense organizations to collect and publicize information. I think that much of it, though, is attributable to a few frustrating factors: the limited familiarity most of the public has with the reality of military operations today; the tendency of the most vocal parts of the public to attempt to fit all events into a preconceived theory (often of the more out-of-this-world kind); and the poor job the media has done of contextualizing these events.

I have written once before about UFOs, and I try not to do it too much for fear of coming off as a crazy person. Still, though, UFOs and their colorful history are one of my greatest interests. Over the last week I have done a lot of yelling at the television and internet comment sections. So here, I am going to attempt something ambitious. I would like to put together for you a possible, even likely, story of the UFO news of the last two years: of AATIP, balloons, and how they all fit together.

Most of what I am about to write is fairly well-established fact, but the way that I connect these facts together is a matter of speculation and opinion. Still, my knowledge of both the history and present of aerial phenomena and the military and intelligence communities, with particular focus on air defense, gives me a set of opinions on this topic that feel extremely obvious to me but are seldom presented in the media or online discussions.

I can't promise I'm correct, but I do hope you'll consider the possibility that the story I will tell here is indeed what has happened: that, far from disclosure, we are currently living out the consequences of a sophisticated adversary, government inefficacy, and one man's eccentric swindle.

And that's where we'll start: with one man.

Robert Bigelow made his wealth in the hospitality business. Budget Suites of America is his marquee brand, but his empire spreads far beyond with a huge hotel and multi-family housing portfolio. Through most of the second half of the 20th century, hotels kept Bigelow busy and made him rich, but by the 1990s he turned towards his true passion: the paranormal.

Most reporting on Bigelow focuses on Bigelow Aerospace (BA). When he's identified as an eccentric, it's usually in regards to BA's research into UFOs. And yet, Bigelow's paranormal investigations began years earlier: in 1995, he founded the National Institute for Discovery Science, or NIDSci. NIDSci's focus was not UFOs but paranormal phenomena more broadly, including parapsychology. Bigelow was joined in this venture by his friend, journalist George Knapp.

Knapp is perhaps best known in paranormal communities for his extensive reporting on the claims of Bob Lazar [1]. In the mid 1990s, Knapp turned his focus towards cattle mutilation and related phenomena, the same field of inquiry that made Linda Moulton Howe's fame. Cattle mutilation has a long history and in the '90s was seen as one of the more credible forms of paranormal activity. Quite a few paranormal researchers chased mutilated cattle like ambulances, but Knapp had a remarkable lead on the topic: Skinwalker Ranch.

Also known as the Sherman Ranch after the brief owners that first shared stories of its haunting, Skinwalker Ranch is a 512-acre property in rural Utah. It takes its common name from a frightening creature of Navajo belief, "yee naaldlooshii." The Diné feel it to be unwise or at least improper to discuss the Skinwalker, and so I will not dwell on it. We can avoid the topic quite easily, as the relation of Skinwalker Ranch to the Skinwalker itself is loose and a result of white settlers rather than anyone who would know better. What we can certainly say about Skinwalker Ranch is this: it is popularly associated with spooky shit.

Summarized briefly, the stories of Skinwalker Ranch encompass just about every paranormal modality you can think of. Crop circles, mutilated cattle, strange lights in the sky, footsteps heard at night, a quiet but disconcerting sound that you cannot escape, bedroom doors locked at night to fend off something that has been scratching at the walls, creatures that are felt rather than seen, bright apparitions like spotlights chasing people on ranch roads, et cetera.

Whether that spooky shit is the consequence of aliens, secret military projects, Bigfoot, ghosts, or otherwise depends largely on who you ask. The legends of Skinwalker Ranch also originate almost entirely with the Shermans who owned it for only two years, which has produced some obvious questions about their veracity. Still, it is one of the most famous sites of paranormal activity and a household name among paranormal enthusiasts [2].

In 1996, Knapp joined with Bigelow and biochemist Colm Kelleher to resolve the mystery of Skinwalker Ranch once and for all, or at least publish a book about it. That year, NIDSci bought the ranch. A small staff of scientists and paranormal enthusiasts was recruited to perform research on the site, and it was otherwise closed to access. It has remained privately owned and guarded since then, perpetuating its paranormal associations.

Bigelow owned Skinwalker Ranch for about twenty years, but serious investigation seems to have only occurred for the first half of that period. In 2005, Knapp and Kelleher published a book, "Hunt for the Skinwalker," presenting their results. The results are, well, minimal. The book is mostly a recounting of the legends told by the Shermans, along with similar encounters during NIDSci's tenure.

In any case, the details of Skinwalker Ranch are not all that important to the story I am telling here. The reason I bring this whole thing up is because of what it tells us about Robert Bigelow. Bigelow is fascinated with paranormal phenomena and has the wealth and connections to bring journalists and scientists into his projects. His projects do not necessarily produce results.

Most of all, remember this: Bigelow has done this before.

George Knapp had another friend of note: the late Harry Reid, a long-serving senator from Nevada. In fact, Knapp and Reid were in conversation on the topic of UFOs the same year that Bigelow bought Skinwalker Ranch. I do not know to what extent Reid was aware of NIDSci's efforts, but I think it must have been at least a bit, as Reid writes in a New York Times editorial that Knapp had invited him to a conference in 1996. In any case, Reid found Knapp credible, and became the principal congressional advocate of serious investigation of UAPs. Reid was quite clear about his interest in UFOs, and while he viewed extraterrestrial origin as only one possibility, he felt it to be a possibility worth investigating.

Here I should discuss terminology. I tend to use the term UFO, or unidentified flying object. The problem with "UFO" is that it is widely understood to refer specifically to phenomena of ostensibly extraterrestrial origin, and it's closely associated with conspiracy theories and loons. In modern government research, the term UAP, for unidentified aerial phenomena, is preferred. This is indeed mostly a matter of optics. I do think the distinction is important, though, as even within the UFO community "UFO" tends to have an alien connotation, and "UAP" is not intended to. The term UAP allows us to be a bit more flexible in our thinking by not assuming the existing body of extraterrestrial-oriented UFO research. From this point on I will prefer the term UAP for consistency with reporting on the topic.

In 1999, Robert Bigelow founded Bigelow Aerospace (BA). The history of BA is confusing in some ways. On the one hand, it seems that Bigelow was genuinely interested in developing aerospace technology, perhaps particularly for the purpose of space tourism... right in line with his history in hospitality. On the other hand, BA was founded right in the middle of the Skinwalker Ranch project, and it's hard to imagine that it wasn't related. BA has held various contracts in space systems development but has never had a very large staff. It is mostly known today for the way that it, too, interacted with Senator Reid: the Advanced Aerospace Threat Identification Program, or AATIP.

AATIP, by Reid's own account, started in 2007. It was a highly secretive program and so the early details are somewhat obscure. The main gist of AATIP was to collect reports of UAPs and then analyze those incidents to develop a possible explanation. Like many military projects, AATIP was contracted out to private industry. Also like many military projects, the AATIP contract was awarded to the same person who had lobbied for the program's creation: Robert Bigelow, through a division of BA called Bigelow Aerospace Advanced Space Studies or BAASS. Reid makes it fairly clear that AATIP started and ended with Robert Bigelow.

Many aspects of AATIP are unknown or questionable. Perhaps most notable is the question of AATIP's leadership. Long-time military intelligence analyst Luis Elizondo claimed, after his 2012 separation from the military, to have been AATIP's director. The Pentagon denies this, and journalists have questioned various aspects of Elizondo's story, but he has a notable supporter: Senator Reid concurs that Elizondo led the program. As a general matter it seems fairly certain that Elizondo was at least a senior leader of AATIP, but the confusion underscores the uncertainty around the history, mission, and outcomes of the DoD's UAP efforts in the late 2010s. One gets the impression that no one is telling the whole story, probably because everyone is trying to make themselves look good.

What we do know about AATIP is that the program ended in 2012, and that BAASS produced a lengthy report on its findings. This report has never been released to the public, but it is thought to be largely similar to more recent reports from the DoD's in-house UAP program, mostly summarizing BA's conclusions after attempting to identify the cause of a large number of individual UAP incidents. Various parties involved in AATIP, from Elizondo to Reid, have made large claims about AATIP having identified possible extraterrestrial technology, but nothing has emerged to substantiate these claims. I find it most likely that they were exaggerations of more commonplace anomalies in AATIP data.

This is where I will diverge somewhat from undisputed history and share my opinion. AATIP demonstrates that at least a few in Congress and likely some individuals in the DoD had a genuine interest in UAP. I believe, though, that most journalists have been entirely too credulous in their reporting on AATIP. While the DoD's and likely Reid's interest in the topic were more out of concern for national security, BAASS had something else in mind. One thing we know about Bigelow is that he is fascinated by the paranormal and can spin very little evidence into a huge story, as he did at Skinwalker Ranch. Moreover, there are clear indications that AATIP did not exactly operate as planned. Besides the general confusion around the exact operating details of AATIP, which suggests that the program operated with very little DoD oversight, I find it likely that AATIP diverged entirely from its original purpose.

AATIP was originally funded as a research program into possible advanced weapons systems possessed by adversaries, but it ended as a research program into extraterrestrial presence on Earth. Multiple journalists report that this change in focus occurred at the behest of Bigelow himself, and the Pentagon's awkward termination of the program in 2012 suggests that it did not occur with DoD approval.

I believe that Bigelow won the AATIP contract more by connections and luck than competence, and that AATIP went "off the rails" essentially from the beginning. Bigelow was hunting for aliens and the powerful Senator Reid shared this intention. Through confidence and political savvy, hanging mostly off of Senator Reid's considerable influence on defense spending, Bigelow was able to separate the Pentagon from some $22 million to fund his personal hobby. While I believe his passion was real and his intent good, AATIP was largely Bigelow's flight of fancy and was not aligned with actual DoD interests in the topic. As senior leadership in the executive branch and Congress became more aware of the situation, AATIP was quietly ended. To support its own interest in adversarial systems, the Pentagon replaced AATIP with an internal program: the UAP Task Force, later reorganized as the All-Domain Anomaly Resolution Office.

The former members of the AATIP did not take to this change well and attempted to pivot their work from government funding to the private sector. These efforts eventually reached wealthy UFO enthusiast Tom DeLonge, of Blink-182 fame. DeLonge had by this point connected with Hal Puthoff. Puthoff is an electrical engineer, former Scientologist, and paranormal researcher long known for his research into psychics and remote viewing. Puthoff worked in these fields at an opportune time: most who are familiar with the concept of remote viewing know of it because of the military's efforts depicted in "The Men Who Stare at Goats." Puthoff was directly involved in these programs as a researcher at Stanford University spinoff and defense contractor SRI, which administered some of the military's psychic research on contract. After these efforts, Puthoff founded EarthTech International, which continues research in parapsychology, cold fusion, and other fields which can be generally categorized as "woo."

DeLonge, Puthoff, and former CIA agent and UFO experiencer Jim Semivan founded an organization called To the Stars Academy of Arts and Sciences (TTSA) in 2017. TTSA was somewhere between a spinoff and new parent organization for a media company called To The Stars that had distributed records and books for Tom DeLonge. Through an odd series of announcements, TTSA basically transformed from DeLonge's private record label to a rough continuation of AATIP, but one that would be publicly funded through the sales of media. While TTSA has made claims to extraterrestrial technology and breakthroughs in UAP research, almost nothing that they've put out has ever made any sense, and unsurprisingly the organization has faded into obscurity. TTSA's ambitions of original UAP research basically disappeared by 2018, and today TTSA is little more than DeLonge's online merch store. Given the questions around Elizondo's history, it's unclear how much TTSA had to do with AATIP in the first place, but it certainly didn't amount to anything.

This whole matter of AATIP and TTSA is sort of a flash in the pan, but it set critical context for events to come. The DoD had invested real money and effort into the question of UAPs. The organization that spent that money, AATIP/BAASS, and its loose successor TTSA, seemed to very openly consider UAP research to be research into extraterrestrial presence and other paranormal phenomena. The media, for the most part, has not differentiated between Bigelow's interests and the Pentagon's interests in this regard. I believe that Bigelow was very much hunting for aliens, but the Pentagon was not... the Pentagon was looking for explanations for UAP, and aliens were probably not high on the list of expected outcomes. It does not help matters that Senator Reid seems to have been more on Bigelow's side of this divide.

The real crux of the contemporary UAP issue is that UAPs returned to public attention due to Bigelow's eccentric goose chase and DeLonge's self-promotion, but Bigelow's DoD contract and Elizondo's military past gave these otherwise incredible stories the imprimatur of government. The media's unquestioning reporting on AATIP and even, to some extent, TTSA gave the impression that these were sophisticated programs endorsed by the government. In fact, they were haphazard efforts by just a few people with long histories in quackery.

AATIP was public knowledge years earlier but became a major news item in 2017 due to DeLonge and Elizondo's promotion of TTSA. Bigelow, DeLonge, Elizondo, and even Senator Reid openly spoke about AATIP's ostensible extraterrestrial research, while the DoD declined to speak about an apparently classified program. In fact, it was not until some time later that it became evident that DoD had continued UAP research at all after 2012, and that research was done under conditions of secrecy as well.

What the public heard is that the Pentagon was hunting for UFOs. How that related to actual DoD interests or programs was irrelevant, because the Pentagon wouldn't talk about it and the media didn't particularly care. The UFOs made headlines. Pentagon UAP reporting procedures and incident databases were boring details.

This particular outcome of the 2017 news cycle, a series of crazed front-page articles that I believe to have been nothing but Bigelow and DeLonge promoting their own business ventures, massively influenced the way UAPs are viewed by the public today. What was really Bigelow's personal lark enabled by his Senate connections became a new MKULTRA but less sinister. No one took it seriously. Well, except for people who thought UAPs were definitely aliens, who took it as seriously as they do Bob Lazar.

What about the Pentagon's side of the story, though? Why was the military interested in UAPs, and why did it continue UAP research (and, it seems, expand it) after Bigelow's involvement ended? I believe that we recently saw the answer floating eastwards across the northern United States.

The thing is, aliens are one of the less likely explanations for UAPs, and to be honest they are one of the less interesting. Most UAPs, it stands to reason, originate here on earth. And that is very much a military concern.

Foo fighters, strange aircraft reported by military pilots, are just about as old as military aviation. The term "foo fighter" comes from WWII, and indeed WWII was lousy with strange aerial encounters. It has always been assumed that the vast majority of foo fighters were mistaken perceptions, but they have always been of interest to military intelligence because of the possibility that they were simply misidentified enemy aircraft. From this perspective the strange, otherworldly behavior of foo fighters is all the more interesting: they might represent enemy aircraft of a novel kind.

The mass publicity around UAPs in 2017 spurred a great deal of public interest, which resulted in some media reporting on UAP incidents as they happened. The Drive's Tyler Rogoway has perhaps become today's Linda Moulton Howe but more credible, as he has repeatedly written some of the most detailed analysis of UAP incidents. Put together, Rogoway's articles on UAPs from 2017 to the present don't come together into any particular narrative except for the broad one of challenges to airspace sovereignty.

Airspace sovereignty is a general term used to describe a state's control of its airspace. The United States exercises air sovereignty through the civilian operations of the FAA and the military operations of NORAD, a joint US-Canadian command that shares the FAA's radar network to observe for Soviet bombers and other aerial threats. Obviously Soviet bombers are no longer a great concern, but the technical and bureaucratic infrastructure of NORAD are still mostly organized around that threat.

The FAA-Air Force Joint Surveillance System consists of radar instruments that are about 30 years old at the newest, with some equipment dating back to the '60s still in use. It is a common misconception that the FAA, NORAD, or someone has complete information on aircraft in the skies. In reality, this is far from true. Primary radar is inherently limited in range and sensitivity, and the JSS is a compromise aimed mostly at providing safety of commercial air routes and surveillance off the coasts. Air traffic control and air defense radar is blind to small aircraft in many areas and even large aircraft in some portions of the US and Canada, and that's without any consideration of low-radar-profile or "stealth" technology. With limited exceptions such as the Air Defense Identification Zones off the coasts and the Washington DC region, neither NORAD nor the FAA expect to be able to identify aircraft in the air. Aircraft operating under visual flight rules routinely do so without filing any type of flight plan, and air traffic controllers outside of airport approach areas ignore these radar contacts unless asked to do otherwise.

The idea I am trying to convey is that airspace sovereignty is a tricky problem. The US and Canada are very large countries and so the airspace over them is very large as well. Surveilling that airspace is expensive and complex. Since the decline of the Cold War there has been no interest in spending the money that would be required for complete airspace awareness, and indeed the ability of the FAA and military to field airspace surveillance technology seems to have declined over recent decades rather than increased. We don't really know what's out there all the time, and it seems very possible that a determined adversary might be able to sneak in and out of US airspace largely undetected.

There are incidents and accidents, hints and allegations, that suggest that this concern is not merely theoretical. In late 2017, air traffic controllers tracked an object on radar in northern California and southern Oregon. Multiple commercial air crews, asked to keep an eye out, saw the object and described it as, well, an airplane. It was flying at a speed and altitude consistent with a jetliner and made no strange maneuvers. It was really all very ordinary except that no one had any idea who or what it was. The inability to identify this airplane spooked air traffic controllers who engaged the military. Eventually fighter jets were dispatched from Portland, but by the time they were in the air controllers had lost radar contact with the object. The fighter pilots made an effort to locate the object, but unsurprisingly considering the limited range of the target acquisition radar onboard fighters, they were unsuccessful. One interpretation of this event is that everyone involved was either crazy or mistaken. Perhaps it had been swamp gas all along. Another interpretation is that someone flew a good sized jet aircraft into, over, and out of the United States without being identified or intercepted. Reporting around the incident suggests that the military both took it seriously and does not want to talk about it.

This incident is not unique. Over the last few years there have been multiple instances of commercial aircrews reporting unidentified aircraft, which were sometimes fantastical and sometimes quite mundane. Fewer incidents of radar contact with unknown aircraft are known, but these are less likely to make it to the press. Moreover, air traffic controllers with the FAA and, apparently, military air defense controllers both have a tendency to filter their radar scopes to hide objects that are not "of interest." Several aviation accidents in the last five years have resulted in investigations that found that radar did detect concerns such as flocks of birds but those contacts were not displayed due to the configuration of the radar scope. This suggests that controllers may have been willfully ignorant of some oddities, unsurprisingly since they are focused primarily on the aircraft with which they have contact.

All of this sounds a little bit wild, and a little bit unbelievable, right? That's one of the biggest problems that DoD seems to grapple with. As long as military aviators have been seeing strange things, they have been laughed at for it. Skeptical reactions are not at all undeserved, but the DoD has communicated that a major motivation of current UAP efforts is to encourage people to report strange things in the sky, instead of staying quiet for fear of sounding crazy.

To be clear, the vast majority of these incidents are almost certainly mistakes of some kind. Perceptual effects can make stars appear to move strangely, atmospheric phenomena can appear as solid objects, and sometimes you just get disoriented and something very ordinary looks very strange. But there is a matter of baby and bath water. Even though the majority of UAP sightings amount to nothing, it is possible, even likely, that a few of them were sightings of real objects. Real objects which were not tracked by air traffic controllers or air defense. Real objects which represent a challenge to airspace sovereignty.

And that brings us up to a few weeks ago: there was evidence, scant evidence but still evidence, that unidentified objects were operating in US airspace. Troublingly, these objects were sometimes reported close to military installations, and even dwelling near them for extended periods of time. The DoD, I believe, was deeply concerned that at least some of these reports might be indications that an adversary was successfully placing aerial surveillance equipment over the United States undetected. And that's why the Pentagon has spent years encouraging military personnel to report UAP sightings, and analyzing those reports for plausible explanations: not because they might be aliens, but because they might be the enemy.

And then, something happened with a balloon. What's up with that?? We'll talk about it next time, in part II.

[1] I will not expand on the story of Bob Lazar here, but for those not familiar it is useful to know that Lazar's stories of secret underground alien bases and military collaboration with aliens are both completely discredited and extremely influential on modern UFO thought.

[2] Here I will caution you that the horror film "Skinwalker Ranch" is both almost entirely unrelated to the real story (or even doubtful claims) about the place and, well, bad.

--------------------------------------------------------------------------------

>>> 2023-02-13 my homelab

I have always found the term "homelab" a little confusing. It's a bit like the residential version of "on-premises cloud," in that it seems to presuppose that a lab is the normal place that you find computer equipment. Of course I get that "homelab" is usually used by those who take pride in the careful workmanship of their home installation, and I am not one of those people.

Welcome to Computers Are Bad - in color.

Note: if you get this by email, the images may or may not work right. We're going to find out together! I don't plan to make a habit of including images and they don't look that good anyway, so I'm not too worried about it.

[image: closet rack]

They say that necessity is the mother of invention, but I think often mere desire will suffice, and I am sort of particular about how I want things to work. Perhaps the bigger problem is that I started my career in technology in a way that was both mundane and hands-on: in high school I found a poorly paying job as a sort of technical jack-of-all-trades for a local managed service provider (MSP). The term MSP is not even that familiar to many in the technology industry today. This was the kind of company that would set up and maintain Microsoft Active Directory for businesses that were big enough to have ten computers but not big enough to have an IT department. The owner, though, was a wheeler-dealer if I ever knew one, and generally jumped into whatever line of business he thought would make some money.

I was hired ostensibly as a computer technician, repairing laptops as a Lenovo contract warranty service center. Then I was repairing photocopiers, then I was selling them. Not long after I was running common-spaces WiFi for a fairly large office tower (the World Trade Center... of Portland, Oregon). Along with some video surveillance installation, I developed the kind of addiction that doesn't pay well enough to be a career unless you are smart enough to go to trade school instead of a university: cabling.

And I think that's how I became the person I am today: I want computer networks to operate in as straightforward and tangible a fashion as they did in 2009. And I want a lot of cabling.

I don't have a large house, and I do have a lot of stuff. Most equipment is crammed into a 14U wall-mount rack in the upper part of the office closet. Two sets of fan grilles, in a push-pull arrangement, ventilate the top of the closet and as a bonus circulate air from the office to the laundry room. Closet shelving stands in for things that are not amenable to rackmounting, such as my "breadbox" form factor AT&T Merlin model 206 KSU. This small-business telephone system dates back to around 1985 but still operates well after a repair to the power supply. It supports 6 extensions (conveniently connected by 8P8C cabling, ethernet-compatible) and 2 outside lines, which are provided by an ATA connected to the Asterisk server I run "in the cloud." It is one of two phone systems in the house, the other being all IP.

I installed the Merlin instead of the significantly more capable, late-'90s vintage Comdial PABX I have (with voicemail!) because it is incredibly fashionable and because I love the simple logic of key systems. I do also love the Comdial for how over-the-top complicated its hybrid PABX/key system design is, complete with text messaging, but it just doesn't have the charm of a system where phones were offered in a color called Cinnabar. Unfortunately I don't have any phones in Cinnabar; they've proven very hard to find on the second-hand market.

Also on the shelf, due to lack of motivation to mount it more neatly, is a PiStar/MMDVM hotspot. While it is configured for DMR (I sometimes monitor the Southwest and New Mexico Brandmeister groups, AE5JL), I use it mostly as a POCSAG pager transmitter. A simple daemon I wrote bridges messages from MQTT to the MMDVM remote control interface, notifying me of various events like violation of the IR optical fence across the end of the driveway via the finest communications technology of the '80s: a beeper. I have started acquiring hardware to replace it with a 35 watt transmitter which will properly introduce DAPNET amateur paging to Albuquerque, but I only have so much free time and money.
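
The bridge itself is only a handful of lines. This is a sketch of the shape of such a daemon, not the actual one: the remote-control endpoint and "page" command syntax below are placeholders rather than MMDVMHost's documented interface, and the broker side assumes the paho-mqtt library.

    # Sketch of an MQTT-to-pager bridge (not the actual daemon). The UDP
    # endpoint and "page" command format are placeholders, not MMDVMHost's
    # documented interface. Uses the paho-mqtt 1.x client API.
    import socket
    import paho.mqtt.client as mqtt

    MMDVM_ADDR = ("127.0.0.1", 7642)  # placeholder remote-control endpoint
    PAGER_RIC = 1234567               # placeholder pager address (RIC)

    def on_message(client, userdata, msg):
        text = msg.payload.decode("utf-8", errors="replace")
        command = f"page {PAGER_RIC} {text}"  # placeholder command format
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(command.encode(), MMDVM_ADDR)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost")
    client.subscribe("home/events/#")  # e.g. driveway optical fence trips
    client.loop_forever()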

[image: closet rack]

I take great pride in my work, but no one pays me for this, so I try not to consider it work. About once a year I make a sincere effort to tidy the patch cables but it never lasts.

An Arris cable modem is where The Cloud arrives in my home. I am fortunate enough to have slightly faster than gigabit internet service, although I haven't bothered to set up link aggregation so it is de facto 1gbps. It's okay, the router doesn't really make 1gbps in some scenarios due to PPS performance limitations anyway. I am unfortunate enough to obtain that internet from Comcast, which means that it is expensive and the upstream only hits 45mbps on a good day. My favorite feature of this Arris modem is that no matter how many times I reset the password for the management interface I can never get back into it later. I'm pretty sure this is my fault, but cable modems are loathsome so I'll blame it on Arris anyway. The city recently completed a franchise agreement with an FTTH provider out of Texas and it is possible I will be able to get service from them inside of the next six months. Given the history of new ISPs in this area I am not holding my breath.

Because of my strident objection to Comcast's existence, for about the first six months after I bought this house I obtained my internet connection only via LTE, using a used Cradlepoint and roof-mounted diversity antennas. The performance was actually quite good at night, but it was very poor during the day. I live very close to downtown and so I assume this was determined mostly by the occupancy of the office towers. The bigger problem was that the tiny MVNO I used, on a grandfathered contract with AT&T that had exceptionally good terms, was also one person with a FedEx Office mailing address, and not very good at subscription management. Every couple of months the internet would stop working and I would have to call them to nag them to update the expiration date on my service plan in their provisioning system, which was of course not at all integrated with their billing system.

From the modem, bits flow downstream to a PC Engines APU4D4 SBC running Opnsense. This is one of two APU4D4s that sit side-by-side in a very tidy 1U enclosure I imported from France at a completely exorbitant price. Why I spent something like EUR 150 on getting this nicely silk-screened front panel for the APUs only to Tetris most of the rest of the equipment onto a rack shelf is a mystery to me as well.

I am mostly pretty happy with Opnsense except for all of the ways I hate it. It replaced a Unifi Security Gateway which replaced an old Sonicwall, so I figure I am at least moving upwards in usability. My favorite thing about Opnsense is that it brings me the warm comfort of using BSD. My least favorite thing about it is how many clicks it takes to get to the DHCP lease table, which I am constantly looking at because I do not keep the internal DNS records up to date at all.

The core switch is a TP-Link 24-port PoE switch. It's Omada-manageable, along with a couple of other TP-Link switches elsewhere in the house, and I figure I will eventually buy into Omada when I get tired of mapping VLANs by hand. This switch does have fans but is very quiet, an impressive feat in a PoE switch. I am only using around 50W of its 250W capacity; if I ever go for that PoE++ troffer lighting I like to window shop for, it might end up a whole lot louder. Currently the PoE load is mostly the result of infrared illuminators in exterior surveillance cameras. The SFP cages will be much appreciated when I finally lose my mind and run fiber to the shed.

Next to the router, the second APU4D4 runs Pihole, Home Assistant, and Plex Media Server in Docker containers. I run Plex in a docker container because they only build it for ARM as a Debian package, and I'm a Red Hat person. Well, Red Hat in the streets, Fedora, erm, at home. It's also a Tailscale subnet router, although I haven't really bought into Tailscale that much yet and still have a lot of manually-configured Wireguard tunnels.

Home Assistant is perhaps the most complicated thing here. I am not as bought into Home Assistant as I maybe should be, and so I make extensive use of various homegrown services that speak MQTT. I have, at times, been tempted to improve performance and "simplify" (for select definitions of "simplify") by writing my own simple logic engine to implement automations, but I'd probably just end up creating a bad version of Home Assistant with fewer features. A chintzy USB Z-Wave stick is a major bridge to the Real World, and I am particularly fond of the Zooz multi-relays as a practical way to handle various physical inputs and outputs. Most lighting, though, is controlled by a Philips Hue hub tidily slapped on the side of the rack, aside from a few Z-Wave wall dimmers for integral LED fixtures.

My latest home automation achievement is something I call "Giant Voice" after the historic Altec outdoor address system once popular on military bases. It receives simple commands via MQTT and plays back audio clips and speech synthesis via Microsoft Azure Cognitive Services Speech (a Microsoft product name if I have ever seen one). So it's sort of like a doorbell, and basically functions as one, except it plays clips of Star Trek computer beeps and announces which part of my small lot a visitor has intruded on. It's not at all reliable because, for reasons of being built out of things I had on hand, it's running on a Pi Zero W connected to a cheap Bluetooth speaker. Trying to keep a reliable connection to a Bluetooth audio sink on Linux without X running may actually be impossible.
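
Sketched out, the whole contraption is about a dozen lines. This assumes the paho-mqtt and azure-cognitiveservices-speech packages; the key, region, and topic here are placeholders, and a real version plays the beep clip before speaking:

    # "Giant Voice" sketch: MQTT command in, synthesized speech out.
    # Assumes paho-mqtt and azure-cognitiveservices-speech; the key,
    # region, and topic are placeholders.
    import paho.mqtt.client as mqtt
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="KEY", region="westus2")
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

    def on_message(client, userdata, msg):
        # Payload is the announcement text; the real version plays a
        # Star Trek beep clip first, then speaks.
        synthesizer.speak_text_async(msg.payload.decode()).get()

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost")
    client.subscribe("giantvoice/announce")
    client.loop_forever()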

Pihole forms part of a split-horizon DNS arrangement on the top-level domain I use, which is such a nice name I made it available on FreeDNS where it is used by a dozen poorly run Minecraft servers. This introduces an interesting set of DNS hijacking and misconfiguration hazards, which I find aesthetically pleasing. Systemd-resolved machines, for example, are prone to acting up due to resolved's well-known oddities around split-horizon systems. Of course, in all truth I completely agree with Poettering that split-horizon DNS is sin, but why live if we can't sin a little?
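
If you want to watch the hazard in action, it's easy to ask the internal and public resolvers the same question and compare answers. A quick check with dnspython, using placeholder resolver addresses and a placeholder name:

    # Ask the internal and public resolvers the same question and
    # compare; the addresses and the name are placeholders (dnspython).
    import dns.resolver

    def answers(nameserver, name):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [nameserver]
        try:
            return {rr.to_text() for rr in r.resolve(name, "A")}
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return set()

    NAME = "nas.example.com"
    inside = answers("192.168.1.2", NAME)   # the Pihole
    outside = answers("1.1.1.1", NAME)      # a public resolver
    if inside != outside:
        print("split horizon:", inside, "inside vs.", outside, "outside")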

On a rack shelf below is a 5-bay NAS made by a company called Kobol that doesn't exist any more. I like it because it's a simple arrangement of an ARM SBC (running Fedora of course) with a lot of SATA controllers, and yet they made an unreasonably nice aluminum enclosure for it. I use btrfs because every time I use ZFS I end up having to tune it, and for how much I appreciate the inanities of computers, tuning ZFS is actually somewhere near dental surgery in my list of favorite activities. I follow btrfs development just closely enough to figure that there is about a 10% chance of massive data loss, which is why I back the entire thing up to a cloud provider. What I really want is to back it up to LTO tape, just for appearance, but LTO drives stay expensive until they're several generations old and I have a hard time getting excited about LTO7 when I know that LTO9 exists.

One day the NAS will probably die or I will get annoyed with how slow it is CPU-wise, but I really don't know what I'll do to replace it. Maybe the NVR is an omen of things to come.

And right, the NVR, or network video recorder, which records the surveillance cameras. It's a small-form-factor Dell workstation I bought used off a friend to replace a failed NUC. Neither the NUC nor the Dell has reasonable internal storage capacity (on account of their small size), so it has most of its storage in a Startech 2-bay USB 3.0 enclosure that I am surprisingly in love with. It's fast and reliable, and has no-fuss RAID0/1 in hardware. It even comes apart to install the drives in a pleasing way. It has 8TB of storage, which is enough for around a month of history. I do have 2TB of SSD storage in the NVR which is used for live recording, so that a less performance-sensitive batch job can move older recordings to the slow platter drives in the enclosure.
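
The batch job amounts to a nightly sweep of anything old enough off the SSD. Conceptually, with the paths, extension, and age threshold all as placeholders:

    # Conceptual version of the tiering job: sweep recordings older
    # than a day from the SSD to the USB enclosure. Paths, extension,
    # and threshold are placeholders.
    import shutil
    import time
    from pathlib import Path

    SSD = Path("D:/recordings/live")       # fast live-recording volume
    HDD = Path("E:/recordings/archive")    # slow platter storage
    CUTOFF = time.time() - 24 * 3600       # anything over a day old

    for clip in SSD.glob("*.mp4"):
        if clip.stat().st_mtime < CUTOFF:
            shutil.move(str(clip), str(HDD / clip.name))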

When it comes to software, the NVR runs a commercial package called Blue Iris on Windows. I am not particularly interested in defending this choice, other than to explain that I have been using Blue Iris for years. Well, I will be a little argumentative. Open-source NVR packages suck. All of them are just incredibly bad. For some reason all of the replacements for Zoneminder either almost single-mindedly target Raspberry Pis with barely the performance for a single UHD camera, or are nodejs monstrosities. Most are both. If you get cameras on the cheap and sometimes from surplus auctions like I do, you need support for a lot of video and PTZ protocols, and Blue Iris is mature enough to have out-of-the-box support for every bit of hardware I've come up with. It has both a reasonably good web interface and the ability to run the full desktop console remotely. Although it's not open source, it has simple but functional HTTP and MQTT APIs that have made it easy to integrate with my broader tangled mess, and CodeProject AI server support for object classification to boot. It definitely seems like there should be a suitable open-source replacement at this point but I just haven't found one. Maybe growing up on Milestone VMS just ruined my taste the way growing up on Perl did.

Jammed below the NVR and next to its drive enclosure is a NUC. This is the warranty replacement for the one that failed. There's a whole story here: I wasn't expecting to get a warranty replacement, but then it showed up in the mail. I hooked it up so that I can WoL it when needed to run longer, more performance-intensive tasks like video encoding that I don't want to have to keep my laptop plugged in for. In this regard it replaces my old laptop, which used to be shoved into the rack with its screen always on for some reason.
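
WoL is pleasantly simple, by the way: the magic packet is just six 0xFF bytes followed by the target MAC repeated sixteen times, thrown at the broadcast address. A sketch with a placeholder MAC:

    # Wake-on-LAN: the magic packet is six 0xff bytes followed by the
    # target MAC sixteen times, sent to the broadcast address.
    import socket

    def wake(mac):
        payload = bytes.fromhex(mac.replace(":", ""))
        packet = b"\xff" * 6 + payload * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, ("255.255.255.255", 9))

    wake("00:11:22:33:44:55")  # placeholder MAC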

Also sharing the lower rack shelf is an HDHomeRun TV tuner cabled to a nice active antenna on the roof. Would you believe that I can get some 60 channels of infomercials and televangelism, completely free? My favorite part is just how heavily compressed it all is, now that DTV broadcasters realized they can cram something like eight SD channels onto one carrier. There's also a Davis WeatherLink back there somewhere; it's sort of an IP gateway for the Davis Vantage weather instruments also mounted on the roof. A small service I wrote on the Home Assistant machine loads data from it into Prometheus for use elsewhere. A second, separate wireless weather instrument system elsewhere in the house goes into Prometheus as well. That one is by Ecowitt and it's just for temperature and soil moisture sensors in the small heated greenhouse (Home Assistant controls the heater and irrigation via Z-Wave).
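
The Prometheus glue is only a few lines with prometheus_client; here fetch_conditions() is a hypothetical stand-in for polling the gateway, since the actual protocol depends on which Davis hardware you have:

    # Weather-to-Prometheus shim sketch using prometheus_client.
    # fetch_conditions() is a hypothetical stand-in for polling the
    # gateway; the real protocol depends on the Davis hardware.
    import time
    from prometheus_client import Gauge, start_http_server

    temp_f = Gauge("weather_temp_f", "Outdoor temperature, deg F")
    wind_mph = Gauge("weather_wind_mph", "Wind speed, mph")

    def fetch_conditions():
        # Placeholder: poll the WeatherLink gateway here.
        return {"temp_f": 72.0, "wind_mph": 5.0}

    start_http_server(9100)  # endpoint for Prometheus to scrape
    while True:
        c = fetch_conditions()
        temp_f.set(c["temp_f"])
        wind_mph.set(c["wind_mph"])
        time.sleep(60)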

At the bottom of the rack is a not-great-but-okay Cyberpower UPS. I have a slight bias against Cyberpower because another of their products I own has twice taken down the computer plugged into it due to what seemed to be a software bug that could only be resolved by leaving it unplugged for long enough for the battery to die... a long time, since it stops producing output in that state. Admittedly it's done this twice in about five years, and that issue hasn't stopped me from buying a new battery for it occasionally. This rackmount one doesn't seem to have that problem, or at least hasn't so far, but it's really just the cheapest rackmount UPS I could find with readily replaceable batteries.

On the left side, a Ubiquiti UAP-AC-Lite. This thing, along with its compatriot in the living room, is showing its age. The problem is that I have been holding out for TP-Link to release their Omada-managed WiFi 6E AP in the US, which keeps getting pushed back. I own three of these total, and one of my favorite things about them is that one of the three is an older hardware revision that only supports 24V passive PoE, while the other two support 802.3af. Guess how good I am at not mixing them up.

To facilitate all this junk, I have installed a power outlet in the closet and ethernet runs from various parts of the house and exterior. Most of the ethernet runs land at the patch panel at the top, but not all of them for reasons of laziness.

Most ethernet is run through the attic, although the extremely low overhead in the attic (due to a very shallowly pitched roof) makes many areas difficult to access. For this reason I own my friend, Mr. Longarm, a 35' telescopic fiberglass pole. I have found that a great many practical problems in cabling can be solved with the use of a long enough pole. Fiberglass pushrods and a magnet fishing set are invaluable. In some cases I have had to open sections of wall, but I try to avoid it because drywall repair becomes tedious. An inventory of "installer bits," semi-flexible drill bits several feet long, can minimize the need for opening drywall, but they come with hazards when used blind. Sometimes you can strike a happy medium by drilling small pilot holes into each stud bay, inspecting with a borescope to locate electrical wiring and whatever else, and then driving an installer bit through several stud bays at a time. The exploratory holes are fairly quick to repair and paint.

Some aspects of my home technical infrastructure are more whimsical, or perhaps more directly reflect my personal neuroses. I have always been tremendously frustrated at the lack of time synchronization in modern clocks considering the several different technical approaches available. I run an NTP server on one of the APU4s and all of the wall clocks in the house synchronize to it. For the most part these are used/surplus clocks from Primex's now discontinued SNS series, which used to be easy to get in both battery-powered analog and mains-powered LED versions. The supply of these seems to be drying up, but the Primex OneVue series is also NTP-over-WiFi capable. Unfortunately I'm less confident that the OneVue clocks can be configured to use a local NTP server without the Primex enterprise management system, which makes them less appealing for small systems.

clock

Personally I prefer the LED versions for their over-the-top size, although unfortunately the six-digit (seconds-indicating) version seems hard to get in the larger 6" digit height option. This one, a 2.5" model in the bedroom, has had a couple of layers of neutral gray theater gel added to the lens since the lowest brightness setting will still illuminate a room in red.

I have a similar bent when it comes to "smart home" control. I find the industry's focus on phone apps and voice controls infuriating. It's nearly always faster and more convenient to press a button, but the industry as a whole has apparently deemed buttons to be too expensive. Architectural lighting controls used to universally offer "scene controllers," panels with a few buttons that each select a scene, but these are oddly hard to find in the modern home automation market. I make my own.

buttons

This is a programmable keypad scanned by a little Python program running on a cheap SBC with WiFi. Right now it actually hits the Hue controller API directly, but I have been planning for months to re-implement it to send MQTT messages instead. The most obvious (and probably best) choice for a keypad would be X-Keys, but this Genovation ControlPad is popular in warehouse and picker automation so there's a good supply of used ones on eBay. The major disadvantage to Genovation is uglier programming software and no backlighting (the X-Keys models have individually-addressable two color backlighting). I'd highly recommend everyone try these out and help bring physical buttons back to the industry. You could even make it look a lot nicer if you put in even slightly more effort than I did.
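
The whole program is essentially a keycode-to-scene lookup table. A sketch of the MQTT version I keep meaning to write, using evdev and paho-mqtt, with the device path, keycodes, topic, and scene names as placeholders:

    # Keypad-to-MQTT scene controller sketch (evdev + paho-mqtt).
    # Device path, keycodes, topic, and scene names are placeholders.
    import paho.mqtt.client as mqtt
    from evdev import InputDevice, ecodes

    SCENES = {
        ecodes.KEY_1: "evening",
        ecodes.KEY_2: "movie",
        ecodes.KEY_3: "all_off",
    }

    client = mqtt.Client()
    client.connect("localhost")
    client.loop_start()

    pad = InputDevice("/dev/input/event3")  # the keypad
    for event in pad.read_loop():
        # Key-down events only (value 1 is a press).
        if event.type == ecodes.EV_KEY and event.value == 1:
            scene = SCENES.get(event.code)
            if scene:
                client.publish("lighting/scene", scene)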

And I think that's the grand tour. I'm not sure that I would say that I am completely proud of any of this because it is all so cobbled together and I change things frequently, but that's kind of why I wanted to respond to the genre of "my homelab" or "my home network" posts. I always sort of cringe at these because the focus on aesthetics, with modified Ikea furniture or whatever, is going to make modification down the road much more difficult. There is a big advantage to the 19" rack as a form factor, and wall-mount units are easy to come by. If you're especially space-constrained you might even consider a swing-down vertical one. Whatever you do, just make sure you run a lot of cables. Cables everywhere!

--------------------------------------------------------------------------------

>>> 2023-02-07 secret government telephone numbers

Very nearly a year ago, I wrote a popular article about secret military telephone buttons. To be clear, the "secret" here was a joke and these buttons are in fact well documented. The buttons I was talking about were the AUTOVON call precedence buttons, used for a five-level prioritization scheme within the AUTOVON military telephone network. The labels on these buttons, FO, F, I, P, for Flash Override, Flash, Immediate, and Priority, directly reflected the nuclear C2 scheme at the time. The AUTOVON telephone network is long retired, but military telephone systems continue to provide a call precedence scheme today, admittedly usually without dedicated buttons.

Well, the idea of call precedence without the priority buttons leads naturally to a followup that I promised: government and defense call prioritization schemes on the general, civilian telephone network. It has long been recognized that in the event of a national emergency, many people involved in the response would not have access to a dedicated government telephone network. This is particularly true when you view civil defense as a wider remit, beyond just military reprisal. Recovery from a disaster of any type will involve federal, state, and local government leaders, as well as staff of response organizations like critical utilities, hospitals, and disaster relief organizations. Not all of these people can realistically be furnished with a phone on a dedicated network. The only way to practically ensure prioritized communications in a disaster is to provide that capability as a feature of the public switched telephone network.

It is perhaps useful to provide a little bit of background on the need for call prioritization. In the landline phone network, the major limitation on capacity is long-distance trunks. Particularly in the era before the TDM digital telephone network, long-distance trunk capacity was relatively expensive. In the 1950s, a small city might have only one or two dozen trunks for outbound long distance calls. This means that only one or two dozen simultaneous calls outside of the city could be connected. By the mid-1960s, rapid expansion of the Long Lines microwave network had dramatically increased long-distance trunk capacity (in part through added routing flexibility), so it was no longer necessary to request most long-distance calls in advance. Still, into the '90s it was not unusual for long-distance calls during peak periods like Christmas to result in an "all trunks are busy" intercept message.

The need for some sort of emergency prioritization of traffic on the landline network has long been known, but solutions have been uneven, at least in the US. In the UK, where telephone service was a state enterprise until 1984 and the first and second World Wars created a more urgent need, some type of basic telephone prioritization scheme has long been in place. Through WWII and the Cold War it was a simple one: in cases of emergency, all local loops not flagged as being required for emergency service would be disconnected. This had the dual benefit of freeing capacity for government traffic and denying an invading enemy the use of the telephone network. A somewhat more sophisticated version of the same idea, called the Government Telephone Preference Scheme, remained in service until 2017.

In the US, though, there was little effort towards such a system during WWII and the Cold War. It's hard to know what exactly to make of this situation. First, the situation has always been a little different in the US than in the UK: during WWII, Great Britain was the target of ongoing bombing campaigns. Excepting mostly minor incidents (such as the stray bombing of Naco, Arizona, during Mexico's Escobar Rebellion), the US has not suffered a military assault on its homeland since the Civil War. During the Cold War era, an attack on the US was assumed to be a nuclear one, and most likely an all-out nuclear campaign. The end result is that civil defense in the UK tended to be viewed as a practical system oriented around sustaining operations during an attack, while civil defense in the US was viewed as more of a theoretical exercise in preparation for reconstruction. This is all just background to explain that part of the reason the US did not have a telephone prioritization scheme through most of the Cold War is simply because the US is historically pretty bad at deploying civil defense infrastructure.

But that's not the whole story, there are other, more positive, reasons as well. For one, since WWII AT&T had operated a dedicated telephone network for military use (known as AUTOVON for most of the Cold War) and it was relatively large in scope and well-hardened against attack. For the most part, the assumption was that all emergency traffic would be on AUTOVON, not the PSTN. Another factor is the government's close relationship with AT&T. Despite AT&T being a private entity, it was exceptionally close to the federal government and provided many government services in secret. It is possible, even likely, that AT&T had arranged for some sort of emergency call prioritization scheme that wasn't discussed in public.

The cellular network faces similar problems but at a different point. In the landline network, it is possible but rare for a local exchange switch to become overwhelmed by the number of phones off-hook. Conventional telephone switches use a relatively low "oversubscription" ratio from local loops to call-processing elements (the nature of which depends on the switch architecture) and are often designed so that most components of the switch are only in use during the setup stage of the call, and are disconnected once the call is established and made available to process additional calls. This means that overwhelming a telephone exchange to the point that it can no longer offer dial tone and accept dialing from telephones requires picking up a significant number of the connected telephones at the same time. In practice, the long distance trunks were virtually always the bottleneck. The cellular network is not so lucky.

Before data became the major driver of cellular network architecture, cellular carriers used a much higher oversubscription ratio. Cell phones spend only a very small portion of the time connected to calls, less than landlines early on due to the higher pricing, and so cellular base stations were designed to track associations with far more phones than they could actually handle call traffic with. In an emergency that affected even a small area, it was very easy for enough people to make cellphone calls that the tower hit its capacity limit and began rejecting additional calls. Since the limitation was often in the actual bandwidth on the radio side, this limit applied to all simultaneous calls, not just calls in setup. Some freeway accidents were dramatic enough emergencies to cause cellular calls to fail.

This problem, in the case of civil emergencies, is not at all far-fetched. We will refer to 9/11 repeatedly because it is the main origin of modern emergency telecommunications programs. On 9/11, Verizon reported a 1.5x to 2x increase in national call volume. Cingular, a 20% increase. Nextel, a carrier that I will write a whole article on eventually because of the interesting technology they employed, saw an increase of several hundred percent in the hours immediately following the first attack. These statistics, though, reported at a national level, understate the impact. There was a problem nationwide, but there was a far greater problem in New York City, where local cellular base stations became so overwhelmed that the call completion rate in the Manhattan area is thought to have dropped to a few percent for some time. Normal cellular service to Manhattan was not restored for several days.

And during that same time period, basically the entire government and commercial disaster response capability was attempting to communicate plans... via cell phone. 9/11 is currently the most memorable event demonstrating the need for a call prioritization system, but it is only one in a string of incidents both before and after where emergency response was significantly impeded by capacity limitations in the PSTN.

As you would expect from a federal emergency preparedness initiative, the government set out to address the problem of limited telephone network capacity through a confusing maze of several separate programs, administered by separate organizations, and accessed by separate means. Without a clear place to start, I will choose the best-known of these services: GETS, the Government Emergency Telecommunications Service.

The history of GETS is surprisingly obscure, perhaps because it has been tossed like a hot potato between a good half dozen executive branch agencies over the last few decades. Here's what I have put together from various archives and a helpful former AT&T source: the story of GETS starts with the National Communications System, or NCS, established by Kennedy in response to the Cuban Missile Crisis. NCS had an objective of ensuring reliable communications for the civilian government during nuclear conflict, and seems to have started down the path of developing a civilian version of AUTOVON. This would be essentially an independent, parallel telephone network employing dedicated resources engineered for survivability.

This program, called the National Emergency Telecommunications System or NETS at least late in its life, faltered mostly due to cost. AUTOVON had been a tremendously expensive project that went through mostly because of the military's tremendous budget. To save money, NETS was re-imagined as an "overlay" network built on top of the PSTN. Conveniently, this later version of the proposed NETS is detailed in a 1987 National Research Council report evaluating the plan on the request of NCS.

The 20,000 federal government users of NETS would each be issued an Access Security Device, which would use some sort of digital signaling to authenticate itself to a device called a Call Controller integrated into (or attached as a peripheral to) a telephone switch. The NETS Call Controller would then communicate with other call controllers to attempt to establish a route to the dialed number. This is the real magic of NETS: the telephone network at the time, organized largely around hierarchical long-distance routing, had fixed, pre-planned routes in place between tandem switches. If the configured route to a destination was damaged, the call would fail---there was no self-healing in the sense we expect from modern networks. NETS implemented self-healing as a feature of the CC, which would initially attempt to establish the call via a default route, check for the ability to communicate with a CC at the other end, and give up and try a different route if the remote CC didn't respond. This way NETS CCs should be able to "discover" a working route even with appreciable damage to the telephone network.
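
To make the idea concrete, the CC behavior was essentially ordered fallback. A conceptual sketch, with all of the names invented and no relation to the actual signaling:

    # Conceptual sketch of NETS call controller route fallback.
    # Everything here is invented for illustration; it has no relation
    # to the actual signaling.
    FAILED_TRUNKS = {"default-east"}  # pretend this route was destroyed

    def remote_cc_responds(route):
        # Stand-in for the CC-to-CC handshake attempted via this route.
        return route not in FAILED_TRUNKS

    def establish_call(routes):
        # Try the pre-planned default first, then alternates in order.
        for route in routes:
            if remote_cc_responds(route):
                return route
        raise RuntimeError("no surviving route to destination")

    print(establish_call(["default-east", "alternate-south"]))
    # -> alternate-south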

NETS, according to the National Research Council report, had numerous shortcomings. For one, the proposal dealt almost purely with federal employees, even though it was clear from NCS's objectives that many non-government users would need access in an emergency (hospitals and infrastructure operators, for example). This was an especially large problem because of the seemingly over-paranoid design of the "access security device," a piece of hardware which would apparently have to be stockpiled in various federal offices to be issued in the event of an attack. The report recommends eliminating the ASD and replacing it with a dialed PIN code, a far simpler arrangement that didn't seem much worse considering that the ASD didn't implement encryption of the call contents but instead just a challenge-response authentication process.

Further, unlike AUTOVON which provided dedicated service all the way to the user, NETS was essentially a feature of the long-distance network only. In an emergency when a local exchange switch was overwhelmed and unable to service a newly off-hook line, NETS would be of no help. Still, the report finds that this is a relatively small problem, and although users might have to wait a while it is expected that after taking their phone off-hook they would get a dial tone eventually. Because the telephone switches of the time worked by fabric-switching digit receivers to off-hook phone lines, each person picking up a phone was essentially placed in a queue to receive dial tone. Normally the queue was empty and so this happened instantaneously, but during especially busy calling periods it was not all that unusual for there to be a very brief delay.

Finally, the NETS proposal included not only voice but fax and data. Because the "damage-avoidance" routes developed by the NETS CCs would often be longer than typical long-distance phone routes, the connections would be of relatively poor quality. This required additional electronics at the ends of the call, and potentially integrated into user PBXs, for line conditioning. It seems like the use-case for the data mode wasn't very well established, and so the report recommended dropping the requirement for additional line conditioning.

The conclusions of the report seem to have been influential, because most of the features the report identified as problematic disappeared from later consideration. Unfortunately, some of the beneficial features did as well. NETS had included a call preemption system, for example, which did not make it past the end of the NETS project. The use of additional dedicated controllers for routing also seems to have been abandoned, but on this issue we must consider that in the late '80s computer-controlled switches were becoming the norm and were capable of much more complex routing logic. This may have just eliminated the need for an additional control system dedicated to routing emergency calls.

The evolution of NETS, now called GETS, launched in 1994. Contemporaneous reporting is consistent with the odd lack of historical information on GETS. A 1996 Chicago Tribune article about the 710 area code (see, people have been bringing up 710 as a weird area code for a long time!) says that while GETS was never a secret, it also received very little promotion even within the government. As of that article, the director of the NCS said that GETS would be "fully operational" in 2001... an auspicious date in emergency telecommunications.

Indeed, the events of September 2001 led to a massive federal reconsideration of continuity of government, emergency communications, and emergency preparedness in general. To make a long story short, the attacks of 9/11 immediately prompted an almost complete failure of federal emergency communications systems. Everything from complex continuity of government plans to the phones in the White House basement were found to be completely nonfunctional, the result of decades of under-investment in an increasingly incompetent civil defense apparatus [1]. In the year after 9/11, federal bureaucrats found themselves blowing the metaphorical dust off of a number of half-finished or half-forgotten communications programs, GETS included. It seems that in the 2001-2002 time period, NCS launched sort of a public relations program to promote GETS not just in the federal government but across state and local governments and industry. While GETS is technically a Gulf War-era program, for most purposes it is a post-9/11 program.

GETS is perennially discussed in telephone arcana circles, including Computers Are Bad, because it has the distinction of being the sole use of the "federal government" 710 area code. GETS is accessed by dialing 1-710-NCS-GETS, where NCS is left over from the former National Communications System. Amusingly, the special nature of the 710 area code poses a challenge: 710 is so rarely used (to the degree that it is sometimes listed as a "reserved" or unused area code) that some phone systems with trunk routing logic, like larger corporate PBXs, will not route it correctly.

The decision to put GETS in the little-used (at the time, entirely unused) 710 area code was probably more practical than whimsical. GETS is expected to work across the entirety of the landline telephone network, which in 1994 contained a lot of legacy switching systems. Trunk selection based on area code would have been nearly universal at the time (I say "nearly" because dial service was not yet universal in 1994), so it would have been relatively easy to configure older switches to direct calls to the 710 area code via dedicated trunks. Basically, the prioritization of GETS could be implemented as a special case in the "LERG" routing table used to connect long-distance calls. On the other hand, as a workaround for phone systems that do not handle the 710 area code correctly, GETS also offers toll-free access numbers as an alternate. These were likely put in place later on, when more flexible 4ESS/5ESS or DMS-100 switches had become the strong majority of the telephone network.

GETS users, after completing an application, are issued a card with the GETS access number and a frustratingly long 12-digit PIN. They call the GETS access number, enter the PIN, and then dial a phone number. While GETS looks a lot like a prepaid calling card to the user, internally the telephone network prioritizes calls from and to GETS to ensure a higher chance of completion. Perhaps more importantly, at least early on GETS calls were routed using more complex logic than normal calls so that---similar to the original goals of NETS---GETS calls should automatically route around unavailable trunks. Today, with a higher degree of automation in traffic engineering, it's unclear how much of a difference that functionality would make.

GETS achieves the goal of call prioritization on the landline network, but it hits its limit when it comes to cellphones. The typical capacity problem with cellphones is not long distance trunks, but local tower capacity. GETS won't help with completing a call when your phone isn't able to set up a voice session in the first place. Well, in practice, some cellular carriers do seem to use more flexible modern GSM baseband capabilities to automatically prioritize GETS calls at the cellular network level as well, but this isn't universal and wasn't even really possible with earlier cellular standards.

So, for cellphones, we have a separate system: WPS, the Wireless Priority Service.

To use WPS, you have to apply to DHS to get an authorization letter and then set up WPS service with your cellular carrier. Once enabled for your line, the dialing prefix *272 will cause your call to be prioritized within the cellular network. WPS explicitly does not provide preemption, so you might still have to try multiple times, but you will have a higher chance of getting through eventually.

Internally, WPS is implemented as a vertical service code, a generalized capability of phone switches that allows codes prefixed with * and # to operate special features. The GSM specification also has extensive treatment of vertical service codes, making WPS prioritization easier to implement in the cellular network layer.

Just as GETS doesn't (necessarily) affect the cellular network, WPS doesn't affect the landline network. To ensure end-to-end prioritization of a call made on a cellphone, it's necessary to use both. For example, if I needed to reach Comcast in a genuine internet emergency, I would dial *2721710627438712345678901218009346489. This is so incredibly convenient that the government offers a phone app for WPS/GETS users just to do the dialing for you.
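
For the skeptical, the arithmetic of that dial string is just concatenation:

    # The combined WPS + GETS dial string is just concatenation.
    WPS_PREFIX = "*272"
    GETS_ACCESS = "17106274387"   # 1-710-NCS-GETS
    PIN = "123456789012"          # illustrative 12-digit PIN
    DESTINATION = "18009346489"   # 1-800-XFINITY

    print(WPS_PREFIX + GETS_ACCESS + PIN + DESTINATION)
    # -> *2721710627438712345678901218009346489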

Neither GETS nor WPS addresses the actual availability of a working telephone, though. For this, there is yet another program, but one that is less technical and more bureaucratic. The Telecommunications Service Priority program, or TSP, is an FCC program that allows government and infrastructure users to receive higher priority on telecommunications requests. This mostly means that TSP telephone services will be the top priority for repair after damage to the telephone network, but it also means that TSP users can order new service with higher priority as well. Like GETS and WPS, TSP starts out with an application form to DHS, and it's available to all government agencies and to industry in recognized critical sectors. TSP seems to have been formally created in 1988, indicating that it may be a result of the same NCS efforts that led to GETS.

All of these services might seem somewhat antiquated and, well, they are, with even 2001 being quite a ways in the past now. As an effort to modernize emergency telecommunications, the First Responder Network Authority was established in 2012. Ultimately part of the National Telecommunications and Information Administration (NTIA) rather than DHS, the Authority is commonly referred to as FirstNet. This can be a bit confusing since the term FirstNet also refers to the actual service, delivered by AT&T.

While FirstNet is a government agency that includes a technical board and policy efforts in multiple areas, the bulk of FirstNet was contracted to AT&T. The objective of FirstNet in this sense is to develop a nationwide broadband network for emergency communications. AT&T, being a cellular carrier, is currently implementing this vision using cellular technology. In many practical senses, FirstNet is just a tier of AT&T cellular service that comes with prioritization.

It didn't totally start out this way. The original goal of FirstNet was more like a unification of the existing public safety communications systems across the US. These systems, based primarily on the APCO P25 trunking radio protocol but with increasing use of private LTE, are mostly operated by states and municipalities and could not easily be counted. FirstNet proposed to build new state-level unified networks or to integrate networks developed by the states. NTIA reserved a substantial federal radio spectrum allocation for this purpose, one that was suitable for broadband radio protocols like LTE. States were given the option to develop their own FirstNet access layer for integration into the nationwide system, or to allow the national FirstNet operator to do so. In practice, AT&T proposed to build out the new state network in each state and all 50 states accepted this proposal.

AT&T developed their access network mostly by addition to their existing cellular sites. FirstNet deployment started properly in 2017, and FirstNet now covers pretty much AT&T's entire network footprint with LTE service. FirstNet is being upgraded to 5G along with the broader AT&T network. AT&T handles the entirety of FirstNet administration, and you get FirstNet service simply by applying to AT&T for a FirstNet cellular plan. Many FirstNet users have their personal phones enrolled in FirstNet for the cost savings compared to a dedicated phone, so it really does look and feel like normal cellular service.

Much of the FirstNet product is really just a matter of configuration (FirstNet provides prioritization to users and will even preempt non-FirstNet traffic when needed), but the dedicated bandwidth set aside by NTIA is still around in the form of band 14. Even this detail of FirstNet is confusing when it comes to the lines of AT&T vs. government services. Band 14 is dedicated to emergency communications, but as part of its deal with the NTIA, AT&T is allowed to use band 14 for their other customers as well, as a means of defraying their cost in deploying band 14 access points. So band 14 really consists mostly of plain old AT&T customers, but with the expectation that they will all be booted off the band when FirstNet traffic requires the bandwidth.

FirstNet is sort of a complex creation, heavily promoted by AT&T for reasons of their own profit motive, and supported by billions in government funding. As you would imagine, it has not been without controversy. The program has been expensive and slow to roll out, even with the significant advantage of mostly just using AT&T's existing network. Even after all that time and money, the original vision of unifying first responder communications hasn't really been achieved. While FirstNet is broadly used by groups like firefighters for cellular service, states and municipalities continue to operate their existing radio networks alongside FirstNet. Most of this is attributable to the high cost of FirstNet devices with ergonomics similar to existing systems (e.g. PTT) and the complexity of deployment relative to the land-mobile radio (LMR) systems public safety agencies are familiar with.

The issue of PTT is worth dwelling on. Users with a great degree of telecom nostalgia probably fondly remember Nextel, the short-lived cellular service provider that used the Motorola trunking-radio protocol iDEN. Nextel aggressively advertised the key advantage that iDEN held over the CDMA and 3GPP standards: with roots as an LMR protocol, it offered excellent support for PTT conversation between groups of phones. This made Nextel extremely popular with businesses like towing companies and the trades, where easy PTT communications between the office and employees in the field was convenient but did not justify the cost of an LMR system. Unfortunately, cellular PTT technology largely died with Nextel, replaced by IP-based systems that failed to offer the low latency and reliability of iDEN.

The concept of cellular PTT is not dead forever, though. The lack of good PTT support has long been seen as a critical deficiency of FirstNet and probably a complete blocker on its regular use by most first responders. Fortunately, AT&T has been an ongoing proponent of the 3GPP MCPTT or Mission-Critical PTT standard. MCPTT employs IP quality of service and multicast technology in the access layer to provide IP-based PTT that still mostly performs as well as radio-based systems. MCPTT in the form of AT&T FirstNet PTT (registered trademark) has been included in the FirstNet offering for just a couple of years now, and MCPTT-capable "LTE radios" like the Sonim XP5plus are now available (at a steep cost) to FirstNet users. These devices are essentially phones, some feature-phones and some just Android devices, but have a physical form factor more like a handheld radio including a large PTT button and top volume/channel controls. They may change the fortune of FirstNet in years to come.

On a more personal level, there seems to be a (probably justified) degree of mistrust when it comes to AT&T's performance in a severe emergency. Much of the US is now covered by a state-run shared public safety radio system. These systems usually use well-established technology (APCO P25) over state-owned microwave and fiber networks. Since they are relatively isolated from non-government traffic, they don't require any potentially complex prioritization schemes to ensure reliability in an emergency. Because they're state-owned, they're often easier for states to expand and modify. This gets at perhaps the most significant criticism of FirstNet: it's just so complicated. It's technically complicated, but moreover it's bureaucratically complicated, offered as a joint venture of a private company and a semi-independent government authority to users through several degrees of contractual separation.

Today, between GETS, WPS, TSP, and FirstNet, it is now hopefully possible for a government or critical infrastructure user to reliably make a phone call in some cases some of the time. It took decades to get here, and there remain questions about the actual reliability of the service, but massive contracts with AT&T to deliver critical services of questionable quality have become a fine American tradition since divestiture. I say post-divestiture because I feel like prior to that point even more massive contracts with AT&T usually delivered services that actually worked, and on schedule even, but maybe I'm just being nostalgic.

[1] I may be inserting some amount of opinion here, but honestly, the situation does not allow much room for debate. 9/11 made it extremely clear that the federal government had no meaningful civil defense capability.

--------------------------------------------------------------------------------

>>> 2023-01-29 the parallel port

A few days ago, on a certain orange website, I came across an article about an improvised parallel printer capture device. This contains the line:

There are other projects out there, but when you google for terms such as "parallel port to usb", they drown in a sea of "USB to parallel port" results!

While the author came up with a perfectly elegant and working solution, on reading that article I immediately thought "aren't they just being an idiot? why not just use a USB parallel port controller?" Well, this spurred me to do some further reading on the humble parallel port, and it turns out that it is possible, although not certain, that I am in fact the idiot. What I immediately assumed---that you could use a USB parallel controller to receive the bytes sent on a parallel printer interface---is probably actually true, but it would depend on the specific configuration of the parallel controller in question and it seems likely that inexpensive USB parallel adapters may not be capable. I think there's a good chance that the author's approach was in fact the easier one.

I wrote a popular post about serial ports once, and serial ports are something I think about, worry about, and dream about with some regularity. Yet I have never really devoted that much attention to the serial port's awkward sibling, always assuming that it was a fundamentally similar design employing either 8 data pins each way or 8 bidirectional data pins. It turns out that the truth is a lot more complicated. And it all starts with printers. You see, I have written here before that parallel ports are popular with printers because they avoid the need to buffer bits to assemble bytes, allowing the printer to operate on entire characters at a time in a fashion similar to the electromechanical Baudot teleprinters that early computer printers were based on. This isn't wrong, it's actually more correct than I had realized---the computer parallel port as we know it today was in fact designed entirely for printers, at least if you take the most straightforward historical lineage.

Let's start back at the beginning of the modern parallel port: the dot matrix printer.

There's some curious confusion over the first dot matrix printer, with some Wikipedia articles disagreeing internally within the same sentence. In 1968, Oki introduced the Wiredot. The Wiredot is probably the first "dot matrix impact printer" (dot matrix thermal and dyesub printers had existed for about a decade before), but the title of "first dot matrix impact printer" is still often given to the Centronics 101 of 1970. I have a hard time telling exactly why this is, but I can offer a few theories. First, the Oki Wiredot was far less successful on the market, seemingly only released in Japan, and so some people writing about printer history may just not have heard of it. Second, the Wiredot made use of an electromechanical character generator based on a permanent metal punched card, so it's arguably a different "type" of machine than the Centronics. Third, the Wiredot may have actually been more of a typewriter than a printer in the modern sense. The only photos I have seen (I think all of the same specimen in a Japanese computer history museum) show it mounted directly on a keyboard, and I can't find any mention of the interface used to drive it.

In any case, while the Centronics 101 is unlikely to be the very first, it is almost certainly the first widely successful example of dot-matrix impact printing technology. Prior to the Centronics 101, the most common types of computer printers had used either typewriter mechanisms or chain-driven line-printer mechanisms. These machines were expensive (I have heard as much as $30,000 in 1970 money for IBM line printers), physically large, and very tightly integrated with the computers they were designed for. With the inexpensive and small Centronics 101 a printer was, for the first time, a peripheral you could buy at the store and connect to your existing computer. The dot matrix printer was to the line printer somewhat as the PC was to the mainframe, and a similar (albeit smaller) "dot matrix printer revolution" occurred.

This posed a challenge, though. Earlier printers had either relied on the computer to perform all control functions (somewhat like the inexpensive inkjet printers that became ubiquitous in the '00s) or had a high-level interface to the computer (such as an IBM mainframe I/O channel) and a large internal controller. These approaches kept printers some combination of computer-specific and expensive, usually both. The new generation of desktop printers needed a standardized interface that was inexpensive to implement in both the printer and the computer.

For this purpose, Centronics seems to have turned towards the established technology of teletypewriters, where the bits of each character were used to select an appropriate type matrix. The dot matrix printer would work very similarly, with the bits of each character directly selecting an impact pattern template from a simple character generator (unlike the Oki's metal plates, the Centronics seems to have used some type of ROM for this purpose). This was far easier if all of the bits of each character were available at the same time. While computers mostly used serial data interfaces for components, the extra hardware to buffer the bits into each byte (given that the printer had no other reason to need this kind of storage) would have been an expensive addition in 1970. Since ASCII was a 7-bit standard, there was an implication that any interface to the printer should be at least 7 bits wide.
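
A character generator is conceptually nothing more than a lookup table from the 7-bit character code to a dot pattern. A toy illustration (the glyph here is hand-drawn, not taken from any real printer ROM):

    # Toy character generator: the 7-bit character code selects a 5x7
    # dot pattern. The single glyph here is hand-drawn for illustration.
    FONT = {
        0x41: [0b01110,   # 'A'
               0b10001,
               0b10001,
               0b11111,
               0b10001,
               0b10001,
               0b10001],
    }

    def print_char(code):
        for row in FONT[code]:
            print("".join("#" if row & (1 << (4 - i)) else " "
                          for i in range(5)))

    print_char(0x41)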

Centronics was an interesting company to make a printing breakthrough. Effectively a division of Wang (best known for their CRT word processors), Centronics had originally built special terminals for casino cashiers. A major component of these terminals was a slip printer for receipts and account statements, and the Centronics dot-matrix impact head was originally developed as a faster, smaller print head for these slip printers. As sometimes happens with innovations, this new form of high-speed printer with simplified head (consisting of one solenoid for each pin and only 7 pins needed for acceptable quality) became a more interesting idea than the rest of the terminal.

Centronics was not a printer company, and did not have the expertise to develop a complete printer. To close this gap they entered a partnership with Brother International, the US arm of Japanese typewriter (and sewing machine) maker Brother, whose president incidentally lived next door to the president of Centronics. And thus came about the Centronics 101, with the print head and control electronics by Centronics and basically all other components by Brother based on their typewriter designs.

I go into this history of Centronics as a company both because it is mildly interesting (Brother printers today are more the descendant of the original Centronics than the hollow shell of Centronics that survives as Printronix) and because the general flavor of Centronics going into the printer business with a next-door neighbor makes an important property of the Centronics 101 more explicable: the choice of interface. Centronics was getting this printer together quickly, with an aim towards low cost, and so they stuck to what was convenient... and incidentally, they had a significant backstock of 36-pin micro-ribbon connectors on hand. This style of connector was mostly used in the telecom industry (for the RJ21 connector often used by key telephones for example), but not unheard of in the computer world. In any case, since there was no established standard for printer interfaces at the time, it was just about as good as anything else.

And so, largely by coincidence, the Centronics 101 was fitted with the 36-pin micro-ribbon (also called Amphenol or CHAMP after telecom manufacturers) connector that we now call the Centronics interface.

The pinout was kept very simple, consisting mostly of a clock (called Strobe), 8 data pins, and a half dozen pins used for printer-specific status information and control. For example, a Busy pin indicated when the printer was still printing the character, and a Paper Out pin indicated that, well, the paper was out. Control pins also existed in the forward direction, like the Autofeed pin that indicated that the printer should linefeed whenever it reached the end of the line (useful when the software generating output was not capable of inserting correct linebreaks). Since 36 pins is a whole lot of pins, Centronics provided generous features like a pull-up reference and a separate ground for every data pin, besides chassis ground and shield.

Let's take a look at the details of the Centronics protocol, since it's fairly simple and interesting. Centronics printers received data one character at a time, and so I will use the term character, although the same applies to any arbitrary byte. To print a character, the computer first checks the "busy" pin to determine if the printer is capable of accepting a new character to print. It then sets the character values on the data 0 through data 7 pins, and produces a 1us (or longer) pulse on the "strobe" pin. This pulse instructs the printer to read out the values of the data pins, and the printer places a voltage on the "busy" pin to indicate that it is printing the character. Once completed, the printer pulses the "acknowledge" pin and resets the "busy" pin. At this point, the computer can send another character. A couple of sources suggest that the "acknowledge" signal is largely vestigial as most computer implementations ignore it and send another character as soon as "busy" is reset.
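
The handshake is simple enough that, if you wanted to drive such a printer from modern hardware, you could bit-bang it. A sketch for a Raspberry Pi using RPi.GPIO, with the BCM pin assignments as placeholders (and, like most real implementations, ignoring Acknowledge):

    # Bit-banged Centronics-style transmit sketch for a Raspberry Pi
    # (RPi.GPIO). BCM pin numbers are placeholders for your wiring.
    import time
    import RPi.GPIO as GPIO

    DATA = [2, 3, 4, 17, 27, 22, 10, 9]  # data 0 through data 7
    STROBE, BUSY = 11, 5

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DATA, GPIO.OUT)
    GPIO.setup(STROBE, GPIO.OUT, initial=GPIO.HIGH)  # strobe is active-low
    GPIO.setup(BUSY, GPIO.IN)

    def send_byte(b):
        while GPIO.input(BUSY):           # wait for the printer
            pass
        for i, pin in enumerate(DATA):    # present the character
            GPIO.output(pin, (b >> i) & 1)
        GPIO.output(STROBE, 0)            # pulse strobe low...
        time.sleep(0.00001)               # ...comfortably over 1us
        GPIO.output(STROBE, 1)

    for c in b"HELLO\r\n":
        send_byte(c)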

Critically, the Centronics interface was strictly unidirectional. From the computer to the printer were the data bus and several control pins. From the printer to the computer were four control pins but no data bus at all. Returning to my original inspiration to read about the parallel port, this suggests that a parallel controller actually can't be used to receive data from a printer output, because it is only capable of transmitting on the data bus.

Of course, reality is yet more complicated.

The Centronics interface we're speaking of here is strictly the interface on the printer, and especially in the '70s that bore only a loose relation to the interface on the computer. Most computers would still need a dedicated printer controller card and a variety of proprietary interfaces were in use on the computer side, with various cables and adapters available to make a Centronics printer work.

As with many things in the world of microcomputers, there wasn't really any standardized concept of a computer parallel interface until the IBM PC in 1981. The IBM PC included a parallel printer controller, but not one intended for use with Centronics printers---the IBM PC was supposed to be used with the IBM printers, rebadged from Epson, and the IBM PC's printer port wasn't actually compatible with Centronics. IBM chose a more common connector in the computer world, the D-shell connector, and specifically the DB25. Because there are only a limited number of practical ways to implement this type of printer interface, though, the IBM proprietary parallel printer protocol was substantially similar to Centronics, enough so that an appropriately wired cable could be used to connect an IBM PC to a Centronics printer.

And that pretty much tells you how we arrived at the printer interface that remained ubiquitous into the late '90s: an adapter cable from the IBM printer controller to the Centronics printer interface.

The thing is, IBM had bigger ambitions than just printers. The printer port on the original IBM PC was actually bidirectional, with four pins from the printer (used to implement the four printer control pins of the Centronics standard) available as a general-purpose 4-bit data bus. The IBM printers seem to have used the interface this way, ignoring the Centronics assignments of these pins. Even better, the original IBM PC was capable of a "direction-switching" handshake that allowed the peripheral to use the full 8 bit data bus to return data. This feature was apparently unused and disappeared from subsequent PCs, which is ironic considering later events.

The four printer control pins provided by Centronics were rather limited, and more sophisticated printers had more complex information to report. To address this need, HP (which had adopted the Centronics interface nearly from the start of their desktop printers) re-invented IBM's simpler bidirectional arrangement in 1993. The HP "bi-tronics" standard once again made the four printer pins a general purpose data bus. Since the computer already needed the hardware to monitor these four pins, bi-tronics was compatible with existing Centronics-style printer controllers. In other words, it could be implemented on the PC end as a software feature only. All parallel printer interfaces capable of the Centronics standard are also capable of bi-tronics, as long as suitable software is available.
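
Mechanically, the reverse channel here means every byte comes back as two 4-bit transfers on the status pins. The bookkeeping, in miniature:

    # Nibble-mode bookkeeping in miniature: each byte travels in
    # reverse as two 4-bit transfers on the status pins.
    def split_nibbles(byte):
        return byte & 0x0F, (byte >> 4) & 0x0F

    def join_nibbles(low, high):
        return (high << 4) | low

    lo, hi = split_nibbles(0xA5)
    assert join_nibbles(lo, hi) == 0xA5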

The lopsided nature of this arrangement, with 8 bits forward and 4 bits reverse, became an irritation to the new generation of peripherals in the '90s that demanded higher bandwidth. Scanners and ethernet interfaces were made with parallel ports, among many other examples. To address this need, Intel, Xircom, and Zenith introduced the Enhanced Parallel Port or EPP standard in 1991. I would love to know which product from Zenith motivated this effort, but I haven't been able to figure that out. In any case, EPP was basically a minor evolution of the original 1981 IBM PC parallel port, with a more efficient handshaking mechanism that allowed for rapid switching of the 8-bit data bus between the two directions.
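
The practical upshot, on controllers that implement the common PC register layout (an assumption, but a widely honored one), is that EPP moves the handshake into hardware: EPP registers appear at base+3 and base+4, and a single I/O read or write to the data register makes the controller run an entire strobe-and-wait cycle on the wire. Something like:

    #include <sys/io.h>

    #define EPP_ADDR  0x37b   /* base+3: EPP address cycles */
    #define EPP_DATA  0x37c   /* base+4: EPP data cycles */

    /* each of these single I/O instructions is a complete,
     * hardware-timed handshake on the cable */
    void epp_write(unsigned char reg, unsigned char val)
    {
        outb(reg, EPP_ADDR);   /* select a register in the device */
        outb(val, EPP_DATA);   /* forward data cycle */
    }

    unsigned char epp_read(unsigned char reg)
    {
        outb(reg, EPP_ADDR);
        return inb(EPP_DATA);  /* the bus turns around for the read */
    }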

Because standards need friends, Hewlett-Packard and Microsoft designed the Extended Capabilities Port or ECP just a year later. ECP was actually quite a bit more sophisticated than EPP and resembled later developments like FireWire in some ways. ECP parallel ports were not only fully bidirectional but supported DMA (basically by extending the ISA bus over the parallel port) and simple compression via run-length encoding. This might sound overambitious for the '90s, but it found plenty of applications, particularly for storage devices. Perhaps the most notable example is the first generation of Iomega Zip drives. They were internally all SCSI devices, but SCSI controllers were not common on consumer machines, so Iomega offered a parallel port version of the drives, which relied on a proprietary ASIC to convey SCSI commands over an ECP parallel port. ECP's DMA feature allowed for appreciably better transfer speeds. This setup is very similar to the modern UASP protocol used by many higher-end USB storage devices... SCSI is one of those protocols that is just complex enough that it will probably never die, just adapt to new media.
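
On PC implementations, ECP's extra machinery shows up as a data FIFO at base+0x400 and an "extended control register" (ECR) at base+0x402 whose top three bits select the operating mode. A sketch of the common encoding, as I understand it, not authoritative:

    #include <sys/io.h>

    #define ECP_FIFO 0x778    /* base+0x400: data FIFO */
    #define ECR      0x77a    /* base+0x402: extended control */

    enum {                    /* ECR bits 7:5 select the mode */
        MODE_SPP  = 0x00,     /* plain compatibility mode */
        MODE_PS2  = 0x20,     /* simple bidirectional (byte) mode */
        MODE_FIFO = 0x40,     /* compatibility mode via the FIFO */
        MODE_ECP  = 0x60,     /* full ECP: FIFO, RLE, DMA */
    };

    void set_mode(unsigned char mode)
    {
        outb(mode, ECR);      /* the FIFO/DMA engine does the rest */
    }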

The parallel printer port remained only loosely standardized, with most ports supporting some combination of "standard" mode (also called SPP), EPP, and ECP, until, surprisingly, 1994. IEEE 1284 aimed to nail down the mess of parallel ports and modes, and while 1284 is superficially viewed as a standardization of the Centronics connector, it really does take on the whole issue. IEEE 1284 specifies five modes:

- Compatibility mode: the traditional unidirectional, Centronics-style forward channel (what "SPP" usually refers to)
- Nibble mode: a 4-bit reverse channel over the printer status pins, as in bi-tronics
- Byte mode: an 8-bit reverse channel that turns the data bus around, as on the original IBM PC
- EPP: fast bidirectional transfers with a hardware handshake
- ECP: fast bidirectional transfers with FIFOs, DMA, and optional compression

Because EPP and ECP both specify distinct handshake protocols, it's mostly possible for a parallel controller to automatically detect which mode a device is attempting to use. This has some rough edges, though, and becomes less reliable with devices that use the pre-EPP byte mode, so most parallel controllers provide some way to select a mode. In modern motherboards with parallel ports, this usually takes the form of a BIOS or EFI configuration option that allows the parallel port to be set to SPP only, EPP, or EPP/ECP automatic detection. Usually this can be left on EPP/ECP, but some legacy devices that do not implement normal IEEE 1284 (and legacy EPP/ECP) negotiation may not be able to set up bidirectional communication without the controller being manually put in a more basic mode.

And that brings us to the modern age, with parallel ports via EPP or ECP capable of bidirectional operation at around 2MB/s. But what of my original reaction to the parallel printer emulation article? Given what we know now, could the author have just used a USB parallel controller? I suspect it wouldn't have worked, at least not easily. From my brief research, common USB parallel adapters seem to only implement the unidirectional "compatibility mode" of IEEE 1284 and actually appear as character printers. More expensive USB devices with ECP support are available (routinely $50 and up!), as are PCI cards. The IEEE 1284 spec, though, requires one of a few handshake processes before the port enters a bidirectional mode. Reading the presented byte without first completing a negotiation into byte mode or EPP would, I suspect, require modification to the drivers or worse, and of course an older device intended only for printers wouldn't support that negotiation anyway.
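
For what it's worth, on Linux the tidy way to experiment with all of this is libieee1284, which wraps port discovery, the negotiation handshake, and the per-mode transfer routines. A sketch, untested and assuming I have the API right, and assuming a real 1284-capable controller rather than a USB printer adapter:

    #include <stdio.h>
    #include <ieee1284.h>    /* libieee1284 */

    int main(void)
    {
        struct parport_list pl;
        int caps;
        char buf[64];

        if (ieee1284_find_ports(&pl, 0) != E1284_OK || pl.portc == 0)
            return 1;
        struct parport *port = pl.portv[0];
        if (ieee1284_open(port, 0, &caps) != E1284_OK ||
            ieee1284_claim(port) != E1284_OK)
            return 1;

        /* the negotiation the spec requires before any reverse
         * transfer; a printer-only device will refuse it */
        if (ieee1284_negotiate(port, M1284_NIBBLE) == E1284_OK) {
            ssize_t n = ieee1284_nibble_read(port, 0, buf, sizeof buf);
            printf("read %zd bytes\n", n);
            ieee1284_terminate(port);
        }

        ieee1284_release(port);
        ieee1284_close(port);
        ieee1284_free_ports(&pl);
        return 0;
    }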

There's a bit of a postscript about Centronics. As with many early technology companies, Centronics made a few missteps in a quickly moving market and lost its edge. Centronics' partnership with Brother may have proven fatal in this regard, as Brother introduced its own print head designs and rapidly became one of Centronics' chief competitors. Epson, Citizen, and Oki gained significant ground in dot matrix printers at around the same time. Centronics, no longer a market leader, was purchased by Control Data (CDC) to be merged with CDC's printer business, CPI, itself formed through CDC's previous purchase of slip and receipt printer businesses including that of National Cash Register (NCR).

This was in the early '80s, and while dot matrix printers were still very common at the time, it was becoming clear that business and accounting users would be one of the dominant ongoing markets: perhaps the greatest single advantage of dot matrix printers was that they were the superior choice for filling out pre-printed forms, for several reasons. First, dot matrix print heads could be made very small and, with an impact ink-transfer design, could print on just about any material. This was highly advantageous for slip printers that printed on small-format preprinted media. These types of printers are no longer especially common except at grocery stores, where they are often integrated into the thermal receipt printer and referred to as check endorsers, because they are used to print a "deposit only" endorsement and transaction reference number on the back of check payments.

Second, impact dot matrix printers strike with formidable force and can effectively print on several layers of carbon or carbonless multi-part forms. This was very convenient in applications where a multi-part form was needed to transfer handwritten elements like signatures, although it has to some extent faded away as it has become more common to use a laser printer to simply print each part of a carbonless form individually.

Third, dot matrix printers were so popular for form-filling applications that many featured built-in registration mechanisms, such as an optical sensor to detect notches and, later, black bars printed on the edges of forms. This allowed the "vertical tab" character to be used to advance the form to a known position, ensuring good alignment of printed information with preprinted fields even with a feed mechanism prone to slippage. Of course, many dot matrix printers used non-slipping drive mechanisms such as tractor feed with perforated edge strips, but these drove up the cost of the paper stock appreciably, and users found tearing the strips off to be pretty annoying. Unfortunately, as dot matrix printers have faded from use, so too has software and printed media support for registered vertical tab. Purchasing a car a year or two ago, I chuckled at the dealership staff using the jog buttons on an Oki printer to adjust the form registration partway through a particularly long preprinted contract. The form had black bars for vertical registration and the printer model was capable, but it seemed the software they were using hadn't implemented it. The Oki printer in question was of the true "form filler" type, both feeding and returning media from the front, making it convenient to insert preprinted forms to be filled.
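
The software side of this was genuinely simple, which makes its disappearance all the more frustrating. On a printer that speaks stock ESC/P (the black-bar sensing variant is configured on the printer itself; this is the generic line-count version), you define vertical tab stops once and then a single VT byte jumps the form. A sketch, with a hypothetical device path:

    #include <stdio.h>

    int main(void)
    {
        FILE *lp = fopen("/dev/usb/lp0", "wb");   /* hypothetical path */
        if (!lp) return 1;

        /* ESC B: set vertical tab stops at lines 10, 20, and 40;
         * a NUL terminates the list */
        const unsigned char tabs[] = { 0x1b, 'B', 10, 20, 40, 0 };
        fwrite(tabs, 1, sizeof tabs, lp);

        /* each VT (0x0b) advances the form to the next stop, lining
         * the print position up with the next preprinted field */
        fputs("Buyer name goes here\x0b", lp);
        fputs("Purchase price goes here\x0b", lp);

        fclose(lp);
        return 0;
    }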

Centronics did not last especially long under CDC ownership, and met a particularly odd fate. In 1987, Centronics, by then under the ownership of an investment bank, sold its printing business to competitor Genicom (now part of Printronix). The remainder of Centronics, essentially just an empty shell of a corporation flush with cash from the sale of its former operation, purchased kitchen goods and housewares manufacturer Ekco and renamed itself after that brand. Ekco, in some sense once a leading innovator in computer printing, is now a brand of measuring spoons manufactured by Corelle.
