_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford

>>> 2023-03-24 docker

Lately I tend to stick to topics that are historic by at least twenty years, and that does have a lot of advantages. But I am supposedly a DevOps professional, and so I will occasionally indulge in giving DevOps advice... or at least opinions, which are sort of like advice but with less of a warranty.

There's been a lot of discussion lately about Docker, mostly about their boneheaded reversal following their boneheaded apology for their boneheaded decision to eliminate free teams. I don't really care much about this event in terms of how it impacts my professional work. I long ago wrote off Docker, Inc. as a positive part of the DevOps ecosystem. But what's very interesting to me is how we got here: The story of Docker, Docker Inc., Docker Hub, and their relation to the broader world of containerization is endlessly fascinating to me.

How is it that Docker Inc., creator of one of the most important and ubiquitous tools in the modern software industry, has become such a backwater of rent-seeking and foot-shooting? Silicon Valley continually produces some astounding failures, but Docker stands out to me. Docker as a software product is an incredible success; Docker as a company is a joke; and the work of computing professionals is complicated by the oddly distant and yet oddly close connection between the two.

Docker, from a technical perspective, is more evolutionary than revolutionary. It mostly glued together existing Linux kernel features, following a road that had at least been graded, if not paved and striped, by projects like LXC. Docker as a concept, though, had a revolutionary impact on the DevOps field. Docker quickly became one of the most common ways of distributing server-side software, and whole development workflows rearranged themselves around it. Orchestration tools like the ones we use today are hard to picture without Docker, and for many professionals Docker is on par with their text editor as a primary tool of the trade.

But underlying all of this there has always been sort of a question: what is Docker, exactly? I don't necessarily mean the software, but the concept. I have always felt that the software is not really all that great. Many aspects of Docker's user interface and API seem idiosyncratic; some of the abstraction it introduces is more confusing than useful. In particular, the union filesystem image format is a choice that seems more academically aspirational than practical. Sure, it has tidy properties in theory, but my experience has been that developers spend a lot more time working around it than working with it.

All this is to say that I don't think that Docker, the tool, is really all that important. In a different world, LXC might have gained all this market share. Had Docker not come about, something like containerd would likely have emerged anyway. Or perhaps we would all be using lightweight VMs instead; academic and commercial research tends to show that the advantages containers have over more conventional paravirtualization are far smaller than most believe.

I would argue that the Docker that matters is not software, but a concept. A workflow, you might say, although I don't think it's even that concrete. The Docker that swept DevOps like a savior come to spare us from Enterprise JavaBeans isn't really about the runtime at all. It's about the images, and even more about the ease of programmatically creating images. Much of this benefit comes from composition: perhaps the most important single feature of Docker is the FROM keyword.

So Docker is an open-source software product, one that is basically free (as in beer and as in freedom) although hindered by a history of messy licensing situations. Docker is also a company, and companies are expected to produce revenue. And that's where other facets of the greater identity we call "Docker" come to light: Docker Desktop and Docker Hub.

Docker Desktop isn't really that interesting to me. Docker is closely coupled to Linux in a way that makes it difficult to run on the predominant platform used by developers [1]. Docker Inc. developed Docker Desktop, a tool that runs Docker in a VM using fewer clicks than it would take to set that up yourself (which is still not that many clicks). Docker Inc. then needed to make money, so they slapped a licensing fee on Docker Desktop. I responded by switching to Podman, but I get that some people are willing to pay the monthly fee for the simplicity of Docker Desktop, even if I feel that the particular implementation of Docker Desktop often makes things harder rather than easier.

Also I find the Docker Desktop "GUI" to be incredibly, intensely annoying, especially since Docker Inc. seems to pressure you to use it in a desperate attempt to dig what Silicon Valley types call a moat. But I fully acknowledge that I am a weird computer curmudgeon who uses Thunderbird and pines for the better performance of, well, pine.

Still, the point of this tangent about Docker Desktop is that Docker's decision to monetize via Desktop---and in a pretty irritating way that caused a great deal of heartburn to many software companies---was probably the first tangible sign that Docker Inc. is not the benevolent force that it had long seemed to be. Suddenly Docker, the open-source tool that made our work so much easier, had an ugly clash with capitalism. Docker became a FOSS engine behind a commercial tool that Docker Inc. badly wanted us to pay for.

Docker Desktop also illustrates a recurring problem with Docker: the borders between free and paid within the scope of their commercial products. Docker Desktop became free for certain use-cases including personal use and use in small businesses, but requires a paid subscription for use in larger companies. This kind of arrangement might seem like a charitable compromise but is also sort of a worst-of-both-worlds: Docker Desktop is free enough to be ubiquitous but commercial enough to pose an alarming liability to large companies. Some companies exceeding Docker's definition of a small company have gone as far as using their device management tools to forcibly remove Docker Desktop, in order to mitigate the risk of a lawsuit for violating its license.

There is a fundamental problem with "free for some, paid for others": it requires that users determine whether or not they are permitted to use the tool for free. Even well-intentioned users will screw this up when the rules require knowledge of their employer's financials and, moreover, are in small print at the very bottom of a pricing page that says "free" at the top. Personally, I think that Docker Inc.'s pricing page borders on outright deception by making the licensing restrictions on Docker Desktop so unobvious.

Docker Hub, though: Docker Hub is really something.

That most compelling feature of Docker, the ability to easily pull images from somewhere else and even build on top of them, depends on there being a place to pull images from. It's easy to see how, at first, Docker Inc. figured that the most important thing was to have a ubiquitous, open Docker registry that made it easy for people to get started. In this way, we might view Docker Hub as having been a sort of scaffolding for the Docker movement. The fact that you could just run 'docker pull ubuntu' and have it work was probably actually quite important to the early adoption of Docker, and many continue to depend on it today.

Docker Hub, though, may yet be Docker's undoing. I can only assume that Docker did not realize the situation they were getting into. Docker images are relatively large, and Docker Hub became so central to the use of Docker that it became common for DevOps toolchains to pull images to production nodes straight from Docker Hub. Bandwidth is relatively expensive even before cloud provider margins; the cost of operating Docker Hub must have become huge. Docker Inc.'s scaffolding for the Docker community suddenly became core infrastructure for endless cloud environments, and effectively a subsidy to Docker's many users.

It's hard to blame Docker Inc. too much for flailing. Docker Hub's operating costs were probably unsustainable, and there aren't a lot of options to fix this other than making Docker Hub expensive, or making Docker Hub worse, or both. Docker Inc. seems to have opted for both. Docker Hub is not especially fast; in fact, it's pretty slow compared to almost any other option. Docker Hub now imposes per-IP quotas, which probably would have been totally reasonable at the start but were a total disaster when introduced post hoc, suddenly causing thousands, if not millions, of DevOps pipelines to intermittently fail.
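
As an aside: you can see where you stand against the quota without spending a pull. Docker documents a check that requests an anonymous token and then makes a HEAD request against a special test repository, returning the limits in response headers. A minimal sketch in Python; the auth.docker.io and registry-1.docker.io endpoints and the ratelimitpreview/test repository are as documented by Docker at the time of writing, so treat this as illustrative:

    # Sketch: check your anonymous Docker Hub pull quota without spending a pull.
    # The token endpoint and the ratelimitpreview/test repository are as documented
    # by Docker at the time of writing; treat the whole thing as illustrative.
    import json
    import urllib.request

    token_url = ("https://auth.docker.io/token?service=registry.docker.io"
                 "&scope=repository:ratelimitpreview/test:pull")
    check_url = "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest"

    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(check_url, method="HEAD",
                                 headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        # Values look like "100;w=21600": pulls allowed per window (in seconds).
        print("limit:    ", resp.headers.get("ratelimit-limit"))
        print("remaining:", resp.headers.get("ratelimit-remaining"))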

Docker Inc.'s goal was presumably that users would start using paid Docker plans to raise the quotas, but, well, that's only attractive for users that either don't know about caching proxies or judge the overhead of running one to be more costly than a Docker Hub subscription... and I have a hard time picturing an organization where that would be true.

That's the strange thing about Docker Hub. It is both totally replaceable and totally unreplaceable.

Docker Hub is totally replaceable in that the Docker registry API is really pretty simple and easy to implement in other products. There are tons of options for Docker registries other than Docker Hub, and frankly most of them are much better options. I'm not just saying that because GitLab [2] has a built-in Docker registry, but that sort of illustrates the point. Of course GitLab has a built-in Docker registry; it's no big deal. It's not even that GitLab introduced it as a competitor to Docker Hub, that's sort of absurd, Docker Hub doesn't even really figure. GitLab introduced it as a competitor to Sonatype Nexus and JFrog Artifactory, to say nothing of the Docker registries offered by just about every cloud provider. For someone choosing a Docker registry to deploy or subscribe to, Docker Hub has no clear advantage, and probably ranks pretty low among the options.
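
To illustrate how simple: here is a toy sketch of fetching an image manifest over the registry HTTP API, using Docker Hub's public endpoints (auth.docker.io for tokens, registry-1.docker.io for the registry itself). It's a sketch of the protocol, not a client you should use, but the same couple of requests work against basically any conformant registry:

    # Toy Docker registry API (v2) client: fetch the manifest for ubuntu:latest
    # from Docker Hub. Other registries differ mostly in hostname and auth.
    import json
    import urllib.request

    IMAGE, TAG = "library/ubuntu", "latest"

    # 1. Get an anonymous pull token from Docker Hub's auth service.
    token_url = ("https://auth.docker.io/token?service=registry.docker.io"
                 f"&scope=repository:{IMAGE}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    # 2. Ask the registry for the manifest: either a single image manifest (with
    #    "layers") or, for multi-arch images, a manifest list (with "manifests").
    accept = ", ".join([
        "application/vnd.docker.distribution.manifest.v2+json",
        "application/vnd.docker.distribution.manifest.list.v2+json",
    ])
    req = urllib.request.Request(
        f"https://registry-1.docker.io/v2/{IMAGE}/manifests/{TAG}",
        headers={"Authorization": f"Bearer {token}", "Accept": accept},
    )
    with urllib.request.urlopen(req) as resp:
        manifest = json.load(resp)

    # 3. Layers and configs are just blobs, fetchable from /v2/<image>/blobs/<digest>.
    for entry in manifest.get("manifests", manifest.get("layers", [])):
        print(entry["digest"], entry.get("size"))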

And yet Docker Hub is the Docker registry, and the whole teetering tower of DevOps is deeply dependent on it! What an odd contradiction, and yet it's completely obvious why:

First, Docker Hub is free. Implausibly free, and as it turns out, probably unsustainably free. There's an old maxim that if you're not paying, you're the product. But Docker Hub reminds us that in the VC-driven (and not particularly results-driven) world of Silicon Valley there is a potent second possibility: if you're not paying, there may be no product at all. At least not once your vendor gets to the end of the runway [3].

Second, Docker Hub is the default. Being the default can be a big deal, and this is painfully true for Docker. The dominance of short, convenient "user/image" or even just "image" references is so strong that Docker image references that actually specify a registry feel almost like an off-label hack, a workaround for how Docker is really supposed to be used. What's more, Docker Hub's original quotas (or rather lack thereof) left no need for authentication in many situations, so having to authenticate to a registry also feels like an extra hassle. Many tools built around Docker don't make the use of a non-Docker Hub registry, or any authentication to a registry, as convenient as it probably should be. Tutorials and guides for Docker often omit setup of any registry other than Docker Hub, since Docker Hub is already configured and has everything available in it. You only find out the mistake you've made when your pipelines stop working until the quota period resets, or worse, pulls in production start failing and you have to hope you're lucky enough to check the Kubernetes events before digging around a dozen other places.
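
The depth of that default shows up in how image references are normalized. Roughly, and this is my paraphrase of the long-standing behavior of Docker's reference parsing (simplified to ignore digests): no registry hostname means docker.io, no namespace means library/, no tag means latest. A sketch:

    # Rough sketch of how a short Docker image reference expands to a fully
    # qualified one. This mirrors the commonly documented behavior; the real
    # normalization logic lives in Docker's reference-parsing code.
    def normalize(reference: str) -> str:
        # A registry hostname only counts if the first path component looks like
        # a host (contains "." or ":", or is "localhost").
        first, _, rest = reference.partition("/")
        if rest and ("." in first or ":" in first or first == "localhost"):
            registry, remainder = first, rest
        else:
            registry, remainder = "docker.io", reference

        # Split off the tag (ignoring digests for simplicity).
        name, _, tag = remainder.partition(":")
        tag = tag or "latest"

        # Bare names on Docker Hub live under the "library" namespace.
        if registry == "docker.io" and "/" not in name:
            name = "library/" + name

        return f"{registry}/{name}:{tag}"

    print(normalize("ubuntu"))                       # docker.io/library/ubuntu:latest
    print(normalize("bitnami/postgresql:15"))        # docker.io/bitnami/postgresql:15
    print(normalize("registry.example.com/app:v1"))  # registry.example.com/app:v1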

So the solution to the Docker Hub problem is obvious: stop using Docker Hub. It was probably a bad idea all along. But the reality of the situation is much harder. Moving off of Docker Hub is a pain, and one that has a way of staying pretty far down priority lists. Docker Hub references, or rather references with no registry at all that default to Docker Hub, are so ubiquitous that any project moving their official builds off of Docker Hub will probably break a tremendous number of downstream users.

Docker Inc.'s behavior with Docker Desktop and especially Docker Hub feels like rent-seeking at best, and potentially extortionate. It's not exactly fair to blame all of this on Docker Inc.; both commercial users and the open-source community should have foreseen the retrospectively obvious risk of Docker actually thinking about the economics. Nonetheless, a cynical and not entirely unreasonable take on this story is that Docker hoodwinked us. Perhaps Docker has simply stumbled upon the "Embrace, Extend, Extinguish" of our age: employ FOSS software defaults and lazy developer practices (that were inculcated by Docker's documentation) to make everyone dependent on Docker Inc.'s free registry, then tighten the quota screws until they have no choice but to pay up. This is a very cynical take indeed! I don't really believe it, mostly because it involves far more strategic vision than I would credit Docker Inc. with.

I decided to write about this because I think there are lessons to be learned. Important lessons. No doubt some of this problem is directly attributable to the economic conditions that dominated Silicon Valley for the last decade. Docker Inc. probably wouldn't have gotten so far, burning so much money, had there not been an incredible amount of money to burn. Still, it seems inevitable that this kind of relationship between open-source software and corporate strategy, and between free and paid services, will happen again.

I propose these takeaways, as discussion topics if nothing else:

  1. Be skeptical of free services, especially ones that are required for any part of your business (or open source venture, or hobby project, etc). Free services should never become a deeply embedded dependency unless there is very good reason to believe they will remain free. Perhaps the backing of a large foundation or corporate sponsor with a good history with open source would count, but even that is no promise. Consider the example of Red Hat, its acquisition by IBM, and the impact of that business event on projects previously considered extremely reliable like CentOS.

  2. Free tools that rely on third-party services are only free for the time being. Sure, this might be obvious, but it's probably a deeper problem than you realize. Docker never relied on Docker Hub in that it has always been possible to use other registries. But Docker and the community strongly encouraged the use of Docker Hub through technical, economic, and social means. This had the result of making Docker Hub a de facto hard requirement for many projects and systems.

  3. When writing documentation, guides, blog posts, advice to coworkers, etc., think about long-term sustainability even when it is less convenient. I suspect that the ongoing slow-motion meltdown over Docker Hub would have been greatly mitigated if the use of multiple Docker registries, or at least the easy ability to specify a third-party registry and authenticate, were considered best practices and more common in the Docker community.

[1] I mean MacOS, but you can assume I mean Windows and it still works.

[2] My employer whose opinions these are not.

[3] I am here resisting the urge to write a convoluted aviation metaphor. Something about being passengers on a whale-shaped plane that is hitting the last thousand feet and still short of V_r, so the captain says we only get 100 builds per 6 hours per IP and the rest are going out the window.

p.s. I took so long to write this so late at night that now the date in the title is wrong, haha whoops not fixing it

--------------------------------------------------------------------------------

>>> 2023-03-13 the door close button

This will probably be a short one, and I know I haven't written for a while, but it has always been the case that you get what you pay for and Computers Are Bad is nothing if not affordable. Still, this is a topic on which I am moderately passionate and so I can probably stretch it to an implausible length.

Elevator control panels have long featured two buttons labeled "door open" and "door close." One of these buttons does pretty much what it says on the label (although I understand that European elevators sometimes have a separate "door hold" button for the most common use of "door open"). The other usually doesn't seem to, and that has led to a minor internet phenomenon. Here's the problem: the internet is wrong, and I am here to set it right. This works every time!

A huge number of articles confidently state that "80% of door close buttons do nothing." The origin of this 80% number seems to be a 2014 episode of Radiolab titled "Buttons Not Buttons," which I just listened through while doing laundry. Radiolab gets the statistic from the curator of an elevator history museum, who says that most of them "aren't even hooked up." That claim alone is reason to doubt our curator's accuracy. I don't think there is anything malicious going on here, but I do think there is an element of someone who has been out of the industry for a while and is at least misstating the details of the issue.

The problem is not unique to Radiolab, though. An Oct. 27, 2016 New York Times article, "Pushing That Crosswalk Button May Make You Feel Better, but...," covers the exact same material as the Radiolab episode from a couple of years earlier. And the article was widely repeated in other publications, not by syndication but by "According to the New York Times..." paraphrasing. This means that often the repetitions are more problematic than the original, but even the original says:

But some buttons we regularly rely on to get results are mere artifices - placebos that promote an illusion of control but that in reality do not work.

Many versions of the article lean on this line even harder, asserting that door close buttons in elevators are installed entirely or at least primarily as placebos. But the NYTimes article makes brief mention of the deeper, and less conspiratorial, reality:

Karen W. Penafiel, executive director of National Elevator Industry Inc., a trade group, said the close-door feature faded into obsolescence a few years after the enactment of the Americans With Disabilities Act in 1990.

...

The buttons can be operated by firefighters and maintenance workers who have the proper keys or codes.

There are a few things to cover:

First, anyone who says that the "door close" buttons in elevators are routinely "not even hooked up" shouldn't be trusted. The world is full of many elevators and I'm sure some can be found with mechanically non-functional door close buttons, but the issue should be infrequent. The "door close" button is required to operate the elevator in fire service mode, which disables automatic closing of the doors entirely so that the elevator does not leave a firefighter stranded. Fire service mode must be tested as part of the regular inspection of the elevator (ASME A17.1-2019, but implemented through various state and local codes). Therefore, elevators with a "door close" button that isn't "hooked up" will fail their annual inspections. While no doubt some slip through the cracks (particularly in states with laxer inspection standards), something that wouldn't meet inspection standards can hardly be called normal practice and the affected elevators must be far fewer than 80%.

But perhaps I am being too pedantic. Elevator control systems are complex and highly configurable. Whether or not the door close button is "hooked up" or not is mostly irrelevant if the controller is configured to ignore the button, and it's possible that some of these articles are actually referring to a configuration issue. So what can we find about the way elevators are configured?

I did some desperate research in the hopes of finding openly available documentation on elevator controller programming, but elevator manufacturers hold their control systems very close to their chests. I was not lucky enough to find any reasonably modern programming documentation that I could access. Some years ago I did shoulder-surf an elevator technician for a while as he attempted to troubleshoot a reasonably new two-story ThyssenKrupp hydraulic that was repeatedly shutting off due to a trouble code. In the modern world this kind of troubleshooting consists mostly of sitting on the floor of the elevator with a laptop looking at various status reports available in the configuration software. The software, as I recall, came from the school of industrial software design where a major component of the interface was a large tree view of every option and discoverability came in the form of some items being in ALL CAPS.

The NYTimes article, though, puts us onto the important issue here: the ADA. Multiple articles repeat that door close buttons have been non-functional since 1990, although I think most of them (if not all) are just paraphrasings of this same NYTimes piece. The ADA is easy to find and section 4.10 addresses elevators. Specifically, 4.10.7 and 4.10.8 have been mentioned by some elevator technicians as the source of the "door close" trouble. With some less relevant material omitted:

4.10.7* Door and Signal Timing for Hall Calls

The minimum acceptable time from notification that a car is answering a call until the doors of that car start to close shall be calculated from the following equation:

T = D/(1.5 ft/s) or T = D/(445 mm/s)

where T total time in seconds and D distance (in feet or millimeters) from a point in the lobby or corridor 60 in (1525 mm) directly in front of the farthest call button controlling that car to the centerline of its hoistway door (see Fig. 21).

4.10.8 Door Delay for Car Calls

The minimum time for elevator doors to remain fully open in response to a car call shall be 3 seconds.

Based on posts from various elevator technicians, it's clear that these ADA requirements have at least been widely interpreted as stating hard minimums regardless of any user interaction. In other words, the ADA timing constitutes the minimum door hold time which cannot be shortened. Based on the 4.10.7 rule, we can see that that time will be as long as ten seconds in fairly normal elevator lobbies (16 feet, or about two elevators, from door centerline to the farthest call button). We can read the same in a compliance FAQ from Corada, an ADA compliance consulting firm:

User activation of door close (or automatic operation) cannot reduce the initial opening time of doors (3 seconds minimum) or the minimum door signal timing (based on 1.5 ft/s travel speed for the distance from the hall call button to car door centerline).

One point here can be kind of confusing. The minimum time for the door to be fully open is 3 seconds, but the door signal timing runs from the indication of which elevator has arrived (usually a chime and illuminated lamp) to the time that the doors start closing. That window includes the time the door spends opening on top of the fully-open dwell, and since it starts at 5 seconds and goes up from there, it will usually be the longer of the two requirements and thus set the actual minimum door time. Where this is likely to not be the case is single-elevator setups where the 5 second minimum timing applies and the time from chime to fully open eats up the first two seconds or so... in that case, the 3 second fully-open time becomes the limiting (or really, maximizing) factor.
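
As a worked example, here is the arithmetic for applying the two rules quoted above to a hypothetical two-car lobby; the 16 foot distance and the door opening time are made up for illustration:

    # Worked example of the ADA elevator timing rules quoted above (4.10.7/4.10.8).
    # The lobby distance and door opening time below are hypothetical.

    def hall_call_signal_time(distance_ft: float) -> float:
        """4.10.7: seconds from notification (chime/lamp) until doors START to close."""
        return max(5.0, distance_ft / 1.5)   # 1.5 ft/s walking speed, 5 s floor

    FULLY_OPEN_MIN = 3.0   # 4.10.8: doors must stay fully open at least this long

    distance_ft = 16.0     # hall call button to door centerline, a two-car lobby
    door_opening = 2.5     # seconds for the doors to travel open (hypothetical)

    t_4107 = hall_call_signal_time(distance_ft)   # ~10.7 s
    t_4108 = door_opening + FULLY_OPEN_MIN        # ~5.5 s measured from the chime

    print(f"4.10.7 requires {t_4107:.1f} s before the doors may start closing")
    print(f"4.10.8 alone would only require about {t_4108:.1f} s")
    print(f"governing minimum: {max(t_4107, t_4108):.1f} s")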

From some elevator manuals such as one for the Motion Control Engineering VFMC-1000, we can gather that the "minimum door hold time" and "door hold time" are separately configurable. I have seen several mentions online that in most elevators the "door close" button functions totally normally during the window between the minimum door hold time and the full door hold time. In other words, there may be some period during which pushing the door close button causes the door to close, but it will be after the end of the ADA-required minimum door time.

Here is the obvious catch: since reducing the door hold time will make the elevator more responsive (less time on the way to a call spent waiting with the doors open), elevator installers are usually motivated to make the door hold time as short as possible. Since the ADA requirements impose a minimum, it's likely very common for the minimum door hold time and the "normal" door hold time to be the same... meaning that the window to use the "door close" button is zero seconds in duration.

We can confirm this behavior by finding an elevator with a very long configured door hold time. That seems pretty easy to do: visit a hospital. Most hospitals set the door hold time fairly high to accommodate people pushing hospital beds around, so the normal door open time is longer than the ADA requirement (the ADA rules are of course written assuming a person can cover 1.5 ft/s which isn't very fast but still seems hard to achieve when accelerating a heavy hospital bed in a tight space). Call an elevator, step inside, wait for around ten seconds from the chime for the minimum door hold to elapse, and then push the "door close" button. What happens? Well, in my experience the door promptly closes, although I admit that I've only tested this on two hospitals so far. Perhaps your experience will vary: I can see the possibility of a hospital setting the minimum door hold time high, but of course that would get pretty annoying and probably produce pushback from the staff. In the hospitals where I've studiously observed the elevators the normal door hold time was close to 20 seconds, which feels like an eternity when you're waiting to get up one floor.

Another way we can inspect this issue is via door reopening rules. While older elevators used a rubber bumper on the door called a sensitive edge, most elevators you'll see today use a "light curtain" instead. This device, installed between the hall and car doors, monitors for the interruption of infrared light beams to tell if the door is clear. When the door is obstructed, ADA 4.10.6 requires the door to remain open for at least 20 seconds. After that point ADA just refers to the ASME A17.1 standard, which allows for a behavior called "nudging" in which the elevator controller encourages people to clear the door by closing it anyway (at slow speed). The light curtain can also be used to detect whether or not a person has entered the elevator, which can be used as an input to hold time. Some articles online say that you can "hack" an elevator waiting at an empty floor (because someone called the elevator and walked away, for example) by momentarily interrupting the light curtain so that the controller will believe that someone has entered.

Indeed this seems to work well on some elevators, but the ADA requirements do not allow an exception to minimum hold times based on light curtain detection. This means that the light curtain trick is basically equivalent to the door close button: we can expect it to, at most, shorten the door hold time to the ADA minimum. Nothing is allowed to decrease the time below the ADA minimum, except when the elevator is in a special mode such as fire or perhaps independent service.

So it seems that the reality of elevator "door close" buttons is rather less dramatic than Radiolab and the NYTimes imply: the "door close" button is perfectly functional, but details of the 1990 ADA mean that, most of the time people are pressing it, the elevator controller isn't yet permitted to close the door. As far as I can tell, outside of the ADA minimum door time, door close buttons work just fine.

And yet tons of articles online still tell us that the button is installed as a placebo... something that is demonstrably untrue considering its significance in fire (and maintenance, independent, etc) modes, and shows a general lack of understanding of elevator codes and the ADA. Moreover, it seems like something you would find out is untrue with about five minutes of research. So why is it such "common knowledge" that it makes the rounds of major subreddits and minor local news websites to this day?

No doubt a large portion of the problem is laziness. The "placebo" theory has a lot of sizzle to it. Even though the NYTimes is somewhat noncommittal and only implies that it is the true purpose of the button, most of the online pieces about door close buttons I can find appear to be based solely on the 2016 NYTimes article and actually repeat the claim about the placebo effect more strongly than the NYTimes originally makes it. In other words, the "fact" that the door close button is a placebo seems to mostly just be a product of lazy journalists rewriting an NYTimes piece enough to not feel like plagiarists.

There is also a matter of aesthetic appeal: the placebo theory sounds great. It has the universal appeal of mundane reality but also hints at some kind of conspiracy to deceive in the elevator industry. And, of course, it makes everyone feel better about the high failure rate of mashing the "door close" button without the complexity of an accurate explanation of the 1990 ADA rules. The NYTimes piece basically makes it sound like the ADA banned door close buttons, and it's easy to read the ADA and see that that's not true... but it takes some real attention and thought to figure out how the ADA really did change elevator controls.

This type of phenomenon, a sort of "internet urban pseudo-legend," is not at all unique to elevator buttons. In fact the very same 2016 NYTimes article that started that year's round of elevator button "fun facts" is also to blame for another widespread belief in placebo buttons: crosswalk request buttons. The NYTimes article says that most crosswalk buttons do nothing, explaining that the buttons were made non-functional after an upgrade to computer light controls. What the article does say, but many readers seem to miss, is that this is a fact about crosswalk buttons in New York City.

Many traffic lights operate in "actuated mode," where they base their cycling on knowledge of who is waiting where. Older traffic lights mostly used buried inductive loops under the lanes to detect lane occupancy (that a vehicle is present), but a lot of newer traffic lights use either video cameras or compact radar sets. Since they don't require cutting into the pavement and then resealing it, these are cheaper and faster to install. Newer video and radar systems are also better at detecting cyclists than pavement loops---although earlier video systems performed very poorly on this issue and gave video lane presence detection a bad reputation in some cities.

New York City, though, was a very early adopter of large-area computer control of traffic lights. One of the main advantages of central computer control of traffic lights is the ability to set up complex schedules and dynamically adjust timing. Not only can centrally-controlled traffic lights operate in sequence timing matched to the speed limit of the street, they can also have the durations in different directions and sequence speed adjusted based on real-time traffic conditions.

The problem is that combining central timing control with actuated operation is, well, tricky. In practice, most traffic lights that operate under sequence timing or remote timing control don't operate in actuated mode, or at least not at the same time. What some traffic lights do today is switch: sequence timing during rush hour, and actuated mode during lower traffic. Even with today's developments, combining scheduled timing with actuation inputs is tricky, and New York City adopted centralized control in the '70s!

So New York's adoption of central control was also, for the most part, an abandonment of actuated operation. The crosswalk buttons are actuation inputs, so they became non-functional as part of this shift. The 2016 NYTimes article explained that the city had estimated the cost of removing the now non-functional buttons at over a million dollars and so decided to skip the effort... but they are removing the buttons as other work is performed.

For the second time, this runs directly counter to the "mechanical placebo" argument the article is based on. The buttons weren't originally installed as placebo at all; when they were put in they were fully functional. A different decision, to switch to centralized timing control, resulted in their current state, and even then, they are being removed over time.

Moreover, the same does not apply to other cities. The NYTimes makes a very lazy effort at addressing this by referring to a now-unavailable 2010 ABC News piece reporting that they "...found only one functioning crosswalk button in a survey of signals in Austin, Tex.; Gainesville, Fla.; and Syracuse." It is unclear what the extent of that survey was, and I lack the familiarity with traffic signaling in those cities to comment on it. But in a great many cities, most of them in my experience, actuated traffic signals remain the norm outside of very high-traffic areas, and so the crosswalk buttons serve a real purpose. Depending on the light configuration, you may never get a "walk" signal if you don't press the button, or the duration of the "walk" signal (prior to the flashing red hand clearing time) may be shorter.

Actually one might wonder why those crosswalk buttons have so much staying power, given the technical progress in lane presence detection. Video and radar options for waiting pedestrian detection do exist. I have occasionally even seen PIR sensors installed for this purpose in suburban areas. The problem, I think, is that detecting a pedestrian waiting to cross involves more nuance than detecting a vehicle. Sidewalks don't have lane lines to clearly delineate different queues for each movement. A video or radar-based system can detect a pedestrian waiting on the corner, but not whether that person is waiting to cross one direction, or the other, or for an Uber, or just chose that spot to catch up on TikTok. Video-based waiting pedestrian detection may be too prone to false positives, and in any case the button is a robust and low-cost option that can also be used to meet ADA objectives through audible and tactile announcements.

So there's a story about buttons: the conspiracy about them being placebos is itself a conspiracy to get you to read articles in publications like "Science Alert." Or maybe that's just an old tale, and the reality of content-farmed news websites falls out of some implications of the ADA. It's a strange world out there.

--------------------------------------------------------------------------------

>>> 2023-02-17 something up there pt II

As we discussed previously, the search for UAP is often contextualized in terms of the events of 2017: the public revelation of the AATIP and alien-hunting efforts by Robert Bigelow and Tom DeLonge. While widely publicized, these programs seem to have led to very little. I believe the termination of the AATIP (which led to the creation of TTSA) to be a result of the AATIP's failure to address the DoD's actual concern: that UAP represented a threat to airspace sovereignty.

I just used a lot of four- and five-letter acronyms without explaining them. These topics were all discussed in the previous post and if you are not familiar with them I would encourage you to read it. Still, I will try to knock it off. Besides, now there is a new set of four- and five-letter acronyms. The end of the AATIP was not the end of the DoD's efforts to investigate UAP. Instead, military UAP research was reorganized, first into Naval intelligence as the UAP Task Force, and later into the cross-branch military intelligence All-Domain Anomaly Resolution Office, or AARO.

It is unclear exactly what the AARO has accomplished. As a military intelligence organization, the DoD will not comment on it. Most of what we know comes from legislators briefed on the program, like Sen. Gillibrand and Sen. Rubio. In various interviews and statements, they have said that AARO's work is underway but hampered by underfunding---underfunding that is, embarrassingly, a result of some kind of technical error in defense appropriation.

Administratively confused as they may be, the DoD's UAP efforts have led to the creation of a series of reports. Issued by the Director of National Intelligence (DNI) at the behest of Congress, the June 2021 unclassified report appeared to be mostly a review of the same data analyzed by AATIP. The report was short---9 pages---but contained enough information to produce a lot of reporting. One of the most important takeaways is that, up to around 2020, the military had no standardized way of collecting reports of UAP. Later reporting would show that even after 2020 efforts to collect UAP reports were uneven and often ineffective.

Much of the reason for this is essentially stigma: advocates of UAP research have often complained that through the late 20th century the military developed a widespread attitude of suppressing UAP incidents to avoid embarrassment. As a result, it's likely that there are many more UAP encounters than are known. This is particularly important since analysis (including that in the 2021 report) repeatedly finds that the majority of UAP reports are probably explainable, while a few are more likely to result from some type of unknown object such as an adversarial aircraft. In other words, the signal to noise ratio in UAP reports is low. Taken one way this might discourage reporting and analysis, since any individual report is unlikely to amount to anything. The opposite is true as well, though: if most UAP encounters are not reported and analyzed, it's likely that the genuinely troubling incidents will never be discovered. The 2021 report broadly suggests that this is exactly what was happening for many years: so few UAP incidents were seriously considered that no one noticed that some of them posed real danger.

The 2021 report briefly mentions that some UAP incidents were particularly compelling. For example, in 18 incidents the UAP demonstrated maneuvering. This doesn't mean "shot into the sky as if by antigravity," but rather that the objects appeared to be navigating towards targets, turning with intention, or stationkeeping against the wind. In other words, they are incidents in which the UAP appears to have been a powered craft under some type of control. Even more importantly, the report notes that in a few cases there were indications of RF activity. The military will never go into much detail on this topic because it quickly becomes classified, but many military aircraft are equipped with "electronic warfare" systems that use SDR and other radio technology to detect and classify RF signals. Historically the main purpose of these systems was to detect and locate anti-aircraft radar systems, but they have also been extended to general ELINT use.

ELINT is an intelligence community term for "electronic intelligence." Readers are more likely to be familiar with the term SIGINT, for signals intelligence, and the difference between the two can be initially confusing. The key is that the "electronic" in ELINT is the same as in "electronic warfare." SIGINT is about receiving signals in order to analyze their payloads, for example by cryptologic means. ELINT is about receiving signals for the sake of the signals themselves. For example, to recognize the chirp patterns used by specific adversarial radar systems, or to identify digital transmission modes used by different types of communications systems, thus indicating the presence of that communications system and its user. A simple and classic example of ELINT would be to determine that an adversarial force uses a certain type of encrypted digital radio system, and then monitor for transmissions matching that system to locate adversarial forces in the field. The contents don't matter and for an encrypted system may not be feasible to recover anyway. The mere presence of the signal provides useful intelligence.

The concept of ELINT becomes important in several different ways when discussing UAP. First, the 2021 DNI report's mention that several UAP were associated with RF emissions almost certainly refers to ELINT information collected by intelligence or electronic warfare equipment. These RF emissions likely indicate some combination of remote control and real-time data reporting, although a less likely possibility (in my opinion) is that it reflects electronic warfare equipment on the UAP engaged in some type of active countermeasure.

It's meaningful to contrast this view of the matter with the one widespread in the media in 2017. A UAP that maneuvers and communicates by radio is not exactly X-Files material, and almost by definition can be assumed to be an sUAS---small unmanned aerial system, commonly referred to as a drone. Far from the outlandish claims made by characters like Tom DeLonge, such a craft is hardly paranormal in that we know such devices exist and are in use. What is a startling discovery is that sUAS are being spotted operating near defense installations and military maneuvers and cannot be identified. This poses a very serious threat not only to airspace sovereignty as a general principle but also to the operational security of the military.

Perhaps the component of the report that generated the most media interest is its analysis of the nature of the reported UAP. In the vast majority of cases, in fact all but one, the DNI report states that it was not possible to definitively determine the nature of the UAP. This was almost always because of the limited information available, often just one or two eyewitness accounts and perhaps a poor photo and radar tracks. Most of these incidents presumably do have explanations within the realm of the known that simply could not be determined without additional evidence. On the other hand, the report does state that there are some cases which "may require additional scientific knowledge" to identify.

It is not entirely clear how dramatically this statement should be taken. It's possible, even likely, that the phrase mostly refers to the possibility that new methods of evidence collection will need to be developed, such as the new generation of radar systems currently emerging to collect more accurate information on sUAS with very low radar cross section due to their small size. It's also possible that the phrase reflects the fact that some reported UAP incidents involve the UAP behaving in ways that no known aerial system is capable of, such as high speeds and maneuvers requiring extreme performance. Once again, there is a temptation to take this possibility and run in the direction of extraterrestrial technology. Occam's razor at the very least suggests that it's more likely that some adversarial nation has made appreciable advancements in aviation technology and kept them secret. While perhaps unlikely this is not, in my mind, beyond reason. We know, for example, that both Russia and China have now made more progress towards fielding a practical hypersonic weapons system than the United States. This reinforces the possibility that their extensive research efforts have yielded some interesting results.

Following the 2021 UAP report, Congress ordered the DNI to produce annual updates on the state of UAP research. The first such update, the 2022 report, was released a few months ago. The unclassified version is quite short, but it is accompanied by a significantly longer and more detailed classified version which has been presented to some members of Congress. The unclassified document states that the number of known UAP incidents has increased appreciably, largely due to the substantial effort the military has made to encourage reporting. To provide a sense of the scale, 247 new reports were received in the roughly 1.5 years between the preliminary and 2022 reports. A number of additional incidents occurring prior to the 2021 report also came to the attention of military intelligence during the same period, and these were analyzed as well.

Perhaps the most important part of the 2022 report is its statement that, of the newly analyzed incidents, more than half were determined to be "unremarkable." In most cases, it was judged that the incident was probably caused by a balloon. While these are still of possible interest, they are less interesting than the remainder which are more difficult to explain. Intriguingly, the report states that some UAP "demonstrated unusual flight characteristics or performance capabilities." This supports the more dramatic interpretation of the 2021 report, that it is possible that some incidents cannot be explained without the assumption that some adversary possesses a previously unknown advanced technology.

While it already attracted a great deal of media attention, this entire matter of DNI reports was only the opening act to the spy balloon. The airspace sovereignty aspect of the UAP reports is not something that attracted much discussion in the media, but it has become much more front of mind as a UAP of the first kind drifted across the United States. This UAP was not unidentified for long, with the military publicly attributing it to China---an attribution that China has both formally and informally acknowledged.

Balloons are not new in warfare. Indeed, as the oldest form of aviation, the balloon is also the oldest form of military aviation. The first practical flying machine was the hot air balloon. While the technology originated in France, the first regular or large-scale example of military aviation is usually placed at the US Civil War. Balloons (gas balloons, by that point) were routinely used for reconnaissance during the Civil War, and the slow movement and long dwell times of balloons still make them attractive as reconnaissance platforms.

Military ballooning in the United States is not limited to the far past. During World War II, the Japanese launched nearly 10,000 balloons equipped with incendiaries. The hope was that these balloons would drift into the United States and start fires---which some of them did, although a concerted press censorship program largely prevented not only the Japanese but also Americans from learning of the campaign. Ultimately the impact of the balloon bombs was very limited, but they are still often considered the first intercontinental weapon system. They might also be viewed as the first profound challenge to US air sovereignty, as the balloons required no nearby support (as aircraft of the era did) and the technology of the time provided no effective means of protection. Indeed, this was the calculus behind the press censorship: since there was no good way to stop the balloon bombs, the hope was that if the US carefully avoided any word of them being published, the Japanese might assume they were all being lost at sea and stop sending them.

While the Cold War presented Soviet bombers and then missiles as top concerns, it could be said that balloons have always been one of the greatest practical threats to airspace sovereignty. Despite their slow travel and poor maneuverability, balloons are hard to stop.

Balloons remain surprisingly relevant today. First, modern balloons can operate at extremely high altitudes, similar to those achieved by the U-2 spy plane. This provides an advantage both in terms of observation range and secrecy. Second, balloons are notoriously difficult to detect. While the envelope is large, the material is largely transparent to RF, resulting in a very low radar cross section. Careful design of the suspended payload can give it a very low radar cross section as well... often easier than it sounds, since the payload is kept very lightweight. The sum result of these two factors is that even large balloons are difficult to detect. They are most obvious visually, but the United States and Canada have never had that substantial of a ground observer program and the idea has not been on the public mind for many decades. Many people might see a balloon before any word of it reached air defense.

On January 28th, a large balloon operated by China entered US airspace over Alaska. During the following week, it drifted across the country until leaving the east coast near South Carolina, where it was shot down with a Sidewinder missile. Circumstances suggest that both the Chinese and US administrations may have intended to downplay the situation to avoid ratcheting tensions, as the US government did not announce the balloon to the public until about a day after it had initially been detected entering US airspace. Publicly, China claimed it to be a weather balloon which had unintentionally drifted off course. The New York Times reports that, privately, Chinese officials told US counterparts that they had not intended for the balloon to become such a public incident and would remove it from US airspace as quickly as possible.

Modern balloons of this type are capable of a limited but surprisingly flexible form of navigation by adjusting their buoyancy, and thus altitude, to drift in different winds. Perhaps the balloon spent a week crossing the US by intention, perhaps an unfortunate coincidence of weather created a situation where they were not able to navigate it out more quickly, or perhaps some equipment failure had rendered the balloon unable to change its altitude. I tend to suspect one of the latter two since it is hard to think of China's motivation to leave the balloon so publicly over the United States. In any case, that's what happened.

We now know more about the balloon, not so much because of analysis of the wreckage (although that is occurring) but more because the military and administration have begun to share more information collected by means including a U-2 spy plane (one of few aircraft capable of meeting the balloon's altitude) and other military reconnaissance equipment. The balloon had large solar arrays to power its equipment, it reportedly had small propellers (almost certainly to control orientation of the payload frame rather than for navigation), and it bristled with antennas.

This is an important point. One of the popular reactions to the balloon was mystery at why China would employ balloons when they have a substantial satellite capability. At least for anyone with a background in remote sensing the reason is quite obvious: balloons are just a lot closer to the ground than satellites, and that means that just about every form of sensing gets easier: received signals are stronger, less gain is needed, and better resolution is achievable. This is true of optical systems, where balloons are capable of much better spatial resolution than satellites, but also true of RF, where atmospheric attenuation and distortion both become very difficult problems when observing from orbit. Further, balloons are faster and cheaper to build and launch than satellites, allowing for much more frequent reconfigurations and earlier fielding of new observation equipment. The cost and timeline on satellites is such that newly developed intelligence technology takes years to make it from the lab to the sky... Chinese intelligence balloons, on the other hand, can likely be fabricated pretty quickly.
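
The optics case is easy to put rough numbers on: for a diffraction-limited aperture, angular resolution scales with wavelength over aperture diameter, and ground resolution is just that angle times the distance. A back-of-the-envelope comparison; the altitudes and the 30 cm aperture are illustrative, not claims about any particular platform:

    # Back-of-the-envelope: ground resolution of the same camera aperture from a
    # high-altitude balloon vs. a low Earth orbit satellite. Diffraction-limited
    # estimate only; numbers are illustrative, not a claim about any real system.
    WAVELENGTH = 550e-9      # visible light, meters
    APERTURE = 0.3           # 30 cm optic, meters (hypothetical)

    def ground_resolution(altitude_m: float) -> float:
        theta = 1.22 * WAVELENGTH / APERTURE      # Rayleigh criterion, radians
        return theta * altitude_m                 # meters on the ground

    for name, altitude in [("balloon at 20 km", 20e3), ("satellite at 500 km", 500e3)]:
        print(f"{name}: ~{ground_resolution(altitude) * 100:.0f} cm resolution")
    # The balloon comes out ~25x sharper simply because it is ~25x closer.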

It's useful here to return to the topic of ELINT. First, it's very likely that ELINT was a major mission of this balloon. Sensing RF emissions from military equipment at close range is invaluable in creating ELINT signatures for equipment like radar and encrypted communications systems, which directly translates into a better capability to mount an offensive from the air. SIGINT was likely also a mission. One of the advantages of ELINT collection is that the data acquired for ELINT purposes can typically be processed to glean SIGINT information, and even provides valuable material for cryptologists attempting to break codes.

ELINT is also relevant in the detection of the balloon. While the spy balloon in the recent incident was detected by conventional means, the DoD has reported that they are now able to assert that this is at least the fifth such balloon to enter US airspace. For those not familiar with ELINT methods this might be surprising, but it makes a great deal of sense. The fact that this balloon was tracked by the military for days provided ample opportunities to collect good quality ELINT signatures of the communications equipment used by the balloon. The military possesses a number of aircraft dedicated to the purpose of ELINT and SIGINT collection, such as the RC-135---a modified C-135 Stratolifter equipped with specialized antennas and hundreds of pounds of electronic equipment. These types of aircraft could orbit the balloon for hours and collect extensive recordings of raw RF emissions.

ELINT information is also collected by ground-based and orbital (satellite) assets, including a family of satellites that deploy large parabolic reflectors to collect RF signals with extremely high gain. The data collected by these platforms is likely retained in raw form, allowing for retrospective analysis. Information collected by similar means has been publicly used in the past. And this is most likely how the first four balloons were discovered: by searching historic data collected by various platforms for matching ELINT signatures. The presence of the same digital data modem as in the recent spy balloon, in US airspace, almost certainly indicates a similar Chinese asset operating in the past.

It's important to understand that the RF environment is extremely busy, with a great deal of noise originating from the many radio devices we use every day. It's simply not feasible for someone in some military facility to carefully review waterfall displays of the RF data collected by numerous ELINT assets. What is much more feasible is to develop signatures and then use automation to search for instances of similar traffic. It's the practical reality of intelligence at scale.
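
As a toy illustration of what "search for instances of similar traffic" can mean (and only a toy; real systems are far more sophisticated and very classified), a matched-filter style search cross-correlates a known signature against archived recordings and flags anything above a threshold:

    # Toy "signature search": cross-correlate a known waveform against a long
    # recording and flag peaks. A cartoon of the idea, not an ELINT system.
    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up "signature": a short linear chirp, standing in for a distinctive
    # emission (a radar chirp, a modem preamble, etc).
    t = np.linspace(0, 1, 500)
    signature = np.sin(2 * np.pi * (5 + 20 * t) * t)

    # A long noisy recording with the signature buried in it at two points.
    recording = rng.normal(0, 1.0, 50_000)
    for start in (12_000, 37_500):
        recording[start:start + signature.size] += signature

    # Normalized cross-correlation; peaks mark likely occurrences of the signature.
    window_energy = np.convolve(recording**2, np.ones(signature.size), mode="valid")
    corr = np.correlate(recording, signature, mode="valid")
    corr /= np.sqrt(np.sum(signature**2) * window_energy)

    hits = np.flatnonzero(corr > 0.5)
    print("candidate matches near samples:", hits[:5], "...")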

The discovery of the recent spy balloon has had an incredible effect on air defense. I am of the general opinion, and have occasionally argued in the past, that the US government has significantly under-invested in air defense since the end of the Cold War. While we do need to move on from the hysteria of the 1970s, the lack of investment in air surveillance and defense over the last fifty years or so has led to an embarrassing situation: our ability to detect intrusion on our airspace is fairly poor, and when we do detect something, it can take well over an hour to get a fighter in the air to investigate it. The balloon brought this problem to the attention of not only the government but the public, and so some action had to be taken.

Primary radar [1] is quite complex. Even decades into radar technology it remains a fairly difficult problem to pick objects of interest, such as aircraft, out of "clutter"---the many objects, ranging from the ground to wind-blown dust, that can produce primary radar returns. One of the simplest approaches is to ignore objects that are not large and moving fast. This type of filtering is usually adequate for detection of aircraft, but fails entirely for some objects like balloons and sUAS that may be small and slow moving.
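
To make the filtering concrete, here is a cartoon of the "big and fast only" track filter that paragraph describes, with invented thresholds. The point is just that a balloon or small drone fails both tests and gets thrown out with the clutter:

    # Cartoon of a "big and fast only" radar track filter. Thresholds are invented;
    # the point is that a balloon or small drone fails both tests and is discarded.
    from dataclasses import dataclass

    @dataclass
    class Track:
        name: str
        speed_kts: float   # ground speed
        rcs_m2: float      # estimated radar cross section

    MIN_SPEED_KTS = 80.0
    MIN_RCS_M2 = 1.0

    def is_of_interest(t: Track) -> bool:
        return t.speed_kts >= MIN_SPEED_KTS and t.rcs_m2 >= MIN_RCS_M2

    tracks = [
        Track("airliner", speed_kts=450, rcs_m2=100.0),
        Track("light aircraft", speed_kts=110, rcs_m2=2.0),
        Track("spy balloon", speed_kts=30, rcs_m2=0.5),    # slow AND small: dropped
        Track("small drone", speed_kts=40, rcs_m2=0.01),
    ]

    for t in tracks:
        print(f"{t.name:15s} -> {'track' if is_of_interest(t) else 'filter out'}")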

Further, the US and Canada are very large. Integrating data from the many radar surveillance sites and presenting it in a way that allows an air defense controller to identify suspicious objects in the sea of normal air traffic is a difficult problem, and a problem that the US has not seriously invested in for decades. The information systems used by both the FAA and NORAD for processing of radar data are almost notoriously poor. In the wake of the spy balloon, officials have admitted to the press that the military is struggling to process the data from radar systems and identify notable objects.

Air defense is one of the oldest problems in computing as an industry. One of the first (perhaps the first, depending on who you ask) networked computer systems was SAGE: an air defense radar processing system. These problems are still difficult today, but we are no longer mounting cutting-edge research and development projects to face them. Instead, we are trapped in a morass of defense contractors and acquisition projects that take decades to deliver nothing.

In response to the discovery of the spy balloon, NORAD has changed the parameters used to process radar data to exclude fewer objects. They have also made a policy change to take action on more unknown objects than they had before. This led directly to NORAD action to intercept several balloons over the past two weeks. There are now indications that at least some of these balloons may have been ordinary amateur radio balloons, not presenting a threat to air sovereignty at all. Some will view this as an embarrassment or indictment of NORAD's now more aggressive approach, but it's an untenable problem. If China or some other adversary is sending small balloons into our airspace, we need to make an effort to identify such balloons. But currently, no organized system or method exists to identify balloons and other miscellaneous aerial equipment.

One could argue (indeed, here I am) that up to about two weeks ago NORAD was still looking for Soviet bombers, with a minor side project of light aircraft smuggling drugs. Air defense largely ignored anything that wasn't large and actively crossing a border (or more to the point an ADIZ). And that's how about four large intelligence platforms apparently wandered in unnoticed... with UAP reports suggesting that there may be much more.

My suspicion is that the coming year will involve many changes and challenges in the way that we surveil our airspace. I think that we will likely become more restrictive in airspace management, requiring more aircraft than before to have filed flight plans. Otherwise it is very difficult to differentiate a normal but untracked object from an adversarial intelligence asset.

And indications are that adversarial intelligence assets are a very real problem. China's spy balloon program is apparently both long-running and widespread, with similar balloons observed for years in other countries as well. This shouldn't be surprising---after all, reconnaissance balloons are the oldest form of military aviation. The US and allies made enormous use of reconnaissance balloons during the Cold War, sending many thousands into the USSR. It's likely the case that we only really slowed down because our modern reconnaissance balloon projects have all become notorious defense contracting failures. We're still trying, but projects like TARS have run far over budget and still perform poorly in operational contexts.

It might feel like this situation is new, and in terms of press reporting it is. But we should have seen it coming. In an interview following a classified briefing, Senator John Kennedy said that "These objects have been flying over us for years, many years. We've known about those objects for many years."

Robert Bigelow got into UAP research because he was searching for aliens. Maybe aliens are out there, maybe they aren't, but there is one thing we know for sure: our adversaries are out there, and they possess aviation technology at least as advanced as ours. For decades we ignored UFOs as folly, and for decades we ignored potential aviation advancements by our adversaries along with them. Now those advancements are floating across the northern United States and perhaps worse---the DNI is hoping they'll find out, if they can just get people to report what they see.

[1] Radar that operates by detecting reflections or attenuation of an RF field by an object. This is as opposed to secondary radar, more common in air traffic control, that works by "interrogating" a cooperative transponder installed on the aircraft.

--------------------------------------------------------------------------------

>>> 2023-02-14 something up there pt I

Over the last few weeks, there has been an astounding increase in the number of objects shot down by North American air defense. Little is yet known about some of these objects, but it is clearly one of the more dramatic UFO turns in recent memory. Some of the mystery is simply the fog of war, and the time it takes for defense organizations to collect and publicize information. I think that much of it, though, is attributable to a few frustrating factors: the limited familiarity most of the public has with the reality of military operations today; the tendency of the most vocal parts of the public to attempt to fit all events into a preconceived theory (often of the more out-of-this-world kind); and the poor job the media has done of contextualizing these events.

I have written once before about UFOs, and I try not to do it too much for fear of coming off as a crazy person. Still, though, UFOs and their colorful history are one of my greatest interests. Over the last week I have done a lot of yelling at the television and internet comment sections. So here, I am going to attempt something ambitious. I would like to put together for you a possible, even likely, story of the UFO news of the last two years: of AATIP, balloons, and how they all fit together.

Most of what I am about to write is fairly well-established fact, but the way that I connect these facts together is a matter of speculation and opinion. Still, my knowledge of both the history and present of aerial phenomena and the military and intelligence communities, with particular focus on air defense, gives me a set of opinions on this topic that feel extremely obvious to me but are seldom presented in the media or online discussions.

I can't promise I'm correct, but I do hope you'll consider the possibility that the story I will tell here is indeed what has happened: that, far from disclosure, we are currently living out the consequences of a sophisticated adversary, government inefficacy, and one man's eccentric swindle.

And that's where we'll start: with one man.

Robert Bigelow made his wealth in the hospitality business. Budget Suites of America is his marquee brand, but his empire spreads far beyond with a huge hotel and multi-family housing portfolio. Through most of the second half of the 20th century, hotels kept Bigelow busy and made him rich, but by the 1990s he turned towards his true passion: the paranormal.

Most reporting on Bigelow focuses on Bigelow Aerospace (BA). When he's identified as an eccentric, it's usually in regards to BA's research into UFOs. And yet, Bigelow's paranormal investigations began years earlier: in 1995, he founded the National Institute for Discovery Science, or NIDSci. NIDSci's focus was not UFOs but paranormal phenomena more broadly, including parapsychology. Bigelow was joined in this venture by his friend, journalist George Knapp.

Knapp is perhaps best known in paranormal communities for his extensive reporting on the claims of Bob Lazar [1]. In the mid 1990s, Knapp turned his focus towards cattle mutilation and related phenomena, the same field of inquiry that made Linda Moulton Howe's fame. Cattle mutilation has a long history and in the '90s was seen as one of the more credible forms of paranormal activity. Quite a few paranormal researchers chased mutilated cattle like ambulances, but Knapp had a remarkable lead on the topic: Skinwalker Ranch.

Also known as the Sherman Ranch after the brief owners that first shared stories of its haunting, Skinwalker Ranch is a 512 acre property in rural Utah. It takes its common name from a frightening creature of Navajo belief, "yee naaldlooshii." The Dine feel it to be unwise or at least improper to discuss the Skinwalker, and so I will not dwell on it. We can avoid the topic quite easily, as the relation of Skinwalker Ranch to the Skinwalker itself is loose and a result of white settlers rather than anyone who would know better. What we can certainly say about Skinwalker Ranch is this: it is popularly associated with spooky shit.

Summarized briefly, the stories of Skinwalker Ranch encompass just about every paranormal modality you can think of. Crop circles, mutilated cattle, strange lights in the sky, footsteps heard at night, a quiet but disconcerting sound that you cannot escape, bedroom doors locked at night to fend off something that has been scratching at the walls, creatures that are felt rather than seen, bright apparitions like spotlights chasing people on ranch roads, et cetera.

Whether that spooky shit is the consequence of aliens, secret military projects, Bigfoot, ghosts, or otherwise depends largely on who you ask. The legends of Skinwalker Ranch also originate almost entirely with the Shermans who owned it for only two years, which has produced some obvious questions about their veracity. Still, it is one of the most famous sites of paranormal activity and a household name among paranormal enthusiasts [2].

In 1996, Knapp joined with Bigelow and biochemist Colm Kelleher to resolve the mystery of Skinwalker Ranch once and for all, or at least publish a book about it. That year, NIDSci bought the ranch. A small staff of scientists and paranormal enthusiasts was recruited to perform research on the site, and it was otherwise closed to access. It has remained privately owned and guarded since then, perpetuating its paranormal associations.

Bigelow owned Skinwalker Ranch for about twenty years, but serious investigation seems to have only occurred for the first half of that period. In 2005, Knapp and Kelleher published a book, "Hunt for the Skinwalker," presenting their results. The results are, well, minimal. The book is mostly a recounting of the legends told by the Shermans, along with similar encounters during NIDSci's tenure.

In any case, the details of Skinwalker Ranch are not all that important to the story I am telling here. The reason I bring this whole thing up is because of what it tells us about Robert Bigelow. Bigelow is fascinated with paranormal phenomena and has the wealth and connections to bring journalists and scientists into his projects. His projects do not necessarily produce results.

Most of all, remember this: Bigelow has done this before.

George Knapp had another friend of note: the late Harry Reid, a long-serving senator from Nevada. In fact, Knapp and Reid were in conversation on the topic of UFOs the same year that Bigelow bought Skinwalker Ranch. I do not know to what extent Reid was aware of NIDSci's efforts, but I think it must have been at least a bit, as Reid wrote in a New York Times editorial that Knapp had invited him to a conference in 1996. In any case, Reid found Knapp credible, and became the principal congressional advocate of serious investigation of UAPs. Reid was quite clear about his interest in UFOs, and while he viewed extraterrestrial origin as only one possibility, he felt it to be a possibility worth investigating.

Here I should discuss terminology. I tend to use the term UFO, or unidentified flying object. The problem with "UFO" is that it is widely understood to refer specifically to phenomena of ostensibly extraterrestrial origin, and it's closely associated with conspiracy theories and loons. In modern government research, the term UAP, for unidentified aerial phenomena, is preferred. This is indeed mostly a matter of optics. I do think the distinction is important, though, as even within the UFO community "UFO" tends to have an alien connotation, and "UAP" is not intended to. The term UAP allows us to be a bit more flexible in our thinking by not assuming the existing body of extraterrestrial-oriented UFO research. From this point on I will prefer the term UAP for consistency with reporting on the topic.

In 1999, Robert Bigelow founded Bigelow Aerospace (BA). The history of BA is confusing in some ways. On the one hand, it seems that Bigelow was genuinely interested in developing aerospace technology, perhaps particularly for the purpose of space tourism... right in line with his history in hospitality. On the other hand, BA was founded right in the middle of the Skinwalker Ranch project, and it's hard to imagine that it wasn't related. BA has held various contracts in space systems development but has never had a very large staff. It is mostly known today for the way that it, too, interacted with Senator Reid: the Advanced Aerospace Threat Identification Program, or AATIP.

AATIP, by Reid's own account, started in 2007. It was a highly secretive program and so the early details are somewhat obscure. The main gist of AATIP was to collect reports of UAPs and then analyze those incidents to develop a possible explanation. Like many military projects, AATIP was contracted out to private industry. Also like many military projects, the AATIP contract was awarded to the same person who had lobbied for the program's creation: Robert Bigelow, through a division of BA called Bigelow Aerospace Advanced Space Studies or BAASS. Reid makes it fairly clear that AATIP started and ended with Robert Bigelow.

Many aspects of AATIP are unknown or questionable. Perhaps most notable is the question of AATIP's leadership. Long-time military intelligence analyst Luis Elizondo claimed, after his 2012 separation from the military, to have been AATIP's director. The Pentagon denies this, and journalists have questioned various aspects of Elizondo's story, but he has a notable supporter: Senator Reid concurs that Elizondo led the program. As a general matter it seems fairly certain that Elizondo was at least a senior leader of AATIP, but the confusion underscores the uncertainty around the history, mission, and outcomes of the DoD's UAP efforts in the late 2010s. One gets the impression that no one is telling the whole story, probably because everyone is trying to make themselves look good.

What we do know about AATIP is that the program ended in 2012, and that BAASS produced a lengthy report on its findings. This report has never been released to the public, but it is thought to be largely similar to more recent reports from the DoD's in-house UAP program, mostly summarizing BA's conclusions after attempting to identify the cause of a large number of individual UAP incidents. Various parties involved in AATIP, from Elizondo to Reid, have made large claims about AATIP having identified possible extraterrestrial technology, but nothing has emerged to substantiate these claims. I find it most likely that they were exaggerations of more commonplace anomalies in AATIP data.

This is where I will diverge somewhat from undisputed history and share my opinion. AATIP demonstrates that at least a few in Congress and likely some individuals in the DoD had a genuine interest in UAP. I believe, though, that most journalists have been entirely too credulous in their reporting on AATIP. While the DoD's and likely Reid's interest in the topic were more out of concern for national security, BAASS had something else in mind. One thing we know about Bigelow is that he is fascinated by the paranormal and can spin very little evidence into a huge story, as he did at Skinwalker Ranch. Moreover, there are clear indications that AATIP did not exactly operate as planned. Besides the general confusion around the exact operating details of AATIP, which suggests that the program operated with very little DoD oversight, I find it likely that AATIP diverged entirely from its original purpose.

AATIP was originally funded as a research program into possible advanced weapons systems possessed by adversaries, but it ended as a research program into extraterrestrial presence on Earth. Multiple journalists report that this change in focus occurred at the behest of Bigelow himself, and the Pentagon's awkward termination of the program in 2012 suggests that it did not occur with DoD approval.

I believe that Bigelow won the AATIP contract more by connections and luck than competence, and that AATIP went "off the rails" essentially from the beginning. Bigelow was hunting for aliens and the powerful Senator Reid shared this intention. Through confidence and political savvy, hanging mostly off of Senator Reid's considerable influence on defense spending, Bigelow was able to separate the Pentagon from some $22 million to fund his personal hobby. While I believe his passion was real and his intent good, AATIP was largely Bigelow's flight of fancy and was not aligned with actual DoD interests in the topic. As senior leadership in the executive branch and Congress became more aware of the situation, AATIP was quietly ended. To support its own interest in adversarial systems, the Pentagon replaced AATIP with an internal program: the UAP Task Force, later reorganized as the All-Domain Anomaly Resolution Office.

The former members of AATIP did not take this change well and attempted to pivot their work from government funding to the private sector. These efforts eventually reached wealthy UFO enthusiast Tom DeLonge, of Blink-182 fame. DeLonge had by this point connected with Hal Puthoff. Puthoff is an electrical engineer, former Scientologist, and paranormal researcher long known for his research into psychics and remote viewing. Puthoff worked in these fields at an opportune time: most who are familiar with the concept of remote viewing know of it because of the military's efforts depicted in "The Men Who Stare at Goats." Puthoff was directly involved in these programs as a researcher at Stanford University spinoff and defense contractor SRI, which administered some of the military's psychic research on contract. After these efforts, Puthoff founded EarthTech International, which continues research in parapsychology, cold fusion, and other fields which can be generally categorized as "woo."

DeLonge, Puthoff, and former CIA agent and UFO experiencer Jim Semivan founded an organization called To the Stars Academy of Arts and Sciences (TTSA) in 2017. TTSA was somewhere between a spinoff and new parent organization for a media company called To The Stars that had distributed records and books for Tom DeLonge. Through an odd series of announcements, TTSA basically transformed from DeLonge's private record label to a rough continuation of AATIP, but one that would be publicly funded through the sales of media. While TTSA has made claims to extraterrestrial technology and breakthroughs in UAP research, almost nothing that they've put out has ever made any sense, and unsurprisingly the organization has faded into obscurity. TTSA's ambitions of original UAP research basically disappeared by 2018, and today TTSA is little more than DeLonge's online merch store. Given the questions around Elizondo's history, it's unclear how much TTSA had to do with AATIP in the first place, but it certainly didn't amount to anything.

This whole matter of AATIP and TTSA is sort of a flash in the pan, but it set critical context for events to come. The DoD had invested real money and effort into the question of UAPs. The organization that spent that money, AATIP/BAASS, and its loose successor TTSA, seemed to very openly consider UAP research to be research into extraterrestrial presence and other paranormal phenomena. The media, for the most part, has not differentiated between Bigelow's interests and the Pentagon's interests in this regard. I believe that Bigelow was very much hunting for aliens, but the Pentagon was not... the Pentagon was looking for explanations for UAP, and aliens were probably not high on the list of expected outcomes. It does not help matters that Senator Reid seems to have been more on Bigelow's side of this divide.

The real crux of the contemporary UAP issue is that UAPs returned to public attention due to Bigelow's eccentric goose chase and DeLonge's self-promotion, but Bigelow's DoD contract and Elizondo's military past gave these otherwise incredible stories the imprimatur of government. The media's unquestioning reporting on AATIP and even, to some extent, TTSA gave the impression that these were sophisticated programs endorsed by the government. In fact, they were haphazard efforts by just a few people with long histories in quackery.

AATIP was public knowledge years earlier but became a major news item in 2017 due to DeLonge and Elizondo's promotion of TTSA. Bigelow, DeLonge, Elizondo, and even Senator Reid openly spoke about AATIP's ostensible extraterrestrial research, while the DoD declined to speak about an apparently classified program. In fact, it was not until some time later that it became evident that DoD had continued UAP research at all after 2012, and that research was done under conditions of secrecy as well.

What the public heard is that the Pentagon was hunting for UFOs. How that related to actual DoD interests or programs was irrelevant, because the Pentagon wouldn't talk about it and the media didn't particularly care. The UFOs made headlines. Pentagon UAP reporting procedures and incident databases were boring details.

This particular outcome of the 2017 news cycle, a series of crazed front-page articles that I believe to have been nothing but Bigelow and DeLonge promoting their own business ventures, massively influenced the way UAPs are viewed by the public today. What was really Bigelow's personal lark enabled by his Senate connections became a new MKULTRA but less sinister. No one took it seriously. Well, except for people who thought UAPs were definitely aliens, who took it as seriously as they do Bob Lazar.

What about the Pentagon's side of the story, though? Why was the military interested in UAPs, and why did it continue UAP research (and, it seems, expand it) after Bigelow's involvement ended? I believe that we recently saw the answer floating eastwards across the northern United States.

The thing is, aliens are one of the less likely explanations for UAPs, and to be honest they are one of the less interesting. Most UAPs, it stands to reason, originate here on earth. And that is very much a military concern.

Foo fighters, strange aircraft reported by military pilots, are just about as old as military aviation. The term "foo fighter" comes from WWII, and indeed WWII was lousy with strange aerial encounters. It has always been assumed that the vast majority of foo fighters were mistaken perceptions, but they have always been of interest to military intelligence because of the possibility that they were simply misidentified enemy aircraft. From this perspective the strange, otherworldly behavior of foo fighters is all the more interesting: they might represent enemy aircraft of a novel kind.

The mass publicity around UAPs in 2017 spurred a great deal of public interest, which resulted in some media reporting on UAP incidents as they happened. The Drive's Tyler Rogoway has perhaps become today's Linda Moulton Howe but more credible, as he has repeatedly written some of the most detailed analysis of UAP incidents. Put together, Rogoway's articles on UAPs from 2017 to the present don't come together into any particular narrative except for the broad one of challenges to airspace sovereignty.

Airspace sovereignty is a general term used to describe a state's control of its airspace. The United States exercises air sovereignty through the civilian operations of the FAA and the military operations of NORAD, a joint US-Canadian command that shares the FAA's radar network to observe for Soviet bombers and other aerial threats. Obviously Soviet bombers are no longer a great concern, but the technical and bureaucratic infrastructure of NORAD are still mostly organized around that threat.

The FAA-Air Force Joint Surveillance System consists of radar instruments that are about 30 years old at the newest, with some equipment dating back to the '60s still in use. It is a common misconception that the FAA, NORAD, or someone has complete information on aircraft in the skies. In reality, this is far from true. Primary radar is inherently limited in range and sensitivity, and the JSS is a compromise aimed mostly at providing safety of commercial air routes and surveillance off the coasts. Air traffic control and air defense radar is blind to small aircraft in many areas and even large aircraft in some portions of the US and Canada, and that's without any consideration of low-radar-profile or "stealth" technology. With limited exceptions such as the Air Defense Identification Zones off the coasts and the Washington DC region, neither NORAD nor the FAA expect to be able to identify aircraft in the air. Aircraft operating under visual flight rules routinely do so without filing any type of flight plan, and air traffic controllers outside of airport approach areas ignore these radar contacts unless asked to do otherwise.

The idea I am trying to convey is that airspace sovereignty is a tricky problem. The US and Canada are very large countries and so the airspace over them is very large as well. Surveilling that airspace is expensive and complex. Since the decline of the Cold War there has been no interest in spending the money that would be required for complete airspace awareness, and indeed the ability of the FAA and military to field airspace surveillance technology seems to have declined over recent decades rather than increased. We don't really know what's out there all the time, and it seems very possible that a determined adversary might be able to sneak in and out of US airspace largely undetected.

There are incidents and accidents, hints and allegations, that suggest that this concern is not merely theoretical. In late 2017, air traffic controllers tracked an object on radar in northern California and southern Oregon. Multiple commercial air crews, asked to keep an eye out, saw the object and described it as, well, an airplane. It was flying at a speed and altitude consistent with a jetliner and made no strange maneuvers. It was really all very ordinary except that no one had any idea who or what it was. The inability to identify this airplane spooked air traffic controllers who engaged the military. Eventually fighter jets were dispatched from Portland, but by the time they were in the air controllers had lost radar contact with the object. The fighter pilots made an effort to locate the object, but unsurprisingly considering the limited range of the target acquisition radar onboard fighters, they were unsuccessful. One interpretation of this event is that everyone involved was either crazy or mistaken. Perhaps it had been swamp gas all along. Another interpretation is that someone flew a good sized jet aircraft into, over, and out of the United States without being identified or intercepted. Reporting around the incident suggests that the military both took it seriously and does not want to talk about it.

This incident is not unique. Over the last few years there have been multiple instances of commercial aircrews reporting unidentified aircraft, which were sometimes fantastical and sometimes quite mundane. Fewer incidents of radar contact with unknown aircraft are known, but these are less likely to make it to the press. Moreover, air traffic controllers with the FAA and, apparently, military air defense controllers both have a tendency to filter their radar scopes to hide objects that are not "of interest." Several aviation accidents in the last five years have resulted in investigations that found that radar did detect concerns such as flocks of birds, but those contacts were not displayed due to the configuration of the radar scope. This suggests that controllers may have been willfully ignorant of some oddities, not surprisingly, since they are focused primarily on the aircraft with which they have contact.

All of this sounds a little bit wild, and a little bit unbelievable, right? That's one of the biggest problems that DoD seems to grapple with. As long as military aviators have been seeing strange things, they have been laughed at for it. Skeptical reactions are not at all undeserved, but the DoD has communicated that a major motivation of current UAP efforts is to encourage people to report strange things in the sky, instead of staying quiet for fear of sounding crazy.

To be clear, the vast majority of these incidents are almost certainly mistakes of some kind. Perceptual effects can make stars appear to move strangely, atmospheric phenomena can appear as solid objects, and sometimes you just get disoriented and something very ordinary looks very strange. But there is a matter of baby and bath water. Even though the majority of UAP sightings amount to nothing, it is possible, even likely, that a few of them were sightings of real objects. Real objects which were not tracked by air traffic controllers or air defense. Real objects which represent a challenge to airspace sovereignty.

And that brings us up to a few weeks ago: there was evidence, scant evidence but still evidence, that unidentified objects were operating in US airspace. Troublingly, these objects were sometimes reported close to military installations, and even dwelling near them for extended periods of time. The DoD, I believe, was deeply concerned that at least some of these reports might be indications that an adversary was successfully placing aerial surveillance equipment over the United States undetected. And that's why the Pentagon has spent years encouraging military personnel to report UAP sightings, and analyzing those reports for plausible explanations: not because they might be aliens, but because they might be the enemy.

And then, something happened with a balloon. What's up with that?? We'll talk about it next time, in part II.

[1] I will not expand on the story of Bob Lazar here, but for those not familiar it is useful to know that Lazar's stories of secret underground alien bases and military collaboration with aliens are both completely discredited and extremely influential on modern UFO thought.

[2] Here I will caution you that the horror film "Skinwalker Ranch" is both almost entirely unrelated to the real story (or even doubtful claims) about the place and, well, bad.

--------------------------------------------------------------------------------

>>> 2023-02-13 my homelab

I have always found the term "homelab" a little confusing. It's a bit like the residential version of "on-premises cloud," in that it seems to presuppose that a lab is the normal place that you find computer equipment. Of course I get that "homelab" is usually used by those who take pride in the careful workmanship of their home installation, and I am not one of those people.

Welcome to Computers Are Bad - in color.

Note: if you get this by email, the images may or may not work right. We're going to find out together! I don't plan to make a habit of including images and they don't look that good anyway, so I'm not too worried about it.

[image: closet rack]

They say that necessity is the mother of invention, but I think often mere desire will suffice, and I am sort of particular about how I want things to work. Perhaps the bigger problem is that I started my career in technology in a way that was both mundane and hands-on: in high school I found a poorly paying job as a sort of technical jack-of-all-trades for a local managed service provider (MSP). The term MSP is not even that familiar to many in the technology industry today. This was the kind of company that would set up and maintain Microsoft Active Directory for businesses that were big enough to have ten computers but not big enough to have an IT department. The owner, though, was a wheeler-dealer if I ever knew one, and generally jumped into whatever line of business he thought would make some money.

I was hired ostensibly as a computer technician, repairing laptops as a Lenovo contract warranty service center. Then I was repairing photocopiers, then I was selling them. Not long after I was running common-spaces WiFi for a fairly large office tower (the World Trade Center... of Portland, Oregon). Along with some video surveillance installation, I developed the kind of addiction that doesn't pay well enough to be a career unless you are smart enough to go to trade school instead of a university: cabling.

And I think that's how I became the person I am today: I want computer networks to operate in as straightforward and tangible a fashion as they did in 2009. And I want a lot of cabling.

I don't have a large house, and I do have a lot of stuff. Most equipment is crammed into a 14U wall-mount rack in the upper part of the office closet. Two sets of fan grilles, in a push-pull arrangement, ventilate the top of the closet and as a bonus circulate air from the office to the laundry room. Closet shelving stands in for things that are not amenable to rackmounting, such as my "breadbox" form factor AT&T Merlin model 206 KSU. This small-business telephone system dates back to around 1985 but still operates well after a repair to the power supply. It supports 6 extensions (conveniently connected by 8P8C cabling, ethernet-compatible) and 2 outside lines, which are provided by an ATA connected to the Asterisk server I run "in the cloud." It is one of two phone systems in the house, the other being all IP.

I installed the Merlin instead of the significantly more capable, late-'90s vintage Comdial PABX I have (with voicemail!) because it is incredibly fashionable and because I love the simple logic of key systems. I do also love the Comdial for how over-the-top complicated its hybrid PABX/key system design is, complete with text messaging, but it just doesn't have the charm of a system where phones were offered in a color called Cinnabar. Unfortunately I don't have any phones in Cinnabar; they've proven very hard to find on the second-hand market.

Also on the shelf, due to lack of motivation to mount it more neatly, is a PiStar/MMDVM hotspot. While it is configured for DMR (I sometimes monitor the Southwest and New Mexico Brandmeister groups, AE5JL) I use it mostly as a POCSAG pager transmitter. A simple daemon I wrote bridges messages from MQTT to the MMDVM remote control interface, notifying me of various events like violation of the IR optical fence across the end of the driveway via the finest communications technology of the '80s: a beeper. I have started acquiring hardware to replace it with a 35 watt transmitter which will properly introduce DAPNET amateur paging to Albuquerque, but I only have so much free time and money.
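
The daemon itself is very simple. Here's a rough sketch of the idea, assuming MMDVMHost's remote control interface is enabled and accepts a "page" command over UDP; the port, command syntax, topic, and RIC are all assumptions to check against your own MMDVMHost configuration rather than a description of my exact setup.

    # MQTT-to-pager bridge sketch (paho-mqtt 1.x style callbacks).
    # Assumes MMDVMHost's remote control interface is listening on UDP and
    # accepts a "page <ric> <message>" command; adjust for your own build.
    import socket
    import paho.mqtt.client as mqtt

    MMDVM_REMOTE = ("127.0.0.1", 7642)  # assumed remote control address
    PAGER_RIC = "1234567"               # hypothetical pager address

    def on_message(client, userdata, msg):
        text = msg.payload.decode("utf-8", errors="replace")[:80]
        command = "page {} {}".format(PAGER_RIC, text).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(command, MMDVM_REMOTE)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost")
    client.subscribe("home/notify/pager")
    client.loop_forever()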

[image: closet rack]

I take great pride in my work, but no one pays me for this, so I try not to consider it work. About once a year I make a sincere effort to tidy the patch cables but it never lasts.

An Arris cable modem is where The Cloud arrives in my home. I am fortunate enough to have slightly faster than gigabit internet service, although I haven't bothered to set up link aggregation so it is de facto 1gbps. It's okay, the router doesn't really make 1gbps in some scenarios due to PPS performance limitations anyway. I am unfortunate enough to obtain that internet from Comcast, which means that it is expensive and the upstream only hits 45mbps on a good day. My favorite feature of this Arris modem is that no matter how many times I reset the password for the management interface I can never get back into it later. I'm pretty sure this is my fault, but cable modems are loathsome so I'll blame it on Arris anyway. The city recently completed a franchise agreement with an FTTH provider out of Texas and it is possible I will be able to get service from them inside of the next six months. Given the history of new ISPs in this area I am not holding my breath.

Because of my strident objection to Comcast's existence, for about the first six months after I bought this house I obtained my internet connection only via LTE, using a used Cradlepoint and roof-mounted diversity antennas. The performance was actually quite good at night, but it was very poor during the day. I live very close to downtown and so I assume this was determined mostly by the occupancy of the office towers. The bigger problem is that the tiny MVNO I used, on a grandfathered contract with AT&T that had exceptionally good terms, was also one person with a FedEx Office mailing address that was not very good at subscription management. Every couple of months the internet would stop working and I would have to call them to nag them to update the expiration date on my service plan in their provisioning system, which was of course not at all integrated with their billing system.

From the modem, bits flow downstream to a PC Engines APU4D4 SBC running Opnsense. This is one of two APU4D4s that sit side-by-side in a very tidy 1U enclosure I imported from France at a completely exorbitant price. Why I spent something like EUR 150 on getting this nicely silk-screened front panel for the APUs only to Tetris most of the rest of the equipment onto a rack shelf is a mystery to me as well.

I am mostly pretty happy with Opnsense except for all of the ways I hate it. It replaced a Unifi Security Gateway which replaced an old Sonicwall, so I figure I am at least moving upwards in usability. My favorite thing about Opnsense is that it brings me the warm comfort of using BSD. My least favorite thing about it is how many clicks it takes to get to the DHCP lease table, which I am constantly looking at because I do not keep the internal DNS records up to date at all.

The core switch is a TP-Link 24-port PoE switch. It's Omada-manageable, along with a couple of other TP-Link switches elsewhere in the house, and I figure I will eventually buy into Omada when I get tired of mapping VLANs by hand. This switch does have fans but is very quiet, an impressive feat in a PoE switch. I am only using around 50W of its 250W capacity; if I ever go for that PoE++ troffer lighting I like to window shop for, it might end up a whole lot louder. Currently the PoE load is mostly the result of infrared illuminators in exterior surveillance cameras. The SFP cages will be much appreciated when I finally lose my mind and run fiber to the shed.

Next to the router, the second APU4D4 runs Pihole, Home Assistant, and Plex Media Server in Docker containers. I run Plex in a docker container because they only build it for ARM as a Debian package, and I'm a Red Hat person. Well, Red Hat in the streets, Fedora, erm, at home. It's also a Tailscale subnet router, although I haven't really bought into Tailscale that much yet and still have a lot of manually-configured Wireguard tunnels.

Home Assistant is perhaps the most complicated thing here. I am not as bought into Home Assistant as I maybe should be, and so I make extensive use of various homegrown services that speak MQTT. I have, at times, been tempted to improve performance and "simplify" (for select definitions of "simplify") by writing my own simple logic engine to implement automations, but I'd probably just end up creating a bad version of Home Assistant with fewer features. A chintzy USB Z-wave stick is a major bridge to the Real World, and I am particularly fond of the Zooz multi-relays as a practical way to handle various physical inputs and outputs. A Philips Hue hub tidily slapped on the side of the rack controls most lighting, though, besides a few Z-wave wall dimmers for integral LED fixtures.

My latest home automation achievement is something I call "Giant Voice" after the historic Altec outdoor address system once popular on military bases. It receives simple commands via MQTT and plays back audio clips and speech synthesis via Microsoft Azure Cognitive Services Speech (a Microsoft product name if I have ever seen one). So it's sort of like a doorbell, and basically functions as one, except it plays clips of Star Trek computer beeps and announces which part of my small lot a visitor has intruded on. It's not at all reliable because, for reasons of being built out of things I had on hand, it's running on a Pi Zero W connected to a cheap Bluetooth speaker. Trying to keep a reliable connection to a Bluetooth audio sink on Linux without X running may actually be impossible.
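
For the curious, here's roughly how a thing like this fits together, using paho-mqtt and the Azure Speech SDK. The topic name, the chime path, the region, and the use of aplay are illustrative placeholders rather than my actual configuration.

    # MQTT-driven announcer sketch: play a chime, then speak the message
    # via Azure Cognitive Services Speech. Topic, chime path, region, and
    # the use of aplay are illustrative assumptions.
    import json
    import subprocess
    import paho.mqtt.client as mqtt
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="westus2")
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

    def on_message(client, userdata, msg):
        payload = json.loads(msg.payload)
        subprocess.run(["aplay", "/opt/giantvoice/chime.wav"])  # Star Trek beep of choice
        synthesizer.speak_text_async(payload.get("text", "")).get()

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost")
    client.subscribe("home/giantvoice/announce")
    client.loop_forever()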

Pihole forms part of a split-horizon DNS arrangement on the top-level domain I use, which is such a nice name I made it available on FreeDNS where it is used by a dozen poorly run Minecraft servers. This introduces an interesting set of DNS hijacking and misconfiguration hazards, which I find aesthetically pleasing. Systemd-resolved machines, for example, are prone to acting up due to resolved's well-known oddities around split-horizon systems. Of course, in all truth I completely agree with Poettering that split-horizon DNS is sin, but why live if we can't sin a little?

On a rack shelf below is a 5-bay NAS made by a company called Kobol that doesn't exist any more. I like it because it's a simple arrangement of an ARM SBC (running Fedora of course) with a lot of SATA controllers, and yet they made an unreasonably nice aluminum enclosure for it. I use btrfs because every time I use ZFS I end up having to tune it, and for how much I appreciate the inanities of computers, tuning ZFS is actually somewhere near dental surgery in my list of favorite activities. I follow btrfs development just closely enough to figure that there is about a 10% chance of massive data loss, which is why I back the entire thing up to a cloud provider. What I really want is to back it up to LTO tape, just for appearance, but LTO drives stay expensive until they're several generations old and I have a hard time getting excited about LTO7 when I know that LTO9 exists.

One day the NAS will probably die or I will get annoyed with how slow it is CPU-wise, but I really don't know what I'll do to replace it. Maybe the NVR is an omen of things to come.

And right, the NVR, or network video recorder, which records the surveillance cameras. It's a small-form-factor Dell workstation I bought used off a friend to replace a failed NUC. Neither the NUC nor the Dell has reasonable internal storage capacity (on account of their small size), so it has most of its storage in a Startech 2-bay USB 3.0 enclosure that I am surprisingly in love with. It's fast and reliable, and has no-fuss RAID0/1 in hardware. It even comes apart to install the drives in a pleasing way. It has 8TB of storage which is enough for around a month of history. I do have 2TB of SSD storage in the NVR which is used for live recording so that a less performance-sensitive batch job can move older recordings to the slow platter drives in the enclosure.

When it comes to software, the NVR runs a commercial package called Blue Iris on Windows. I am not particularly interested in defending this choice, other than to explain that I have been using Blue Iris for years. Well, I will be a little argumentative. Open-source NVR packages suck. All of them are just incredibly bad. For some reason all of the replacements for Zoneminder either almost single-mindedly target Raspberry Pis with barely the performance for a single UHD camera or are nodejs monstrosities. Most are both. If you get cameras on the cheap and sometimes from surplus auctions like I do, you need support for a lot of video and PTZ protocols, and Blue Iris is mature enough to have out-of-the-box support for every bit of hardware I've come up with. It has both a reasonably good web interface and the ability to run the full desktop console remotely. Although it's not open source, it has simple but functional HTTP and MQTT APIs that have made it easy to integrate with my broader tangled mess, and CodeProject AI server support for object classification to boot. It definitely seems like there should be a suitable open-source replacement at this point but I just haven't found one. Maybe growing up on Milestone VMS just ruined my taste the way growing up on Perl did.
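
As an example of how little glue the MQTT side needs: Blue Iris publishes whatever topic and payload you configure on an alert action, so the topic scheme and JSON fields below are just what one might configure, not anything built into Blue Iris. This sketch listens for person alerts and forwards them to the pager topic from earlier.

    # Listener for Blue Iris alert messages. Blue Iris publishes whatever
    # topic/payload you put in the alert action, so the topic scheme and
    # JSON fields here are assumptions about that configuration.
    import json
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        camera = msg.topic.split("/")[-1]
        event = json.loads(msg.payload)   # e.g. {"type": "person", "zone": "driveway"}
        if event.get("type") == "person":
            client.publish("home/notify/pager",
                           "{}: person in {}".format(camera, event.get("zone", "?")))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost")
    client.subscribe("blueiris/alerts/#")
    client.loop_forever()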

Jammed below the NVR and next to its drive enclosure is a NUC. This is the warranty replacement for the one that failed. There's a whole story here: I wasn't expecting to get a warranty replacement, but then it showed up in the mail. I hooked it up so that I can WoL it when needed to run longer, more performance-intensive tasks like video encoding that I don't want to have to keep my laptop plugged in for. In this regard it replaces my old laptop, which used to be shoved into the rack with its screen always on for some reason.
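
Wake-on-LAN is simple enough that the whole mechanism fits in a few lines. Here's a minimal sketch of sending the magic packet; the MAC address is obviously a placeholder.

    # Minimal Wake-on-LAN sender: broadcast a "magic packet" of six 0xFF
    # bytes followed by the target MAC repeated sixteen times.
    import socket

    def wake(mac, broadcast="255.255.255.255", port=9):
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("aa:bb:cc:dd:ee:ff")  # placeholder MAC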

Also sharing the lower rack shelf is an HDHomeRun TV tuner cabled to a nice active antenna on the roof. Would you believe that I can get some 60 channels of infomercials and televangelism, completely free? My favorite part is just how heavily compressed it all is, now that DTV broadcasters realized they can cram something like eight SD channels onto one carrier. There's also a Davis WeatherLink back there somewhere; it's sort of an IP gateway for Davis Vantage weather instruments also mounted on the roof. A small service I wrote on the Home Assistant machine loads data from it into Prometheus for use elsewhere. There's also a second, separate wireless weather instrument system elsewhere in the house that also goes into Prometheus. That one is by Ecowitt and it's just for temperature and soil moisture sensors in the small heated greenhouse (Home Assistant controls the heater and irrigation via Z-Wave).
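
The service is nothing fancy. Here's a sketch of the Prometheus side, with the actual polling of the Davis gateway hand-waved into a placeholder function (the protocol details depend on which gateway you have) and with metric names of my own invention.

    # Export weather readings for Prometheus to scrape. fetch_conditions()
    # stands in for whatever actually polls the Davis gateway; metric
    # names, port, and cadence are my own choices for illustration.
    import time
    from prometheus_client import Gauge, start_http_server

    outdoor_temp = Gauge("weather_outdoor_temp_f", "Outdoor temperature, Fahrenheit")
    wind_speed = Gauge("weather_wind_speed_mph", "Wind speed, mph")

    def fetch_conditions():
        # placeholder: poll the WeatherLink gateway and parse its response
        return {"temp_f": 72.0, "wind_mph": 5.0}

    if __name__ == "__main__":
        start_http_server(9101)  # Prometheus scrapes this port
        while True:
            data = fetch_conditions()
            outdoor_temp.set(data["temp_f"])
            wind_speed.set(data["wind_mph"])
            time.sleep(60)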

At the bottom of the rack is a not-great-but-okay Cyberpower UPS. I have a slight bias against Cyberpower because another of their products I own has twice taken down the computer plugged into it due to what seemed to be a software bug that could only be resolved by leaving it unplugged for long enough for the battery to die... a long time since it stops producing output in that state. Admittedly it's done this twice in about five years and that issue hasn't stopped me from buying a new battery for it occasionally. This rackmount one doesn't seem to have that problem, or at least hasn't so far, but it's really just the cheapest rackmount UPS I could find with readily replaceable batteries.

On the left side, a Ubiquiti UAP-AC-Lite. This thing, along with its compatriot in the living room, is showing its age. The problem is that I have been holding out for TP-Link to release their Omada-managed WiFi 6E AP in the US, which keeps getting pushed back. I own three of these total, and one of my favorite things about them is that one of the three is an older hardware revision that only supports 24V passive PoE, and the other two support 802.3af. Guess how good I am at not mixing them up.

To facilitate all this junk, I have installed a power outlet in the closet and ethernet runs from various parts of the house and exterior. Most of the ethernet runs land at the patch panel at the top, but not all of them for reasons of laziness.

Most ethernet is run through the attic, although the extremely low overhead in the attic (due to a very shallowly pitched roof) makes many areas difficult to access. For this reason I own my friend, Mr. Longarm, a 35' telescopic fiberglass pole. I have found that a great many practical problems in cabling can be solved with the use of a long enough pole. Fiberglass pushrods and a magnet fishing set are invaluable. In some cases I have had to open sections of wall, but I try to avoid it because drywall repair becomes tedious. An inventory of "installer bits," semi-flexible drill bits several feet long, can minimize the need for opening drywall, but they come with hazards when used blind. Sometimes you can strike a happy medium by drilling small pilot holes into each stud bay, inspecting with a borescope to locate electrical wiring and whatever else, and then driving an installer bit through several stud bays at a time. The exploratory holes are fairly quick to repair and paint.

Some aspects of my home technical infrastructure are more whimsical, or perhaps more directly reflect my personal neuroses. I have always been tremendously frustrated at the lack of time synchronization in modern clocks considering the several different technical approaches available. I run an NTP server on one of the APU4s and all of the wall clocks in the house synchronize to it. For the most part these are used/surplus clocks from Primex's now discontinued SNS series, which used to be easy to get in both battery-powered analog and mains-powered LED versions. The supply of these seems to be drying up, but the Primex OneVue series is also NTP-over-WiFi capable. Unfortunately I'm less confident that the OneVue clocks can be configured to use a local NTP server without the Primex enterprise management system, which makes them less appealing for small systems.

[image: clock]

Personally I prefer the LED versions for their over-the-top size, although unfortunately the six-digit (seconds-indicating) version seems hard to get in the larger 6" digit height option. This one, a 2.5" model in the bedroom, has had a couple of layers of neutral gray theater gel added to the lens since the lowest brightness setting will still illuminate a room in red.

I have a similar bent when it comes to "smart home" control. I find the industry's focus on phone apps and voice controls infuriating. It's nearly always faster and more convenient to press a button, but the industry as a whole has apparently deemed buttons to be too expensive. Architectural lighting controls used to universally offer "scene controllers," panels with a few buttons that each select a scene, but these are oddly hard to find in the modern home automation market. I make my own.

[image: buttons]

This is a programmable keypad scanned by a little Python program running on a cheap SBC with WiFi. Right now it actually hits the Hue controller API directly, but I have been planning for months to re-implement it to send MQTT messages instead. The most obvious (and probably best) choice for a keypad would be X-Keys, but this Genovation ControlPad is popular in warehouse and picker automation so there's a good supply of used ones on eBay. The major disadvantage to Genovation is uglier programming software and no backlighting (the X-Keys models have individually-addressable two color backlighting). I'd highly recommend everyone try these out and help bring physical buttons back to the industry. You could even make it look a lot nicer if you put in even slightly more effort than I did.
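
For a sense of what the MQTT re-implementation might look like: treat the ControlPad as an ordinary USB keyboard, read key events with python-evdev, and publish a scene name per key. The device path, key map, and topic below are all assumptions for the sake of illustration, not my actual program.

    # Keypad-to-MQTT sketch: read key events from the ControlPad (it
    # enumerates as a USB keyboard) with python-evdev and publish a scene
    # per key. Device path, key map, and topic are assumptions.
    import paho.mqtt.client as mqtt
    from evdev import InputDevice, categorize, ecodes

    SCENES = {"KEY_1": "evening", "KEY_2": "movie", "KEY_3": "all_off"}  # hypothetical map

    client = mqtt.Client()
    client.connect("localhost")
    client.loop_start()

    device = InputDevice("/dev/input/by-id/usb-Genovation-event-kbd")  # placeholder path
    for event in device.read_loop():
        if event.type == ecodes.EV_KEY and event.value == 1:  # key down
            key = categorize(event).keycode
            if isinstance(key, list):  # some keys map to multiple names
                key = key[0]
            if key in SCENES:
                client.publish("home/lighting/scene", SCENES[key])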

And I think that's the grand tour. I'm not sure that I would say that I am completely proud of any of this because it is all so cobbled together and I change things frequently, but that's kind of why I wanted to respond to the genre of "my homelab" or "my home network" posts. I always sort of cringe at these because the focus on aesthetics, with modified Ikea furniture or whatever, is going to make modification down the road much more difficult. There is a big advantage to the 19" rack as a form factor, and wall-mount units are easy to come by. If you're especially space-constrained you might even consider a swing-down vertical one. Whatever you do, just make sure you run a lot of cables. Cables everywhere!

--------------------------------------------------------------------------------
<- newer                                                                older ->