_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2021-11-26 no u pnp

Previously on Deep Space Nine, we discussed (read: I complained about) the short life and quick death of DLNA. On the way, I mentioned DLNA's dependence on UPnP as an underlying autoconfiguration protocol. Let's talk a bit more about UPnP, because it's 1) an interesting protocol, 2) widely misunderstood, and 3) relates to an interesting broader story about Microsoft and Web Services. It'll also be useful background knowledge for a future post about the thing I originally intended the last post to be about, the Windows Media Center-based home automation platform exhibited at Disney's Innoventions Dream Home somewhat over a decade ago.

UPnP has sort of a bad reputation, and it's very common in online conversations to see people repeating "disable UPnP" as stock advice. There is some reason for this, although the concern is mostly misguided. It's still amusing, though, as it relates to the one tiny corner of UPnP functionality which has ever really had any success.

Before we dig into that, though, we need some history.

UPnP fits into a broader space called zero-configuration networking, which I will be calling 0CN for short because the more common Zeroconf is, technically, the name of a specific protocol stack.

0CN aimed to solve a big problem for early adopters of local area networks. You can connect all your computers to a network, but then they need to be numbered (addresses assigned) and then configured to know what addresses are capable of offering which services. In early small networks this could be relatively simple, but between the mass migration onto the relatively complex TCP/IP stack, the ubiquity of "non-consumer" automatic address assignment via DHCP [1], and the increasing number of devices on home networks caused by WiFi... it was turning into a real pain.

It's just hard to sell a product to a consumer as "fun and easy to use" when the setup instructions involve some awkward arrangement to get the device's IP address and then entering it into other devices in order to use the new service. But with a typical pre-0CN home network environment of IP addressing assigned by DHCP (probably by ISC dhcpd running on a consumer router), this was pretty much the only option for devices or software to discover and connect to other network participants.

Some network developers had addressed these problems surprisingly thoroughly in various pre-IP network stacks. For example, AppleTalk is often raised as a prime example---AppleTalk is a pre-IP protocol based on Xerox's XNS that had autoconfiguration and service discovery baked into it from the start. A number of other early LAN protocols were similar. But the "everything over IP" trend could be analogized to parking a heavy diesel truck in the driveway. It's highly capable, which is attractive, but it was not designed for consumer use. The industry loved TCP/IP as a powerful common platform, but typical home computer users didn't want to learn to double-clutch. In this way, TCP/IP for home networks sometimes felt like a regression compared to the various (often XNS-based) ease-of-use-oriented precursors.

0CN, then, can be viewed as an effort to re-consumerize consumer networks, by smoothing over the parts of TCP/IP that were not easy to use.

0CN systems thus tend to concern themselves first with discovery, that is, the auto-detection of services available on a local network. Discovery tends to further imply description, which is the ability of discovered devices to describe what they are capable of and how to interact with them. Most computer interconnects have some form of discovery and description, but 0CN differs in that these capabilities are intended to be very high-level. What is discovered is something like a media server, and it describes itself in terms of protocols that can be used to retrieve media and their endpoints. This is a much more end-use oriented form of discovery than we see in lower-level discovery protocols such as ARP.

0CN systems may, or may not, extend discovery and description with a set of standardized communication protocols to be used after discovery. For example, 0CN standards may specify the type of API that a device should present once discovered. This may include standardized message bus protocols and other application-level concerns. Later 0CN systems tended to be more prescriptive about this kind of thing to ease implementation of "universal" clients, but there are still plenty of 0CN standards around that leave all of it to individual device and software vendors.

One of the weird things about 0CN is the relative lack of standardization. It's not that there isn't a well-accepted standard for 0CN, it's that there's like four of them. To be reliably discovered by a variety of operating systems, devices typically need to implement multiple 0CN standards in parallel.

Here's a brief summary of 0CN protocols in common use today:

In the Windows world, NetBIOS specifies a basic discovery system for both advertising capabilities and name resolution (of NetBIOS names) [2]. This protocol is looking pretty crufty today but is still in reasonably common use between Windows hosts offering file or printer shares. In part in response to the limitations of the NetBIOS mechanism, Microsoft introduced WS-Discovery (part of Microsoft's larger Web Service craze which we will discuss in the future), which is a little more modern and a little more powerful. It is still frequently used by network printers to advertise their capabilities. UPnP, as a Microsoft protocol, is widely supported by Windows hosts and less widely by devices such as printers and consumer NAS.

In the closer to UNIX world, discovery approaches derived from DNS are popular. The most prominent is a combination of the mDNS distributed DNS service with the DNS Service Discovery (DNS-SD) standard, which allows the distributed DNS mechanism to be used to describe capabilities as well as presence and name (basically by using a set of specially crafted SRV records). Apple's Bonjour and the open source Avahi are both implementations of this mDNS-and-DNS-SD combination. "Zeroconf," with a capital Z, has a tendency to refer to this stack as well but there's a lot of inconsistency in how the term is used.
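As a taste of how lightweight this stack is on the wire, here is a sketch (Python, standard library only, not a full mDNS implementation) of the one-packet question a client multicasts to 224.0.0.251, port 5353 to enumerate every DNS-SD service type on the link. The meta-service name `_services._dns-sd._udp.local` is the real one from the DNS-SD standard; everything else is ordinary DNS wire format:

```python
import struct

def build_mdns_ptr_query(name="_services._dns-sd._udp.local"):
    # Standard DNS header: ID 0 (per mDNS convention), no flags,
    # one question, no answer/authority/additional records.
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, then a
    # zero byte terminates the name.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE 12 = PTR, QCLASS 1 = IN.
    question = qname + struct.pack("!2H", 12, 1)
    return header + question
```

A client would send this datagram to ("224.0.0.251", 5353) and receive PTR records naming each advertised service type, which it can then query further for SRV and TXT records.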

So this is the landscape in which UPnP exists---already somewhere between solved and hopelessly fragmented. UPnP intended to compete, in part, by being a more complete standard with higher level components. So, for example, UPnP specifies the general structure of device APIs (XML SOAP as discovered from the XML device description), an event system (basically a simple subscription message bus), and a very lazy end-user standard that basically suggests that all UPnP devices should offer a web interface.

More importantly, UPnP expanded significantly into the A/V use-case. The UPnP A/V spec includes a basic media architecture, including media servers, renderers, and control points. UPnP specifies not just discovery but also protocols between these devices. You are likely now wondering what the difference between UPnP and DLNA is, and that can be a confusing point. DLNA is directly based on UPnP, or perhaps it is more accurate to say that DLNA incorporates UPnP. DLNA uses all of the UPnP protocols, including A/V, but extends the UPnP specification with much more detailed standards for content management. A simple way to think about it is that DLNA is the upper level while UPnP is the lower level. This of course does not totally help with the fact that DLNA was so closely associated with UPnP during its lifespan that it's not uncommon for people to use the two terms interchangeably.

So, how does UPnP actually work? First, UPnP does not handle addressing but instead incorporates either DHCP or RFC 3927 link-local addressing. So, UPnP functionally starts at the discovery layer. For discovery, UPnP incorporates a protocol that never quite made it on its own: Simple Service Discovery Protocol, or SSDP. SSDP is actually as simple as the name suggests, but due to Microsoft's incredible love of web services it uses an unusual transport. SSDP runs on top of HTTPU, or HTTP over UDP, a protocol that basically consists of taking a small HTTP payload and putting it in a single UDP packet.

A client wishing to discover services sends a simple HTTP request, in a UDP packet, to the multicast address 239.255.255.250, port 1900. Any device that has a service to offer replies with an HTTP response to the source IP and port of the request. To help reduce traffic volumes, SSDP also allows devices to gratuitously advertise their presence to the multicast address, and all clients are permitted (and encouraged) to passively discover devices by monitoring these extra announcements.
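The whole exchange is simple enough to sketch in a few lines of Python. This is a minimal illustration of the HTTPU request format, not a complete SSDP client; the search target and timeout are just example values:

```python
import socket

SSDP_ADDR = "239.255.255.250"
SSDP_PORT = 1900

def build_msearch(search_target="ssdp:all", mx=2):
    # An SSDP search is a plain HTTP request serialized into a single
    # UDP datagram (HTTPU). MX asks devices to randomize their reply
    # delay over 0..MX seconds to avoid a response storm.
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",
        f"ST: {search_target}",
        "", "",
    ]
    return "\r\n".join(lines).encode("ascii")

def discover(timeout=3.0):
    # Multicast the search, then collect unicast HTTP responses until
    # the timeout expires. Each response carries a LOCATION header
    # pointing at the device's XML description.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data.decode("ascii", "replace")))
    except socket.timeout:
        pass
    return responses
```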

The next step is description. UPnP description is quite simple: the device discovery information provides a URL. The client requests that URL to receive an XML document that describes the device and gives a list of services it provides. Each service is defined as a set of endpoints and commands that can be issued to those endpoints.
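A sketch of what that looks like in practice, using Python's standard XML library: the sample document below is invented for illustration (the friendly name and URLs are hypothetical), but the urn:schemas-upnp-org namespace and element names are the real ones from the UPnP device description format:

```python
import xml.etree.ElementTree as ET

NS = {"d": "urn:schemas-upnp-org:device-1-0"}

# A hypothetical (but representative) device description, as a client
# would receive it from the URL advertised in the SSDP response.
SAMPLE = """<?xml version="1.0"?>
<root xmlns="urn:schemas-upnp-org:device-1-0">
  <device>
    <friendlyName>Example Media Server</friendlyName>
    <deviceType>urn:schemas-upnp-org:device:MediaServer:1</deviceType>
    <serviceList>
      <service>
        <serviceType>urn:schemas-upnp-org:service:ContentDirectory:1</serviceType>
        <controlURL>/ctl/ContentDir</controlURL>
        <eventSubURL>/evt/ContentDir</eventSubURL>
      </service>
    </serviceList>
  </device>
</root>"""

def parse_description(xml_text):
    # Extract the device's friendly name plus, for each service, its
    # type URN and the endpoints used for control and eventing.
    root = ET.fromstring(xml_text)
    device = root.find("d:device", NS)
    services = []
    for svc in device.findall("d:serviceList/d:service", NS):
        services.append({
            "type": svc.findtext("d:serviceType", namespaces=NS),
            "control": svc.findtext("d:controlURL", namespaces=NS),
            "events": svc.findtext("d:eventSubURL", namespaces=NS),
        })
    return device.findtext("d:friendlyName", namespaces=NS), services
```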

For the most part, a UPnP client will now interact with the device by issuing commands to the described endpoints. This is all done, following the general trend you may have noticed, with XML SOAP over HTTP. That's basically the end of the UPnP story, but UPnP does add one interesting additional feature: an eventing or message system. The UPnP description of a service lists a set of variables that describe the state of the service, and provides an event endpoint that allows a client to subscribe for notifications whenever a particular variable changes. This is all, once again, done with XML over HTTP.
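The subscription half can be illustrated with the SUBSCRIBE request UPnP eventing (GENA) uses: an ordinary-looking HTTP request with a nonstandard method, after which the device delivers NOTIFY requests to the given callback URL as state variables change. The host and callback values in the usage note are hypothetical; this sketches the request format only:

```python
def build_subscribe(event_url, host, callback_url, timeout_s=1800):
    # GENA subscription request. NT (notification type) is always
    # upnp:event; TIMEOUT asks the device to keep the subscription
    # alive for the given number of seconds, after which the client
    # must renew it.
    return (
        f"SUBSCRIBE {event_url} HTTP/1.1\r\n"
        f"HOST: {host}\r\n"
        f"CALLBACK: <{callback_url}>\r\n"
        "NT: upnp:event\r\n"
        f"TIMEOUT: Second-{timeout_s}\r\n"
        "\r\n"
    )
```

A client would POST this to the service's eventSubURL host, e.g. `build_subscribe("/evt/ContentDir", "192.168.1.50:8200", "http://192.168.1.2:49152/")` (addresses hypothetical), and run a small HTTP listener at the callback address to receive the NOTIFY bodies.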

So this is all interesting and helps to describe the underlying structure of DLNA (which can be viewed as just a set of standardized, strongly specified UPnP services). But it has oddly little to do with what we know UPnP for today. What gives?

The service discovery and description functionality of UPnP is, for the most part, boring and transparent. It is used in practice for things like autoconfiguration of printers, but it doesn't usually do anything that notable. What UPnP is broadly known for is NAT negotiation.

Many home devices that offer services might want to be internet-accessible, but the fact that most home networks employ NAT makes it difficult to set that up. UPnP intended to resolve this problem by throwing a NAT port mapping protocol into the UPnP standard. Called Internet Gateway Device Protocol (IGDP), it allows a UPnP client to discover a router, and then specifies a service the router provides that allows a UPnP client to request that a given port be opened from the internet back to that device. As a bonus, it also allows a UPnP client to request the current external IP address from the router, so that it knows its internet endpoint.
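A port mapping request is just another SOAP call, this one against the WANIPConnection service an IGDP router exposes. Here is a sketch of building the envelope; the action and argument names come from the IGD spec, while the addresses and ports in the test are hypothetical, and a real client would first find the router's controlURL via SSDP discovery and the XML description:

```python
WAN_IP_CONN = "urn:schemas-upnp-org:service:WANIPConnection:1"

def add_port_mapping_envelope(external_port, internal_client, internal_port,
                              protocol="TCP", description="example", lease=0):
    # SOAP body for the AddPortMapping action. POST it to the router's
    # WANIPConnection controlURL with a SOAPACTION header of
    # "<serviceType>#AddPortMapping". A lease of 0 means indefinite.
    args = {
        "NewRemoteHost": "",
        "NewExternalPort": external_port,
        "NewProtocol": protocol,
        "NewInternalPort": internal_port,
        "NewInternalClient": internal_client,
        "NewEnabled": 1,
        "NewPortMappingDescription": description,
        "NewLeaseDuration": lease,
    }
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    return (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        '<s:Body>'
        f'<u:AddPortMapping xmlns:u="{WAN_IP_CONN}">{fields}</u:AddPortMapping>'
        '</s:Body></s:Envelope>'
    )
```

The external IP query the paragraph mentions is the same pattern with a different action, GetExternalIPAddress, which takes no arguments.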

IGDP is a sibling to Apple's NAT Port Mapping Protocol (NAT-PMP) and its later IETF successor, Port Control Protocol (PCP), but for a few reasons it is much more common. One of the reasons is a business one: it's pretty much correct to say that UPnP IGDP is the Microsoft solution, while NAT-PMP and PCP are the Apple solution. NAT-PMP and PCP are implemented and used by a variety of Apple products, but UPnP support is more common in consumer routers.

Among power users today, UPnP's service discovery function is so little discussed that the term "UPnP" is more often used, mistakenly, to refer to IGDP specifically (router vendors and innumerable support forums do not help to alleviate the confusion here).

Because of security concerns around providing a low-effort way for software to map incoming ports, it's common security advice to "disable UPnP," which in practice means disabling IGDP... but since IGDP is the only meaningful service provided by most consumer routers, that's often done by disabling the whole UPnP implementation and it's labeled as such in the router's configuration interface. I am actually somewhat skeptical of the security advantages of disabling UPnP for this purpose. The concern is usually that malware on a machine in the local network will use UPnP to map inbound ports, allowing direct inbound connections to things usually not accessible from the internet. That problem is real, but in practice malware running somewhere on the local network has many options for facilitating external access and UPnP isn't even really one of the easier ones.

There's actually a much better and less often discussed reason to be cautious about UPnP: there is a long track record of embedded devices like routers and IoT "things" having poorly implemented UPnP stacks that are vulnerable to remote code execution. Several botnets have spread among IoT devices through UPnP, but it had nothing to do with port mapping... instead, they took advantage of defects in the actual UPnP implementations.

What's worse is that many IoT UPnP implementations turn out to have a defect where they do not correctly differentiate requests coming from the LAN side vs. the WAN side. This has the alarming result that it is possible to request and receive port mappings from the WAN back to the WAN. All devices with this defect are potential participants in reflected DDoS attacks, and indeed have been widely abused that way.

The point is that I would agree that it's a good idea to disable UPnP, but not because of what UPnP does, and not just on your router. Instead, it's a good idea to be very skeptical of UPnP because of defective implementations in many embedded devices, especially routers, but also all of your IoT nonsense.

Ultimately, 0CN almost feels obsolete. Most modern products have a near total dependence on a cloud service, and simply use the cloud service as a broker instead of performing service discovery. Where devices do need to be discovered, it's usually pre-configuration for WiFi networks, and so it has to be done over a side channel like WiFi Direct or Bluetooth. This has the substantial downside that these seemingly local devices will stop working if the internet connection is lost or the service becomes unresponsive, but it's the 21st century and we've all come to accept that our light switches are now dependent on Amazon somehow.

More seriously, there is a general trend that more expensive, higher-quality IoT products are more likely to perform service discovery and communicate on the local network. This likely happens because their developers realize that support for local communication will result in lower latency and better reliability for many users. But ultimately reliance on a cloud broker is easier, so a lot of even high-end products ship a cloud intermediary as the only way to communicate with them.

[1] DHCP might sound like it would be part of a 0CN environment, but 0CN is usually used to describe easy-to-use, highly automatic, consumerized protocols in contrast to configuration protocols like DHCP that were designed for large networks and so are relatively complex and difficult to work with.

[2] Peer-to-peer discovery and name resolution protocols tend to have inherent scalability problems due to their dependency on broadcasting. NetBIOS, having originally been designed as a pretty complete and sophisticated network application standard on top of pre-IP protocols, resolves this scaling problem by supporting centralized name service for large networks. That centralized name service is called Windows Internet Name Service or WINS, an acronym you have probably run into before dealing with Windows network configuration. WINS is essentially a pre-IP DNS, for the much more limited NetBIOS name standard. It is fairly rare to encounter a WINS server today as Microsoft has shifted to DNS for almost all purposes, even for establishment of NetBIOS connections where the difference between "NetBIOS name" and "DNS name" can now be very fuzzy. E.g., the UNC path "\\a.multipart.domain.name\a\share" has worked fine for many years, even though the name is illegal for NetBIOS... the Windows NetBIOS client has been fine with using DNS names pretty much since NT stopped being called NT. NetBIOS names are also still supported but it's rare to encounter a machine where the NetBIOS name is not the same as the local part of the DNS name, making it not obvious that the distinction even exists.


>>> 2021-11-06 smart audio for the smart home

Sonos offers a popular line of WiFi-based networked speakers that are essentially a consumer distributed audio system. It's a (relatively) affordable and easy to use spin on the network-based audio rendering systems already common in large building background music/PA and, increasingly, entertainment venue and theatrical sound reinforcement. The core of it is this: instead of distributing audio from one amplifier position to multiple speakers over expensive and lossy (in commercial contexts) or difficult to install (in consumer contexts) speaker-level audio cables, audio is distributed over an IP network to a small amplifier colocated with each speaker [1].

This isn't a new concept, although Sonos had to invest considerable effort in getting it to function reliably (without noticeable synchronization issues) over unreliable and highly variable consumer WiFi networks. From the perspective of commercial audio, it's just a more consumer-friendly version of Dante or Q-LAN. From the perspective of consumer audio, it's either revolutionary, or the sad specter of two decades of effort in network-enabled consumer AV systems. After all the work that was done, and all the technologies that could have succeeded, Sonos is what we ended up with: a feature-incomplete, walled-garden knockoff of DLNA.

Yes, I'm being unfair to Sonos. The biggest problem that Sonos figured out how to solve, precise and reliable synchronization of audio renderers without special network facilities like PTP, isn't one that DLNA attempted to address. And it remains a hard problem today; even my brand-new bluetooth earbuds regularly experience desynchronization problems when my phone's mediocre Bluetooth stack gets behind (maybe my ailing phone is more to blame than theoretical complexity).

But every time I really look at today's home media streaming and management landscape, which is largely dominated by Sonos, Apple's AirPlay, and Google's ChromeCast, it's very hard not to see it as a shadow of DLNA.

So what is DLNA, and why did it fail? In general, why is it that all of the home network AV efforts of the late '90s through the '00s amounted to nothing, and were replaced by thin, vendor-locked "cast" protocols?

The answer, of course, is Microsoft and Capitalism (rarely have there been two more ominous bedfellows). But let's get there the long way, starting with the evolving home media landscape of the late '90s.

The compact disc, or CD, came into widespread popularity in the late '80s and represented a huge change in music distribution. Besides the low cost, small size, and excellent fidelity of CDs, they were the first widespread consumer audio recording format that was digital. For the first time it was, in principle, possible to create an exact digital copy of a CD. Once written to another storage device, the CD could be handled as computer data [2].

"CD Ripping" actually did not become especially common until the early '00s. In practice, "ripping" audio CDs from PCs is somewhat complex because of the surprisingly archaic architecture of PATA CD drives. Early computers, in the '90s, often weren't capable of fast enough I/O to read an audio CD at 1x speed (meaning as fast as the audio bitrate, allowing real-time decoding). Even for those that were, audio CDs amounted to hundreds of megabytes of data and hard drives that large were very costly. As a result, PC CD drives played audio CDs by behaving as plain old CD players and outputting analogue audio. Some of you may remember installing an IDE CD drive and having to connect the three-wire analogue audio output from the CD drive to the sound card. Some of you may even remember the less common CD drives with discrete playback control buttons on the front panel, allowing it to be used to play music without any media software to control it.

These CD drives, when playing audio, behaved a lot like smartphones making phone calls: the computer actually wasn't "in the loop" at all, all it did was send the CD drive commands to play/pause/etc and the CD drive decoded the audio and converted it to analog internally, sending it straight to the amplifiers in the sound card. With this architecture, the way to "rip" a CD was actually to tell the CD drive to start playback and then use the soundcard to record its analogue output. This ran only at real-time speed, not all soundcards were capable without an external jumper from the line out to line in, and the double conversion reduced quality. It didn't take off, although Gnome's Sound Juicer continued to use this method until around 2005.

The much better method, of actually reading the CD as data and decoding the audio in software, took off in the '00s, mostly in the form of WinAmp 5.0 and Windows Media Player in XP. It greatly accelerated a trend which already seemed clear to industry: consumers would purchase their music on physical media (downloading it was still infeasible on most consumers' internet connections), but in the future they would immediately rip it, store it on a central device, and then play it back in digital form using a variety of devices.

Microsoft leaned hard into this vision of the future, not just a little because it strongly implied multiple licensed copies of Windows being involved in the modern home stereo. A significant development effort that entailed the addition of multiple new operating system features led to Windows XP Media Center Edition, or MCE, released in 2002. MCE was a really impressive piece of software for the time. An MCE computer wasn't just a computer, it was a "home media hub" capable of consuming media from multiple formats, storing it, and then streaming it over the network to multiple devices. MCE introduced consumers to significant expansions of the role of the "computer" in media. For example, MCE supported playback and recording of television from cable and multiple OEMs sold MCE desktops with preinstalled cable tuners. An XP MCE machine could compete with Tivo, but with distributed playback which Tivo would not introduce until later.

Microsoft leveraged their newfound place in the home theater, the Xbox, to create an ecosystem around the platform. First-gen Xbox consoles with a purchased upgrade and all Xbox 360 consoles could function as "media center extenders," which entailed opening an RDP session to the MCE machine to present the MCE interface on the Xbox. Extensions to RDP implemented to support this feature, namely efficient streaming of HD video over RDP without re-encoding, went on to drive the "network projector" functionality between Windows Vista and various Ethernet-enabled projectors. This was arguably the primary precursor to the modern ChromeCast as RDP introduced some (but not nearly all) of the offloading to the streaming target that ChromeCast relies on. In fact the Media Center Extender and Network Projectors are closely related both technically and in that they have both faded into obscurity. Modern "network projectors" typically rely on the simpler, AirPlay-like and Microsoft-backed but open Miracast protocol [3].

Because Media Center Extender relied on RDP, it essentially required that the Extender implement a good portion of the Windows graphics stack (remember that RDP is basically a standardization of Citrix's early application streaming work, which made a lot of assumptions about a Windows client connecting to Citrix/Windows server environment and offloaded most of the drawing to the client). This was no problem for the Xbox, which ran Windows, but the improvements to RDP and open standards that make non-Windows RDP clients practical today were not yet complete in the '00s and it was unreasonable to expect conventional consumer A/V manufacturers to implement the Extender functionality. Microsoft didn't make anything near a full line of home media products, though, and so it needed some kind of Ecosystem.

Conveniently, Intel was invested in the exact same issue, having begun to introduce a set of Intel-branded media streaming solutions (including WiDi, a true unicorn to see in the wild, which allowed certain Intel WiFi adapters to stream video to a television well before this was a common consumer capability). Because media streaming required fast I/O and network adapters, both Intel offerings, it had real potential to expand consumer interest in Intel's then commercial products. Intel kicked off, and recruited Microsoft and consumer A/V giant Sony into, a new organization called the Digital Living Network Alliance. That's DLNA.

DLNA built on UPnP, the then-leading consumer network auto-discovery and autoconfiguration protocol, to define a set of network standards for the new integrated home media network. DLNA defined how a set of devices broadly categorized as Media Servers, Media Players, and Media Renderers would discover each other and exchange signaling in order to establish various types of real-time media streams and perform remote control. As a broad overview, a Media Server offered one or more types of stored (e.g. files on disk) or real-time (e.g. cable tuner) media to the network using established protocols like RTSP. A Media Player allowed a user to browse the contents of any available Media Servers and control playback. A Media Renderer received media from the Media Server and decoded it for playback.
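The browsing step is, concretely, a UPnP SOAP call to the Media Server's ContentDirectory service. Here is a sketch of the Browse envelope a Media Player would POST to the server's control endpoint; the action and argument names are from the ContentDirectory spec, while the parameter values are illustrative:

```python
CONTENT_DIR = "urn:schemas-upnp-org:service:ContentDirectory:1"

def browse_envelope(object_id="0", start=0, count=25):
    # ObjectID "0" is the root container of the server's library; the
    # response carries DIDL-Lite XML describing child items and the
    # resource URLs a Media Renderer can fetch the media from.
    args = {
        "ObjectID": object_id,
        "BrowseFlag": "BrowseDirectChildren",
        "Filter": "*",
        "StartingIndex": start,
        "RequestedCount": count,
        "SortCriteria": "",
    }
    fields = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    return (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        '<s:Body>'
        f'<u:Browse xmlns:u="{CONTENT_DIR}">{fields}</u:Browse>'
        '</s:Body></s:Envelope>'
    )
```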

The separation of Media Player and Media Renderer may seem a bit odd, and in practice they were usually both features of the same product, but enabled network-based remote control of the type we see today with Spotify Connect. That is, you could launch a Media Player on your computer (say Windows Media Player) and use it to play audio on the Media Renderer integrated into your stereo receiver. Other Media Players could discover the renderer and control its playback as well. This was all possible in Windows XP, although it was very seldom used because of the paucity of Media Renderers.

The problem is not necessarily that DLNA failed to spawn an ecosystem. While Microsoft added pretty complete DLNA support to Windows through MCE, Windows Media Player, and Explorer (where it became intertwined with the Homegroup peer-to-peer network sharing system), JRiver and TwonkyMedia were major third-party commercial DLNA Media Servers. On the open source side, Xbox Media Center (XBMC), now known as Kodi, had fairly strong DLNA support. Plex, still a fairly popular home media system today, originated as a port and enhancement of XBMC and went on to gain even more complete DLNA support. Even Windows Media Center itself was extensible by third-parties, providing a public API to add applications and features to the MCE interface.

Years ago, around 2008, I successfully interoperated TwonkyMedia, Plex, Windows Media Player, and a Logitech hardware renderer all via DLNA. This has become more difficult over time rather than less, as the market for DLNA-capable hardware has thinned and DLNA software support has fallen out of maintenance.

One major challenge that DLNA faced was the low level of consumer readiness for fully digital home media. It's an obvious problem that consumers weren't yet used to the idea of "smart TVs" and other devices that would integrate DLNA clients. DLNA clients were somewhat common in devices like HD-DVD and Blu-Ray players (these already needed a pretty significant software stack so adding a DLNA client wasn't much extra effort), but it tended to be a second-class feature that was minimally marketed and usually also minimally implemented. I remember a particular LG Blu-Ray player that had a mostly feature-complete DLNA Media Player and Media Renderer but struggled to actually play anything because somewhere between its chipset and 802.11g, network streaming performance was too poor to keep up with 1080p content.

And 802.11g is part of the problem as well. Home networks were surprisingly bad in the early to mid '00s. I suppose it shouldn't be that surprising, because "home network" was largely a new idea in that time period that was displacing a single computer directly connected to a modem. Broadband was taking over in cities, but DOCSIS and DSL modems still provided a USB interface and it was not uncommon for people to use it. Almost no one ran ethernet because of the high cost of installing it concealed and the frustration and aesthetic impact of running it along floorboards [4]. 802.11g, the dominant WiFi standard, was nominally fast enough for all kinds of media streaming but in practice congestion and range issues made 802.11g performance pretty terrible most of the time. All of this is still mostly true today, but we've gone from nominal WiFi speeds of 54mbps to well over a gigabit, which allows for a solid 20mbps even after the very substantial performance reduction in most real environments.

The biggest problem, though, is the troublesome concept of a "server" in the home. Microsoft seemed to operate, at the time, on the assumption that a desktop computer would serve as the Media Server. In fact, MCE specifically introduced power management enhancements to Windows XP intended to allow the MCE machine to stay in standby mode but wake when needed for media recording or streaming (this never seems to have worked very well). Microsoft further reinforced this concept with their marketing strategy for MCE, which was only released to developers and OEMs and could not be purchased as a regular standalone license. If you were going to use MCE, it was going to be on a desktop computer sold to serve as a hybrid workstation and media server.

Unfortunately, around the same time period the household desktop started to become a less common fixture. Lower prices and better performance from laptops could free up a desk and add a lot of flexibility, and consumers were just getting less excited about buying a mid-tower. And besides, the hard drive capacities of the time made a normal desktop somewhat limited as a media server. Remember, this was back in that awkward period of physical media where it was common for desktops to have two optical drives because people wanted a DVD player and a CD burner, and no one had figured out how to economically fit both into one device yet. Hard drives were usually in the hundreds of GB only.

What was needed, it seemed, was some sort of home media server appliance that was compact, affordable, and user-friendly enough that people would consider it a typical network appliance like a router/AP combo. That's sort of what Apple delivered in the AirPort Time Capsule (and then later kind of in the Apple TV, it got confusing), and pretty much what we would call a consumer NAS today (like a Drobo or QNAP or something). Back in the mid '00s neither of these options were available on the market, and most people weren't going to watch the episode of Screen Savers about connecting a JBOD to a desktop on the cheap. So, in 2007, Microsoft served up a real wonder: Windows Home Server.

Windows Home Server was a stripped down (although surprisingly little) version of Windows Server 2003 R2, intended to be easily set up and managed by a consumer using a desktop client called the Windows Home Server Console. The Console was a bit like the MMC in principle, but with fewer features and more wizards. Although I'm having a hard time confirming this, I believe it actually worked via RDP using the Virtual App technology, meaning that the actual Console ran on the server and only the GUI drawing occurred on the client. This probably eased implementation but had the result that you could only really manage Home Server from Windows, which did not help at all against Apple's widely perceived media dominance.

Not a lot of Windows Home Server devices made it to the market. HP released several, one of which I used to own. The larger HP Home Servers were pretty similar to a modern consumer NAS, with multiple front-accessible drive bays. Home Server functioned (perhaps first and foremost) as a DLNA server, but also offered a number of other services like backup, brokering RDP connections from the internet for remote desktop, and a web-based file browser to get your files remotely---naturally tied into .NET Passport. Home Server was extensible and third-party software could be installed from a small app store and managed from the Console; both Plex and Twonky offered Home Server versions at the time. Several commercial antivirus packages were available as well, as this was a bit before Microsoft took antivirus and HIDS on as a first-party component, so Norton and McAfee could still make a lot of money off of getting you to pay for one more license. In fact the Home Server versions of these antivirus products were interesting because they often functioned more like a miniature enterprise antivirus/HIDS platform, centrally controlling and monitoring your host-based security products from the Home Server.

Windows Home Server also introduced a significant new feature called Drive Extender, which was a high-level file system feature that allowed the user to combine multiple drives into one logical drive and use multiple drives for redundancy, all in a way which was largely agnostic to the drives and interfaces in use. This was presumably a response to the high cost of the hardware RAID controllers that served the same purpose in most "real" Windows Server machines, but it compared favorably with the software-first approach to storage management that would shortly after spread through the POSIX world in the forms of ZFS and Btrfs. Ironically, Drive Extender was a source of a lot of frustration and data loss, as it turned out to be buggy, which is perhaps why Microsoft ignominiously killed the feature along with Windows Home Server. Years later it would reappear, seemingly in a much reworked form, under the name Storage Spaces.

And what would you guess happened with Windows Home Server? That's right, it failed to gain traction and slowly faded away. Windows Home Server 2011 did make it to the light of day but proved to be the last version of the concept.

Ultimately I think it was a victim of Microsoft's usual failures: it addressed a new use-case rather than an existing one, so it wasn't something that consumers had much interest in to begin with, and Microsoft massively failed at creating a marketing campaign that would convince them otherwise. The cost of Home Server devices was pretty high (for much the same reasons that consumer NAS continue to be rather expensive today), and at the end of the day it just kind of sucked. In a fashion very typical of Microsoft, it had a lot of interesting and innovative features but the user experience and general level of polish were surprisingly poor. The Console, I remember, was sometimes nearly impossible to get to connect without rebooting the server. The Drive Extender faults led to a lot of instances of data loss, which then led to bad press. It formed a part of Microsoft's generally confusing and poorly designed home sharing experience, later Homegroups, and so got tangled up in all of the uncertainty and poor usability of those features (the difficulty of getting easy access to the server's storage via SMB, for example). The whole thing was a failure to launch.

And that sort of sums up the fate of DLNA as well: despite the best efforts of its promoters, DLNA never really got to a position where it was attractive to consumers. It was mostly intended to solve a problem that consumers didn't yet have (access to their non-existent local media library and their non-existent computer-based cable tuner [5]) and, as time has shown, never really would have. The ecosystem of DLNA products, outside of Windows, was never that large. Dedicated media renderers never gained consumer adoption, and devices that threw in DLNA as a value-add (like a lot of HD-DVD players) did a bad job and didn't promote it. DLNA was complex and a lot of implementations didn't work all that well on home networks, which was also true of related technologies (like SMB file sharing) that were important parts of the whole home network ecosystem.

Moreover, DLNA was wiped out by two trends: the cloud, and proprietary casting systems.

First, the cloud: the idea that consumers would have a large local library of music, TV shows, and movies, has never really materialized. The number of people who have a multi-terabyte local media collection is vanishingly small and they are basically all prolific pirates, which makes the broader media industry uninterested in keeping them happy. In fact much of the media industry worked to actively discourage this kind of use pattern, because it is difficult to monetize in a reliable way.

Instead, most consumers get their media from a cloud-based streaming service like Spotify, which has no requirement for any particular features on the local network. Since these types of services already need significant cloud support, it becomes easier to implement features like casting and network remote control within the product itself, without any need (or support) for open standards. No one needs DLNA because they use Spotify, and Spotify has worked out commercial partnerships to get Spotify Connect support into A/V devices.

Second, proprietary casting systems: one of the features but also, in hindsight, defects of DLNA was its underlying assumption of a peer-to-peer "matrix" system in which many devices interacted with many devices in a flexible way. In practice, 90% of consumer use cases can be solved by a much simpler system in which a media renderer operates under the direct remote control of (and perhaps receives media directly from) a computer or phone. Miracast, for example, operated in this fashion and while it never became that common it was much more practical to integrate Miracast support into a device like a TV than a DLNA renderer and player.

Moreover, casting features offer a compelling opportunity for vendor lock-in, since it is natural to integrate them with operating systems and specific applications. While Microsoft made some effort to promote the Miracast standard it was lackluster at best, so the whole space was dominated by Apple (which had a head start in the form of iTunes' remote playback capabilities and massive leverage by integrating the feature into iOS) and Google (which mostly won by making the Chromecast extremely cheap, although leveraging the YouTube app was also a major boon). Neither of these companies has much interest in facilitating a multi-vendor ecosystem, and in the case of consumer A/V devices where they have little strength they opted for closed partnerships with device manufacturers over open standards. Combined with commercial incentives, the scheme works: my TV supports the closed standards AirPlay and Chromecast, but ironically not the open standard Miracast. The only device I have with integrated Miracast support is a no-brand sub-$200 DLP projector, where it works very well.

DLNA formally dissolved in 2017, although the writing was on the wall as early as 2010 when Spotify began to transform the way music was consumed. Similar capabilities are now found in various vendor-proprietary systems, but few of these approach the original ambitions of DLNA. A huge industry preference for cloud-intermediated platforms and increasing consolidation of home media onto one of a small number of walled gardens make any serious resurgence of a DLNA-like project unlikely.

And Sonos? I don't want to be too mean to Sonos, the technology is pretty cool. I just have a hard time dropping $500 on a speaker that handles the unsolved problems of 2008 but not the solved ones. These newer distributed media systems (also Yamaha MusicCast, for example) are impressive but fail to provide the flexible, multi-source matrix architecture that DLNA had once put within reach.

At the end of the day, who killed DLNA? In some ways DLNA was ahead of its time, as it required better home networks than most people had and enabled a media consumption pattern that was the future, not the present. Of course it was ahead of a time that turned out not to exist, as the cloud streaming services took over. In this way, technological progress (or more cynically the twisted economics of the cloud) killed DLNA. Microsoft, and to a lesser extent Intel, killed DLNA through poor marketing, few partnerships, and repeatedly bungled products. Apple and Google killed DLNA by seeing a simpler and more commercially advantageous solution and making it widespread (and we can't blame them for this too much, as AirPlay and Chromecast really are plainly easier to implement well than DLNA ever was).

And, you know, capitalism killed DLNA, because as an open-standards distributed system its profit-making potential was always limited. The consumer A/V industry that could have flocked to DLNA because it offered wide interoperability instead flocked to the proprietary standards because they offered money. Heavyweights Apple and Google were playing for keeps. Microsoft's unusually generous dedication to open systems ultimately left them holding the bag, and to this day Windows lacks a coherent media streaming ecosystem.

Also frankly all the DLNA products pretty much sucked but hey, I'm running on nostalgia.

[1] This approach can be used as an alternative to, or in combination with, high-voltage audio systems which we have discussed before.

[2] Due to the surprisingly analogue CD mastering process and the use of generous error correction and error tolerance in CD playback, it is often not actually possible to create an exact copy. But the general point still stands.

[3] Miracast is kinda-sorta a subset of WiFi, being standardized by the WiFi alliance, and builds on previous HD-media-over-WiFi efforts like Intel WiDi. These were tremendously unsuccessful but are important precursors of Chromecast and AirPlay. A lot of devices, including all Windows machines, support Miracast but it's pretty rare to see it used in practice as TV manufacturers have not been enthusiastic about it (which is to say Microsoft has not incentivized integrated Miracast in the way that Apple has incentivized integrated AirPlay).

[4] This is still basically true today, although structured wiring has made some inroads in new, especially multi-family, construction. Of course I continue to meet people who live in a home or apartment with structured ethernet and CATV who don't even realize it or don't understand how to use it. Consumer interest in plugging cables into things remains low.

[5] In fact the whole DLNA story is tied up in the CableCard story, which I'll probably write about in the future. The short version is that the whole idea of using an arbitrary tuner for a cable subscription was unpopular with cable carriers and hard to implement. Cable carriers preferred to provide their own set-top boxes and DVRs, which consumers were mostly happy with. The concept of connecting your cable, or even TV antenna, to a computer just never really went anywhere.


>>> 2021-10-25 datacasting

I recently came across an article about New Mexico's main PBS affiliate, KNME, starting a trial project to "datacast" school materials to rural homes. One of the most interesting takeaways, to me, is the fact that KNME's traditional broadcast radio network of one 250kW primary transmitter and many low-power translators is estimated to reach 98% of New Mexico homes, a significantly better level of penetration than broadband internet... despite the telephone industry's early history of strong rural penetration.

Even now in the 21st century, traditional broadcast radio is the most effective way to deliver information to a large area at a low cost. While digital radio has never really caught on in the US (I am oddly proud of my HD Radio tuners) like it has in Europe, digital television has been the norm for some time. Over-the-air (OTA) digital television in the US is based on a standard called ATSC [1]. ATSC OTA is ultimately just a 19.4 Mbps data stream consisting of a series of MPEG transport stream (TS) packets. Because it's intended for use with unreliable links like broadcast radio, MPEG TS is essentially a uni-directional network protocol with structural similarities to other packetized radio protocols.

AV containers like MPEG TS are usually associated with a single program consisting of a video stream and an audio stream, although it's not too unusual to have multiple audio streams (e.g. for multiple languages) and many containers support multiple video streams for the purpose of multiple angles (this was a "killer feature" of the DVD that exactly no one ever used) [2]. MPEG TS takes this to the next level by permitting an arbitrary number of streams, identified by a program table. This feature is best known for allowing a single ATSC "channel" to actually carry multiple channels, leading to the confusing world of OTA channel numbering. All of the programs carried on a given ATSC channel must fit into the 19.4 Mbps, so lower-budget ATSC channels tend to actually look worse as they're compressed more heavily to fit more channels onto a shared transmitter.
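Since everything in an ATSC broadcast rides on fixed-size TS packets tagged with a stream identifier, a minimal sketch of how a demuxer reads the packet header helps make the "many streams, one channel" idea concrete. The field layout follows the MPEG-2 Systems spec; the parser itself is purely illustrative, not a production demuxer.

```python
def parse_ts_header(packet: bytes) -> dict:
    # An MPEG transport stream packet is a fixed 188 bytes and always
    # begins with the sync byte 0x47.
    assert len(packet) == 188 and packet[0] == 0x47
    # The 13-bit packet identifier (PID) spans the low 5 bits of byte 1
    # and all of byte 2; it tells the demuxer which elementary stream
    # (video, audio, table, or private data) this packet belongs to.
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    return {
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": pid,
        "continuity_counter": packet[3] & 0x0F,
    }

# A null (padding) packet carries the reserved PID 0x1FFF:
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])  # 8191 (0x1FFF)
```

The program association and program map tables are themselves just payloads carried on well-known PIDs, which is what lets one 19.4 Mbps channel carry an arbitrary mix of programs.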

Moreover, MPEG TS allows for streams carrying non-media metadata typically in the form of "tables." While tables were originally intended to carry program-related metadata for use by decoders, "private" table IDs exist that allow you to shove basically anything you want into an MPEG stream. And we have now looped back to IP-over-MPEG, something I have touched on before, which just means cramming IP packets into an MPEG TS table alongside media.

And now perhaps you see how this "datacasting" works: the television station adds a new stream to their MPEG TS transmission that sends a "private table." The private table is actually decoded to whatever format the datacasting receiver expects, which is very possibly UDP but might also be a purpose-built lightweight transport protocol for files with metadata. It could also just be like a series of ZIP files with a little framing bolted on, there are a lot of options.

The client device uses an ATSC tuner to extract the special data table and then decodes it into files, which it stores on an SD card. It has a local webserver that allows users to browse the files it has accumulated at their leisure. Since the total MPEG TS stream is nearly 20Mbps, there's a decent amount of room to add in this kind of secondary use. KNME transmits five television channels, but three are standard definition, so there may be a good 5Mbps available for data.
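As a back-of-envelope illustration of that last claim, here is the arithmetic with guessed per-channel rates; these are illustrative assumptions, not KNME's actual encoder settings.

```python
# Rough budget for a shared ATSC multiplex. All per-channel figures
# below are guesses for illustration only.
TOTAL_MBPS = 19.39    # usable ATSC payload rate
HD_CHANNELS = 2       # assumed HD subchannels at ~4.5 Mbps each
SD_CHANNELS = 3       # assumed SD subchannels at ~1.5 Mbps each
OVERHEAD = 0.5        # PSIP tables, audio, padding, etc. (guess)

spare = TOTAL_MBPS - HD_CHANNELS * 4.5 - SD_CHANNELS * 1.5 - OVERHEAD
print(f"roughly {spare:.1f} Mbps left over for datacasting")
```

With these assumed numbers the leftover capacity lands right around the 5 Mbps ballpark; in practice statistical multiplexing lets the encoder trade video quality against data capacity moment to moment.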

This is all a pretty interesting use of existing broadcast television infrastructure, and I appreciate that it echoes a tradition of radio technology being pioneered for New Mexico education (which, incidentally, is the NME in the vanity call sign KNME). KANW, the first FM radio station in New Mexico, was founded by Albuquerque Public Schools for education delivery. APS also helped found KNME, which was not the first OTA television station in New Mexico but was the first to upgrade to digital. APS continues to part-own both KANW and KNME (in partnership with a community college and state university, respectively), enjoying the FCC's generous approach to licensing educational institutions.

A combination of the rural-access and educational mission of many academically-owned public radio stations, along with the FCC's ongoing support for educational broadcasting, has had the somewhat surprising result that educational broadcasters have long been technical pioneers. Commercial broadcasters are increasingly losing interest in OTA, especially for television, creating the modern situation where a school district and a university partner to reach 98% of the state while an affiliate of the once-great American Broadcasting Company runs at a reduced power that barely covers the city. But that's all a tangent.

What really drew my attention to this KNME project is the mere utterance of the term "datacasting," a perennial business plan in telecommunications that has seldom found long-term success. Datacasting, in general, refers to any broadcast of data other than audio/video media.

Datacasting sometimes takes the form of a dedicated system for one-way delivery to multiple points. One could argue that backhaul systems for media networks (e.g. the WestLink satellite distribution system for public television shows, incidentally operated by KNME out of Albuquerque) are datacasting systems, since they transmit media that is not intended to be directly viewed. These systems also increasingly use non-media transports like IP to allow for delivery of syndicated shows at faster-than-real-time speeds. That might be getting a little pedantic, though.

A clearer case of datacasting is SkyCache, later called Cidera and now defunct. From 1997 to 2003, SkyCache operated a satellite network that delivered content to internet service providers for caching, as a means of reducing their actual bandwidth consumption. The university where I attended my undergraduate had used SkyCache for a one-way NNTP feed in the late '90s, in order to relieve their SONET internet uplink from the rather substantial bandwidth required by Usenet at the time [3].

Under the Cidera name, SkyCache made an attempt to pivot towards delivery of video to streaming media edge sites. This worked about as well as you might imagine considering the rapid decrease in the cost of bandwidth in the early 2000s, and Cidera quickly found themselves obsoleted by the very internet connections that they were trying to accelerate. Similar concepts seem to have had a longer endurance in other countries, but I can't find anyone attempting this business model today... it would come off as a bit ludicrous considering that the cost of satellite transit is well-known to be higher than terrestrial IP transit, which is really a testament to how cheap IP transit has become.

More common than dedicated datacasting systems were those that embedded datacasting as a secondary use of traditional media broadcasts. I had a vague recollection of having once owned a GPS navigation device that employed a secondary data broadcast on an FM radio station to obtain live traffic information. Thanks to a Twitter follower I just learned that this was most likely DirectBand, which Microsoft operated vaguely as part of the sprawling MSN product family [4]. DirectBand made use of some space above the stereo component of an FM radio signal, and achieved a fairly high rate of 12 kbps with error correction. Evidently it also supported applications like headlines, stock quotes, and sports scores, which I never had the luck to see implemented.

The same Twitter follower pointed out that UK regulator Ofcom still has a license out for exactly this kind of service. It's held by a company called INRIX that seems to have once been a contender in the GPS navigation space but, like basically all the legacy GPS companies crushed by Apple and Google, is now attempting to make a pivot to enterprise mobility and GIS services. Some research into this led to reading this sentence:

The HA and the Network Information Systems Limited ("NISL") selected INRIX as the primary supplier of data services (data-as-a-service, or "DaaS").

So it's good to know that UK government contracting is as "aaS-obsessed" as my own United States. There was a time we'd just call that, you know, a service contract. Just a service as a service. Like normal.

Datacasting has a similar history in the television space. In fact, PBS affiliates are particularly known for datacasting. Through the company National Datacast, PBS offered for-hire datacasting over its portfolio of analog TV affiliates. The defunct service MovieBeam took a page from Cidera's book by using National Datacast to send out HD films that set-top boxes stored for later viewing; it was not at all successful.

In fact, the PBS history goes back further than this to an interesting metadata application. Since some time in the '90s, many PBS member stations transmitted a timestamp encoded into their analog signal. It was intended to allow VCRs to set their clocks automatically, but it doesn't seem to have been especially commonly used and by the '00s the infrastructure had fallen into a state of disrepair such that many PBS stations were actually sending incorrect timestamps. Automatic VCR clock incorrecting, if you will.

Last in a tour of datacasting applications, I'll mention pagers once again. Because pager transmitters simply broadcast short packets with an address header, it's easy to turn any paging system into a sort of low-bandwidth datacasting network. Although most of these applications have been replaced by cellular data, it used to be pretty common for things like transit station arrival time displays to just be very large pagers. They received their info updates over either a pager transmitter owned by the municipality or a commercial pager network. Pager protocols are still sometimes used this way for applications ranging from public information signage to triggering tornado sirens. You could debate whether or not this truly constitutes datacasting, but a company called Ambient Devices attempted to commercialize the approach in a sort of proto-IoT way.

They did not succeed, because the internet took over IoT as well.

[1] People sometimes confuse this with ClearQAM, but ClearQAM refers to unencrypted digital television over coaxial cable - 256QAM encoding does not perform well at long distances OTA, so OTA broadcasts use 8VSB instead. ClearQAM is not especially common because modern cable television providers rely on encryption, rather than physical disconnection, to enforce payment.

[2] There's a popular idea that the multi-angle feature was used primarily by pornography. Smut on DVDs being even more of a historic artifact than DVDs in general, I'm having a hard time finding a good source that multi-angle pornographic DVDs were actually common. Certainly there were a few, but there were also a few actual multi-angle DVDs, like Die Hard. Part of my skepticism of this comes from the very popular "porn killed betamax" story that has never actually stood up to scrutiny; I think the salaciousness of adult films and the lack of well-established ratings agencies for them creates a lot of urban myths about the role of adult entertainment in broader market trends.

[3] A few years earlier, computer center policy had forbidden use of usenet during business hours as the NNTP server was also a workstation in the computer lab and became unusably slow for the person sitting in front of it if there was too much usenet activity. I am indebted to the late John Shipman in many important ways, but also for having preserved decades of computer center bulletins full of gems like this.

[4] DirectBand is not to be confused with RBDS, which is in common use in the US to this day but is so slow as to only really be useful for delivery of the title of the currently playing song... which not that many radio stations actually implement correctly. RBDS is sort of infamously poorly implemented; in theory it should allow car radios to set their clocks automatically but so few radio stations have an RBDS time encoder that it's never actually worked for me. A surprising number of FM stations here transmit RBDS and use it for... their call letters, over and over. Presumably they installed the modulator and then never connected anything to it.


>>> 2021-10-11 intro to burglary

One of the best-known brands in American burglar alarm systems is ADT. ADT stands, of course, for American District Telegraph. As the name suggests, ADT did not originate as a burglar alarm company. Instead, it started as a telegraph operation, mostly providing stock quotes and other financial information within cities. As happened with many early telegraph companies, ADT became part of Western Union and later part of AT&T. The history of ADT as a telegraph company is not all that interesting, and their telegraphy business is not well remembered today.

The modern ADT story starts, the company history tells us, with someone breaking into the home of one of the founders of one of ADT's constituent companies. This was the origin of the business line we know ADT for today: burglar alarms. While a number of companies have entered and exited the burglar alarm market (including AT&T itself as a separate venture from ADT), none have gained the market dominance of ADT... dominance so great that they lost an antitrust case in the '60s.

Before we get to the history of burglar alarm signaling, though, we should discuss the burglar alarms themselves.

The fundamental concept of a burglar alarm is fairly simple, and might better be explained by analogy to a similar, simpler system. Many industrial machines make use of a "safety string" or "safety circuit." This consists of a set of limit switches or other devices connected in series on things that are safety critical, like access doors. It's all arranged such that, in a "normal" state, the safety string is closed (all of the switches are in their circuit-closed state). If any unsafe condition occurs, such as an access door being opened, a switch opens the circuit. This usually cuts power to the potentially dangerous equipment.

A burglar alarm is simply the extension of this concept to multiple circuits. A set of switches and sensors are installed throughout the protected area. Any of these sensors indicating an "unsecured" state causes an alarm.

In practice, the wiring of burglar alarms is more complex than that of a simple safety string. There are various reasons for this, but the most obvious is the problem of supervision. Burglar alarms are designed to operate in a somewhat hostile environment including the possibility of tampering, and in any case insurance companies usually require that they have an adequate "self-test" facility to ensure that the system is functioning correctly during arming. This means that the alarm system needs to have a way to determine whether or not the sensor circuits are intact---whether the sensors are still connected. This is referred to as supervision.

A very common supervision method is the end-of-line resistor. A common burglar alarm circuit arrangement is the normally closed circuit with 5.6k EOL resistor. In this system, the sensors are each connected to the alarm in series and are normally closed. When a protected door is opened, for example, the sensor opens the circuit which is detected by the alarm controller. An enterprising burglar, though, might realize that they can short the two alarm wires together and thus prevent the circuit ever opening. An EOL resistor complicates this attack by creating a known, fixed resistance at the far end of the sensor circuit.

Let's imagine that the alarm controller functions by measuring the resistance of each alarm circuit (they basically do, but usually instead by putting a fixed current on the circuit and then measuring the voltage drop). If the resistance of the circuit is infinite, it is "open," which indicates an alarm. If the resistance of the circuit matches the expected EOL resistor (say 5.6k but the value just depends on what the alarm manufacturer chose), everything is normal. If the resistance of the circuit is zero (or near zero), the circuit has been shorted... and that can result in either the alarm sounding or reporting of a "trouble" condition.
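That three-way decision is simple enough to sketch in a few lines. The 5.6k figure comes from the text above; the 15% tolerance band and the "short" cutoff at 10% of the EOL value are my own illustrative assumptions, not any particular manufacturer's thresholds.

```python
def classify_zone(measured_ohms: float, eol_ohms: float = 5600,
                  tolerance: float = 0.15) -> str:
    """Classify a normally-closed alarm loop with an end-of-line resistor.

    A sketch of the controller's decision logic only; real panels derive
    the resistance from a voltage drop across a fixed current and use
    manufacturer-specific thresholds.
    """
    if measured_ohms < eol_ohms * 0.1:
        return "short"    # wires shorted together: tamper/trouble
    if abs(measured_ohms - eol_ohms) <= eol_ohms * tolerance:
        return "secure"   # loop intact, all sensors closed
    return "open"         # a sensor opened (or the wire was cut): alarm

print(classify_zone(5600))           # secure
print(classify_zone(float("inf")))   # open
print(classify_zone(0))              # short
```

Note that the "short" case is exactly the attack the EOL resistor exists to detect: without it, a shorted loop would be indistinguishable from a secure one.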

This isn't the only way to use an EOL resistor. Another approach that is very common with fire alarms is a normally open EOL resistor circuit. In this system, the normal state is the fixed resistance of the EOL resistor, but the "initiating devices" such as smoke detectors are connected between the two alarm wires and actually short them together (creating a zero resistance) to cause the alarm to sound. In this case, an infinite resistance (open circuit) indicates that the circuit has been broken somewhere. In practice the difference between normally-open and normally-closed alarm circuits is more one of convention than technical merit, as both have very similar reliability characteristics. Largely for historical reasons, burglar alarms tend to be normally closed and fire alarms tend to be normally open, but there are many exceptions to both.

In early burglar alarms, this basic concept of measuring the resistance of an alarm circuit to check for a "normal" or "secure" value was very clear. A common vault alarm in the early 20th century had a large gauge showing the resistance of the single circuit and a knob which was used to adjust a variable resistor. To arm the alarm, the knob was turned (adjusting a reference resistance) until the needle fell into the green band, and then the alarm was switched on. The needle leaving the green band in either direction (indicating an open or a short circuit) resulted in an alarm, often by a mechanism as simple as the needle touching a peg (but perhaps also by things like spring-balanced magnetic coils). Modern burglar alarms are mostly just an evolution of this same design, although digital communications are becoming more common.

A burglar alarm usually uses several of these circuits, which are labeled "zones." Zones could correspond to physical areas of a protected building (and sometimes do), but for practical reasons it's very common for zones to correspond more to type of sensor than location. For example, it is very common to have one zone for perimeter detection devices (e.g. window and door closures) and one zone for presence detection devices (e.g. motion detectors) [1]. This is done so that different types of sensors can be armed separately, most often to enable a "perimeter" or "home" or "stay" mode in which the sensors on entry and exit points are armed but the sensors on the interior space are not.

Besides the sensors themselves, another important part of a burglar alarm is a means of arming and disarming it. Due to the lack of practical digital technology, early alarms took a very mechanical approach. A common arrangement in commercial buildings was a two-keyswitch system. To disarm the alarm, a key switch on the outside of the building had to be turned to the disarm position. Then, within a certain time period (measured by anything from a mechanical timer to a thermistor and heater), a second keyswitch located somewhere in the building had to be turned to the disarm position.

This somewhat complex system is actually very clever. It's quite possible that a skilled burglar will pick, dismantle, or otherwise defeat the outside keyswitch. The interior keyswitch serves as a "what you know" second factor: a burglar, not being familiar with the building's alarm system, would probably not know where the second switch was and would run out of time while looking for it. To improve this mechanism, the alarm panel and second keyswitch (it was usually right on the alarm panel) was often put somewhere non-obvious like a closet off of an office or behind a painting.
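The timed two-switch sequence is easy to express as a little state machine. This is purely a sketch of the logic described above; panels of the era implemented the delay mechanically (or with that thermistor and heater), and the 60-second window here is an assumed figure.

```python
import time

class TwoSwitchDisarm:
    """Sketch of the two-keyswitch disarm sequence (illustrative only)."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.outer_turned_at = None
        self.armed = True

    def turn_outer_key(self) -> None:
        # Turning the exterior keyswitch starts the clock; whoever just
        # entered now has a limited time to find the interior switch.
        self.outer_turned_at = time.monotonic()

    def turn_inner_key(self) -> bool:
        # The alarm disarms only if the interior switch is turned while
        # the window opened by the exterior switch is still running.
        if (self.outer_turned_at is not None and
                time.monotonic() - self.outer_turned_at <= self.window):
            self.armed = False
        return not self.armed

panel = TwoSwitchDisarm(window_seconds=60)
panel.turn_outer_key()
print(panel.turn_inner_key())  # True: inner key turned in time, disarmed
```

A burglar who defeats the outer switch but can't find the inner one simply runs out the window, and the panel stays armed.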

This use of two different key switches gets at a fundamental concept in the design of burglar alarm systems. The set of access points (like doors and windows) monitored by the alarm delineates a boundary between the secured space and the unsecured space. Alarm equipment within the secured space gets a certain degree of protection against physical tampering by virtue of its location: it is difficult for a burglar to tamper with something if they can't get to it without setting off the alarm. On the other hand, devices in the unsecured space, such as a keyswitch placed outside, are much more vulnerable to tampering. These devices need some kind of additional protection, or need to have their capabilities limited. The same problem exists with door locks and all sorts of other security systems [2].

In other cases a much simpler and more manual approach was taken. In some early alarm systems, there was no way to disarm the alarm. Instead, the employee opening for the morning would contact the monitoring point and exchange codewords (or otherwise identify themselves), so that the alarm center operator knew to disregard the upcoming alarm. This is actually still a fairly common practice both in military and other high security installations (where the extra manual checks are worth the resources) and in multi-tenant, infrequent access locations like radio towers where it would become frustrating to issue alarm codes to all of the different contract technicians. You will sometimes see signs along the lines of "Call 123-456-7890 before entering," which is a good indicator that this approach is still in use [3].

By the '70s the "alarm code" or "alarm PIN" was becoming the dominant approach, with users expected to disarm the alarm by entering a numeric combination. Modern alarms still mostly rely on this method, although it's getting more common to be able to arm and disarm alarms via the internet or a remote keyfob.

In the case of both sensors and arm/disarm mechanisms, there has not been a great deal of technical progress over the last several decades. In general the alarm space is somewhat stagnant, but it depends on the market. Consumer systems are changing very quickly, but in some ways for the worse, as newer consumer alarms are often cheaper and easier to install but also less reliable and easier to tamper with than older systems. No small number of the "ease of use" improvements in consumer alarms are directly achieved by reducing the security of the system, usually in a way that consumers don't clearly understand (more about this later).

Commercial alarm systems, on the other hand, have changed much less. Partly this is because the market is small and somewhat saturated, but partly it is because of the relatively higher security and certification requirements of commercial insurance companies. For businesses, insurance discounts are usually a bigger factor in the decision to install an alarm system, and the insurance companies are usually stricter about requiring that the alarm be certified against a specific standard. The good news is that these standards mean that commercial alarms are usually built on all of the best practices of the '90s. The downside is that the standards do not change quickly, and so commercial alarms do not necessarily integrate modern cryptography or other methods of enhancing security. All in all, though, commercial alarms tend to be both more antiquated and more secure than modern consumer alarms.

This post has really just been a quick introduction to the topic of burglar alarms, giving you the basic idea that they function by monitoring strings of sensors and provide some authenticating method of arming and disarming the alarm. In the future, we'll talk a lot more about burglar alarms: about central monitoring systems (remember ADT?) and about the new landscape of DIY consumer systems, at least. Probably some rambling about different types of motion detectors as well.

[1] There are a GREAT variety of different types of perimeter and presence sensors, and they are all very interesting, at least to me. You may be detecting that technical security is an interest of mine. In the future I will probably write some posts about different types of burglar alarm sensors and the historical evolution they have gone through.

[2] While not as relevant today, this is one of the reasons that alarm keypads are usually placed inside the secured space. In older alarm systems, the keypad sometimes directly armed and disarmed the alarm via a simple electrical mechanism, making it possible to "hotwire" the alarm from the keypad. Placing the keypad inside of the secured space, such that anyone accessing it would have set off a sensor and started the entry delay timer, makes this kind of exploit more difficult by putting the burglar on the clock. In well-designed modern alarms the keypad no longer has the capability to disarm the alarm without the user entering a code (i.e. the keypad sends the code to the alarm panel elsewhere to be checked), but even today we can't take this for granted as some manufacturers have "economized" by putting the entire controller into the keypad.

[3] This is particularly common in telecom facilities because telecom companies have an old habit of implementing a "poor-man's burglar alarm." They would simply connect a switch on the door of the equipment hut to the trouble circuit used to report conditions like low backup batteries and failed amplifiers. That way any unauthorized access would result in a trouble ticket for someone to go out and inspect the equipment. Of course, authorized access just meant calling the maintenance office first to tell them to ignore the upcoming trouble alarm.


>>> 2021-10-06 street lighting and nuclear war

Let's consider, for a while, electric lighting.

The history of electric lighting is long and interesting, and I don't expect to cover it in much depth or breadth because it's somewhat off of my usual topic and not something I have a lot of expertise in. But I do have a fascination with the particular case of large-area outdoor lighting, e.g. street lighting, and there are a few things to talk about around street lighting that are really rather interesting.

As a very compressed history, the first electric street lights usually took the form of "moon towers," tall masts with arc lamps at the top. Various types of open and contained arc lamps were experimented with early on, but generally fell out of favor as incandescent lamps became cheaper and lower maintenance. The moon towers themselves were expensive and high-maintenance, so they were fairly quickly replaced with low-height incandescent lighting in most applications; the only remaining moon towers in the US are in Austin, Texas. Later, mercury vapor and metal halide arc lamps would begin to replace incandescent street lighting, but let's stop for a while in the era of the incandescent street light [1].

Let's say that you were installing a large number of incandescent lights, for example to light a street. How would you wire them?

In our homes, here in the United States, we wire our lights in parallel and apply 120v to them. This has convenient properties: parallel wiring means that the lights all function independently. A fixed voltage means that the brightness of a given light can be adjusted by using bulbs of different resistances, which will result in a different power (say, 60 watts). That makes a lot of sense, which is why it may be surprising to some to learn that incandescent street lights were wired in series.

Series wiring had multiple advantages, but the biggest was conductor size: in a parallel-wired system, wiring near the power source would need to carry a very large current (the combined current of all of the bulbs), and lights further from the power source would be dimmer due to voltage drop unless the conductors were prohibitively large. In a series-wired system, the current is the same (and much lower, approximately equivalent to a single bulb) at all points in the wiring, and voltage drop is seen across the entire length, and thus consistent across all of the bulbs.

These street lighting circuits are referred to as constant current circuits, as opposed to the more conventional constant voltage. All of the bulbs were designed to operate at a specific current, 6.6A was typical, and a specially equipped power supply transformer adjusted the voltage on the circuit to achieve that 6.6A. The voltage would be fairly large, typically something like 5kV, but that wasn't a problem because the streetlight wiring ran overhead with power distribution wiring of similar and higher voltages.
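
To make the tradeoff concrete, here's a back-of-the-envelope comparison. The 6.6A figure comes from the text above; the bulb count and wattage are assumed round numbers chosen to illustrate the arithmetic:

```python
# Rough comparison of series vs. parallel street lighting feeders.
# 6.6A is the standard series current; bulb count and wattage are assumed.
n_bulbs = 100
watts_per_bulb = 330          # an assumed, plausible series lamp size

# Series (constant current): the feeder carries one bulb's worth of
# current everywhere, and the regulator supplies the sum of the
# per-bulb voltage drops.
i_series = 6.6                            # amps, at every point in the loop
v_per_bulb = watts_per_bulb / i_series    # 50 V across each bulb
v_series = n_bulbs * v_per_bulb           # ~5,000 V total at the regulator

# Parallel (constant voltage): every bulb sees 120 V, and the feeder
# near the source carries the sum of all the bulb currents.
v_parallel = 120
i_parallel = n_bulbs * (watts_per_bulb / v_parallel)   # 275 A near source

print(round(v_series), round(i_parallel))   # 5000 275
```

The same total lighting load thus needs either a 275A feeder (huge conductors) or a 6.6A loop at around 5kV, which is why the series approach won out when copper was the dominant cost.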

In early street lighting systems, the constant current regulator was a complex mechanical device using magnetic coils and levers. Today, constant-current incandescent lighting circuits are still in use in some applications and the constant current regulators have been replaced with more robust electronically controlled systems. But the regulators aren't really that important, except to understand that they simply apply whatever voltage is required to get 6.6A to flow.

Of course, connecting lighting in series introduces a major problem that you have probably realized. If any bulb burns out, the entire series circuit will be broken. That's unacceptable for street lights, and a few different solutions were invented, but the most common was the cut-out disk. The cut-out disk was a small fuse-like device, but operated somewhat in the opposite way of a fuse. In fact, they're sometimes amusingly referred to as antifuses.

The cut-out disk is wired in parallel with the bulb. Should the bulb burn out, the voltage across the two sides of the disk rises, which causes an arc that "burns out" a film material separating two contacts. The contacts then touch, and electricity flows freely through the cut-out disk, restoring the circuit. When the bulb is replaced, the cut-out disk is replaced as well with a fresh one.
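
The logic of the cut-out disk can be captured in a toy model. This is a sketch of the behavior described above, not a simulation of the actual electrical physics; the class names and the number of lamps are illustrative:

```python
# Toy model of a series loop with cut-out disks ("antifuses"): a burned-out
# bulb breaks the loop, the voltage across its disk rises, and the disk
# fuses shut, restoring the circuit.

class Lamp:
    def __init__(self):
        self.burned_out = False
        self.disk_fused = False   # a fused disk is a permanent short

    def conducts(self):
        # Current passes through a working filament or a fused disk.
        return (not self.burned_out) or self.disk_fused

def loop_is_complete(lamps):
    return all(lamp.conducts() for lamp in lamps)

def fuse_disks(lamps):
    # The regulator's rising open-circuit voltage appears across any open
    # lamp, arcing through the disk's film and welding its contacts shut.
    for lamp in lamps:
        if lamp.burned_out and not lamp.disk_fused:
            lamp.disk_fused = True

lamps = [Lamp() for _ in range(10)]
lamps[3].burned_out = True
print(loop_is_complete(lamps))   # False: the loop is broken
fuse_disks(lamps)
print(loop_is_complete(lamps))   # True: the disk now shorts around the bulb
```

Note that the model also shows why the disk is a consumable: once fused, it conducts forever, so replacing the bulb alone would just leave it permanently bypassed.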

"Stay-Lit" Christmas light strings employ a very similar method, but use a thermistor instead of a cut-out disk so that it is not an additional consumable part. But the concept is the same: a device in the bulb base begins to pass current if exposed to the full 120v power supply, instead of the few volts normally seen when the bulb is present.

Constant-current lighting circuits have an interesting safety property, which this discussion of cut-out disks may have brought to your mind. A short circuit in a constant-current lighting circuit is actually pretty safe, as the regulator will reduce the voltage to near zero to maintain only 6.6A. An open circuit, on the other hand, can be rather dangerous: when no current flows, the regulator will increase the voltage applied to the entire circuit in an attempt to restore current flow, up to its upper limit---potentially something like 10kV. Modern electronic regulators mitigate this somewhat by detecting the open-circuit condition, but it gives constant-current lighting circuits a particularly sharp edge. Everything on the lighting circuit needs to be rated for this maximum voltage, which leads to more demanding requirements than typical 120v or 240v commercial lighting.

With that preamble on the concept of constant-current street lighting, let's take a look at a particularly interesting bit of history that closely relates to these technical details [2].

On July 9th, 1962, the United States conducted its 7th high-altitude nuclear test. While an earlier series of high-altitude demonstrations had shown the tactical potential of detonations far from the ground, significant advances had been made in the science and instrumentation in the intervening years, and greater resources had become available for geopolitical reasons (resumed Soviet testing). The new test, named Starfish Prime, was thus expected to contribute significant information on the effects of high-altitude detonations.

The approximately 1.5Mt detonation occurred at about 400 km altitude, well into space. It was expected that a detonation at such a high altitude would create a substantial electromagnetic pulse effect, since the "horizon" was far from the detonation allowing wide coverage and less cancellation by ground reflections. Prior to the Starfish Prime test, however, EMP had not typically been viewed as a major consideration in the use of nuclear weapons... while the effect clearly existed, the magnitude was not such that it was likely to cause substantial disruption [3].

Starfish Prime seemed to suggest otherwise. Effects that are out of scope for our purposes resulted in damage to several US military satellites. Moreover, unanticipated effects resulted in higher EMP field strengths at the ground than had originally been estimated. This effect extended over a very large area, including the nearly 1,500km straight-line distance from the detonation near Johnston Atoll to Honolulu.

Various effects observed in Honolulu (from which the detonation was visible in the sky) were attributed to the explosion. It can be difficult to sort out which of these reports are accurate, as something as dramatic as a nuclear detonation tends to get tied up with all kinds of unrelated events in people's memory. One thing that was particularly widely reported, though, was streetlights in Honolulu going dark at the instant of the detonation.

Was this actually an example of EMP having a significant effect on a civilian power distribution system, as has long been theorized but seldom validated?

The answer, it turns out, is yes and no. Honolulu, like many jurisdictions in the early '60s, made extensive use of series-wired street lighting circuits. Also like many municipalities, Honolulu had gone through growth and changes that resulted in various ad-hoc and sometimes confusing adjustments to the electrical distribution system. Part of this had involved changes in distribution voltage on existing lines, which created problems with the safety separation distances required between lines of different voltages attached to the same pole. The result was somewhat odd arrangements of lines on poles which complicated the installation of street lighting circuits.

On some Honolulu streets, it had become impractical to install series-wired street lighting circuits at medium voltage (6kV being fairly typical) and still have adequate safety separation from the 240v split-phase secondary wiring supplying power to homes. Honolulu adopted a solution of mixing series-wired high-voltage and parallel-wired low-voltage street lighting. On residential streets with crowded poles, street lights ran on 500v so that their lines could be run directly alongside the residential power supply.

At this point in time, the modern norm of photocell switches installed on each light did not yet exist. Instead, streetlights were turned on and off by mechanical timer switches (or occasionally manual switches) attached to the constant current regulators. So, the 500v lighting used on residential streets still needed to be connected to the series-wired system in order to run off of the same controls. The solution, which simplified the cabling in other ways as well, was to power the 500v constant-voltage lighting off of the 6.6A constant-current system. This was implemented by connecting autotransformers to the series-wired system that reduced the voltage to 500v. The constant-current regulator meant that the autotransformer could cause an arbitrary voltage drop, at 6.6A, as necessary to provide adequate power to the 500v lights connected to it.

Essentially, the autotransformer acted as a particularly large 6.6A light on the series circuit. This made it subject to the same series disadvantage: if the autotransformer failed, it would cause the entire series circuit to shut off. The solution was to equip the autotransformer with a cut-out disk just like the lights, albeit one rated for a higher voltage.
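
The sizing arithmetic for this arrangement is straightforward: the 500v lights present a lump load, and the series side of the autotransformer must drop whatever voltage delivers that power at 6.6A. The load figures below are assumed for illustration; only the 6.6A loop current and 500v secondary come from the text:

```python
# Rough sizing of an autotransformer feeding 500V parallel lights from a
# 6.6A constant-current loop. Light count and wattage are assumed.
i_loop = 6.6                 # amps on the series loop (from the text)
n_lights = 8                 # assumed count of 500V residential lights
watts_each = 300             # assumed wattage per light

p_load = n_lights * watts_each       # 2,400 W total, ignoring losses
v_drop_series = p_load / i_loop      # ~364 V dropped on the series side
i_secondary = p_load / 500           # 4.8 A drawn on the 500V side

print(round(v_drop_series, 1), i_secondary)   # 363.6 4.8
```

This is what "a particularly large 6.6A light" means in practice: to the constant-current regulator, the autotransformer just looks like one more voltage drop in the loop, scaled to the power its secondary load draws.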

It was determined that the street lights which failed in response to the Starfish Prime EMP all seem to have been the low-voltage lights attached to constant-current circuits. Perhaps you can see what happened.

Some of the low-voltage lighting circuits ran roughly perpendicular to the direction toward the detonation. The altitude and location of the detonation were such that the EMP field reached Honolulu with horizontal polarization nearly parallel to the ground---and thus nearly parallel to the wiring of those circuits, a near-worst-case geometry for induced voltage. The induced voltage backfed the autotransformer, resulting in a higher voltage on its constant-current side that fused the cut-out disk, shorting the autotransformer and cutting off the power supply to the low-voltage lighting circuit.

The solution was simply to replace the cut-out disk.

This should illustrate two things about the effects of nuclear EMP: First, it is possible for nuclear EMP to have damaging effects on power infrastructure. Second, it is not especially easy for nuclear EMP to have damaging effects on power infrastructure. The Honolulu incident resulted from a specific combination of factors, and specifically from the use of a "weak-link" design element in the form of the cut-out disks, which performed as expected in response to an unusual situation.

None of this is to say that EMP effects are insubstantial, but they are mostly confined to digital and especially telecommunications systems due to their far higher sensitivity to induced voltages. Power systems with long enough wiring runs to pick up substantial induced voltages also tend to be designed for very high voltages, making damage unlikely. The same cannot be said of telephone lines.

By the '60s series-wired constant-current lighting systems were falling out of favor. The future was in mercury-vapor lamps running at 240v and controlled by local photocells, so that they could be powered directly off of the secondary distribution lines. This is still basically the arrangement used for street lighting today, but mercury vapor has given way to LED.

Constant-current lighting still has its place. While airfield marker lighting is mostly being converted to LED, it's still generally powered by constant-current loops. Many airfields still use 6.6A tungsten bulbs, which were at least perceived to have reliability and visibility advantages until relatively recently. Airfield constant-current regulators usually provide three (occasionally more) output settings with different target currents, allowing for a low, medium, and high intensity. While less common with today's better marker light optics, you will still sometimes hear pilots ask the tower (or use radio control) to turn the lights down.

When it comes to LEDs, most non-trivial LED lighting is actually constant-current. The use of a small solid-state constant-current regulator (usually called an LED driver in this context) improves efficiency and lifespan by keeping LEDs at an optimal current as temperature and supply voltage changes.
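
What an LED driver's control loop does can be sketched in a few lines: it nudges its output until the measured LED current hits a setpoint, regardless of what the supply voltage is doing. Everything here is illustrative---the 700mA setpoint is just a common power-LED figure, and the linear "LED model" stands in for the real converter and diode physics:

```python
# Minimal sketch of a constant-current LED driver's feedback loop.
# The setpoint, gain, and linear LED model are all assumed for illustration.
TARGET_MA = 700          # example setpoint for a power LED

def led_current_ma(duty, supply_v):
    # Stand-in for the real LED + converter physics: current scales
    # with duty cycle and supply voltage.
    return duty * supply_v * 100

def regulate(supply_v, steps=100):
    duty = 0.5
    for _ in range(steps):
        error = TARGET_MA - led_current_ma(duty, supply_v)
        duty += 0.0001 * error              # simple proportional correction
        duty = min(max(duty, 0.0), 1.0)     # duty cycle stays in [0, 1]
    return led_current_ma(duty, supply_v)

# The loop lands near 700 mA whether the supply is low or high:
print(round(regulate(12.0)), round(regulate(18.0)))   # 700 700
```

The point is the same as the street lighting regulators: the controlled variable is current, and voltage is whatever it takes to get there.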

Despite staying reasonably close to their '70s state, streetlights have become hubs for technology projects. They're convenient, widely-distributed power sources that are either owned by the city or operated on contract for the city. Many "smart city" technologies like environmental sensors and WiFi/municipal LTE/smart meter radios are offered in packages intended to be clamped directly onto streetlights and powered by tapping their cables. The fairly standardized screw socket used for photocell switches on street lights has itself become a form of power outlet, with some smart city devices screwing in to use it as a power feed and incidentally also turning the light on and off.

In this way the line between "light controller" and "sensor package" can become blurry. The City of Albuquerque now installs networked controllers on new street lights that primarily allow for remote monitoring and management of the light itself---but also support environmental data collection and traffic monitoring via Bluetooth sniffing.

The humble street light today reminds me a bit of the cigarette lighter socket in cars... barely changed in decades, and yet a major factor in enabling a huge technology market.

Or at least that's an optimistic way to look at it. The pessimistic way is to observe that we now live in a world where streetlights are being repurposed to detect gunshots. Cheery.

Addendum: I am not personally a fan of aggressive street lighting. It tends to create a substantial light pollution problem that is thought to contribute to health problems in humans and environmental disturbances. Further, it's been the common wisdom for some time now that street lighting is likely not actually effective in reducing crime. That said, a 2019 study conducted in New York City is the first randomized controlled trial of the impact of additional street lighting on crime, and it actually did find a reduction in crime, and not a small one. That stands in opposition to a history of studies that have not found any crime reduction, but none of those studies were as rigorously designed. Hopefully more research will be conducted on this question.

[1] Incandescent street lights are typically the ones fondly remembered as "historic" street lamps anyway, with many "antique style" streetlamps installed in historic districts today being loose replicas of various common incandescent models, such as the GE or Novalux "acorn" glass globes with frilly metal reflector. That is, assuming they are replicas of electric and not gas fixtures, although makers of vaguely historic light fixtures often mix the two anyway.

[2] Much of the information on this incident comes from the aptly titled "Did High-Altitude EMP Cause the Hawaiian Street Light Incident?", SAND88-3341. Charles Vittitoe, Sandia National Laboratories, 1989. As far as I can tell this paper, prepared well after the fact, is one of the only detailed case reports of a real EMP incident in the public literature. The fact that there was, reportedly, very little rigorous investigation of the Starfish Prime test's EMP effects in Hawaii for over twenty years after has interesting implications on how seriously EMP effects on civilian infrastructure were taken at the time.

[3] I discuss this topic a bit more in my YouTube video on EMP simulation facilities at Kirtland Air Force Base.
