_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford
COMPUTERS ARE BAD is a newsletter semi-regularly issued directly to your doorstep to enlighten you as to the ways that computers are bad and the many reasons why. While I am not one to stay on topic, the gist of the newsletter is computer history, computer security, and "constructive" technology criticism.

I have an MS in information security, more certifications than any human should, and ready access to a keyboard. These are all properties which make me ostensibly qualified to comment on issues of computer technology. When I am not complaining on the internet, I work in professional services for a DevOps software vendor. I have a background in security operations and DevSecOps, but also in things that are actually useful like photocopier repair.

You can read this here, on the information superhighway, but to keep your neighborhood paperboy careening down that superhighway on a bicycle please subscribe. This also contributes enormously to my personal self-esteem. There is, however, also an RSS feed for those who really want it. Fax delivery available by request.

--------------------------------------------------------------------------------

>>> 2024-04-26 microsoft at work

I haven't written anything for a bit. I'm not apologizing, because y'all don't pay me enough to apologize, but I do feel a little bad. Part of it is just that I've been busy, with work and travel and events. Part of it is that I've embarked on a couple of writing projects only to have them just Not Work Out. It happens sometimes: I'll notice something interesting, spend an evening or two digging into it, but find that I just can't make a story out of it. There isn't enough information; it's not really that interesting; the original source turned out to just be wrong. Well, this one is a bit of all three. Join me, if you will, on a journey to nowhere in particular.

One of the things I am interested in is embedded real-time operating systems. Another thing I am interested in is Unified Communications. Yet another is failed Microsoft research projects. So if you've ever heard of Microsoft At Work, you probably won't be surprised that it has repeatedly caught my eye. Most likely, you haven't heard of it. Few have; even the normal sources of information on these kinds of things appear to be inaccurate or at least confused about the details.

Microsoft went to work in the summer of 1993, or at least that's when they announced Microsoft At Work. This kind of terrible product naming was rampant in the mid-'90s, perhaps nowhere more than at Microsoft. MAW, as I and a few others call it, was marketed with a healthy dose of software sales obfuscation. What was it, exactly? An Architecture, Microsoft said. It would enable all kinds of new applications. With MAW, one would be able to seamlessly access the wealth of information on their personal computers. Some reporters called it an Environment. Try this for a lede: "Microsoft Corp. unveils integrated computer program."

The announcement included a demo that got a lot more to the point: a fax machine that ran Windows.

Even this was strangely obfuscated: enough newspaper reports described it as a "fax like product" that I think this verbiage was sincerely used in the announcement. Today, we would refer to MAW as an effort towards "smart" office machines, but in 1993 we hadn't quite learned that vocabulary yet. Microsoft must have been worried that it would be dismissed as "just a fax machine." It couldn't be that, it had to be something more. It had to be a "fax like product," built with "Windows architecture."

I am being a bit dismissive for effect. MAW was more ambitious than just installing Windows on a grape. The effort included a unified communications protocol for the control of office machines, including printers, for which a whole Microsoft stack was envisioned. This built on top of the Windows Printing System, a difficult-to-search-for project that apparently predated MAW by a short time, enough so that Windows Printing System products were actually on the market when MAW was announced---MAW products were, we will learn, very much not.

Windows Printing System modules were sold for at least the HP LaserJet II and III. If you did not experience them, these printers placed their actual rasterization logic onto a modular card that could be swapped out, usually to switch between PCL and PostScript "personalities." The PostScript module was offered mostly for Macintosh compatibility, Apple having selected PostScript as a common printer control language. The Windows Printing System module took this operating system specialization a step further, using Windows' simple GDI graphics protocol to draw output to the printer.
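
As a concrete sketch of what GDI printing means for the application: the program draws with ordinary GDI calls, and the driver on the host, not the printer, turns them into a raster. This example is written against the modern Win32 API, a descendant of the Windows 3.1-era interface rather than exactly what a 1993 application would have called, and the printer name is a made-up placeholder.

    /* Minimal host-side GDI printing sketch (modern Win32). */
    #include <windows.h>

    int main(void)
    {
        /* "Windows Printing System" is a stand-in device name, not a
           real driver string; use whatever printer the system reports. */
        HDC printer = CreateDCA("WINSPOOL", "Windows Printing System", NULL, NULL);
        if (!printer)
            return 1;

        DOCINFOA doc = { sizeof(doc) };
        doc.lpszDocName = "GDI printing demo";

        StartDocA(printer, &doc);
        StartPage(printer);

        /* Ordinary GDI drawing calls; the host rasterizes them. */
        TextOutA(printer, 100, 100, "PC LOAD LETTER", 14);

        EndPage(printer);
        EndDoc(printer);
        DeleteDC(printer);
        return 0;
    }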

I am actually a little unclear on whether or not the Windows Printing System led directly to the cheap "WinPrinters" that are also associated with the idea of GDI-based printing. "WinPrinters," so-called by analogy to WinModems, are entirely dependent on the host computer to perform rasterization. While extremely irritating from the perspective of software support, this was an important cost-savings measure in consumer printers. Executing a capable printer control language was rather demanding; the Apple LaserWriter famously had a faster processor than the Macintosh computers it was a peripheral to. Printers with independent rasterization, particularly the more complex PostScript, came at a substantial price premium to those that required the host to perform rasterization.

While some details of reporting on the Windows Printing System make me worry that it was in fact rasterizing on device (like the curiously specific limit of "up to 79" TrueType fonts), I'm fairly sure it was indeed a precursor to the later inexpensive designs. Rather than a cost-savings measure, though, Microsoft seems to have marketed it as a premium feature. Because of the Windows Printing System's higher level of integration with the operating system, it brought numerous new features, many of which we take for granted today. TrueType font support at all, for example, a cutting-edge feature in '93. Duplex control from the print dialog rather than the printer's own display, and for that matter, the ability to see printer status messages (like "PC LOAD LETTER") on the computer you just printed from.

And at the end of the day, offloading rasterization from the printer had an advantage: the Windows Printing System was faster than PCL or PostScript.

Even if it did become the dominant printing method years later, the Windows Printing System of the MAW era doesn't seem to have fared very well. Because it took the position of an add-on cartridge (like a font cartridge), it would have been an added-cost option for printer buyers: $132.99, according to a period advertisement. The dearth of available documentation or even post-launch advertising for the Windows Printing System cartridge suggests disappointing sales numbers.

The fortunes of the Windows Printing System would turn a year later, though, as Lexmark introduced their WinWriter series: "With the Microsoft Windows Printing System Built In!" Speaking of the Lexmark WinWriter series, this whole printing thing is kind of a tangent. What about MAW? The Windows Printing System, it seems, was not really a part of MAW. It was just generally related and available when MAW was announced, so it was rolled into the press conference. It is a bit ironic that the Lexmark WinWriter, truly the Printer for Windows, was not a MAW device despite shipping well after MAW was announced.

So, back on course: MAW was not just Windows on a fax machine, not just the Windows Printing System, but an integrated system of Windows on a fax machine, the Windows Printing System, a generalized network protocol, and apparently a page description language. This was all, as you can see, rather document-focused. MAW would allow Windows users to easily, seamlessly interact with these common office machines, sending and receiving documents like it was 1999.

And later, it would do more: Microsoft was clear from the beginning that MAW had a higher vision, one that is remarkably similar to the later concept of Unified Communications. Microsoft envisioned Windows on a phone, bringing desk phones into the same architecture, or environment, or whatever. Remember the phone part, it comes back.

In practice, MAW would do nothing. It was a complete and total failure. It took two years for the first MAW office machine to reach the market, a Ricoh fax machine. Fortunately, a television commercial has been preserved, giving us a small window into the Windows on a Fax Machine experience. "Microsoft's At Work Still Loafing on the Job" is how the Washington Post put it in 1995.

The Post called it "the first real step toward the paperless digital office," a nod towards the promise of Microsoft's document-messaging vision, before noting that virtually no products had shipped, everything was behind schedule, and Microsoft had reorganized the At Work team out of existence. Microsoft At Work was seldom spoken of again. Few products ever launched; those that did sold poorly (the Windows licensing fee imposed on them being one of several factors contributing to noncompetitive price tags), and by the time Windows gained proper USB support few would remember it had ever happened.

In other words, a classic Microsoft story.

But I'm not here to chronicle Microsoft's foibles, there are other writers for that. I'm here to chronicle their weird operating system projects. And that's what got me reading into MAW: the promise of not just one, but two weird operating system projects.

Regard that promise with suspicion.

Wikipedia tells us that MAW included "Microsoft At Work Operating System, a small RTOS to be embedded in devices." That's very interesting. I love a small RTOS to be embedded in devices! Tell us more.

Researching this MAW embedded operating system turns out to be a challenge. You see, it is not the better known of the operating systems produced by the MAW initiative. That would be WinPad, curiously not mentioned at all in the MAW Wikipedia article, but instead in the Windows CE article, as a precursor to CE. Windows CE gets a lot more affection than MAW, and so we know quite a bit more about WinPad. It was an early attempt at an operating system for a touchscreen mobile device, one that, in classic Microsoft fashion, competed internally with another project to build an operating system for a touchscreen mobile device (called Pegasus) and died out along with the rest of MAW.

It was based on 16-bit Windows 3.1, using a stripped-down UI layer that resembled Windows 95. Probably not coincidentally, there seems to have been an effort to port WinPad onto Windows 95, and fortunately developer releases of WinPad have been preserved. With some effort, you can get them running on top of appropriate Windows versions in an emulator.

WinPad was envisioned as a core part of MAW, the key enabler of that paperless office. With MAW and WinPad, you could synchronize documents, emails, and faxes, everything you could ever want in 1995, onto your handheld device and then carry it with you. WinPad also didn't work. Evidently the performance was lousy and it required entirely unrealistic battery capacities. Not a surprising outcome when one ports a mid-'90s desktop operating system to a tablet. How charming! But not exactly my target. What about this RTOS?

If you dig into these things for too long, you start to question your life, or at least reality. References to this MAW embedded operating system are so sparse that I quickly started to wonder if it existed at all, or if it was simply confused with WinPad. This MAW OS would run directly on the office machines. Is it possible that it was, in fact, WinPad that ran on a fax machine? Or at least that whatever ran on the fax machine was a direct precursor to WinPad, an earlier new UI layer on top of 16-bit Windows?

The nagging thing that kept me on the hunt for this MAW embedded OS was, oddly enough, the Sega Saturn. A series of newspaper archives, many gathered by Mega Drive Shock, tells an interesting story. Microsoft, it seemed, had been contracted to provide the operating system for the Sega Saturn. Well, this seems to have been a misconception, although clearly a period one. As the news cycle carried on, the scope of this Microsoft-Sega partnership (at first denied by Microsoft!) was reduced to Microsoft providing some sort of firmware related to the Saturn's CD drive.

There is, though, a tantalizing detail. The Electronics Times reported that "Microsoft looks set to port its Microsoft At Work operating system to Hitachi's new SH series of microprocessors." The article explicitly linked the porting to the Saturn effort, but also mentioned that the MAW operating system was being ported to Motorola 68000.

Do you know what never ran on the Hitachi SH or Super-H architecture? 16-bit Windows.

Do you know what did? Windows CE.

Is it possible? Do you think? Is Windows CE a derivative of Windows for Fax Machines?

I'm pretty sure the answer is no. A reader pointed me at John Murray's 1998 book Inside Windows CE, which provides a brief and presumably authoritative history of the platform. It specifically discusses Windows CE as a follow-on project to the failed WinPad, which it describes as 16-bit Windows 3.1, and goes on to say it "was designed for office equipment such as copiers and fax machines."

It is, of course, possible that the book is incorrect. But given the dearth of references to this MAW embedded RTOS, I think this is the more likely scenario:

MAW devices like the Ricoh IFS77 ran 16-bit Windows 3.1 with a new GUI intended to appear more modern while reducing resource requirements. Some reporters at the time noted that Microsoft was cagey about the supported architectures; I suspect they were waiting on ports to be completed. The fax machine was probably x86, though, as there's little evidence MAW actually ran on anything else.

This operating system was extended for the WinPad project, and efforts were made to port it to architectures more common in the embedded devices of the time like SH and 68000. Microsoft may have reached some level of completion on that project and sold it to Sega for the Saturn's complicated storage controller, but it's also possible that the connection between the Saturn and MAW is mistaken and the software Microsoft delivered to Sega was a simple, from-scratch effort. The strange arc of media reporting on the Microsoft-Sega relationship offers the tantalizing possibility that Microsoft was intended to deliver a complete OS for the Saturn but had to pare it back as a result of problems with porting WinPad. More likely, though, it just reflects an overeager electronics industry press and the Sega NDA that a Microsoft spokesperson admitted to being subject to.

MAW failed to win the market, and WinPad failed to win a BillG review. The project was canceled. From the ashes of WinPad and the similarly failed Pegasus, some of the same people started work on a brand new project, Pulsar, which would become Windows CE.

MAW didn't survive the '90s.

Well, some things are like that. I still got 240 lines out of it.

Update: Alert reader abrasive (James Wah) writes in that they had previously dumped the CD-ROM firmware from the Saturn and performed some reverse engineering. Several things suggest that it was not developed by Microsoft, including a Hitachi copyright notice. It seems likely, then, that the supposed Microsoft-Sega partnership never produced anything or was never real in the first place.

--------------------------------------------------------------------------------

>>> 2024-04-05 the life of one earth station

Sometimes, when I am feeling down, I read about failed satellite TV (STV) services. Don't we all? As a result, I've periodically come across a company called AlphaStar Television Network. PrimeStar may have had a rough life, but AlphaStar barely had one at all: it launched in 1996 and went bankrupt in 1997. All told, AlphaStar's STV service only operated for 13 months and 6 days.

AlphaStar is sort of an interesting story on its own. Much like the merchant marine, satellites are closely tied to the identity of their home state. Many satellites are government owned and operated, and several prominent satellite communications networks were chartered by governments or intergovernmental organizations. Consider the example of Inmarsat, a pioneer of private satellite communications born of a UN agency, or Telesat, originally a Crown corporation of Canada. As space technology became more proven, private investors started to fund their own satellite projects, but they continued to operate with the imprimatur of their licensing state.

AlphaStar was sort of an oddity in that sense: a subsidiary of a Canadian company set up to offer an STV service in the United States. Understanding this situation seems to require some background in the Canadian STV industry. 1995 saw the announcement of Expressvu, a satellite television service by telecom company BCE and satellite receiver manufacturer Tee-Comm. Canadian satellite operator Cancom would provide the space segment, and Tee-Comm the ground segment.

Expressvu looked to be headed directly for monopoly: despite attempts by a coalition of Montreal company Power and Hughes/DirecTV to launch a competing service, only Expressvu could meet a regulatory requirement that Canadian broadcast services be served by Canadian satellites. Power's efforts to change the rules involved considerable political controversy as politicians up to the prime minister became involved in the back-and-forth between the two hopeful STV operators.

Foreshadowing AlphaStar, both potential Canadian STV operators struggled. Neither Expressvu nor PowerDirecTV would ever begin operations as originally planned. While regulatory uncertainty contributed to schedule delays, and the complexity of still relatively new satellite TV technology drove up costs, one of the biggest problems was a lack of satellite capacity. Most Canadian communications satellites were launched and operated by Telesat, and in the mid '90s Telesat's fleet fit onto a small list. Expressvu had been slated to use a set of transponders on Telesat's Anik E1, but in successive events Anik E1 lost a solar panel and then several of its transponders.

The lack of Canadian satellite capacity created a regulatory conundrum for Canadian STV: Industry Canada was requiring that operators show they had access to satellite capacity in order to obtain an STV license. No capacity was available on Canadian satellites, though. For STV to become available at all in Canada, some compromise needed to be found.

PowerDirecTV and a new satellite venture by Shaw Communications applied for an exception, allowing them to use US satellites until transponders were available on Canadian satellites. Industry Canada was reluctant to approve the arrangement, considering the uncertainty over what satellites could be used and when.

As Expressvu failed to get off the ground, several of the partners in the project backed out, and Tee-Comm decided to set off on their own. Considering the licensing situation in Canada, they devised a clever plan: they would launch an STV service in the United States. Such a service, delivering US-made content to US customers, could clearly be served by US-owned satellites according to Canadian policy. But it would also secure long-term satellite carriage agreements and fund the construction of infrastructure. When Tee-Comm later returned to apply for an STV license in the Canadian market, they would have fully operational infrastructure and an existing customer base. They could make a far stronger argument that they would be a reliable, affordable service that could transition to Canadian satellites when capacity allowed.

So Tee-Comm started AlphaStar.

AlphaStar carried over several signs of their Canadian origin, including the basic broadcast technology. They would broadcast DVB-S, the norm overseas but new to the United States where DirecTV and the Dish Network used their own protocols. With DVB-S and more powerful Ku-band transponders on AT&T's Telstar 402R satellite, AlphaStar customers needed a 30" dish---smaller than the C-band TVRO dishes associated with earlier STV, but still larger than the 24" and smaller dishes used with DirecTV's DSS.

Of course, satellite feeds have to come from somewhere. AlphaStar purchased an existing earth station in the town of Oxford, Connecticut and adapted it for television use, adding TVRO antennas to receive programming alongside the large steerable dishes used to transmit to the satellite. An on-site network control center ensured the quality and reliability of their television service; corporate headquarters were located nearby in Stamford.

They never signed up many customers. There may have been a high point of around 40,000, but that wasn't enough to cover the cost of operations. Tee-Comm had barely received authorization to launch the Canadian version of the service (AlphaStar Canada) when they went belly-up in both countries. AlphaStar in the US managed over a year, but AlphaStar Canada only made it a few months. In the meantime, the old Expressvu project, minus Tee-Comm, had finally lurched to life. Expressvu went live in 1997, and the AlphaStar story was forgotten.

During the bankruptcy proceedings in the US and Canada, the courts solicited bids to take over AlphaStar's assets. These included, according to a document prepared by AlphaStar, their Oxford earth station which had been built for the Strategic Defense Initiative and hardened to withstand nuclear attack.

See, this is where I really got interested. An SDI satellite earth station in Oxford? What part of SDI was it built for? I started hunting for the location of this earth station. Not far from Oxford I found an obvious candidate, an isolated facility with a half dozen large, steerable antennas. But no, it was built by Inmarsat and is operated today by Comsat (also originally government-chartered).

Finally, digging through FCC rulings, I found an address: 66 Hawley Road. There was nothing to see there, though, just a tilt-up warehouse for a bearing company that showed no signs of satellite communications heritage. It's funny: Google Maps itself intermittently shows imagery from before and after the bearing company moved in, but I never noticed. It took Department of Agriculture aerials from the '90s for me to realize the address was correct; the earth station was demolished just a few years ago.

There are few photos of the building. The best I've seen, from a marketing presentation from one of AlphaStar's successors, is only a partial view. The building doesn't look to be nuclear-hardened, though. It has a glass-walled lobby, and no sign of blast deflectors on its ventilation openings. It seemed like it had been renovated, though. Perhaps they tore out its original hardened features?


Historic aerial imagery tells a story. The facility was first built sometime in the 1980s, and in the early '90s featured two large, likely steerable antennas. They were in the open, not enclosed by radomes, an observation that points away from a military application. It is a fairly simple matter to estimate the altitude and azimuth of a satellite antenna from aerial photographs, so antennas used for military and intelligence purposes are almost always kept under inflatable cover.
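
To show how simple that estimate really is, here is a sketch of the standard geostationary look-angle arithmetic, using rough coordinates for Oxford and an illustrative orbital slot (the 89 degrees west below is an assumption for the example, not a claim about where this station actually pointed):

    /* Look angles from an earth station to a geostationary satellite. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double deg = 3.14159265358979323846 / 180.0;

        double lat  = 41.4 * deg;               /* site latitude, roughly Oxford, CT */
        double dlon = (-89.0 - (-73.1)) * deg;  /* satellite minus site longitude */

        /* Central angle between the site and the sub-satellite point. */
        double cg = cos(lat) * cos(dlon);

        /* Elevation: tan(el) = (cos(g) - Re/Rgeo) / sin(g), Re/Rgeo ~ 0.1513. */
        double el = atan2(cg - 0.1513, sqrt(1.0 - cg * cg)) / deg;

        /* Azimuth, clockwise from true north. */
        double az = atan2(sin(dlon), -sin(lat) * cos(dlon)) / deg;
        if (az < 0.0)
            az += 360.0;

        printf("elevation %.1f deg, azimuth %.1f deg\n", el, az);
        return 0;
    }

For the coordinates above, this works out to an elevation around 40 degrees and an azimuth around 203 degrees. An antenna sitting in the open hands those numbers to anyone with an aerial photo and a protractor.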

In the mid-'90s, around when AlphaStar moved in, small antennas proliferated on the site, peaking at probably a dozen. By the turn of the millennium the antennas receded, dwindling in number as the largest were demolished.

AlphaStar's remains were purchased out of bankruptcy by Egyptian telecom entrepreneur Mahmoud Wahba, who operated them as Champion Telecom Platform. Champion was a general-purpose satellite communications company, but took advantage of the network control center and television equipment at the Oxford facility to focus on television distribution. Making the record a bit confusing, Champion advertised many of its services under the AlphaStar name. They seem to have been reasonably successful, but never attracted much press.

Still, there were interesting aspects to the business. They offered a service where Champion used their small network of earth stations to receive international channels, streaming them over IP to cable television operators who could beef up their lineup without the cost of added headend receivers. At one point, it seems, they even provided infrastructure for a nascent direct-to-consumer IPTV service. They offered the Oxford network control center as an amenity to their earth station customers, and had relationships with a few national television networks, likely as a backup site.

Champion had a better run than AlphaStar but still faded away. Their "remote cable headend" service was innovative in the worst way; in the 2000s the model was widely adopted by the increasingly monopolized cable industry. "Virtual headends" became the norm, with each cable network operating central receivers and network control in-house. IPTV was quite simply a commercial failure, but perhaps we can give them the credit of saying that they were ahead of their time. Earth stations became more available and affordable, and the fees Champion could extract from television networks must have gotten thinner.

Champion Telecom shut down sometime in the '00s. Through their holding company, JJT&M Inc., Champion and Wahba held onto the building and leased it to a tenant, SteelVault Data Centers. For several years, SteelVault operated the building as a colocation center. In their marketing materials, they said "The data center building was originally built for [the] CIA in the early 1980's" [1].

Oh? Now the CIA is involved.

At one point, I felt the trail had gone cold on the history of the Oxford earth station. It clearly predated AlphaStar, and it seemed likely that it was built sometime in the early '80s as several sources claimed. But by whom, and for what? Newspaper archives turned up very little. Ironically, any search with the word "satellite" in the 1980s turns up an unlimited number of articles on the Strategic Defense Initiative, but none have any relation to Oxford.

I put down the case for a month or more. I must have looked into property records, but to be honest, I think I was thrown off the case by Connecticut's curious convention of putting tax assessors and clerks in city government rather than the county. Oxford is in New Haven County, but the New Haven assessor works for the city by that name. Of course they have nothing on parcels in Oxford.

It pays to return with fresh eyes, and today I found what should have been obvious: the Oxford assessor has record of the parcel. The Oxford clerk, in a feat rare in my part of the country, has digitized their books. I didn't even have to brave a phone call, just a frustrating web application. It was a simple trail to follow from the current deed to the survey that first described the parcel---in 1982.


In the era of SteelVault, 66 Hawley takes a strange turn. Like most "secure data centers," a sector of the market given to claiming a renovated government bunker, SteelVault did not flourish. In 2013, SteelVault was bankrupt and left the building. Of course, that doesn't stop numerous data center directories from repeating their CIA claims today.

JJT&M, too, was bankrupt, and the building at least seemed to be tied up in the matter. There was a lien, then a foreclosure, then a tax auction: unpaid property taxes of over one million dollars.

Then, there was a twist: the Oxford tax collector went to prison. She had been pocketing property tax payments. JJT&M sued the Town of Oxford, alleging the unpaid taxes had, in fact, been paid to begin with. They also sued the town marshal, who conducted the auction, alleging that he failed to tell the bidders that JJT&M might still hold title.

None of these attempts were successful: there were various technical problems with JJT&M's claims, but the larger finding was that JJT&M had been given ample notice of the unpaid taxes, the foreclosure, and the tax auction, but had failed to object until after the whole thing was done. Wahba had a number of business ventures in the television industry and elsewhere, and he must have been an absentee owner. A good reminder for us all to check the mail every once in a while.

The auction purchaser transferred the building to a holding LLC, probably as an investment, and then a few years later sold it to the Roller Bearing Company of America. They tore it down and built a new warehouse, and that's the end of the story.

But what about the beginning?

Several of the deeds on the property, which is variously listed with an address on Hawley or on the adjacent Willenbrock Road, include the same metes-and-bounds description. It ends: "Being the premises shown and described on a certain map entitled 'Survey & Topographical Map Prepared for G.T.E. Satellite Corp, Oxford.'"

In 1981, the Southern Pacific Railroad, owner of Sprint, launched a satellite communications business under the name Southern Pacific Communications Corporation (SPCC). In 1983, GTE acquired both Sprint and SPCC, rebranding SPCC as GTE Satellite and then shortly after as GTE Spacenet. In 1994, GTE sold Spacenet to GE, where it became GE Capital Spacenet Services, who sold the Oxford earth station to AlphaStar in 1995.

Before AlphaStar, it was a commercial earth station for satellite data network Spacenet, who had built the property to begin with. So what about the SDI? The CIA? AlphaStar had, I think, stretched the truth.

Spacenet was a major satellite data operator in the '90s. They had many commercial customers, but also government customers, and so it is not inconceivable that they held defense contracts. GTE Government Systems had definitely been involved in the SDI, contributing to computer systems and radar technology. But GTE was a huge company with many divisions, and the jump from its Government Systems arm to Spacenet being built for the SDI is not one that I can find any backing for. Besides, it doesn't make much sense: SDI was, itself, a satellite program. Why would they use a commercial teleport built for civilian communications satellites?

And what of the CIA? As soon as those three letters are invoked, any claim takes on the odor of urban legend. The CIA has been accused of a great many things, and certainly has done some of them, but I can find nothing to substantiate any connection to Oxford.

It seems more likely that the Oxford earth station fits into the history of satellite communications in the obvious way. GTE Satellite was rapidly growing. From its beginning as SPCC, it had ordered the construction of two satellites that would launch in 1984. In 1982, they were making preparations, purchasing property in Oxford, CT and completing a survey and zoning approvals. Over the following year the Oxford Earth Station was constructed, and when Spacenet 1 reached orbit in May 1984 it was ready for service. Oxford was just one of a half dozen earth stations built by GTE from 1982 to 1984.

But there's a little more: the Oxford earth station has always had an affinity for television. Paul Allen's Skypix, a spectacularly failed satellite pay-per-view movie service, used GTE's Oxford earth station to uplink its 80 channels of video feeds in the early '90s. Perhaps this was the origin of the site's television equipment, or perhaps there had been a TV venture with GTE even earlier.

What we know for sure is that the Oxford earth station didn't make the cut when GE acquired Spacenet. They sold the earth station shortly after the acquisition. A few years later, in the words of a bankrupt company looking to sell its assets, GTE became the SDI. In the eyes of a failing data center, it became the CIA. And now those claims are rattling around in Wikipedia.

[1] The original just says "built for CIA," which has charming echoes of Arrested Development's "going to Army."

--------------------------------------------------------------------------------

>>> 2024-03-27 telephone cables

two phone cables, terminated opposite ways

So let's say you're working on a household project and need around a dozen telephone cables---the ordinary kind that you would use between your telephone and the wall. It is, of course, more cost effective to buy bulk cable, or simply a long cable, and cut it to length and attach jacks yourself. This is even mercifully easy for telephone cable, as the wires come out of the flat cable jacket in the same order they go into the modular connector. No fiddly straightening and rearranging, you can just cut off the jacket and shove it into the jack.

But, wait, what's up with that whole thing anyway? And are telephone cables really as simple as stripping the jacket and shoving them in?

There's a lot of weirdness about modular cables. I use modular cable to refer to a cable assembly that is terminated in modular connectors, a standard type of multipin connector developed by the Bell System in the 1960s and now widely used for telephones, Ethernet, and occasionally other applications. These types of connectors are often referred to as RJ connectors, although that's a bit problematic for the pedantic. The modular connector itself is more properly designated in terms of its positions and contacts. Telephone connections predominantly use a 6P4C modular connector: the connector has six positions, but only four are populated with actual contacts. Ethernet uses an 8P8C modular connector, a bit larger with eight positions, all of which are used. The handset of a telephone typically connects to the base with a 4P4C connector: smaller than the 6P4C, but still with four contacts.

Why? And what do the RJ designations actually have to do with it?

Well, historically, telephones would be hardwired to the wall by the telephone installer. This proved inconvenient, and so the connection between the telephone and wall started to be connectorized. Telephones of the early 20th century were unlike the ones we use today, though, and were not fully self-contained. A "desk set," the part of the telephone that sat on your desk, would be connected to an electrical box, usually mounted on the wall. The box was often called the ringer box, because it contained the ringer, but in many cases it also contained the hybrid transformer that achieved the telephone's key feat of magic: the combination of bidirectional signals onto one wire pair.

The hybrid transformer performed the conversion between a two-wire (one pair) signal and a four-wire (two-pair) signal with 'talk' and 'listen' on separate circuits. Since the hybrid was in the box on the wall, the telephone needed to be connected to the box by four wires. Thus the first standard telephone connector, a chunky block with protruding pins, had four contacts. These connectors were in use even after the end of separate ringer boxes, making two of the four wires vestigial. They remained in use into the 1960s, and so you might still find them in older houses.

As you will gather from the fact that the hybrid may have been in the phone or in a box on the wall, and thus the telephone connection to the wall may require four or two wires, the interface between telephone and wall was poorly standardized. This wasn't much of a problem in practice: at the time, you did not own a telephone, you rented it. When you rented a phone, an installer would be sent to your house, and if any wiring was already present they would check it and adjust the connections as required. Depending on the specific type of service you had, the type of phone you had, and when it was all installed, there were a number of ways things might actually be connected.

By the 1950s, as the Model 500 telephone became the norm, a separate hybrid became very unusual: the Model 500 had a hybrid built into its base and only needed the two wires, which could be connected directly to the exchange without an intermediary box. So what of the other two wires? Just about anyone will tell you that the other two wires are present to allow for a second telephone line. This isn't wrong in the modern context, but it is ahistorical to the origin of the wiring convention. The four wires originated with the use of an external hybrid, and when they became vestigial, other uses were sometimes found for them.

For example, the "Princess" phone, a rather slick phone introduced as more of a consumer-friendly product in 1959, had a cool new feature: a lighted dial. The Princess phone was advertised specifically for home use, and particularly as a bedside telephone, so the lighted dial was a convenient feature if you wanted to make a telephone call at night. I realize that might sound a bit strange to the modern reader, but a lot of people used to put a phone extension on their nightstand. If you wanted to place a call after you had turned out the lights, wouldn't it be nice to not have to get up and turn them back on just to see the dial? Anyway, the whole concept of the Princess phone was this kind of dialing-in-bed luxury, and the glowing dial was a nice touch.

There's a problem, though: how to power the dial light? It could potentially be powered by the loop current, but the loop current is very small, likely to be split across multiple extensions, and the exchange would not appreciate the increased load of a lot of tiny dial lights. Instead, Princess phones were installed with a transformer that produced 6VAC from wall power for the dial light. That power was delivered to the phone using the two unused wires in its wall connection. This sounds rather slick in the era of DECT phones that require a separate power cable to the wall, and was one of the upsides of the complete integration of the telephone system. One of the downsides was, of course, that you were paying a monthly rental rate for all of this convenience.

In the late 1960s, the nature of telephone ownership radically changed. A series of judicial and regulatory decisions, culminating in the Carterfone decision, unleashed the telephone itself from the phone company. In the 1970s, consumers gained the ability to purchase their own phone and connect it to the telephone network without a rental fee. Increasingly, they chose to do so. Suddenly, the loose standardization of the telephone-to-wall interface became a very real problem, and one that impeded the ability of consumers to choose their own telephone.

The solution was the Registered Jack, originally a set of standardized wiring configurations developed within the Bell System and later a matter of federal regulation. Wiring installed by telephone companies was required to provide a standard Registered Jack so that consumers could easily connect their own device. It is important to understand that the Registered Jack standards are really about wiring, not connectors. They describe the way that connectors should be wired to meet specific standard applications.

The most straightforward is number 11, RJ11, which specifies a 6P2C connector with a single telephone pair. But what of the 6P4C connector we use today? Well, that's RJ14, a 6P4C with two telephone lines. The problem is that neither consumers nor the telephone cable industry has much of an appetite for understanding these distinctions, and so today the RJ standards have become misunderstood to such a degree that they survive only as loose synonyms for modular connector configurations.

Cables with 6P4C connectors are routinely advertised as RJ11 or RJ14, sometimes RJ11/RJ14. Most of the time RJ11 is manifestly incorrect, as the cables do, in fact, contain four wires and thus provide 6P4C connectors. Actual 6P2C telephone cables are uncommon, as they don't really cost any less than 6P4C (manufacturing cost by far dominating the small-gauge copper) and consumers tend to expect any telephone cable to work with a two-line phone. Even RJ14 is incorrect here, as there really is no such thing as an RJ14 cable. It's in the name, Registered Jack: RJ14 describes the jack you plug the cable into, the electrical interface presented on the wall. Any 6P4C cable could be used with any RJ that specifies a 6P4C connector. Incidentally, this is only academic, as RJ14 is the only 6P4C jack. This is, of course, much of why the terminology has become confused: most of the time it doesn't matter! If the connector fits, it will work.

This whole thing becomes famously complex with Ethernet. It is common, but entirely incorrect, to refer to the 8P8C connector used for Ethernet as RJ45. This terminology is purely the result of confusion: a real RJ45 connector is actually keyed differently from (and thus incompatible with) the unkeyed 8P8C connector used for Ethernet. They just look similar, if you don't look too close. A true RJ45 connector provides one telephone line and a resistor with a value that would tell a modem what transmit power it should use. In practice this jack was rarely used and it is entirely obsolete today.

In fact, Ethernet is wired according to a standard called TIA 568, which famously has two different variants, A and B. A and B are electrically identical and differ only in the mapping of color pairs to pins. The origin of this standard, and its two variants, is arcane and basically a result of awkwardly shoehorning Ethernet into telephone wiring while trying not to interfere with the telephone lines, or the RJ45 resistor if present. The connectors are wired strangely in order to provide crossover of transmit and receive while using the pins not used by the RJ45 standard: ironically, Ethernet is very intentionally incompatible with RJ45. It's sort of the inverse, plus a twist to swap RX and TX.

So you want to know why? Well, on any modular wiring, the center pins (4 and 5 for an 8P connector) are almost guaranteed to carry a telephone line. That's what modular wiring was for! Additionally, the RJ45 standard that closely resembles Ethernet uses pins 7 and 8 for the resistor. For these reasons, Ethernet originally avoided those pins, using only pins 1, 2, 3, and 6. Pins 3 and 6 would likely already be a pair, as they are the conventional position for either a second telephone line or a key system control circuit. That maintains, of course, the symmetry that is standard for telephone wiring. But that leaves pins 1 and 2 to be used for the other pair. And this is where we get the weird, inconsistent wiring pattern: 10/100 Ethernet used pins 1 and 2 for one pair, and pins 3 and 6 for the other. When Gigabit Ethernet came around and used four pairs, 4 and 5 were obvious since they were already going to be a telephone pair, and 7 and 8 were left. Ethernet connectors grew like tree rings: the middle is symmetric, according to telephone convention; the outside is weird, according to Ethernet convention.

And as for why there are two different color conventions... well, the "A" variant was identical to the telephone industry convention for the two center pairs, which was very convenient for any installation that reused or coexisted with telephone wiring. The "B" pattern was actually included only for backwards compatibility with a pre-Ethernet, pre-TIA 568 structured wiring system called SYSTIMAX. SYSTIMAX was widely installed for a variety of applications in early business networking, carrying everything from analog voice to token ring, but particularly emphasized serial terminal connections. Since both telephone wiring and SYSTIMAX wiring were widely installed, using different color conventions for mapping pairs to 8P8C connectors, TIA-568 decided to encompass both.
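
Laid out as a table, the two variants make the tree-ring structure above easy to see. This is just the published TIA-568 pinout data, encoded for printing:

    #include <stdio.h>

    /* TIA-568 pin-to-color assignments. Pins 4/5 (the old telephone
       center pair) and 7/8 are identical in both variants; A and B
       differ only in which pair lands on 1/2 versus 3/6. 10/100
       Ethernet uses pins 1, 2, 3, and 6 only. */
    static const char *t568a[8] = {
        "white/green", "green", "white/orange",
        "blue", "white/blue", "orange",
        "white/brown", "brown"
    };
    static const char *t568b[8] = {
        "white/orange", "orange", "white/green",
        "blue", "white/blue", "green",
        "white/brown", "brown"
    };

    int main(void)
    {
        for (int pin = 0; pin < 8; pin++)
            printf("pin %d: A=%-13s B=%s\n", pin + 1, t568a[pin], t568b[pin]);
        return 0;
    }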

It is ironic, of course, that SYSTIMAX was originally an AT&T product, and so AT&T created the whole confusion themselves. Today, the legalistic view is that TIA-568A is "correct," as the standard says it is preferred. TIA-568B, despite being included in the standard for backwards compatibility, is nonetheless extremely common. People will tell you various rules of thumb, like "government uses A and business uses B," or "horizontal wiring uses A and patch cables use B," but really, you just have to check.

But that's not what I meant to talk about here, and I don't think I even explained it very well. Ethernet is weird, that's the point. It's the odd one out, because it was shoehorned into a wiring convention originally designed for another purpose, and in many cases it had to coexist with that other purpose. It's some real legacy stuff. And also Ethernet was originally used with coaxial cables, yes I know, that's why it only needed one pair to begin with, but then we wanted full duplex.

So that's the great thing about phone cables: they're actually using the cable and modular connector the way they were intended to be used, so they fit right into each other. So quick and easy, and there's nothing to think about.

Except...

With Ethernet, there used to be this confusion about whether or not RX and TX were swapped by the cable. Today, because of auto-MDIX, which gigabit Ethernet made effectively universal, we rarely have to worry about this. But with older 10/100 equipment, there was a wiring convention for one end, and a wiring convention for the other, but if you tried to connect two things that were wired to be the same end, you had to swap RX and TX in the cable. This was called a crossover cable, and is directly analogous to the confusingly named "null modem" serial cable.
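
For reference, here is the standard 10/100 crossover mapping in table form, a sketch rather than any particular vendor's cable spec:

    #include <stdio.h>

    /* 10/100 crossover: each TX pin on one end lands on the matching RX
       pin on the other. Pins 4/5 and 7/8 pass straight through, since
       10/100 doesn't use them; a straight cable maps every pin to itself. */
    static const int crossover[8] = { 3, 6, 1, 4, 5, 2, 7, 8 };

    int main(void)
    {
        for (int pin = 1; pin <= 8; pin++)
            printf("pin %d -> pin %d\n", pin, crossover[pin - 1]);
        return 0;
    }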

Telephone cables are... well, if you go shopping for RJ11 or RJ14 telephone cables, you might run into something odd. Some sellers, typically the more knowledgeable ones, may identify their cables as "straight" or "reverse." Even more confusingly, you will often read that "straight" is for data applications (like fax machines!) while "reverse" is for voice applications. If you consider that the majority of fax machines provide a telephone handset and are, in fact, capable of voice, this is particularly confusing.

See, the thing is, a reverse cable has its two ends terminated as mirror images of each other. It's not like Ethernet: the RX and TX pairs aren't swapped, because there are no such pairs. Remember, the two pairs of a 6P4C telephone cable are used as two separate circuits. Instead, the polarity is swapped within each pair.

Telephone cables are wired in such a way that this is easy: in a 6P4C connector, the "first" pair is the middle two pins (3 and 4), while the "second" pair is the next two pins out (2 and 5). That makes them symmetric, so you can swap the polarity of all of the pairs by simply putting one of the modular connectors on the other way around. With Ethernet, not coincidentally, the "inner" two pairs still work this way. It's the outer ones that buck convention.

When the connectors are attached such that the pins are consistent---that is, pin 1 on one end is connected to pin 1 on the other---we could call that a straight cable. If the ends are mirrored, that is, pin 1 on one end is connected to pin 6 on the other, we could call it a reverse cable.
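
The mapping is small enough to enumerate. A sketch, using my definitions above: a straight cable maps every pin to itself, and a reverse cable maps pin n to pin 7-n. Run the numbers and you can see that mirroring keeps each pair on the same pair but trades the two conductors within it:

    #include <stdio.h>

    int main(void)
    {
        /* 6P4C pairs sit symmetrically: pair one on pins 3/4, pair two
           on pins 2/5. Mirroring (pin n to pin 7-n) therefore maps
           3<->4 and 2<->5: same pairs, opposite polarity. */
        for (int pin = 1; pin <= 6; pin++)
            printf("pin %d -> straight %d, reverse %d\n", pin, pin, 7 - pin);
        return 0;
    }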

With a telephone, we already talked about the hybrid situation: the two directions are not separated on the telephone line. We don't need to swap out RX and TX. So... why? why are there straight and reverse cables? Why do they have different applications?

Telephone lines have a distinct polarity, because of the DC battery voltage. For historic reasons, the two "sides" of a telephone pair are referred to as "tip" and "ring," referring to where they would land on the 1/4" connector that we no longer call a "phone" connector and instead associate mostly with electric guitars and expensive headphones. The ring is the negative side of the battery power, and the tip is the positive side. As standard, these are identified as -48V and 0V, because the exchange equipment is grounded on the positive side. Both sides should be regarded as floating at the subscriber end, though, so the exact voltages and signs aren't that important. It's just tip and ring.

There is a correct way to connect a phone, but older phones with entirely analog wiring wouldn't notice the difference. When touch-tone phones introduced active electronics, polarity suddenly mattered, but you can imagine how this went over with consumers: some people had telephone jacks wired the wrong way around, and had for years, without any problems. When they upgraded to a touch-tone phone and it didn't work, the phone was clearly at fault, not the wiring. So, quite a few touch-tone phones were made with circuitry to "fix" a reverse-wired telephone connection. Besides, just to keep things complex, there were some types of pre-touch-tone phones that required that tip and ring be preserved correctly for biasing the magnetic ringer.

But wait... why, then, would so many sources assert that reverse-wired cables are appropriate for voice use? Well, there is a major problem of internet advice here. Look carefully at the websites that are the top results for the question of straight vs. reverse telephone cables, and you will find that they don't actually agree on what those terms mean. There are, in fact, two ways to look at it: you could say that a straight cable is a cable with the same correspondence of color to pin, or you could say that a straight cable has the two modular connectors installed the same way up.

If you think about it, you will realize that these conflict: if you attach both modular connectors with the latch on the same side of the cable, they will have mirrored pinouts and thus opposite polarity. To have a 1:1 pin correspondence that preserves polarity, you must attach the connectors such that one has the latch up and the other has the latch down. Now, this only makes sense if you lay your cable out perfectly flat, and for a round cable (like the twisted pair cables used for Ethernet) you still wouldn't be able to tell. But telephone cables are flat, and what's more, the manufacturing process leaves a distinct ridge on one side that makes it obvious which way the connector is oriented. Latch on the ridge side, or latch on the smooth side?

There's another way to look at it: put two 6P4C connectors face-to-face, like you are trying to plug the two into each other. You will notice that, if the wiring is pin-to-pin, they don't match each other. Pin 2 on one connector is a different color from the adjacent pin 5 on the other connector. This isn't all that surprising, because we're basically doing the same thing: we're focusing on the physical orientation of the connectors instead of the electrical connection.

Whether "straight" refers to the wiring or the connector orientation varies from author to author. I will confidently assert that the correct definition of "straight" is a cable where a given pin on one end corresponds to the same pin on the other, but there are certainly some that will disagree with me!

Diagrams of two ways of terminating

Here's the thing: as far as I can tell, the entire issue of straight vs. reverse telephone cables comes from this exact confusion. Oddly enough, non-pin-consistent wiring (e.g. with pin 2 on one connector going to pin 5 on the other) seems to have been the historical convention. Many manufactured telephone cables are made this way, even today. I am not sure, but I will speculate it might be an artifact of the manufacturing technique, or at least the desire of those manufacturing telephone cables to have an easy, consistent way to put the connector on. Non-pin-consistent cables are often described as placing the connector latch on the ridge side of the cable at both ends. Which makes sense, in a way!

The thing is, these cables, standard though they apparently are, will reverse the polarity of the telephone line. If you connect two with a mating connector, the second one might reverse it back to the way it was before... but it might not! Mating connectors are made in both straight and reverse variants, although in this case straight seems much more common.

And I believe this is the whole origin of the "data" vs "voice" advice: telephones, the voice application, rarely care about line polarity. Data applications, because of the diversity of the equipment in use, are more likely to care about polarity. Indeed, for true digital applications like T-carrier, the cable must be straight. The whole thing is perhaps more succinctly described as "straight vs. don't care" rather than "straight vs. reverse," because as far as I can tell, there is no true application for what I am calling a reverse cable (one that does not preserve pin consistency). They're just common because of the applications in which polarity need not be maintained.

But I would love to hear if anyone knows otherwise! Truthfully I am very frustrated by this whole thing. The inconsistency of naming conventions, confusion over applications and the history, and argumentative forum threads about this have all deeply unsettled my belief in the consistency of telecommunications wiring.

Also, if you're making telephone cables, just make them straight (pin-consistent). It seems to be the safer way. I've never had it not work!

two phone cables, terminated opposite ways

--------------------------------------------------------------------------------

>>> 2024-03-17 wilhelm haller and photocopier accounting

In the 1450s, German inventor Johannes Gutenberg designed the movable-type printing press, the first practical method of mass-duplicating text. After various other projects, he applied his press to the production of the Bible, yielding over one hundred copies of a text that previously had to be laboriously hand-copied.

His Bible was a tremendous cultural success, triggering revolutions not only in printed matter but also in religion. It was not a financial success: Gutenberg had apparently misspent the funds loaned to him for the project. Gutenberg lost a lawsuit and, as a result of the judgment, lost his workshop. He had made printing vastly cheaper, but it remained costly in volume. Sustaining the revolution of the printing press evidently required careful accounting.

For as long as there have been documents, there has been a need to copy. The printing press revolutionized printed matter, but setting up plates was a labor-intensive process, and a large number of copies needed to be produced at once for the process to be feasible. Into the early 20th century, it was not unusual for smaller-quantity business documents to be hand-copied. It wasn't necessarily for lack of duplicating technology; if anything, there were a surprising number of competing methods of duplication. But all of them had considerable downsides, not least among them the cost of treated paper stock and photographic chemicals.

The mimeograph was the star of the era. Mimeograph printing involved preparing a wax master, which would eventually be done by typewriter but was still a frustrating process when you only possessed a printed original. Photographic methods could be used to reproduce anything you could look at, but required expensive equipment and a relatively high skill level. The millennial office's proliferation of paper would not fully develop until the invention of xerography.

Xerography is not a common term today, first because of the general retreat of the Xerox Corporation from the market, and second because it specifically identifies an analog process not used by modern photocopiers. In the 1960s, Xerox brought about a revolution in paperwork, though, mass-producing a reprographic machine that was faster, easier, and considerably less expensive to operate than contemporaries like the Photostat. The photocopier was now simple and inexpensive enough to venture beyond the print shop, taking root in the hallways and supply rooms of offices around the nation.

They were cheap, but they were costly in volume. Cost per page for the photocopiers of the '60s and '70s could reach $0.05, approaching $0.40 in today's currency. The price of photocopies continued to come down, but the ease of photocopiers encouraged quantity. Office workers ran amok, running off 30, 60, even 100 pages of documents to pass around. The operation of photocopiers became a significant item in the budget of American corporations.

The continued proliferation of the photocopier called for careful accounting.



Wilhelm Haller was born in Swabia, in Germany. Details of his life, in the English language and seemingly in German as well, are sparse. His Wikipedia biography has the tone of a hagiography; a banner tells us that its neutrality is disputed.

What I can say for sure is that, in the 1960s, Haller found the start of his career as a sales apprentice for Hengstler. Hengstler, by then nearly a hundred years old, had made watches and other fine machinery before settling into the world of industrial clockwork. Among their products were a refined line of mechanical counters, of the same type we use today: hour meters, pulse counters, and volume meters, all driving a set of small wheels printed with the digits 0 through 9. As each wheel rolled from 9 to 0, a peg pushed a lever to advance the next wheel by one digit. They had numerous applications in commercial equipment and Haller must have become quite familiar with them before he moved to New York City, representing Hengstler products to the American market.
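If you like to think of these things in code, the ripple-carry arithmetic of those digit wheels is easy to model. This is just a toy sketch of the mechanism, of course, not anything Hengstler actually built:

    # Toy model of a mechanical totalizing counter: a row of digit
    # wheels, least significant first. When a wheel rolls over from 9
    # to 0, a peg advances the next wheel by one (a ripple carry).
    class WheelCounter:
        def __init__(self, wheels=5):
            self.wheels = [0] * wheels  # index 0 = least significant

        def pulse(self):
            for i in range(len(self.wheels)):
                self.wheels[i] = (self.wheels[i] + 1) % 10
                if self.wheels[i] != 0:
                    return  # no rollover, so the carry stops here

        def read(self):
            return "".join(str(d) for d in reversed(self.wheels))

    counter = WheelCounter()
    for _ in range(1009):
        counter.pulse()
    print(counter.read())  # 01009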

Perhaps he worked in an office where photocopier expenses were a complaint. I wish there was more of a story behind his first great invention, but it is quite overshadowed by his later, more abstract work. No source I can find cares to go deeper than to say that, along with Hengstler employee Paul Buser, he founded an American subsidiary of Hengstler called the Hecon Corporation. I can speculate somewhat confidently that Hecon was short for "Hengstler Counter," as Hecon dealt entirely in counters. More specifically, Hecon introduced a new application of the mechanical counter invented by Haller himself: the photocopier key counter.

Xerox photocopiers already included wiring that distributed a "pulse per page" signal, which advanced a counter used to schedule maintenance. The Hecon key counter was a simple elaboration on this idea: a socket and wiring harness, furnished by Hecon, was installed on the photocopier. An "enable" circuit for the photocopier passed through the socket, and had to be jumpered for the photocopier to function. The socket also provided a pulse per page wire.

Photocopier users, typically each department, were issued a Hecon mechanical counter that fit into the socket. To make photocopies, you had to insert your key counter into the socket to enable the photocopier. The key counter was not resettable, so the accounting department could periodically collect key counters and read the number displayed on them like a utility meter. Thus the name key counter: it was a key to enable the photocopier, and a counter to measure the keyholder's usage.
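For the software-minded, here is a toy model of the scheme. The class and method names are mine for illustration; the real thing was a wiring harness and a mechanical counter, not a program:

    # Sketch of the key counter scheme: the copier's enable circuit is
    # only closed when a counter is seated in the socket, and each page
    # pulse advances whichever counter is inserted.
    class KeyCounter:
        def __init__(self, department):
            self.department = department
            self.count = 0  # not resettable; read like a utility meter

    class Copier:
        def __init__(self):
            self.socket = None  # empty socket, copier disabled

        def copy(self, pages):
            if self.socket is None:
                raise RuntimeError("enable circuit open: insert key counter")
            self.socket.count += pages  # one pulse per page

    copier = Copier()
    legal = KeyCounter("Legal")
    copier.socket = legal
    copier.copy(30)
    print(legal.department, legal.count)  # Legal 30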

Key counters were a massive success and proliferated on office photocopiers during the '70s. Xerox, and then their competitors, bought into the system by providing a convenient mounting point and wiring harness connector for the key counter socket. You could find photocopiers that required a Hecon key counter well into the 1990s. Threads on office machine technician forums about adapting the wiring to modern machines suggest that there were some users into the 2010s.


Hecon would not allow the technology to stagnate. The mechanical key counter was reliable but had to be collected or turned in for the counter to be read. The Hecon KCC, introduced by the mid-1990s, replaced key counters with a microcontroller. Users entered an individual PIN or department number on a keypad mounted to the copier and connected to the key counter socket. The KCC enabled the copier and counted the page pulses, totalizing them into a department account that could be read out later from the keypad or from a computer by serial connection.
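The KCC's accounting amounts to totalizing page pulses into per-department accounts, something like this sketch. The structure is my guess at the logic, not Hecon's actual firmware:

    # Sketch of KCC-style accounting: a code entered on the keypad
    # selects an account, and page pulses totalize into it.
    class KCC:
        def __init__(self):
            self.accounts = {}  # department code -> page total
            self.active = None

        def login(self, code):
            self.active = code
            self.accounts.setdefault(code, 0)

        def page_pulse(self):
            if self.active is None:
                raise RuntimeError("copier disabled: no code entered")
            self.accounts[self.active] += 1

        def report(self):
            # read out later from the keypad or over a serial line
            return dict(self.accounts)

    kcc = KCC()
    kcc.login("4012")
    for _ in range(25):
        kcc.page_pulse()
    print(kcc.report())  # {'4012': 25}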

Hecon was not only invested in technological change, though. At some point, Hecon became a major component of Hengstler, with more Hengstler management moving to its New Jersey headquarters. "Must have good command of German and English," a 1969 newspaper listing for a secretarial job stated, before advising applicants to call a Mr. Hengstler himself.

By 1976, the "Liberal Benefits" in their job listing had been supplemented by a new feature: "Hecon Corp, the company that pioneered & operates on flexible working hours."

During the late '60s, Wilhelm Haller seems to have returned to Germany and shifted his interests beyond photocopiers to the operations of corporations themselves. Working with German management consultant Christel Kammerer, he designed a system for mechanical recording of employees' working hours.

This was not the invention of the time clock. The history of the time clock is obscure, but time clocks were already in use during the 19th century. Haller's system implemented a more specific model of working hours promoted by Kammerer: flexitime (more common in Germany) or flextime (more common in the US).

Flextime is a simple enough concept and gained considerable popularity in the US during the 1970s and 1980s, making it almost too obvious to "invent" today. A flextime schedule defines "core hours," such as 11a-3p, during which employees are required to be present in the office. Outside of core hours, employees are free to come and go so long as their working hours total eight each day. Haller's time clock invention was, like the key counter, a totalizing counter: one that recorded not when employees arrived and left, but how many hours they were present each day.
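The policy is simple enough to express in a few lines. Here's a minimal sketch, assuming 11a-3p core hours and an eight-hour daily target; the parameters and the function are mine, not anything from Haller's clocks:

    CORE_START, CORE_END = 11, 15  # core hours, 11a-3p
    DAILY_TARGET = 8               # required hours per day

    def day_ok(intervals):
        """intervals: list of (arrive, leave) times in decimal hours."""
        total = sum(leave - arrive for arrive, leave in intervals)
        core_present = sum(max(0, min(leave, CORE_END) - max(arrive, CORE_START))
                           for arrive, leave in intervals)
        # present for all of core hours, and eight hours in total
        return core_present >= CORE_END - CORE_START and total >= DAILY_TARGET

    print(day_ok([(8, 16.5)]))  # True: 8.5 hours, core hours covered
    print(day_ok([(12, 20)]))   # False: arrived after core hours began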

It's unclear if Haller still worked for Hengstler, but he must have had some influence there. Hecon was among the first, perhaps the first, companies to introduce flextime in the United States.


Photocopier accounting continued apace. Dallas Semiconductor and Sun Microsystems popularized the iButton during the late 1990s, a compact and robust device that could store data and perform cryptographic operations. Hecon followed in the footsteps of the broader stored value industry, introducing the Hecon Quick Key system that used iButtons for user authentication at the photocopier. Copies could even be "prepaid" onto an iButton, ideal for photocopiers with a regular cast of outside users, like those in courthouses and county clerk's offices.
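Prepaid copies are just stored value with a page-pulse decrement. A minimal sketch of the accounting, leaving out the iButton hardware and cryptography entirely:

    # Sketch of prepaid copy credit on a stored-value token: each page
    # pulse decrements the balance, and an empty token disables copying.
    class PrepaidToken:
        def __init__(self, copies):
            self.balance = copies

    def copy_page(token):
        if token.balance <= 0:
            raise RuntimeError("no prepaid copies remaining")
        token.balance -= 1

    token = PrepaidToken(copies=10)
    for _ in range(10):
        copy_page(token)
    print(token.balance)  # 0; an 11th page would be refused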

The Quick Key had a distinctive, angular copier controller apparently called the Base 10. It had the aesthetic vibes of a '90s contemporary art museum, all white and geometric, although surviving examples have yellowed to the pallor of dated office equipment.

While the xerographic process was under development, British Bible scholar Hugh Schonfield spent the 1950s developing his Commonwealth of World Citizens. Part micronation, part NGO, the Commonwealth had a mission of organizing its members throughout many nations into a world community that would uphold the ideals of equality and peace while carrying out humanitarian programs.

Adopting Esperanto as its language, it renamed itself to the Mondcivitan Republic, publishing a provisional constitution and electing a parliament. The Mondcivitan Republic issued passports; some of its members tried to abandon citizenship of their own countries. It was one of several organizations promoting "world citizenship" in the mid-century.

In 1972, Schonfield published a book, Politics of God, describing the organization's ideals. Those politics were apparently challenging. While the Mondcivitan Republic operated various humanitarian and charitable programs through the '60s and '70s, it failed to adopt a permanent constitution and by the 1980s had effectively dissolved. Sometime around then, Wilhelm Haller joined the movement and established a new manifestation of the Mondcivitan Republic in Germany. Haller applied to cancel his German citizenship; he would be a citizen of the world.

As a management consultant and social organizer, he founded a series of progressive German organizations. Haller's projects reached their apex in 2004, with the formation of the "International Leadership and Business Society," a direct extension of the Mondcivitan project. That same year, Haller passed away, a victim of thyroid cancer.


A German progressive organization, Lebenshaus Schwäbische Alb eV, published a touching obituary of Haller. Hengstler and Hecon are mentioned only as "a Swabian factory"; his work on flextime earns a short paragraph.

In translation:

He was able to celebrate his 69th birthday sitting in a wheelchair with a large group of his family and the circle of friends from the Reconciliation Association and the Life Center. With a weak and barely audible voice, he took part in our discussion about new financing options for the local independent Waldorf school from the purchasing power of the affected parents' homes.

Haller is, to me, a rather curious type of person. He was first an inventor of accounting systems, second a management consultant, and then a social activist motivated by both his Christian religion and belief in precision management. His work with Hengstler/Hecon gave way to support and adoption programs for disadvantaged children, supportive employment programs, and international initiatives born of unique mid-century optimism.

Flextime, he argued, freed workers to live their lives on their own schedules, while his timekeeping systems maintained an eight-hour workday with German precision. The Hecon key counter, a footnote of his career, perhaps did the same on a smaller scale: duplication was freed from the print shop but protected by complete cost recovery. Later in his career, he would set out to unify the world.

But then, it's hard to know what to make of Haller. Almost everything written about him seems to be the work of a true believer in his religious-managerial vision. I came for a small detail of photocopier history, and left with this strange leader of West German industrial thought, a management consultant who promised to "humanize" the workplace through time recording.

For him, a new building in the great "city on a hill" required only two things: careful commercial accounting with the knowledge of our own limited possibilities, and a deep trust in God, who knows how to continue when our own strength has come to an end.


--------------------------------------------------------------------------------

>>> 2024-03-09 the purple streetscape

Across the United States, streets are taking on a strange hue at night. Purple.

Purple streetlights have been reported in Tampa, Vancouver, Wichita, Boston. They're certainly in evidence here in Albuquerque, where Coal Avenue through downtown has turned almost entirely to mood lighting. Explanations vary. When I first saw the phenomenon, I thought of fixtures that combine RGB elements and assumed one of the color channels had failed.

Others on the internet offer more involved explanations. "A black light surveillance network," one conspiracist calls them, as he shows his mushroom-themed blacklight poster fluorescing on the side of a highway. I remain unclear on what exactly a shadowy cabal would gain from installing blacklights across North America, but I am nonetheless charmed by his fluorescent fingerpainting demonstration. The topic of "blacklight" is a somewhat complex one with LEDs.

Historically, "blacklight" had referred to long-wave UV lamps, also called UV-A. These lamps emitted light around 400nm, beyond violet light, thus the term ultraviolet. This light is close to, but not quite in, the visible spectrum, which is ideal for observing the effect of fluorescence. Fluorescence is a fascinating but also mundane physical phenomenon in which many materials will absorb light, becoming excited, and then re-emit it as they relax. The process is not completely efficient, so the re-emited light is longer in wavelength than the absorbed light.

Because of this loss of energy, a fluorescent material excited by a blacklight will emit light down in the visible spectrum. The effect seems a bit like magic: the fluorescence is far brighter, to the human eye, than the ultraviolet light that incited it. The trouble is that the common use of UV light to show fluorescence leads to a bit of a misconception that ultraviolet light is required. Not at all: fluorescent materials will re-emit just about any light at a slightly longer wavelength. The emitted light is relatively weak, though, and under broad-spectrum lighting is unlikely to stand out against the ambient light. Fluorescence always occurs; it's just much more visible under a light source that humans can't see.
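The energy bookkeeping here is just E = hc/wavelength: a photon's energy is inversely proportional to its wavelength, so re-emission at a longer wavelength is re-emission at a lower energy. A quick back-of-the-envelope in Python; the wavelengths are arbitrary examples of my choosing:

    PLANCK = 6.626e-34    # Planck constant, J*s
    LIGHTSPEED = 2.998e8  # speed of light, m/s
    EV = 1.602e-19        # joules per electron-volt

    def photon_energy_ev(wavelength_nm):
        return PLANCK * LIGHTSPEED / (wavelength_nm * 1e-9) / EV

    print(photon_energy_ev(365))  # ~3.40 eV in: long-wave UV excitation
    print(photon_energy_ev(520))  # ~2.38 eV out: green fluorescence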

With LEDs, though, there is also an economic aspect to consider. The construction of LEDs that emit UV light turns out to be quite difficult. There are now options on the market, but only relatively recently, and they run a considerable price premium compared to visible wavelength LEDs. The vast majority of "LED blacklights" are not actually blacklights; they don't actually emit UV. They're just blue. Human eyes aren't so sensitive to blue, especially the narrow emission of blue LEDs, and so these blue "blacklights" work well enough for showing fluorescence, although not as well as a "real" blacklight (still typically gas discharge).

This was mostly a minor detail of theatrical lighting until COVID, when some combination of unknowing buyers and unscrupulous sellers led to a wave of people using blue LEDs in an attempt to sanitize things. That doesn't work: long-wave UV barely has enough energy to have much of a sanitizing effect, and blue LEDs have none at all. For sanitizing purposes you need short-wave UV, or UV-C, which has so much energy that it is almost ionizing radiation. The trouble, of course, is that this energy damages most biological things, including us. UV-C lights can quickly cause mild (but very unpleasant) eye damage called flashburn or "welder's eye," and more serious exposure can cause permanent damage to your eyes and skin. The silver lining, then, is that all the people waving blue LEDs over their groceries on Instagram reels were at least spared an unpleasant learning experience.

You can probably see how this all ties back to streetlights. The purple streetlights are not "blacklights," but the clear fluorescence of our friend's psychedelic art tells us that they are emitting energy mostly at the short end of the visible spectrum, allowing the longer wave light emitted by the poster to appear inexplicably bright to our eyes. We are apparently looking at some sort of blue LED.

Those familiar with modern LED lighting probably easily see what's happening. LEDs are largely monochromatic light sources: they emit a single wavelength, which results in very poor color rendering, both aesthetically unpleasing and a problem for drivers' perception. While some fixtures do indeed combine LEDs of multiple colors to produce white output, there's another technique that is less expensive, more energy efficient, and produces better quality light. Today's inexpensive, good quality LED lights have been enabled by phosphor coatings.

Here's the idea: LEDs of a single color illuminate a phosphorescent material. Phosphorescence is actually a phenomenon closely related to fluorescence, but it involves kicking an electron up to a different spin state. Fewer materials exhibit this effect than fluorescence, but chemists have devised synthetic phosphors that can sort of "rearrange" light energy within the spectrum.

Blue LEDs are the most energy efficient, so a typical white LED light uses blue LEDs coated in a phosphor that absorbs a portion of the blue light and re-emits it at longer wavelengths. The resulting spectrum, the combination of some of the blue light passing through and red and green light emitted by the phosphor, is a high-CRI white light ideal for street lighting.
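As a cartoon, you can picture the output spectrum as a narrow blue spike plus a broad phosphor hump. The peak positions and widths below are rough illustrative numbers, not measurements of any real fixture:

    # Crude cartoon of a phosphor-converted white LED spectrum: a
    # narrow blue pump plus a broad phosphor emission.
    import math

    def gaussian(wl, center, width, height):
        return height * math.exp(-((wl - center) / width) ** 2)

    def spectrum(wl, phosphor_intact=True):
        blue = gaussian(wl, 450, 10, 1.0)  # blue LED pump
        phos = gaussian(wl, 560, 60, 0.7) if phosphor_intact else 0.0
        return blue + phos

    for wl in (450, 550, 650):
        print(wl, round(spectrum(wl, True), 3), round(spectrum(wl, False), 3))
    # With the phosphor gone, everything but the 450nm spike vanishes:
    # the fixture reads as deep blue-violet to the eye.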

Incidentally, one of the properties of phosphorescence that differentiates it from fluorescence is that phosphors take a while to "relax" back to their lower energy state. A phosphor will continue to glow after the energy that excited it is gone. This effect has long been employed for "glow in the dark" materials that continue to glow softly for an extended period of time after the room goes dark. During the Cold War, the Civil Defense Administration recommended outlining stair treads and doors with such phosphorescent tape so that you could more safely navigate your home during a blackout. The same idea is still employed aboard aircraft and ships, and I suppose you could still do it to your house; it would be fun.

Phosphor-conversion white LEDs use phosphors that minimize this effect but they still exhibit it. Turn off a white LED light in a dark room and you will probably notice that it continues to glow dimly for a short time. You are observing the phosphor slowly relaxing.

So what of the purple streetlights? The phosphor has failed, at least partially, and the lights are emitting the natural spectrum of their LEDs rather than the "adjusted" spectrum produced by the phosphor. The exact reason for this failure doesn't seem to have been publicized, but judging by the apparently rapid onset most people think the phosphor is delaminating and falling off of the LEDs rather than slowly burning away or undergoing some sort of corrosion. They may have simply not used a very good glue.

So we have a technical explanation: white LED streetlights are not white LEDs but blue LEDs with phosphor conversion. If the phosphor somehow fails or comes off, their spectrum shifts towards deep blue. Some combination of remaining phosphor on the lights and environmental conditions (we are not used to seeing large areas under monochromatic blue light) causes this to come off as an eerie purple.

There is also, though, a system question. How is it that so many streetlights across so many cities are demonstrating the same failure at around the same time?

The answer to that question is monopolization.

Virtually all LED street lighting installed in North America is manufactured by Acuity Brands. Based in Atlanta, Acuity is a hundred-year-old industrial conglomerate that originally focused on linens and janitorial supplies. In 1969, though, Acuity acquired Lithonia: one of the United States' largest manufacturers of area lighting. Acuity gained a lighting division, and it was on the warpath. Through a huge number of acquisitions, everything from age-old area lighting giants like Holophane to VC-funded networked lighting companies has become part of Acuity.

In the meantime, GE's area lighting division petered out along with the rest of GE (they recently sold their entire lighting division to a consumer home automation company). Directories of street lighting manufacturers now list Acuity followed by a list of brands Acuity owns. Their main competitors for traditional street lighting are probably Cree and Cooper (part of Eaton), but both are well behind Acuity in municipal sales.

Starting around 2017, Acuity started to manufacture defective lights. The exact nature of the defect is unclear, but it seems to cause abrupt failure of the phosphor after around five years. And here we are, over five years later, with purple streets.

The situation is not quite as bad as it sounds. Acuity offered a long warranty on their street lighting, and the affected lights are still covered. Acuity is sending contractors to replace defective lights at their expense, but they have to coordinate with street lighting operators to identify defective lights and schedule the work. It's a long process. Many cities have over a thousand lights to replace, and finding them is a problem on its own.

Most cities have invested in some sort of smart streetlighting solution. The most common approach is a module that plugs into the standard photocell receptacle on the light and both controls the light and reports energy use over a municipal LTE network. These modules can automatically identify many failure modes based on changes in power consumption. The problem is that the phosphor failure is completely nonelectrical, so the faulty lights can't be located by energy monitoring.
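To make that concrete, the sort of rules such a module can apply to its own power readings might look like this. The thresholds and the wattage are invented for illustration; I have no idea what any particular vendor actually uses:

    # Why energy monitoring misses this failure: a sketch of fault
    # classification from a smart module's nighttime power readings.
    RATED_WATTS = 100

    def classify(watts_at_night):
        if watts_at_night == 0:
            return "dead: driver or supply failure"
        if watts_at_night < 0.5 * RATED_WATTS:
            return "degraded: LED modules likely failing"
        if watts_at_night > 1.2 * RATED_WATTS:
            return "overcurrent: flag for inspection"
        return "normal"

    # A purple light draws about the same power as a white one, so the
    # phosphor failure is invisible to this kind of monitoring.
    print(classify(98))  # "normal", even if the street below is purple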

So, while I can't truly rule out the possibility of a blacklight surveillance network, I'd suggest you report purple lights to your city or electrical utility. They're likely already working with Acuity on a replacement campaign, but they may not know the exact scale of the problem yet.


While I'm at it, let's talk about another common failure mode of outdoor LED lighting: flashing. LED lights use a constant current power supply (often called a driver in this context) that regulates the voltage applied to the LEDs to achieve their rated current. Unfortunately, several failure modes can cause the driver to continuously cycle. Consider the common case of an LED module that has failed in such a way that it shorts at high temperature. The driver turns on, runs until the faulty module gets warm enough to short, and then shuts off again on overcurrent protection. Once the module cools, the process repeats indefinitely. Some drivers have a "soft start" feature and some failure modes cause current to rise beyond limits over time, so it's not unusual for these faulty lights to fade in before shutting off.
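The feedback loop is easy to simulate. A toy model with arbitrary constants, chosen only to show the on/off rhythm rather than any real fixture's thermal behavior:

    # Toy simulation of the cycling failure: a module that shorts above
    # a temperature threshold trips the driver's overcurrent protection,
    # cools while off, and the driver restarts.
    temp, on = 25.0, True
    for step in range(12):
        if on:
            temp += 15     # module heats while driven
            if temp > 60:  # hot enough to short
                on = False # driver trips on overcurrent
        else:
            temp -= 10     # cools with the light off
            if temp < 40:
                on = True  # driver restarts
        print(step, "ON " if on else "OFF", round(temp, 1))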

It's actually a very similar situation to the cycling that gas discharge street lighting used to show, but as is the way of electronics, it happens faster. Aged sodium bulbs would often cause the ballast to hit its current limit over the span of perhaps five minutes, cycling the light on and off. Now it often happens twice in a second.

I once saw a parking lot where nearly every light had failed this way. I would guess that lightning had struck, creating a transient that damaged all of them at once. It felt like a silent rave; only a little color could have made it better. Unfortunately they were RAB, not Acuity, and the phosphor was holding on.

--------------------------------------------------------------------------------