_____                   _                  _____            _____       _ 
  |     |___ _____ ___ _ _| |_ ___ ___ ___   |  _  |___ ___   | __  |___ _| |
  |   --| . |     | . | | |  _| -_|  _|_ -|  |     |  _| -_|  | __ -| .'| . |
  |_____|___|_|_|_|  _|___|_| |___|_| |___|  |__|__|_| |___|  |_____|__,|___|
  a newsletter by |_| j. b. crawford               home archive subscribe rss

>>> 2024-05-06 matrix

For those of you who are members of the Matrix project, I wanted to let you know that I am running for the Governing Board, and a bit about why. For those of you who are not, I hope you will forgive the intrusion. Maybe you'll find my opinions on the topic interesting anyway.

I am coming off of a period of intense involvement in an ill-fated government commission, and I wanted to find another way to meaningfully contribute to the governance of something I care about. Auspiciously, the newly constituted Matrix Foundation is forming a governing board. I am up for one of the individual member seats.

Why do I care?

Instant messaging is a fascinating case study in the history of technology. It is nearly as old as networked computing, and you could make a decent argument that it is older, running only into dithering around the definitions. We've always wanted to communicate, and text has always been an obvious option. It is probably because of the obviousness of instant messaging that it has repeatedly been coopted by commercial interests.

You don't have to be very old to have lived through several iterations of this process. I'm not quite the right person to remember ICQ fondly; for me it was AIM. But what I remember most fondly is more obscure: XFire. It had an in-game overlay and killcounter integration, both critical features for my computer habits that consisted heavily of Jedi Knight: Jedi Academy. That isn't actually important, I'm just reminiscing, but I think most people have a story like this.

If you have read much of my back catalog you know that I am not always optimistic about federated systems. They face a lot of challenges, which range from the technical complexity of changing federated protocol specifications to a whole category of opposing forces that can be vaguely chalked up as capitalism. And yet, textual communications bring us what is probably federation's greatest and most enduring success: email. Email is also a cautionary tale in a lot of ways, but it gives us a cause for optimism.

The history of federated messaging is rather more varied. XMPP was, in its heyday, nearly on track to mass adoption. High quality clients emerged, XMPP was adopted by grassroots projects and then, as at least an implementation detail, by Facebook and Google. We all know what happened. I think most people today are too quick to blame XMPP's downfall on inconsistent implementation of protocol extensions (XEPs) rather than complete cooption by two of the era's largest internet companies, but to be clear, inconsistent implementation was indeed a problem.

Matrix and Me

I have used Matrix as my main, day-to-day messaging solution since 2016. I have also operated a homeserver with open registration for that entire span. In some ways this has been a rather passive venture, but as the user count of that homeserver has grown I've struggled more with performance and moderation issues. A few months ago things tipped over and I had to spend a weekend doing some serious work on both fronts. This led me to pay a lot more attention to the Matrix project and the state of the art.

I wish that I had been more involved in the Matrix project to date, but I try very hard to avoid software engineering, and Matrix governance and community efforts, the area that matters to me most, have often been hard for me to follow. This situation has improved significantly recently, and I think that the Matrix Foundation deserves enormous credit for the work they have done to pick up the level of community engagement.

Of course I come to the topic with some opinions. Who would expect anything less?

Polish over Features

The Matrix project, especially as personified by Element, has added a huge number of new features. It's hard to call this a bad thing, and some of them have been notable successes. For example, E2E encryption is a challenging feature to deliver, but it has indeed become table stakes for a messaging product that attracts a privacy-minded userbase.

Still, there is one criticism of Matrix that has remained constant over its entire lifespan, and it's one that needs to be attended to: the level of consistency, usability, and polish.

Polish is tricky in a federated system. It's more the domain of clients than the protocol, but the protocol directly affects the situation by determining how easy it is to develop and maintain high-quality clients. For many years it was clear that the change rate in the Matrix protocol made it difficult to develop a good client. Element often felt like the only complete client, and even it was pretty rocky. Fortunately there has been a lot of progress; Element has greatly improved and the stable of third-party clients like my own choice, Nheko, has a lot to offer.

Still, there's a lot of progress to be made. Matrix competes directly with commercial products that come from vendors with a heavy focus on usability and user experience. It only takes one instance of the dreaded "Unable to decrypt" for casual users to bounce. Element continues to be a de facto "primary" implementation that can make the road more difficult for others.

I think that protocol changes should be evaluated conservatively, with an eye towards providing a level of stability that enables multiple top-tier clients. The Matrix Foundation should actively seek ways to support the enhancement and maintenance of clients beyond Element, supporting the healthy ecosystem of independent implementations that are required for an open protocol to be sustainable.

Moderation

Moderation is one of the great struggles of the internet, if not the greatest. Some advocates of federated systems opine that they make moderation easier or more tractable. I disagree; while federation enables more flexibility in how users experience moderation it makes many of the underlying problems more difficult. Moderation decisions across the system are made in an ad-hoc, distributed way. The rich network of homeservers presents many opportunities for bad actors, including every poorly maintained (or unmaintained) node.

Matrix imposes a moderation challenge at two levels: within communities and within homeservers. Relatively good tools exist at the community level, but still, too many basic functions require introducing the Mjolnir moderation bot. At the level of the homeserver, moderation tools are frustratingly limited. The administration API is minimal to the point of being severely limiting, and there do not appear to be any complete client implementations for it.

I applaud the various efforts that have popped up, things like the community moderation initiative's blocklist effort and the "awesome technologies" Synapse administration tool. But we need more, and we need more in two ways.

First, we need technical progress. The in-protocol moderation capabilities of Matrix should be improved over time with a north-star vision of eliminating Mjolnir, an approach to community moderation that was carried over from IRC but probably should have stayed there. The Synapse admin API should be improved and better tooling around it developed.
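As an illustration of how thin the homeserver-level tooling currently is, much of this work today means hand-rolling HTTP calls against Synapse's documented admin API. A minimal sketch of one routine moderation chore, listing the most recently registered accounts (the homeserver URL and token are placeholders; a real call requires a server admin's access token):

```python
# Sketch: query Synapse's admin API (GET /_synapse/admin/v2/users) for
# the newest accounts on a homeserver -- a common abuse-triage task.
# The base URL and token are illustrative placeholders.
import urllib.request

def newest_users_request(base_url: str, access_token: str,
                         limit: int = 10) -> urllib.request.Request:
    """Build an authenticated request for the newest accounts on the server."""
    url = (f"{base_url}/_synapse/admin/v2/users"
           f"?order_by=creation_ts&dir=b&limit={limit}")
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})

# The response (fetched with urllib.request.urlopen) is JSON containing
# a "users" list; pagination is handled via "next_token".
req = newest_users_request("https://matrix.example.org", "ADMIN_TOKEN")
print(req.full_url)
```

That this kind of glue is something every operator writes for themselves is exactly the gap better tooling should fill.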

Second, we need progress in governance. I would like to see an open initiative to develop best practices for moderation of communities and homeservers. This can include the development of shared blocklists through a documented, auditable process (although not necessarily an open one, for reasons of user privacy). I would like to see a sincere effort to advance the state of the art in distributed moderation, bringing together diverse users to learn their concerns and developing tools to make consistent and active moderation the default.
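The Matrix spec already provides a building block for shared blocklists: moderation policy lists, rooms whose state events describe entities to act on. A sketch of one such rule (the server glob and reason are illustrative placeholders; the event type, the entity/recommendation fields, and the m.ban recommendation come from the spec's moderation policy module):

```python
# Sketch of a single rule in a Matrix moderation policy list room.
# Subscribing tools (e.g. Mjolnir today) read these state events and
# apply the recommendation; a shared blocklist is just such a room
# that many homeservers subscribe to.
ban_rule = {
    "type": "m.policy.rule.server",           # also m.policy.rule.user / .room
    "state_key": "rule:*.badserver.example",  # unique key per rule
    "content": {
        "entity": "*.badserver.example",      # glob matching banned servers
        "recommendation": "m.ban",
        "reason": "coordinated spam",         # illustrative placeholder
    },
}
```

The auditable process I describe above would govern who can add rules like this to a shared list, and on what evidence.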

The number of independently operated homeservers in Matrix can be a strength, but in this area it can be a weakness. ActivityPub, with its heavier orientation towards public discussion, has served as a laboratory for abuse and moderation issues. Matrix could learn a lot from the efforts going on in the Mastodon community, for example, towards practical means of moderating across instances.

For homeserver operators, moderation is an immense practical concern due to risks from load and CSAM. The volume of CSAM traffic on Matrix, while not beyond solving, seems badly under-discussed and particularly calls for some sort of distributed moderation program to relieve public homeserver operators of ongoing whac-a-mole. Sometimes a graph is only as strong as its weakest node---this is the kind of hard problem we have to take on to build a sustainable future for federated systems, and we should take it on enthusiastically.

I would like to see the Matrix project boldly take on moderation at multiple levels. First, improving the moderation tools and capabilities of the Matrix protocol should always be part of the discussion. Second, I would like to see the Matrix Foundation support the development of improved moderation and abuse tools, preferably including them as part of Synapse or providing a very easy setup process so that good abuse management can be the norm rather than the exception. Third, I would like the Matrix Foundation to facilitate community discussion around best practices, tools, and techniques for moderation.

Not everyone will agree on the way to perform moderation, or even the goals of moderation. That's the nature of the internet, and more broadly of communications. We can't let it stop us from trying. This can be one of the hardest areas to build consensus, but that will always be the case, and so we need to include the inherent social complexity of moderation as part of the technical requirements. Once again: we need to be bold and take on the hard problems, and this might be the hardest.

Chat, First and Mostly

One of the concerning trends I have seen in a lot of adjacent nonprofit tech projects lately is dilution of mission. We could also call this "distractions." Unfortunately, Matrix has not been immune. The most obvious example is Third Room, the Matrix metaverse project. I want to temper my criticism by saying that the level of effort devoted to Third Room has evidently been low, but I think that the optical problem created by Third Room (the appearance that Matrix has been captured, one might even say Zuck'd, into a distracting focus on the latest trend) is certainly real. For a community venture, appearances are important, and this means applying discipline in how side projects are presented, especially in this era of so many projects presaging their downfall with some buzzword-reaction initiative.

I might go just a bit further. I don't think that the VoIP features of Matrix (voice and video communications) are a bad idea per se, but I think that that's a complex problem space and the current landscape of instant messaging products suggests that it's not a particularly important one. In other words, people seem happy to do their voice/video chat in a different product than their text chat. You could say that this presents an opportunity for Matrix: to double down on providing a best-in-class textual messaging experience, without having to expend significant resources on real-time media.

I wouldn't want to see existing features removed, but I think that features other than core instant messaging should be deprioritized, at least in the short term.

Onboarding

Sometimes caring a lot about onboarding can be kind of gross. It has the scent of focusing on conversions. But it's a really important issue for IM, when the onboarding experience of a lot of the other options is "you already have it." The Matrix Foundation is well-positioned to demonstrate leadership in the onboarding experience, across the protocol, clients, and public communications. Let's make Matrix easy to get into.

A Consistent Direction

I don't want to dwell too long on how many times a certain prominent Matrix client has been renamed, launched new App Store listings, etc. It's old news and fortunately things seem to have settled down. Still, I think a lot of reputational damage happened that has not fully been forgotten. This history serves as a reminder that significant user-facing changes need to be made carefully. New social applications in general, and especially federated ones, have a bad reputation for churn. The most successful are often the most boring. Let's think carefully about things, and look before we leap.

What Do You Think?

I have a lot of opinions and of course all of them are correct, but usually only in my eccentric construction of reality. Your experience may vary. Please feel free to reach out with your thoughts on the Matrix project, an offer that stands whether I'm elected or not, because I love to talk about it.

And that concludes my stump speech. I'll be back again soon with a normal post about some useless trivia. I think it might be about a specific kind of printer that you've probably seen but not thought much about, other than slight irritation. I'm also spending some time right now playing video games^w^w^w working on a more ambitious writing project that is out of my normal lane but you might still enjoy. It's about dogs. It's also very sad and I'm not entirely sure what to think about it. You'll see what I mean if I ever finish.


>>> 2024-04-26 microsoft at work

I haven't written anything for a bit. I'm not apologizing, because y'all don't pay me enough to apologize, but I do feel a little bad. Part of it is just that I've been busy, with work and travel and events. Part of it is that I've embarked on a couple of writing projects only to have them just Not Work Out. It happens sometimes: I'll notice something interesting, spend an evening or two digging into it, but find that I just can't make a story out of it. There isn't enough information; it's not really that interesting; the original source turned out to just be wrong. Well, this one is a bit of all three. Join me, if you will, on a journey to nowhere in particular.

One of the things I am interested in is embedded real-time operating systems. Another thing I am interested in is Unified Communications. Yet another is failed Microsoft research projects. So if you've ever heard of Microsoft At Work, you probably won't be surprised that it has repeatedly caught my eye. Most likely, you haven't heard of it. Few have; even the normal sources of information on these kinds of things appear to be inaccurate or at least confused about the details.

Microsoft went to work in the summer of 1993, or at least that's when they announced Microsoft At Work. This kind of terrible product naming was rampant in the mid-'90s, perhaps more from Microsoft than usual. MAW, as I and a few others call it, was marketed with a healthy dose of software sales obfuscation. What was it, exactly? An Architecture, Microsoft said. It would enable all kinds of new applications. With MAW, one would be able to seamlessly access the wealth of information on their personal computers. Some reporters called it an Environment. Try this for a lede: "Microsoft Corp. unveils integrated computer program."

The announcement included a demo that got a lot more to the point: a fax machine that ran Windows.

Even this was strangely obfuscated: enough newspaper reports described it as a "fax like product" that I think this verbiage was sincerely used in the announcement. Today, we would refer to MAW as an effort towards "smart" office machines, but in 1993 we hadn't quite learned that vocabulary yet. Microsoft must have been worried that it would be dismissed as "just a fax machine." It couldn't be that, it had to be something more. It had to be a "fax like product," built with "Windows architecture."

I am being a bit dismissive for effect. MAW was more ambitious than just installing Windows on a grape. The effort included a unified communications protocol for the control of office machines, including printers, for which a whole Microsoft stack was envisioned. This built on top of the Windows Printing System, a difficult-to-search-for project that apparently predated MAW by a short time, enough so that Windows Printing System products were actually on the market when MAW was announced---MAW products were, we will learn, very much not.

Windows Printing System modules were sold for at least the HP LaserJet II and III. If you did not experience them, these printers placed their actual rasterization logic onto a modular card that could be swapped out, usually to switch between PCL and PostScript "personalities." The PostScript module was offered mostly for MacOS compatibility, Apple having selected PostScript as a common printer control language. The Windows Printing System module took this operating system specialization a step further, using Windows' simple GDI graphics protocol to draw output to the printer.

I am actually a little unclear on whether or not the Windows Printing System led directly to the cheap "WinPrinters" that are also associated with the idea of GDI-based printing. "WinPrinters," so-called by analogy to WinModems, are entirely dependent on the host computer to perform rasterization. While extremely irritating from the perspective of software support, this was an important cost-savings measure in consumer printers. Executing a capable printer control language was rather demanding; the Apple LaserWriter famously had a faster processor than the Macintosh computers it was a peripheral to. Printers with independent rasterization, particularly the more complex PostScript, came at a substantial price premium to those that required the host to perform rasterization.

While some details of reporting on the Windows Printing System make me worry that it was in fact rasterizing on device (like the curiously specific limit of "up to 79" TrueType fonts), I'm fairly sure it was indeed a precursor to the later inexpensive designs. Rather than a cost-savings measure, though, Microsoft seems to have marketed it as a premium feature. Because of the Windows Printing System's higher level of integration with the operating system, it brought numerous new features, many of which we take for granted today. TrueType font support at all, for example, a cutting-edge feature in '93. Duplex control from the print dialog rather than the printer's own display, and for that matter, the ability to see printer status messages (like "PC LOAD LETTER") on the computer you just printed from.

And at the end of the day, offloading rasterization from the printer had an advantage: the Windows Printing System was faster than PCL or PostScript.

Even if it did become the dominant printing method years later, the Windows Printing System of the MAW era doesn't seem to have fared very well. Because it took the position of an add-on cartridge (like a font cartridge), it would have been an added-cost option for printer buyers---an added cost of $132.99, according to a period advertisement. The dearth of available documentation or even post-launch advertising for the Windows Printing System cartridge suggests disappointing sales numbers.

The fortunes of the Windows Printing System would turn a year later, though, as Lexmark introduced their WinWriter series: "With the Microsoft Windows Printing System Built In!" Speaking of the Lexmark WinWriter series, this whole printing thing is kind of a tangent. What about MAW? The Windows Printing System, it seems, was not really a part of MAW. It was just generally related and available when MAW was announced, so it was rolled into the press conference. It is a bit ironic that the Lexmark WinWriter, truly the Printer for Windows, was not a MAW device despite shipping well after MAW was announced.

So, back to the course: MAW was not just Windows on a fax machine, not just the Windows Printing System, but an integrated system of Windows on a fax machine, the Windows Printing System, a generalized network protocol, and apparently a page description language. This was all, as you can see, rather document-focused. MAW would allow Windows users to easily, seamlessly interact with these common office machines, sending and receiving documents like it was 1999.

And later, it would do more: Microsoft was clear from the beginning that MAW had a higher vision, one that is remarkably similar to the later concept of Unified Communications. Microsoft envisioned Windows on a phone, bringing desk phones into the same architecture, or environment, or whatever. Remember the phone part, it comes back.

In practice, MAW would do nothing. It was a complete and total failure. It took two years for the first MAW office machine to reach the market, a Ricoh fax machine. Fortunately, a television commercial has been preserved, giving us a small window into the Windows on a Fax Machine experience. "Microsoft's At Work Still Loafing on the Job," is how the Washington Post put it in 1995.

They call it "the first real step toward the paperless digital office," a nod towards the promise of Microsoft's document-messaging vision, before noting that virtually no products had shipped, everything was behind schedule, and Microsoft had reorganized the At Work team out of existence. Microsoft At Work was seldom spoken of again. Few products ever launched, those that did sold poorly (the Windows licensing fee imposed on them being one of several factors contributing to noncompetitive price tags), and by the time Windows gained proper USB support few would remember it had ever happened.

In other words, a classic Microsoft story.

But I'm not here to chronicle Microsoft's foibles, there are other writers for that. I'm here to chronicle their weird operating system projects. And that's what got me reading into MAW: the promise of not just one, but two weird operating system projects.

Regard that promise with suspicion.

Wikipedia tells us that MAW included "Microsoft At Work Operating System, a small RTOS to be embedded in devices." That's very interesting. I love a small RTOS to be embedded in devices! Tell us more.

Researching this MAW embedded operating system turns out to be a challenge. You see, it is not the better known of the operating systems produced by the MAW initiative. That would be WinPad, curiously not mentioned at all in the MAW Wikipedia article, but instead in the Windows CE article, as a precursor to CE. Windows CE gets a lot more affection than MAW, and so we know quite a bit more about WinPad. It was an early attempt at an operating system for a touchscreen mobile device, one that, in classic Microsoft fashion, competed internally with another project to build an operating system for a touchscreen mobile device (called Pegasus) and died out along with the rest of MAW.

It was based on 16-bit Windows 3.1, using a stripped-down UI layer that resembled Windows 95. Probably not coincidentally, there seems to have been an effort to port WinPad onto Windows 95, and fortunately developer releases of WinPad have been preserved. With some effort, you can get them running on top of appropriate Windows versions in an emulator.

WinPad was envisioned as a core part of MAW, the key enabler of that paperless office. With MAW and WinPad, you could synchronize documents, emails, and faxes, everything you could ever want in 1995, onto your handheld device and then carry it with you. WinPad also didn't work. Evidently the performance was lousy and it required entirely unrealistic battery capacities. Not a surprising outcome when one ports a mid-'90s desktop operating system to a tablet. How charming! But not exactly my target. What about this RTOS?

If you dig into these things for too long, you start to question your life, or at least reality. References to this MAW embedded operating system are so sparse that I quickly started to wonder if it existed at all, or if it was simply confused with WinPad. This MAW OS would run directly on the office machines. Is it possible that it was, in fact, WinPad that ran on a fax machine? Or at least that whatever ran on the fax machine was a direct precursor to WinPad, an earlier new UI layer on top of 16-bit Windows?

The nagging thing that kept me on the hunt for this MAW embedded OS was, oddly enough, the Sega Saturn. A series of newspaper archives, many gathered by Mega Drive Shock, tell an interesting story. Microsoft, it seemed, had been contracted to provide the operating system for the Sega Saturn. Well, this seems to have been a misconception, although clearly a period one. As the news cycle carried on, the scope of this Microsoft-Sega partnership (at first denied by Microsoft!) was reduced to Microsoft providing some sort of firmware related to the Saturn's CD drive.

There is, though, a tantalizing detail. The Electronics Times reported that "Microsoft looks set to port its Microsoft At Work operating system to Hitachi's new SH series of microprocessors." The article explicitly linked the porting to the Saturn effort, but also mentioned that the MAW operating system was being ported to Motorola 68000.

Do you know what never ran on the Hitachi SH or Super-H architecture? 16-bit Windows.

Do you know what did? Windows CE.

Is it possible? Do you think? Is Windows CE a derivative of Windows for Fax Machines?

I'm pretty sure the answer is no. A reader pointed me at John Murray's 1998 book Inside Windows CE, which provides a brief and presumably authoritative history of the platform. It specifically discusses Windows CE as a follow-on project to the failed WinPad, which it describes as 16-bit Windows 3.1, and goes on to say it "was designed for office equipment such as copiers and fax machines."

It is, of course, possible that the book is incorrect. But given the dearth of references to this MAW embedded RTOS, I think this is the more likely scenario:

MAW devices like the Ricoh IFS77 ran 16-bit Windows 3.1 with a new GUI intended to appear more modern while reducing resource requirements. Some reporters at the time noted that Microsoft was cagey about the supported architectures; I suspect they were waiting on ports to be completed. The fax machine was probably x86, though, as there's little evidence MAW actually ran on anything else.

This operating system was extended for the WinPad project, and efforts were made to port it to architectures more common in the embedded devices of the time like SH and 68000. Microsoft may have reached some level of completion on that project and sold it to Sega for the Saturn's complicated storage controller, but it's also possible that the connection between the Saturn and MAW is mistaken and the software Microsoft delivered to Sega was a simple, from-scratch effort. The strange arc of media reporting on the Microsoft-Sega relationship offers the tantalizing possibility that Microsoft was intended to deliver a complete OS for the Saturn but had to pare it back as a result of problems with porting WinPad, but it seems more likely it just results from an overeager electronics industry press and the Sega NDA that a Microsoft spokesperson admitted to being subject to.

MAW failed to win the market, and WinPad failed to win a BillG review. The project was canceled. From the ashes of WinPad and the similarly failed Pegasus, some of the same people started work on a brand new project, Pulsar, which would become Windows CE.

MAW didn't survive the '90s.

Well, some things are like that. I still got 240 lines out of it.

Update: Alert reader abrasive (James Wah) writes in that they had previously dumped the CD-ROM firmware from the Saturn and performed some reverse engineering. Several things suggest that it was not developed by Microsoft, including a Hitachi copyright notice. It seems likely, then, that the supposed Microsoft-Sega partnership never produced anything or was never real in the first place.


>>> 2024-04-05 the life of one earth station

Sometimes, when I am feeling down, I read about failed satellite TV (STV) services. Don't we all? As a result, I've periodically come across a company called AlphaStar Television Network. PrimeStar may have had a rough life, but AlphaStar barely had one at all: it launched in 1996 and went bankrupt in 1997. All told, AlphaStar's STV service only operated for 13 months and 6 days.

AlphaStar is sort of an interesting story on its own. Much like the merchant marine, satellites are closely tied to the identity of their home state. Many satellites are government owned and operated, and several prominent satellite communications networks were chartered by governments or intergovernmental organizations. Consider the example of Inmarsat, a pioneer of private satellite communications born of a UN agency, or Telesat, originally a Crown corporation of Canada. As space technology became more proven, private investors started to fund their own satellite projects, but they continued to operate with the imprimatur of their licensing state.

AlphaStar was sort of an oddity in that sense: a subsidiary of a Canadian company set up to offer an STV service in the United States. Understanding this situation seems to require some background in the Canadian STV industry. 1995 saw the announcement of Expressvu, a satellite television service by telecom company BCE and satellite receiver manufacturer Tee-Comm. Canadian satellite operator Cancom would provide the space segment, and Tee-Comm the ground segment.

Expressvu looked to be headed directly for monopoly: despite attempts by a coalition of Montreal company Power and Hughes/DirecTV to launch a competing service, only Expressvu could meet a regulatory requirement that Canadian broadcast services be served by Canadian satellites. Power's efforts to change the rules involved considerable political controversy as politicians up to the prime minister became involved in the back-and-forth between the two hopeful STV operators.

Foreshadowing AlphaStar, both potential Canadian STV operators struggled. Neither Expressvu nor PowerDirecTV would ever begin operations as originally planned. While regulatory uncertainty contributed to schedule delays, and the complexity of still relatively new satellite TV technology drove up costs, one of the biggest problems was a lack of satellite capacity. Most Canadian communications satellites were launched and operated by Telesat, and in the mid '90s Telesat's fleet fit onto a small list. Expressvu had been slated to use a set of transponders on Telesat's Anik E1, but in successive events Anik E1 lost a solar panel and then several of its transponders.

The lack of Canadian satellite capacity created a regulatory conundrum for Canadian STV: Industry Canada was requiring that operators show they had access to satellite capacity in order to obtain an STV license. No capacity was available on Canadian satellites, though. For STV to become available at all in Canada, some compromise needed to be found.

PowerDirecTV and a new satellite venture by Shaw Communications applied for an exception, allowing them to use US satellites until transponders were available on Canadian satellites. Industry Canada was reluctant to approve the arrangement, considering the uncertainty over what satellites could be used and when.

As Expressvu failed to get off the ground, several of the partners in the project backed out, and Tee-Comm decided to set off on their own. Considering the licensing situation in Canada, they devised a clever plan: they would launch an STV service in the United States. Such a service, delivering US-made content to US customers, could clearly be served by US-owned satellites according to Canadian policy. But it would also secure long-term satellite carriage agreements and fund the construction of infrastructure. When Tee-Comm later returned to apply for an STV license in the Canadian market, they would have fully operational infrastructure and an existing customer base. They could make a far stronger argument that they would be a reliable, affordable service that could transition to Canadian satellites when capacity allowed.

So Tee-Comm started AlphaStar.

AlphaStar carried over several signs of their Canadian origin, including the basic broadcast technology. They would broadcast DVB-S, the norm overseas but new to the United States where DirecTV and the Dish Network used their own protocols. With DVB-S and more powerful Ku-band transponders on AT&T's Telstar 402R satellite, AlphaStar customers needed a 30" dish---smaller than the C-band TVRO dishes associated with earlier STV, but still larger than the 24" and smaller dishes used with DirecTV's DSS.

Of course, satellite feeds have to come from somewhere. AlphaStar purchased an existing earth station in the town of Oxford, Connecticut and adapted it for television use, adding TVRO antennas to receive programming alongside the large steerable dishes used to transmit to the satellite. An on-site network control center ensured the quality and reliability of their television service; corporate headquarters were located nearby in Stamford.

They never signed up many customers. There may have been a high point of around 40,000, but that wasn't enough to cover the cost of operations. Tee-Comm had barely received authorization to launch the Canadian version of the service (AlphaStar Canada) when they went belly-up in both countries. AlphaStar in the US managed over a year, but AlphaStar Canada only made it a few months. In the meantime, the old Expressvu project, minus Tee-Comm, had finally lurched to life. Expressvu went live in 1997, and the AlphaStar story was forgotten.

During the bankruptcy proceedings in the US and Canada, the courts solicited bids to take over AlphaStar's assets. These included, according to a document prepared by AlphaStar, their Oxford earth station which had been built for the Strategic Defense Initiative and hardened to withstand nuclear attack.

See, this is where I really got interested. An SDI satellite earth station in Oxford? What part of SDI was it built for? I started hunting for the location of this earth station. Not far from Oxford I found an obvious candidate, an isolated facility with a half dozen large, steerable antennas. But no, it was built by Inmarsat and is operated today by Comsat (also originally government-chartered).

Finally, digging through FCC rulings, I found an address: 66 Hawley Road. There was nothing to see there, though, just a tilt-up warehouse for a bearing company that showed no signs of satellite communications heritage. It's funny, Google Maps itself intermittently shows images from before or after the bearing company moved in, but I never noticed that. It took Department of Agriculture aerials from the '90s for me to realize the address was correct; the earth station was demolished just a few years ago.

There are few photos of the building. The best I've seen, from a marketing presentation by one of AlphaStar's successors, is only a partial view. The building doesn't look to be nuclear-hardened: it has a glass-walled lobby, and no sign of blast deflectors on its ventilation openings. It did seem to have been renovated, though. Perhaps they tore out its original hardened features?

Historic aerial imagery tells a story. The facility was first built sometime in the 1980s, and in the early '90s featured two large, likely steerable antennas. They were in the open, not enclosed by radomes, an observation that points away from a military application. It is a fairly simple matter to estimate the altitude and azimuth of a satellite antenna from aerial photographs, so antennas used for military and intelligence purposes are almost always kept under inflatable cover.

In the mid-'90s, around when AlphaStar moved in, small antennas proliferated on the site, peaking at probably a dozen. By the turn of the millennium the antennas receded, dwindling in number as the largest were demolished.

AlphaStar's remains were purchased out of bankruptcy by Egyptian telecom entrepreneur Mahmoud Wahba, who operated them as Champion Telecom Platform. Champion was a general-purpose satellite communications company, but took advantage of the network control center and television equipment at the Oxford facility to focus on television distribution. Making the record a bit confusing, Champion advertised many of its services under the AlphaStar name. They seem to have been reasonably successful, but never attracted much press.

Still, there were interesting aspects to the business. They offered a service where Champion used their small network of earth stations to receive international channels, streaming them over IP to cable television operators who could beef up their lineup without the cost of added headend receivers. At one point, it seems, they even provided infrastructure for a nascent direct-to-consumer IPTV service. They offered the Oxford network control center as an amenity to their earth station customers, and had relationships with a few national television networks, likely as a backup site.

Champion had a better run than AlphaStar but still faded away. Their "remote cable headend" service was innovative in the worst way; in the 2000s the model was widely adopted by the increasingly monopolized cable industry. "Virtual headends" became the norm, with each cable network operating central receivers and network control in-house. IPTV was quite simply a commercial failure, but perhaps we can give them the credit of saying that they were ahead of their time. Earth stations became more available and affordable, and the fees Champion could extract from television networks must have gotten thinner.

Champion Telecom shut down sometime in the '00s. Through their holding company, JJT&M Inc., Champion and Wahba held onto the building and leased it to a tenant, SteelVault Data Centers. For several years, SteelVault operated the building as a colocation center. In their marketing materials, they said "The data center building was originally built for [the] CIA in the early 1980's" [1].

Oh? Now the CIA is involved.

At one point, I felt the trail had gone cold on the history of the Oxford earth station. It clearly predated AlphaStar, and it seemed likely that it was built sometime in the early '80s as several sources claimed. But by whom, and for what? Newspaper archives turned up very little. Ironically, any search with the word "satellite" in the 1980s turns up an unlimited number of articles on the Strategic Defense Initiative, but none have any relation to Oxford.

I put down the case for a month or more. I must have looked into property records, but to be honest, I think I was thrown off the case by Connecticut's curious convention of putting tax assessors and clerks in city government rather than the county. Oxford is in New Haven County, but the New Haven assessor works for the city by that name. Of course they have nothing on parcels in Oxford.

It pays to return with fresh eyes, and today I found what should have been obvious: the Oxford assessor has record of the parcel. The Oxford clerk, in a feat rare in my part of the country, has digitized their books. I didn't even have to brave a phone call, just a frustrating web application. It was a simple trail to follow from the current deed to the survey that first described the parcel---in 1982.

In the era of SteelVault, 66 Hawley takes a strange turn. Like most "secure data centers"---a sector of the market that often claims to have renovated a government bunker---SteelVault did not flourish. In 2013, SteelVault was bankrupt and left the building. Of course, that doesn't stop numerous data center directories from repeating their CIA claims today.

JJT&M, too, was bankrupt, and the building at least seemed to be tied up in the matter. There was a lien, then a foreclosure, then a tax auction over unpaid property taxes of more than one million dollars.

Then, there was a twist: the Oxford tax collector went to prison. She had been pocketing property tax payments. JJT&M sued the Town of Oxford, alleging the unpaid taxes had, in fact, been paid to begin with. They also sued the town marshal, who conducted the auction, alleging that he failed to tell the bidders that JJT&M might still hold title.

None of these attempts were successful: there were various technical problems with JJT&M's claims, but the larger finding was that JJT&M had been given ample notice of the unpaid taxes, the foreclosure, and the tax auction, but had failed to object until after the whole thing was done. Wahba had a number of business ventures in the television industry and elsewhere, and he must have been an absentee owner. A good reminder for us all to check the mail every once in a while.

The auction purchaser transferred the building to a holding LLC, probably as an investment, and then a few years later sold it to the Roller Bearing Company of America. They tore it down and built a new warehouse, and that's the end of the story.

But what about the beginning?

Several of the deeds on the property, which is variously listed with an address on Hawley or on the adjacent Willenbrock Road, include the same metes-and-bounds description. It ends: "Being the premises shown and described on a certain map entitled 'Survey & Topographical Map Prepared for G.T.E. Satellite Corp, Oxford.'"

In 1981, the Southern Pacific Railroad, owner of Sprint, launched a satellite communications business under the name Southern Pacific Communications Corporation (SPCC). In 1983, GTE acquired both Sprint and SPCC, rebranding SPCC as GTE Satellite and then shortly after as GTE Spacenet. In 1994, GTE sold Spacenet to GE, where it became GE Capital Spacenet Services, who sold the Oxford earth station to AlphaStar in 1995.

Before AlphaStar, it was a commercial earth station for satellite data network Spacenet, who had built the property to begin with. So what about the SDI? The CIA? AlphaStar had, I think, stretched the truth.

Spacenet was a major satellite data operator in the '90s. They had many commercial customers, but also government customers, and so it is not inconceivable that they held defense contracts. GTE Government Systems had definitely been involved in the SDI, contributing to computer systems and radar technology. But GTE was a huge company with many divisions, and the jump from its Government Systems arm to Spacenet being built for the SDI is not one that I can find any backing for. Besides, it doesn't make much sense: SDI was, itself, a satellite program. Why would they use a commercial teleport built for civilian communications satellites?

And what of the CIA? As soon as those three letters are invoked, any claim takes on the odor of urban legend. The CIA has been accused of a great many things, and certainly has done some of them, but I can find nothing to substantiate any connection to Oxford.

It seems more likely that the Oxford earth station fits into the history of satellite communications in the obvious way. GTE Satellite was rapidly growing. From its beginning as SPCC, it had ordered the construction of two satellites that would launch in 1984. In 1982, they were making preparations, purchasing property in Oxford, CT and completing a survey and zoning approvals. Over the following year the Oxford earth station was constructed, and when Spacenet 1 reached orbit in May 1984 it was ready for service. Oxford was just one of a half dozen earth stations built between 1982 and 1984 by GTE.

But there's a little more: the Oxford earth station has always had an affinity for television. Paul Allen's Skypix, a spectacularly failed satellite pay-per-view movie service, used GTE's Oxford earth station to uplink its 80 channels of video feeds in the early '90s. Perhaps this was the origin of the site's television equipment, or perhaps there had been a TV venture with GTE even earlier.

What we know for sure is that the Oxford earth station didn't make the cut when GE acquired Spacenet. They sold the earth station shortly after the acquisition. A few years later, in the words of a bankrupt company looking to sell its assets, GTE became the SDI. In the eyes of a failing data center, it became the CIA. And now those claims are rattling around in Wikipedia.

[1] The original just says "built for CIA," which has charming echoes of Arrested Development's "going to Army."


>>> 2024-03-27 telephone cables

two phone cables, terminated opposite ways

So let's say you're working on a household project and need around a dozen telephone cables---the ordinary kind that you would use between your telephone and the wall. It is, of course, more cost effective to buy bulk cable, or simply a long cable, and cut it to length and attach jacks yourself. This is even mercifully easy for telephone cable, as the wires come out of the flat cable jacket in the same order they go into the modular connector. No fiddly straightening and rearranging, you can just cut off the jacket and shove it into the jack.

But, wait, what's up with that whole thing anyway? and are telephone cables really as simple as stripping the jacket and shoving them in?

There's a lot of weirdness about modular cables. I use modular cable to refer to a cable assembly that is terminated in modular connectors, a standard type of multipin connector developed by the Bell System in the 1960s and now widely used for telephones, Ethernet, and occasionally other applications. These types of connectors are often referred to as RJ connectors, although that's a bit problematic for the pedantic. The modular connector itself is more properly designated in terms of its positions and contacts. Telephone connections predominantly use a 6P4C modular connector: the connector has six positions, but only four are populated with actual contacts. Ethernet uses an 8P8C modular connector, a bit larger with eight positions, all of which are used. The handset of a telephone typically connects to the base with a 4P4C connector: smaller than the 6P4C, but still with four contacts.
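The positions/contacts scheme is mechanical enough to sketch in code. A minimal illustration of the naming convention described above; the parser is my own invention, not part of any standard or library:

```python
# Parse "positions/contacts" modular connector designations like 6P4C.
# The designations themselves are real; this helper is just illustrative.
import re

def parse_designation(name: str) -> tuple[int, int]:
    """Split a designation like '6P4C' into (positions, contacts)."""
    m = re.fullmatch(r"(\d+)P(\d+)C", name)
    if not m:
        raise ValueError(f"not a modular connector designation: {name}")
    positions, contacts = int(m.group(1)), int(m.group(2))
    if contacts > positions:
        raise ValueError("cannot have more contacts than positions")
    return positions, contacts

# The common applications mentioned above:
for name, use in [("6P4C", "telephone"), ("8P8C", "Ethernet"),
                  ("4P4C", "handset cord")]:
    p, c = parse_designation(name)
    print(f"{name}: {p} positions, {c} contacts -- {use}")
```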

Why? And what do the RJ designations actually have to do with it?

Well, historically, telephones would be hardwired to the wall by the telephone installer. This proved inconvenient, and so the connection between the telephone and wall started to be connectorized. Telephones of the early 20th century were unlike the ones we use today, though, and were not fully self contained. A "desk set," the part of the telephone that sat on your desk, would be connected to an electrical box, usually mounted on the wall. The box was often called the ringer box, because it contained the ringer, but in many cases it also contained the hybrid transformer that achieved the telephone's key feat of magic: the combination of bidirectional signals onto one wire pair.

The hybrid transformer performed the conversion between a two-wire (one pair) signal and a four-wire (two-pair) signal with 'talk' and 'listen' on separate circuits. Since the hybrid was in the box on the wall, the telephone needed to be connected to the box by four wires. Thus the first standard telephone connector, a chunky block with protruding pins, had four contacts. These connectors were in use even after the end of separate ringer boxes, making two of the four wires vestigial. They were still in use into the 1960s, and so you might still find them in older houses.

As you will gather from the fact that the hybrid may have been in the phone or in a box on the wall, and thus the telephone connection to the wall may require four or two wires, the interface between telephone and wall was poorly standardized. This wasn't much of a problem in practice: at the time, you did not own a telephone, you rented it. When you rented a phone, an installer would be sent to your house, and if any wiring was already present they would check it and adjust the connections as required. Depending on the specific type of service you had, the type of phone you had, and when it was all installed, there were a number of ways things might actually be connected.

By the 1950s, as the Model 500 telephone became the norm, a separate hybrid became very unusual: the Model 500 had a hybrid built into its base and only needed the two wires, which could be connected directly to the exchange without an intermediary box. So what of the other two wires? Just about anyone will tell you that the other two wires are present to allow for a second telephone line. This isn't wrong in the modern context, but it is ahistorical to the origin of the wiring convention. The four wires originated with the use of an external hybrid, and when they became vestigial, other uses were sometimes found for them.

For example, the "Princess" phone, a rather slick phone introduced as more of a consumer-friendly product in 1959, had a cool new feature: a lighted dial. The Princess phone was advertised specifically for home use, and particularly as a bedside telephone, so the lighted dial was a convenient feature if you wanted to make a telephone call at night. I realize that might sound a bit strange to the modern reader, but a lot of people used to put a phone extension on their nightstand. If you wanted to place a call after you had turned out the lights, wouldn't it be nice to not have to get up and turn them back on just to see the dial? Anyway, the whole concept of the Princess phone was this kind of dialing-in-bed luxury, and the glowing dial was a nice touch.

There's a problem, though: how to power the dial light? It could potentially be powered by the loop current, but the loop current is very small, likely to be split across multiple extensions, and the exchange would not appreciate the increased load of a lot of tiny dial lights. Instead, Princess phones were installed with a transformer that produced 6VAC from wall power for the dial light. That power was delivered to the phone using the two unused wires in its wall connection. This sounds rather slick in the era of DECT phones that require a separate power cable to the wall, and was one of the upsides of the complete integration of the telephone system. One of the downsides was, of course, that you were paying a monthly rental rate for all of this convenience.

In the late 1960s, the nature of telephone ownership radically changed. A series of judicial and regulatory decisions, culminating in the Carterfone decision, unleashed the telephone itself from the phone company. In the 1970s, consumers gained the ability to purchase their own phone and connect it to the telephone network without a rental fee. Increasingly, they chose to do so. Suddenly, the loose standardization of the telephone-to-wall interface became a very real problem, and one that impeded the ability of consumers to choose their own telephone.

The solution was the Registered Jack, originally a set of standardized wiring configurations developed within the Bell System and later a matter of federal regulation. Wiring installed by telephone companies was required to provide a standard Registered Jack so that consumers could easily connect their own device. It is important to understand that the Registered Jack standards are really about wiring, not connectors. They describe the way that connectors should be wired to meet specific standard applications.

The most straightforward is number 11, RJ11, which specifies a 6P2C connector with a single telephone pair. But what of the 6P4C connector we use today? Well, that's RJ14, a 6P4C with two telephone lines. The problem is that neither consumers nor the telephone cable industry have much of any appetite for understanding these distinctions, and so today the RJ standards have become misunderstood to such a degree that they are only poor synonyms for the modular jack configuration.

Cables with 6P4C connectors are routinely advertised as RJ11 or RJ14, sometimes RJ11/RJ14. Most of the time, RJ11 is manifestly incorrect: the cables do, in fact, contain four wires and thus provide 6P4C connectors. Actual 6P2C telephone cables are uncommon, as they don't really cost any less than 6P4C (manufacturing cost by far dominates the small-gauge copper) and consumers tend to expect any telephone cable to work with a two-line phone. Even RJ14 is incorrect, though, as there really is no such thing as an RJ14 cable. It's in the name, Registered Jack: RJ14 describes the jack you plug the cable into, the electrical interface presented on the wall. Any 6P4C cable could be used with any RJ that specifies a 6P4C connector. Incidentally, this is only academic, as RJ14 is the only 6P4C jack. This is, of course, much of why the terminology has become confused: most of the time it doesn't matter! If the connector fits, it will work.

This whole thing becomes famously complex with Ethernet. It is common, but entirely incorrect, to refer to the 8P8C connector used for Ethernet as RJ45. This terminology is purely the result of confusion: a real RJ45 connector is actually keyed differently from (and thus incompatible with) the unkeyed 8P8C connector used for Ethernet. They just look similar, if you don't look too close. A true RJ45 connector provides one telephone line and a resistor with a value that would tell a modem what transmit power it should use. In practice this jack was rarely used, and it is entirely obsolete today.

In fact, Ethernet is wired according to a standard called TIA 568, which famously has two different variants, A and B. A and B are electrically identical and differ only in the mapping of color pairs to pins. The origin of this standard, and its two variants, is arcane and basically a result of awkwardly shoehorning Ethernet into telephone wiring while trying not to interfere with the telephone lines, or the RJ45 resistor if present. The connectors are wired strangely in order to provide crossover of transmit and receive while using the pins not used by the RJ45 standard: ironically, Ethernet is very intentionally incompatible with RJ45. It's sort of the inverse, plus a twist to swap RX and TX.

So you want to know why? Well, on any modular wiring, the center pins (4 and 5 on an 8P connector) are almost guaranteed to carry a telephone line. That's what modular wiring was for! Additionally, the RJ45 standard that closely resembles Ethernet uses pins 7 and 8 for the resistor. For these reasons, Ethernet originally avoided those pins, using only pins 1, 2, 3, and 6. Pins 3 and 6 would likely already be a pair, as they are the conventional position for either a second telephone line or a key system control circuit. That maintains, of course, the symmetry that is standard for telephone wiring. But that leaves pins 1 and 2 to be used for the other pair. And this is where we get the weird, inconsistent wiring pattern: 10/100 Ethernet used pins 1 and 2 for one pair and pins 3 and 6 for the other. When Gigabit Ethernet came around and used four pairs, 4 and 5 were obvious, since they were already going to be a telephone pair, and 7 and 8 were what was left. Ethernet connectors grew like tree rings: the middle is symmetric according to telephone convention, the outside is weird, according to Ethernet convention.
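The "tree ring" growth can be laid out as a little table. The pin assignments below are the ones given in the text; the Python structure around them is, of course, just illustrative:

```python
# Which 8P8C pins each Ethernet generation uses, per the history above.
TELEPHONE_PAIRS = {
    1: (4, 5),   # line 1: the symmetric center pins
    2: (3, 6),   # line 2 (or key system control): the next pins out
}

ETHERNET_PAIRS = {
    "10/100": [(1, 2), (3, 6)],                      # dodges phone line and RJ45 resistor pins
    "1000BASE-T": [(1, 2), (3, 6), (4, 5), (7, 8)],  # gigabit claims everything
}

used_by_fast = {p for pair in ETHERNET_PAIRS["10/100"] for p in pair}
assert 4 not in used_by_fast and 5 not in used_by_fast  # phone line untouched
assert 7 not in used_by_fast and 8 not in used_by_fast  # RJ45 resistor untouched
print(sorted(used_by_fast))  # [1, 2, 3, 6]
```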

And as for why there are two different color conventions... well, the "A" variant was identical to the telephone industry convention for the two center pairs, which was very convenient for any installation that reused or coexisted with telephone wiring. The "B" pattern was actually included only for backwards compatibility with a pre-Ethernet, pre-TIA 568 structured wiring system called SYSTIMAX. SYSTIMAX was widely installed for a variety of applications in early business networking, carrying everything from analog voice to token ring, but particularly emphasized serial terminal connections. Since both telephone wiring and SYSTIMAX wiring were widely installed, using different color conventions for mapping pairs to 8P8C connectors, TIA-568 decided to encompass both.

It is ironic, of course, that SYSTIMAX was originally an AT&T product, and so AT&T created the whole confusion themselves. Today, it is the legalistic view that TIA-568A is "correct" as the standard says it is preferred. TIA-568B, despite being included in the standard for backwards compatibility, is nonetheless extremely common. People will tell you various rules of thumb, like "government uses A and business uses B," or "horizontal wiring uses A and patch cables use B," but really, you just have to check.
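To make the A/B relationship concrete, here is a sketch of the two pinouts. The pin-to-color assignments follow the published TIA-568 conventions; the comparison logic is mine:

```python
# TIA-568A and TIA-568B pin-to-color assignments for an 8P8C connector.
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

# Electrically identical: the same pins are paired either way. The only
# difference is the green and orange pairs trading places, and those land
# on exactly the pins 10/100 Ethernet uses.
diff = {pin for pin in T568A if T568A[pin] != T568B[pin]}
print(sorted(diff))  # [1, 2, 3, 6]
```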

But that's not what I meant to talk about here, and I don't think I even explained it very well. Ethernet is weird, that's the point. It's the odd one out, because it was shoehorned into a wiring convention originally designed for another purpose, and in many cases it had to coexist with that other purpose. It's some real legacy stuff. And also Ethernet was originally used with coaxial cables, yes I know, that's why it only needed one pair to begin with, but then we wanted full duplex.

So that's the great thing about phone cables: they're actually using the cable and modular connector the way they were intended to be used, so they fit right into each other. So quick and easy, and there's nothing to think about.


With Ethernet, there used to be this confusion about whether or not RX and TX were swapped by the cable. Today, because of auto-MDIX (automatic medium-dependent interface crossover), which gigabit Ethernet made effectively universal, we rarely have to worry about this. But with older 10/100 equipment, there was a wiring convention for one end and a wiring convention for the other, and if you tried to connect two things that were wired to be the same end, you had to swap RX and TX in the cable. This was called a crossover cable, and it is directly analogous to the confusingly named "null modem" serial cable.

Telephone cables are... well, if you go shopping for RJ11 or RJ14 telephone cables, you might run into something odd. Some sellers, typically the more knowledgeable ones, may identify their cables as "straight" or "reverse." Even more confusingly, you will often read that "straight" is for data applications (like fax machines!) while "reverse" is for voice applications. If you consider that the majority of fax machines provide a telephone handset and are, in fact, capable of voice, this is particularly confusing.

See, the thing is, a reverse cable has the two ends swapped relative to each other. It's not like Ethernet, the RX and TX pairs aren't swapped, because there are no such pairs. Remember, the two pairs of a 6P4C telephone cable are used as two separate circuits. Instead, the polarity is swapped within each pair.

Telephone cables are wired in such a way that this is easy: in a 6P4C connector, the "first" pair is the middle two pins (3 and 4), while the "second" pair is the next two pins out (2 and 5). That makes them symmetric, so you can swap the polarity of all of the pairs by simply putting one of the modular jacks on the other way around. With Ethernet, not coincidentally, the "inner" two pairs still work this way. It's the outer ones that buck convention.

When the jacks are connected such that the pins are consistent---that is, pin 1 on one connector is connected to pin 1 on the other---we could call that a straight cable. If the ends are mirrored, that is, pin 1 on one end is connected to pin 6 on the other, we could call it a reverse cable.
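That mirror relationship is easy to model. A sketch, assuming a 6-position connector where mirroring maps pin n to pin 7 - n, as described above:

```python
# The telephone pairs on a 6P4C jack, per the symmetric convention above.
PAIRS = {1: (3, 4), 2: (2, 5)}

def far_end(pin: int, reverse: bool) -> int:
    """Which far-end pin a given near-end pin lands on."""
    return 7 - pin if reverse else pin

# On a reverse cable, each pair still maps onto itself...
for line, (a, b) in PAIRS.items():
    assert {far_end(a, True), far_end(b, True)} == {a, b}
# ...but with polarity swapped within the pair:
assert far_end(3, True) == 4 and far_end(4, True) == 3
print("reverse cable swaps polarity within each pair, not the pairs themselves")
```

This is why a reverse telephone cable is harmless to an old analog phone but matters to polarity-sensitive equipment: the two lines stay on their own pairs, only tip and ring trade places.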

With a telephone, we already talked about the hybrid situation: the two directions are not separated on the telephone line. We don't need to swap out RX and TX. So... why? why are there straight and reverse cables? Why do they have different applications?

Telephone lines have a distinct polarity, because of the DC battery voltage. For historic reasons, the two "sides" of a telephone pair are referred to as "tip" and "ring," referring to where they would land on the 1/4" connector that we no longer call a "phone" connector and instead associate mostly with electric guitars and expensive headphones. The ring is the negative side of the battery power, and the tip is the positive side. As standard, these are identified as -48v and 0v, because the exchange equipment is grounded on the positive side. Both sides should be regarded as floating at the subscriber end, though, so the voltages and positive or negative aren't that important. It's just tip and ring.

There is a correct way to connect a phone, but older phones with entirely analog wiring wouldn't notice the difference. When touch-tone phones introduced active electronics, polarity suddenly mattered, but you can imagine how this went over with consumers: some people had telephone jacks wired the wrong way around, and had for years, without any problems. When they upgraded to a touch-tone phone and it didn't work, the phone was clearly at fault, not the wiring. So, quite a few touch-tone phones were made with circuitry to "fix" a reverse-wired telephone connection. Besides, just to keep things complex, there were some types of pre-touch-tone phones that required that tip and ring be correctly preserved to bias the magnetic ringer.

But wait... why, then, would so many sources assert that reverse-wired cables are appropriate for voice use? Well, there is a major problem of internet advice here. Look carefully at the websites that are the top results for the question of straight vs. reverse telephone cables, and you will find that they don't actually agree on what those terms mean. There are, in fact, two ways to look at it: you could say that a straight cable is a cable with the same correspondence of color to pin, or you could say that a straight cable has the two modular connectors installed the same way up.

If you think about it, you will realize that these conflict: if you attach both modular connectors with the latch on the same side of the cable, they will have mirrored pinouts and thus opposite polarity. To have a 1:1 pin correspondence that preserves polarity, you must attach the connectors such that one has the latch up and the other has the latch down. Now, this only makes sense if you lay your cable out perfectly flat, and for a round cable (like the twisted pair cables used for Ethernet) you still wouldn't be able to tell. But telephone cables are flat, and what's more, the manufacturing process leaves a distinct ridge on one side that makes it obvious which way the connector is oriented. Latch on the ridge side, or latch on the smooth side?

There's another way to look at it: put two 6P4C connectors face-to-face, like you are trying to plug the two into each other. You will notice that, if the wiring is pin-to-pin, they don't match each other. Pin 2 on one connector is a different color from the adjacent pin 5 on the other connector. This isn't all that surprising, because we're basically doing the same thing: we're focusing on the physical orientation of the connectors instead of the electrical connection.

Whether "straight" refers to the wiring or the connector orientation varies from author to author. I will confidently assert that the correct definition of "straight" is a cable where a given pin on one end corresponds to the same pin on the other, but there are certainly some that will disagree with me!

Diagrams of two ways of terminating

Here's the thing: as far as I can tell, the entire issue of straight vs. reverse telephone cables comes from this exact confusion. Oddly enough, non-pin-consistent wiring (e.g. with pin 2 on one connector going to pin 5 on the other) seems to have been the historical convention. Many manufactured telephone cables are made this way, even today. I am not sure, but I will speculate it might be an artifact of the manufacturing technique, or at least the desire of those manufacturing telephone cables to have an easy, consistent way to put the connector on. Non-pin-consistent cables are often described as placing the connector latch on the ridge side of the cable at both ends. Which makes sense, in a way!

The thing is, these cables, standard though they apparently are, will reverse the polarity of the telephone line. If you connect two with a mating connector, the second one might reverse it back to the way it was before... but it might not! Mating connectors are made in both straight and reverse variants, although in this case straight seems much more common.

And I believe this is the whole origin of the "data" vs "voice" advice: telephones, the voice application, rarely care about line polarity. Data applications, because of the diversity of the equipment in use, are more likely to care about polarity. Indeed, for true digital applications like T-carrier, the cable must be straight. The whole thing is perhaps more succinctly described as "straight vs. don't care" rather than "straight vs. reverse," because as far as I can tell, there is no true application for what I am calling a reverse cable (one that does not preserve pin consistency). They're just common because of the applications in which polarity need not be maintained.

But I would love to hear if anyone knows otherwise! Truthfully I am very frustrated by this whole thing. The inconsistency of naming conventions, confusion over applications and the history, and argumentative forum threads about this have all deeply unsettled my belief in the consistency of telecommunications wiring.

Also, if you're making telephone cables, just make them straight (pin-consistent). It seems to be the safer way. I've never had it not work!

two phone cables, terminated opposite ways


>>> 2024-03-17 wilhelm haller and photocopier accounting

In the 1450s, German inventor Johannes Gutenberg designed the movable-type printing press, the first practical method of mass-duplicating text. After various other projects, he applied his press to the production of the Bible, yielding over one hundred copies of a text that previously had to be laboriously hand-copied.

His Bible was a tremendous cultural success, triggering revolutions not only in printed matter but also in religion. It was not a financial success: Gutenberg had apparently misspent the funds loaned to him for the project. Gutenberg lost a lawsuit and, as a result of the judgment, lost his workshop. He had made printing vastly cheaper, but it remained costly in volume. Sustaining the revolution of the printing press evidently required careful accounting.

For as long as there have been documents, there has been a need to copy. The printing press revolutionized printed matter, but setting up plates was a labor-intensive process, and a large number of copies needed to be produced at once for the process to be feasible. Into the early 20th century, it was not unusual for smaller-quantity business documents to be hand-copied. It wasn't necessarily for lack of duplicating technology; if anything, there were a surprising number of competing methods of duplication. But all of them had considerable downsides, not least among them the cost of treated paper stock and photographic chemicals.

The mimeograph was the star of the era. Mimeograph printing involved preparing a wax master, which would eventually be done by typewriter but was still a frustrating process when you only possessed a printed original. Photographic methods could be used to reproduce anything you could look at, but required expensive equipment and a relatively high skill level. The millennial office's proliferation of paper would not fully develop until the invention of xerography.

Xerography is not a common term today, first because of the general retreat of the Xerox corporation from the market, and second because it specifically identifies an analog process not used by modern photocopiers. In the 1960s, though, Xerox brought about a revolution in paperwork, mass-producing a reprographic machine that was faster, easier, and considerably less expensive to operate than contemporaries like the Photostat. Photocopiers were now simple and inexpensive enough that they ventured beyond the print shop, taking root in the hallways and supply rooms of offices around the nation.

They were cheap, but they were costly in volume. Cost per page for the photocopiers of the '60s and '70s could reach $0.05, approaching $0.40 in today's currency. The price of photocopies continued to come down, but the ease of photocopiers encouraged quantity. Office workers ran amok, running off 30, 60, even 100 pages of documents to pass around. The operation of photocopiers became a significant item in the budget of American corporations.

The continued proliferation of the photocopier called for careful accounting.


Wilhelm Haller was born in Swabia, in Germany. Details of his life, in the English language and seemingly in German as well, are sparse. His Wikipedia biography has the tone of a hagiography; a banner tells us that its neutrality is disputed.

What I can say for sure is that, in the 1960s, Haller found the start of his career as a sales apprentice for Hengstler. Hengstler, by then nearly a hundred years old, had made watches and other fine machinery before settling into the world of industrial clockwork. Among their products were a refined line of mechanical counters, of the same type we use today: hour meters, pulse counters, and volume meters, all driving a set of small wheels printed with the digits 0 through 9. As each wheel rolled from 9 to 0, a peg pushed a lever to advance the next wheel by one digit. They had numerous applications in commercial equipment and Haller must have become quite familiar with them before he moved to New York City, representing Hengstler products to the American market.

Perhaps he worked in an office where photocopier expenses were a complaint. I wish there was more of a story behind his first great invention, but it is quite overshadowed by his later, more abstract work. No source I can find cares to go deeper than to say that, along with Hengstler employee Paul Buser, he founded an American subsidiary of Hengstler called the Hecon Corporation. I can speculate somewhat confidently that Hecon was short for "Hengstler Counter," as Hecon dealt entirely in counters. More specifically, Hecon introduced a new application of the mechanical counter invented by Haller himself: the photocopier key counter.

Xerox photocopiers already included wiring that distributed a "pulse per page" signal, which drove a counter consulted for scheduled maintenance. The Hecon key counter was a simple elaboration on this idea: a socket and wiring harness, furnished by Hecon, was installed on the photocopier. An "enable" circuit for the photocopier passed through the socket and had to be jumpered for the photocopier to function. The socket also provided a pulse per page wire.

Photocopier users, typically each department, were issued a Hecon mechanical counter that fit into the socket. To make photocopies, you had to insert your key counter into the socket to enable the photocopier. The key counter was not resettable, so the accounting department could periodically collect key counters and read the number displayed on them like a utility meter. Thus the name key counter: it was a key to enable the photocopier, and a counter to measure the keyholder's usage.
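The scheme is simple enough to sketch as a toy model (the names and structure here are my illustration, not Hecon's design): the copier runs only when a counter is seated in the socket, and each page pulse irreversibly increments that counter.

```python
class KeyCounter:
    """Toy model of a key counter: a non-resettable totalizer
    issued to one department, read like a utility meter."""

    def __init__(self, department):
        self.department = department
        self.count = 0  # no reset method: the total only goes up

    def pulse(self):
        self.count += 1


class Copier:
    """Toy model of a copier with a key counter socket."""

    def __init__(self):
        self.socket = None  # empty socket: enable circuit is open

    def insert(self, counter):
        self.socket = counter  # jumpers the enable circuit

    def copy(self, pages):
        if self.socket is None:
            raise RuntimeError("enable circuit open: no key counter inserted")
        for _ in range(pages):
            self.socket.pulse()  # one pulse per page


copier = Copier()
sales = KeyCounter("sales")
copier.insert(sales)
copier.copy(30)
print(sales.department, sales.count)  # sales 30
```

The essential property is that all of the accounting state lives in the removable counter, not the copier, so the accounting department only has to handle the counters.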

Key counters were a massive success and proliferated on office photocopiers during the '70s. Xerox, and then their competitors, bought into the system by providing a convenient mounting point and wiring harness connector for the key counter socket. You could find photocopiers that required a Hecon key counter well into the 1990s. Threads on office machine technician forums about adapting the wiring to modern machines suggest that there were some users into the 2010s.

Hecon would not allow the technology to stagnate. The mechanical key counter was reliable but had to be collected or turned in for the counter to be read. The Hecon KCC, introduced by the mid-1990s, replaced key counters with a microcontroller. Users entered an individual PIN or department number on a keypad mounted to the copier and connected to the key counter socket. The KCC enabled the copier and counted the page pulses, totalizing them into a department account that could be read out later from the keypad or from a computer by serial connection.

Hecon was not only invested in technological change, though. At some point, Hecon became a major component of Hengstler, with more Hengstler management moving to its New Jersey headquarters. "Must have good command of German and English," a 1969 newspaper listing for a secretarial job stated, before advising applicants to call a Mr. Hengstler himself.

By 1976, the "Liberal Benefits" in their job listing had been supplemented by a new feature: "Hecon Corp, the company that pioneered & operates on flexible working hours."

During the late '60s, Wilhelm Haller seems to have returned to Germany and shifted his interests beyond photocopiers to the operations of corporations themselves. Working with German management consultant Christel Kammerer, he designed a system for mechanical recording of employees' working hours.

This was not the invention of the time clock. The history of the time clock is obscure, but time clocks were already in use during the 19th century. Haller's system implemented a more specific model of working hours promoted by Kammerer: flexitime (more common in Germany) or flextime (more common in the US).

Flextime is a simple enough concept and gained considerable popularity in the US during the 1970s and 1980s, making it almost too obvious to "invent" today. A flextime schedule defines "core hours," such as 11a-3p, during which employees are required to be present in the office. Outside of core hours, employees are free to come and go so long as their working hours total eight each day. Haller's time clock invention was, like the key counter, a totalizing counter: one that recorded not when employees arrived and left, but how many hours they were present each day.

It's unclear if Haller still worked for Hengstler, but he must have had some influence there. Hecon was among the first, perhaps the first, companies to introduce flextime in the United States.

Photocopier accounting continued apace. Dallas Semiconductor and Sun Microsystems popularized the iButton during the late 1990s, a compact and robust device that could store data and perform cryptographic operations. Hecon followed in the footsteps of the broader stored value industry, introducing the Hecon Quick Key system that used iButtons for user authentication at the photocopier. Copies could even be "prepaid" onto an iButton, ideal for photocopiers with a regular cast of outside users, like those in courthouses and county clerk's offices.

The Quick Key had a distinctive, angular copier controller apparently called the Base 10. It had the aesthetic vibes of a '90s contemporary art museum, all white and geometric, although surviving examples have yellowed to the pallor of dated office equipment.

As the Xerographic process was under development, British Bible scholar Hugh Schonfield spent the 1950s developing his Commonwealth of World Citizens. Part micronation, part NGO, the Commonwealth had a mission of organizing its members throughout many nations into a world community that would uphold the ideals of equality and peace while carrying out humanitarian programs.

Adopting Esperanto as its language, it renamed itself to the Mondcivitan Republic, publishing a provisional constitution and electing a parliament. The Mondcivitan Republic issued passports; some of its members tried to abandon citizenship of their own countries. It was one of several organizations promoting "world citizenship" in the mid-century.

In 1972, Schonfield published a book, Politics of God, describing the organization's ideals. Those politics were apparently challenging. While the Mondcivitan Republic operated various humanitarian and charitable programs through the '60s and '70s, it failed to adopt a permanent constitution and by the 1980s had effectively dissolved. Sometime around then, Wilhelm Haller joined the movement and established a new manifestation of the Mondcivitan Republic in Germany. Haller applied to cancel his German citizenship; he would be a citizen of the world.

As a management consultant and social organizer, he founded a series of progressive German organizations. Haller's projects reached their apex in 2004, with the formation of the "International Leadership and Business Society," a direct extension of the Mondcivitan project. That same year, Haller passed away, a victim of thyroid cancer.

A German progressive organization, Lebenshaus Schwäbische Alb eV, published a touching obituary of Haller. Hengstler and Hecon are mentioned only as "a Swabian factory"; his work on flextime earns a short paragraph.

In translation:

He was able to celebrate his 69th birthday sitting in a wheelchair with a large group of his family and the circle of friends from the Reconciliation Association and the Life Center. With a weak and barely audible voice, he took part in our discussion about new financing options for the local independent Waldorf school from the purchasing power of the affected parents' homes.

Haller is, to me, a rather curious type of person. He was first an inventor of accounting systems, second a management consultant, and then a social activist motivated by both his Christian religion and belief in precision management. His work with Hengstler/Hecon gave way to support and adoption programs for disadvantaged children, supportive employment programs, and international initiatives born of unique mid-century optimism.

Flextime, he argued, freed workers to live their lives on their own schedules, while his timekeeping systems maintained an eight-hour workday with German precision. The Hecon key counter, a footnote of his career, perhaps did the same on a smaller scale: duplication was freed from the print shop but protected by complete cost recovery. Later in his career, he would set out to unify the world.

But then, it's hard to know what to make of Haller. Almost everything written about him seems to be the work of a true believer in his religious-managerial vision. I came for a small detail of photocopier history, and left with this strange leader of West German industrial thought, a management consultant who promised to "humanize" the workplace through time recording.

For him, a new building in the great "city on a hill" required only two things: careful commercial accounting with the knowledge of our own limited possibilities, and a deep trust in God, who knows how to continue when our own strength has come to an end.

