I haven't talked about my goal for personal computing in a while.

With Sundog nerdsniping me into attempting to turn the LibSSH-ESP32 port into the basis of a full-fledged SSH client for the ESP32, I guess I should spend a few minutes talking about why I bother with this bullshit.

Computers could be good, but they aren't.

That's the gist of it.

I guess I mean Good with a capital G, as in "a force for good in the world", but I also mean good with a lowercase g, as in "not super shitty to use, or think about".

I'm not going to waste a lot of bits talking about how computers are bad. I've done this a lot before, and you probably already agree with me. I'll quickly summarize the high points.

What's wrong with (modern) computing?

- Computers spy on us all the time
- Computers are insecure, while pretending not to be.
- Computers enable new modes of rent seeking, further exacerbated by shitty patents and worse laws
- Computers/the modern internet encourage behaviors which are bad for our mental health as individuals.
- Computers and the modern internet, in concert with modern capitalism have built a world essentially without public spaces.

You know, all that bullshit.

As I said, it's a summation. There's nuance. There are more problems.

That list should serve as an okay shorthand for the kind of thing I'm talking about.

Computers? They're bad.

But I'm here, talking to you, through a computer. I derive my living from computers. I spend most of my free time in front of a computer.

In spite of all the ways computers are lowercase b bad, computers enable a lot of Good.

I believe in the potential of computers, in our digital future.

I've spent a lot of time thinking about what the next 30 years in computing might look like, the successes and failures of the last 30 years, and the inflection point at which a computer is Good Enough for most tasks.

I've spent a lot of time thinking about the concept of planned obsolescence as it applies to computing, and what modern computing might look like without the profit motive fucking everything up.


I'm just a dude.

I'm a sysadmin. I spend a lot of time using computers, and specifically I spend a lot of time fixing machines that are failing in some way.

But I'm just some dude who thinks about stuff and imagines futures which are less horrible than present.

I've said that as a way to say: I don't claim to have The Answer, I just have some ideas. I'm going to talk about those ideas.

Sidebar over.

So how did we get from the gleaming promise of the digital age as imagined in the 70s to the harsh cyberpunk reality of the 20s?

Centralization, rent seeking, planned obsolescence, surveillance, advertising, and copyright.

How do we move forward?

Re-decentralization, a rejection of the profit motive, building for the future/to be repaired, building for privacy, rejecting advertising, and embracing Free software.

Personally, there's another facet to all of this:

I use and maintain computers professionally, and recreationally.

Sometimes, I want to do something that doesn't feel or look like my day job. Unfortunately, most of my hobbies feel and look a lot like my day job.

To that end, I have some weird old computers that I keep around because they're useful and also because they're Vastly Different than the computers I use at work.

My Mac Plus and Palm Pilots fall in this camp.

I can do about 80% of what I want to use a computer for with my 486-based, non-backlit, black-and-white HP OmniBook.

Add my newly refurbished and upgraded Palm LifeDrive, and I'm closer to 95%.

Let's run through those tasks:

- Listen to music (The Palm is a 4GB CF card with a 2GB SD card, basically.)
- Watch movies (I have to encode them specially for the Palm, but the LifeDrive makes a great video iPod.)
- Read books (Plucker is still pretty great)
- RSS (ditto above)


- Email (via some old DOS software, the name of which I'll have to look up, plus a lot of effort getting my mail server configured correctly, and an ESP32-based modem. This took some doing, and I still don't love how I'm doing it. I'll do a blog post about it eventually.)
- Social (mastodon via brutaldon via lynx via telnet over tor to an onion endpoint I run in my home, not ideal, or via BBS)
- Write (via Windows 3.1 notepad)

- Consult reference material (via the internet or Gopher over my ESP32 modem with the appropriate DOS software and an SSL proxy, or, more likely, via a real hacky thing I wrote to mirror a bunch of websites to a local web server.)
- Develop (frankly, via GW-BASIC, although I'd love to start doing real programming again.)
- Games (this is the thing the OmniBook is worst at! I consider that a strength most of the time, but I do have a lot of parser-based IF games on it.)

There was a time in the recent past when I used a Pentium MMX laptop as my only computer outside of work for weeks at a time.

It could do basically everything I wanted it to do, including some far more impressive games.

Its batteries finally gave out, but until then it made a decent little computer.

The only real problems I run into with these setups are the hoops I have to jump through because I'm the only one using them, and because (wireless) networking technology has advanced in ways that aren't backwards compatible at the hardware level, while leaving laptops without a clear upgrade path.


This feels like a kind of rambling sidebar, but there's a point:

Most tasks that computers are used for on a daily basis could be completed on much less powerful hardware if there wasn't a profit incentive in the way.

So, circling back to the original point: I'm imagining a world in which computers are different.

Specifically, different in that they are designed to be cheap, easily repaired or replaced, and to just Do Their Job forever.

(This requires defining the job they are supposed to do.)

No one gets upset that their typewriter can't browse the internet, you know?

But a computer isn't an appliance, it's an everything machine, and as an Everything machine, if it can't do the New Shiny Thing we have to throw it away and get a new one.

That's the mentality I'm trying to break out of.

I want to define a(n extendable!) list of tasks that I feel like will be relevant indefinitely, and build a machine to last 30 years.

@ajroach42 Sorry to butt in here, but this has inspired an Idea™: what if we designed computers to be upgradable without having to /replace/ the old parts?

Like, for example, imagine if processors came on PCIe cards, and when you needed extra compute for the latest game/software/etc., you could just install another CPU card and use the cores of both?

Not only would it avoid wasting the old CPU, but you could also take that card out and loan it to a friend when you're not using it.

@keith some backplane designs kinda sorta work like this, but you're limited by the bandwidth and connectivity of your backplane.

@ajroach42 @keith This is, in part, the premise behind RapidIO. As long as you have a switch fabric that can handle the traffic, you can add processors, memory, etc. all day long and expect it to Just Work.

@vertigo @ajroach42 @keith This is a very old idea, but I think it ends up being less useful in practice than you might think because of the vastly different bandwidth and latency requirements between the CPU<->RAM channel and other peripherals. In the microcomputer era and for longer with "big iron" servers everything just lived on the memory bus. But that got prohibitively expensive with GHz clock speeds.

I think we should stop thinking of those devices with CPU and RAM and a couple ports as computers and start thinking of the network as the computer, in a far more real sense than Sun's marketing people ever meant it.

@freakazoid @ajroach42 @keith RapidIO encompasses all of those configurations, and does so more efficiently than Ethernet (until you get to jumbo frames, and even then, the difference isn't huge). Its three biggest applications today are in telecommunications (e.g., cell towers), supercomputers, and spacecraft, all of which are hard real time environments with lots of compute nodes and, often, strong requirements for link redundancy.

Because it's switched, aggregate bandwidth is very high, as long as CPUs talk to devices not in contention. (Memory is considered a device.) There is no bus. It is entirely point to point, like 10BASE-T and higher. Unlike Ethernet, HyperTransport, and PCIe, no switch port is distinguished.

RapidIO supports not only RDMA-like transactions, allowing one to build ccNUMA architectures with relative ease, but also message passing as well. These can be mixed on the same link as well, allowing for flexibility in systems design.

@vertigo @ajroach42 @keith I can buy an Ethernet switch. I don't seem to be able to buy a RapidIO switch, at least not in consumer quantities.

@freakazoid @ajroach42 @keith For the same reason you can't buy a HyperTransport or PCIe switch.

@vertigo @ajroach42 @keith If I can't, though, then isn't RapidIO off the table for any hobbyist project?

@freakazoid @ajroach42 @keith Ooh, I see what you're getting at. I was just looking at it from a pure "what's possible" point of view, if costs were no object.

Yeah, there's no way I can afford to put spec-conforming RapidIO devices into my homebrew designs.

The spec, though, is open in exactly the same sort of way as RISC-V is open. Which means I could, if I were so motivated, create my own, much slower, homebrew and/or FPGA-friendly variants. I just couldn't actually call it RapidIO.

Fun fact: at one time, I was going to do exactly this, routing traffic over ordinary SPI links. RapidIO even followed me on Twitter. I think they still do.

The question now is, am I so motivated?

No. :) At least, not right now. I would get a much bigger bang for my buck if I just adopted STE or the RC2014 backplane bus standards.

I would like to revisit porting RapidIO to a hobby-friendly environment in the future though. But, I'd do it purely for the "neat" factor alone.

@vertigo @ajroach42 @keith So, like, switched SPI? That sounds pretty interesting. SPI is great from a speed perspective, but the need for separate chip select lines is a big downside. I2C is better for hanging multiple peripherals off of, but it's way slower.

Was your plan to make it packet switched with buffering of some kind? I'm reminded of cell-switched systems like the internal switches of some routers, and of course ATM.

@freakazoid @ajroach42 @keith

So, like, switched SPI? That sounds pretty interesting. SPI is great from a speed perspective, but the need for separate chip select lines is a big downside. I2C is better for hanging multiple peripherals off of, but it's way slower.

Not in any generic sense, no. (Although, I have an idea for working around your specific problem too. I should blog about that some time.)

For my RapidIO mapping to SPI, the host PC would implement MOSI, MISO, SS#, and CLK, like any normal SPI interface. To support data transfer in the reverse direction, a separate pin SR# (basically, Switch Ready) would be implemented which would indicate that data was queued up for the host to receive. The host hardware would need to assert SS# and CLK to receive the data queued up.
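The reverse-direction handshake described above can be sketched as a toy Python model. All the class and method names here are mine, purely for illustration; this is a behavioral sketch of the SR# (Switch Ready) idea, not real firmware or any part of the RapidIO spec:

```python
# Toy model: a switch queues frames for the host and asserts SR# while
# anything is waiting; the host polls SR# and clocks frames out over SPI.

class SpiSwitch:
    """Switch side: holds frames queued for the host."""
    def __init__(self):
        self._rx_queue = []

    @property
    def sr(self):
        # SR# asserted (True here) whenever data is queued for the host
        return bool(self._rx_queue)

    def queue_for_host(self, frame: bytes):
        self._rx_queue.append(frame)

    def clock_out(self) -> bytes:
        # Host asserts SS# and CLK; switch shifts out the oldest frame
        return self._rx_queue.pop(0)

class SpiHost:
    """Host side: always the SPI master, polls SR# for reverse traffic."""
    def __init__(self, switch: SpiSwitch):
        self.switch = switch

    def poll(self):
        # One SS#/CLK burst per queued frame, until SR# deasserts
        frames = []
        while self.switch.sr:
            frames.append(self.switch.clock_out())
        return frames
```

The point the model makes is that the host never gives up bus mastership: the device only signals readiness, and the host still supplies every clock edge.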

Things get complicated when switches are introduced, however, because some ports end up as slave ports, and others as master ports, and this would have to be configured in the mesh enumeration phase somehow. RapidIO's switch protocols don't support this, so it would necessarily be a protocol-breaking feature.

Was your plan to make it packet switched with buffering of some kind? I'm reminded of cell-switched systems like the internal switches of some routers, and of course ATM.

Buffering is actually optional with RapidIO. Switches are required to hold at least one packet, but can hold up to 8 on copper links, and something like 32 on fiber-optic links, if I recall correctly. (It's been a while, so my memory on this specific detail is hazy.)

I actually did consider the use of ATM. Like, honest-to-goodness, 100%, 53-bytes-per-frame-no-matter-what ATM. I even had a way to work around the "cell tax" by using the otherwise unused GFC field to allow a device to send shorter-than-48-byte payloads.

However, I eventually opted against it because some microcontrollers that I thought people might want to use didn't have the RAM capacity to hold even a single cell. But, OMG, I'm so tempted to just say "screw it, use a bigger microcontroller and just deal with it." It would indeed simplify so much.

I am super tempted to go this route, even to this day.
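For reference, a bog-standard 53-byte ATM cell can be packed in a few lines of Python. This just shows the classic UNI header layout (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8, then 48 payload bytes); exactly how the GFC nibble would encode a shorter-than-48-byte payload is the scheme described above and isn't reproduced here:

```python
# Toy packer for a standard 53-byte ATM cell (UNI header layout).
# The HEC is stubbed to zero; real cells carry a CRC-8 over the
# first four header bytes.

def pack_atm_cell(payload: bytes, gfc=0, vpi=0, vci=0, pt=0, clp=0) -> bytes:
    assert len(payload) <= 48, "ATM payload is at most 48 bytes"
    # GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) packed into 32 bits
    header32 = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    hec = 0  # header error control byte, stubbed
    header = header32.to_bytes(4, "big") + bytes([hec])
    body = payload.ljust(48, b"\x00")  # pad short payloads to 48 bytes
    return header + body
```

Even this trivial packer makes the RAM concern concrete: every cell costs 53 bytes plus padding, which is real money on a small microcontroller.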


@vertigo @ajroach42 @keith If you weren't planning to do switches, were you thinking just point to point or a bus? Without a switch, it seems like something similar to token ring might be ideal for multiple access.

@freakazoid @ajroach42 @keith For the #ForthBox, I'm planning on using the 65SIB bus. It's built on top of SPI, but can support up to 7 devices on a single channel. The channel has seven select lines on it.

I helped create this standard back around 2009 or so with Garth, and later on, Garth wrote it up in a more formal way.

@freakazoid @ajroach42 @keith It's occurred to me that I didn't completely answer your question. I explained what I had planned for the ForthBox, but I don't think that's what you're asking about.

The idea I had regarding switched SPI was as follows:

When the host asserts the SPI select line, none of the attached devices drives the MISO line. The host transmits a single byte. Each device compares the byte sent against its local address. If there's a match, that one device then starts driving MISO, and starts behaving as if its own, local select line was asserted.

That's basically it.
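The address-matched shared select line above can be modeled in a few lines of Python. Everything here (class names, the transfer helper) is my own illustrative invention, not from any real spec; it just demonstrates the rule that exactly one device, at most, ends up driving MISO:

```python
# Toy model: one shared SS# line goes to every device; the first byte
# after SS# asserts is an address, and only the matching device starts
# driving MISO (behaving as if its own select line were asserted).

class SharedSelectDevice:
    def __init__(self, address: int, response: bytes):
        self.address = address
        self.response = response
        self.selected = False

    def on_select_asserted(self):
        self.selected = False  # nobody drives MISO yet

    def on_byte(self, byte: int):
        # First byte is the address; on a match, act locally selected
        if byte == self.address:
            self.selected = True

def transfer(devices, address: int):
    """Host asserts SS#, sends the address byte, then clocks data from
    whichever device (if any) matched."""
    for d in devices:
        d.on_select_asserted()
    for d in devices:
        d.on_byte(address)
    drivers = [d for d in devices if d.selected]
    assert len(drivers) <= 1, "two MISO drivers would be bus contention"
    return drivers[0].response if drivers else None
```

The nice property is that it needs no extra pins over plain SPI; the cost is one address byte of overhead per transaction, and the requirement that no two devices share an address.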
