
Maybe our real mistake was in ever identifying a role called "computer programmer."

You may note that no such role exists in TRON. Everyone is either a program or a user.


@freakazoid I've come to prefer "operator" instead.

In particular, the distinction between "user" and "programmer" is an artifact of our presently barely-programmable and barely-usable computing systems. I would like to use the neutral word "operator" instead.

Source: loper-os.org/?p=284

@theruran "Barely-usable and barely-programmable" is an excellent way to put it.

@theruran I can't say I agree with all of those laws, though. In particular, I strongly disagree that orthogonal persistence is The Way. Systems need to have a way to get into a known, reproducible, transparent state. If the only way to get to a state is to load a dump of that state, then you don't really know what's in the state or how you got there.

@theruran Of course, this applies to the state of the filesystem as well; there is no way in general of getting to a particular filesystem state aside from copying the filesystem. But at least the filesystem itself is for the most part transparent.

What if the entire system *including the filesystem* were a cache of the evaluation of some function which can itself be examined, in the same way the NixOS store is?

@theruran I guess with such a system there would be no filesystem, just a log and a cache.

Actually, two logs: one that captures past snapshots of the state so you don't have to replay from the beginning, and the other that has the actual function in it, in some concatenative functional language.
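Something like this minimal sketch, maybe (the Op language and all names are invented; serialization, durability, and I/O are elided):

```rust
// Toy sketch of the two-log idea: an operation log (the "function")
// plus a snapshot log, so recovery never replays from the beginning.
#[derive(Clone, Debug)]
enum Op { Push(i64), Add, Dup } // a stand-in concatenative language

type State = Vec<i64>; // the entire "system state", as a toy

fn apply(mut s: State, op: &Op) -> State {
    match op {
        Op::Push(n) => s.push(*n),
        Op::Add => {
            let (a, b) = (s.pop().unwrap(), s.pop().unwrap());
            s.push(a + b);
        }
        Op::Dup => {
            let top = *s.last().unwrap();
            s.push(top);
        }
    }
    s
}

fn main() {
    // Log 1: the "function" itself, as a sequence of operations.
    let op_log = vec![Op::Push(2), Op::Push(3), Op::Add, Op::Dup];
    // Log 2: a snapshot (index into op_log, state at that point).
    let snapshot: (usize, State) = (2, vec![2, 3]);

    // Recovery: start at the latest snapshot, replay only the suffix.
    let state = op_log[snapshot.0..].iter().fold(snapshot.1.clone(), apply);
    println!("{state:?}"); // [5, 5]
}
```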

@freakazoid @theruran It should be noted that OS/400 systems do actually have a file (well, "object") system. The file system lives within the persistently stored memory; however, like every other application, it is memory-protected.

If an application gets into a corrupt state, you can delete the cached native binary "files" (similar to deleting .pyc files for Python scripts), and rerun your application, which causes them to be recompiled in situ.

Data objects that get corrupted can be treated like data files, since they're passive entities as long as the program responsible for them isn't actively running.

If the entire system image gets corrupted, that's tantamount to having your entire filesystem hosed, which probably requires a reinstall regardless of the persistence model.

@vertigo @theruran Hmm. I've been thinking of the system state as a monolithic entity like a Smalltalk image, but I guess it doesn't have to be. Having it be compartmentalized into "objects", "processes", "actors" or whatever that can be independently updated or rolled back would make me a lot more comfortable with orthogonal persistence.

I think we need to treat getting into an unknown/untrusted state as a regular occurrence, though, because of rule number 2: forgives mistakes. Erlang processes are supposed to simply crash and get restarted by their supervisors when this happens. The more extreme version of this is a "crash only" architecture, where every startup is treated as recovery from a crash, and there is no such thing as a clean shutdown except maybe flushing any dirty caches. Which I guess fits perfectly well with rule 3 without requiring the version of orthogonal persistence I was thinking of.
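A minimal sketch of that crash-only shape, assuming a trivial one-number journal (the file format and names here are invented):

```rust
// Crash-only sketch: there is no separate recovery path. Every start
// replays the journal; "shutdown" is just the process dying.
use std::fs::OpenOptions;
use std::io::Write;

fn main() -> std::io::Result<()> {
    // Startup IS crash recovery: rebuild state from whatever survived.
    let mut counter: u64 = std::fs::read_to_string("journal.log")
        .unwrap_or_default()
        .lines()
        .filter_map(|l| l.parse().ok())
        .last()
        .unwrap_or(0);

    // Normal operation: append each state change; dying at any point
    // just means the next start recovers to the last durable line.
    let mut journal = OpenOptions::new()
        .create(true)
        .append(true)
        .open("journal.log")?;
    counter += 1;
    writeln!(journal, "{counter}")?;
    journal.sync_all()?; // the change is durable once this returns
    println!("state after recovery plus one step: {counter}");
    Ok(())
}
```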

Ok, maybe I do agree with everything there.

@freakazoid @theruran I basically agree. While I am a fan of orthogonal persistence, I am not a fan of making it a law because there are so many different ways to accomplish a similar outcome.

We almost have it in our modern laptops now, assuming you can get sleep modes working correctly.

Orthogonal persistence seems like an implementation detail, especially when there's already another law that covers the intended outcome of that technology, namely Survives Disruptions.

@vertigo @theruran Well, sleep modes in laptops are precisely the kind of monolithic, all-or-nothing state I want to avoid. All you can do if it gets corrupted is throw it out entirely and reboot, which is a fundamentally different process from waking up from sleep.

@vertigo @theruran The author's writings about Skrode and Loper seem to mirror my feelings about the damage that happened to computing when UNIX won out over Lisp machines.

@freakazoid @theruran It doesn't have to be that way though. A Unix kernel can basically execute:

for each process p:
    swap_out(p)
save the set of running process descriptors

before putting the system into a low-power state. Unless I'm mistaken, the swap file will already have enough space allocated for the process anyway, and only changed state needs to be saved (since modern kernels basically mmap the original binary image, and code is never self-modifying anymore).

Alternatively, a per-process swap file can be used. Or, like how PC/GEOS works, a good amount of application state can basically be part of the currently open data file (via its so-called "virtual memory files").

@freakazoid @theruran That's an interesting idea....

Taking GEOS' approach to its limit: to dynamically allocate memory, you must specify which pool it will be allocated from. A data pool object will appear in a data file somewhere, while a process pool will basically piggyback on the system's default process swap file, assuming the process created one.
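Maybe something of this shape, as a sketch (every name here is invented; the actual backing store is stubbed out):

```rust
// Sketch: no anonymous allocation; every request names its pool, and
// the pool decides where the bytes live (a data file vs. process swap).
enum Pool {
    Data { file: &'static str }, // persists as part of a data file
    Process,                     // piggybacks on the process swap file
}

struct Handle { backing: String, offset: usize }

struct Allocator;

impl Allocator {
    // You cannot allocate without saying which pool the memory is from.
    fn alloc(&mut self, pool: &Pool, _len: usize) -> Handle {
        let backing = match pool {
            Pool::Data { file } => format!("data:{file}"),
            Pool::Process => "process-swap".to_string(),
        };
        Handle { backing, offset: 0 } // real bookkeeping elided
    }
}

fn main() {
    let mut a = Allocator;
    let doc = Pool::Data { file: "report.vmf" };
    let persistent = a.alloc(&doc, 256);        // lives with the document
    let scratch = a.alloc(&Pool::Process, 256); // dies with the process
    println!("{}@{} / {}@{}",
             persistent.backing, persistent.offset,
             scratch.backing, scratch.offset);
}
```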

I'll need to think this through some more.

@vertigo @theruran IIRC Erlang processes that only infrequently receive messages can be put in a sleep state.

We need to be able to take consistent snapshots, and it should be possible to do so even across nodes. Which means even if individual process snapshots can happen at arbitrary points in time, the process has to be able to know when a given state change is durable at some given level of reliability, so it can communicate that to other processes. That should be enough to implement distributed transactions. If the node or process crashes and comes up in a more recent state, it should just result in any in-progress transactions that already failed on other nodes failing and rolling back.
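A sketch of that handshake, with the store faked by a stub (commit_locally and everything else here is invented):

```rust
// Sketch: a process reports "prepared" to its peers only after it knows
// the sequence number at which its local change became durable. That is
// the primitive a distributed commit can be built on.
use std::sync::mpsc;
use std::thread;

enum PeerMsg { Prepared { seq: u64 } }

// Stand-in for a real log append + fsync that returns the log position.
fn commit_locally() -> u64 { 42 }

fn main() {
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        let seq = commit_locally(); // the change is durable at `seq`
        tx.send(PeerMsg::Prepared { seq }).unwrap(); // only now tell peers
    });

    match rx.recv().unwrap() {
        PeerMsg::Prepared { seq } => {
            // A coordinator collects these from every node, then decides
            // commit/abort. A node restarting from a *newer* snapshot sees
            // the transaction as already resolved, so it rolls back cleanly.
            println!("peer durable through seq {seq}; safe to commit");
        }
    }
    worker.join().unwrap();
}
```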

One can implement transactional memory with a page fault handler, plus pointer swizzling for any pointers outside the store, or if you don't want to rely on mapping the store to the same address range. Probably better to use a more cooperative language, though.
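The page-fault half of that can be sketched like so on Linux (assumes the libc crate; error handling and the swizzling half are elided, and the mprotect-in-a-handler caveat is noted in the comments):

```rust
// Sketch: write-protect a region, record the first write to each page in
// a SIGSEGV handler, then re-enable the write. Dirty pages are what a
// transactional store would copy into its next snapshot.
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};

const PAGE: usize = 4096;
const PAGES: usize = 16;

static BASE: AtomicUsize = AtomicUsize::new(0);
const CLEAN: AtomicBool = AtomicBool::new(false);
static DIRTY: [AtomicBool; PAGES] = [CLEAN; PAGES];

extern "C" fn on_write(_: libc::c_int, info: *mut libc::siginfo_t, _: *mut libc::c_void) {
    unsafe {
        let addr = (*info).si_addr() as usize;
        let base = BASE.load(Ordering::Relaxed);
        if addr < base || addr >= base + PAGES * PAGE {
            libc::abort(); // a real segfault, not one of ours
        }
        let page = (addr - base) / PAGE;
        DIRTY[page].store(true, Ordering::Relaxed); // first write recorded
        // Let the faulting store retry. (mprotect is not on the official
        // async-signal-safe list, but this is how such stores are built
        // in practice on Linux.)
        libc::mprotect((base + page * PAGE) as *mut _, PAGE,
                       libc::PROT_READ | libc::PROT_WRITE);
    }
}

fn main() {
    unsafe {
        let base = libc::mmap(std::ptr::null_mut(), PAGES * PAGE,
                              libc::PROT_READ, // writes will fault
                              libc::MAP_PRIVATE | libc::MAP_ANONYMOUS, -1, 0);
        assert_ne!(base, libc::MAP_FAILED);
        BASE.store(base as usize, Ordering::Relaxed);

        let mut sa: libc::sigaction = std::mem::zeroed();
        let handler: extern "C" fn(libc::c_int, *mut libc::siginfo_t, *mut libc::c_void) = on_write;
        sa.sa_sigaction = handler as usize;
        sa.sa_flags = libc::SA_SIGINFO;
        libc::sigaction(libc::SIGSEGV, &sa, std::ptr::null_mut());

        *(base as *mut u8).add(5 * PAGE) = 1; // faults once, then succeeds

        for (i, d) in DIRTY.iter().enumerate() {
            if d.load(Ordering::Relaxed) {
                println!("page {i} is dirty; it would go in the next snapshot");
            }
        }
        libc::munmap(base, PAGES * PAGE);
    }
}
```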

@vertigo @theruran I think one could say that there's really no such thing as orthogonal persistence, because it's always possible to have I/O that's related to state changes that have not yet persisted. It's kind of like the CAP paper that kicked off the NoSQL movement: you cannot hide persistence and consistency issues from the operator. Same with data locality.

The actor model deals with data locality very well: anything in local storage is local. Anything you have to send a message to get is not. Though messages can still go to the local node, over the LAN, or over a WAN.

@vertigo @theruran For display you make it so your language can run on the GPU. Though I haven't decided yet if the GPU should be treated as a separate node such that you have processes that appear to run entirely on the GPU, with data being passed back and forth through messages. Erlang's "binary" type is a lot like a VBO, or whatever they're called under Vulkan.

Note that at this point I'm talking about prototyping, since this is not necessarily how you would want to implement such a system from scratch. But Vulkan does make it easier than it's ever been to provide a decent UX (OX?) for graphics acceleration. It's still stateful, though, so even if a process spans the CPU and GPU, there probably can't be more than one per GPU context.

I'm currently implementing a game engine in Rust. It uses Rhai as its scripting language. I think I'll try out some of these concepts with it. Maybe even including compiling Rhai to SPIR-V at some point. But certainly message passing via capabilities, transactional persistence, and the GPU as just another process.

@vertigo @theruran I think I'm wrong about the GPU needing to be a single process. CPU cores are also stateful, but the kernel multiplexes that state so that it looks like a bunch of different states. I guess with a GPU the equivalent would probably be a render pipeline (a set of shaders and their corresponding input types) and binding groups (the input data and storage). Each draw call would be a message.

A scheduler could know enough to minimize state changes, but there are some cases where you can't do that, like when you're rendering objects with transparency, in which case you have to go with strict back-to-front order. I guess for those you could use a process whose job is to sort the calls for you.

Minimizing state changes would just be a matter of letting each process handle all messages that are pending for it any time it's scheduled. So the scheduler wouldn't even need any special knowledge of the GPU or even that it was dealing with a GPU.
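That discipline fits in a few lines, as a sketch with plain channels standing in for the mailbox (all types here are invented):

```rust
// Sketch: each draw call is a message; the GPU-owning process drains its
// whole mailbox per scheduling slot and groups calls by pipeline, so the
// scheduler needs no GPU-specific knowledge at all.
use std::collections::BTreeMap;
use std::sync::mpsc;

struct DrawCall { pipeline: u32, bind_group: u32, vertices: u32 }

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..6u32 {
        tx.send(DrawCall { pipeline: i % 2, bind_group: i, vertices: 3 }).unwrap();
    }
    drop(tx);

    // One scheduling slot: take everything pending, then batch it.
    let mut batches: BTreeMap<u32, Vec<DrawCall>> = BTreeMap::new();
    while let Ok(call) = rx.try_recv() {
        batches.entry(call.pipeline).or_default().push(call);
    }
    for (pipeline, calls) in batches {
        println!("bind pipeline {pipeline}"); // one state change per batch
        for c in calls {
            println!("  draw {} vertices, bind group {}", c.vertices, c.bind_group);
        }
    }
}
```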

@freakazoid @theruran That whooshing sound was things going well and truly over my head. 😏

I had in mind a simpler system. I was thinking about GEOS' VLIR files (on the Commodore 64) and VM files (on the PC) and Oberon "library" files, and how they all worked under the hood to provide the user with a system that was (relatively) fast and responsive while at the same time providing some semblance of persistence on a platform that was otherwise not built for it (e.g., an 8088 running in real mode).

Now take that concept and project it into an environment which is built for it (e.g., something using an MMU for paged memory), and have the system blur the management of virtual memory and persistence into data files.

In theory, you can do this today with mmap on POSIX environments. mmap only provides the lowest-level mechanism for manipulating the address space, though; it doesn't provide any of the policies that need to ride on top of that foundation. I'm pondering those missing layers.
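The mechanism part really is that small; a sketch (Linux/POSIX, libc crate; the file name is invented, and the interesting part, the policy, is exactly what's absent):

```rust
// Sketch: map a data file into the address space so ordinary stores
// become (eventually) durable. Mechanism only; layout, recovery, and
// consistency policy are the missing layers.
use std::fs::OpenOptions;
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    let file = OpenOptions::new()
        .read(true).write(true).create(true)
        .open("state.vmf")?; // hypothetical "virtual memory file"
    file.set_len(4096)?;
    unsafe {
        let p = libc::mmap(std::ptr::null_mut(), 4096,
                           libc::PROT_READ | libc::PROT_WRITE,
                           libc::MAP_SHARED, file.as_raw_fd(), 0);
        assert_ne!(p, libc::MAP_FAILED);
        let counter = p as *mut u64;
        *counter += 1; // an ordinary store into persistent state
        libc::msync(p, 4096, libc::MS_SYNC); // force it to disk now
        libc::munmap(p, 4096);
    }
    Ok(())
}
```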

You mentioned Erlang, for instance, but I'm thinking more along the lines of: what would this sort of middleware¹ look like for (e.g.) Rust or C(++)?

____

¹ I hate using this term, as what I'm envisioning is systems-level services, but I can't think of anything better. I'm explicitly not thinking about a "middleware framework" like CORBA, though.

@vertigo @theruran I'll just consider Rust, since it's a lot easier to prevent the use of unsafe code in Rust than in C++. For C++ or C you're pretty much limited to using the MMU unless you want to force the use of smart pointers and rely on documentation to tell the operator what unsafe things they need to avoid.

With Rust you can use smart pointers, but you can statically enforce that a persistent object cannot have a pointer to an object that isn't persistable. As long as you have a decent API for creating and accessing persistent objects, this lets you emulate something like Erlang's DETS¹. It's not orthogonal persistence, but in combination with a framework like Erlang's behaviors², you can implement programs that will just pick up where they left off if they crash, while being resilient to bugs. And there won't be a weird disconnect between startup and crash recovery; they'll be the same thing.

¹ erlang.org/doc/man/dets.html
² erlang.org/doc/design_principl
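A sketch of that static rule (every name here is invented): a marker trait for persistable data, plus a store handle that can only be created for such types, so a persistent object can never capture a transient pointer:

```rust
// Sketch: `PRef` is the only way to reference stored data, and it exists
// only for `T: Persist`, so transient pointers can't leak into the store.
use std::marker::PhantomData;

/// Marker for types made purely of persistable data. `unsafe` because the
/// implementor vouches for that; a real system would derive and check it.
unsafe trait Persist: 'static {}

unsafe impl Persist for u64 {}
unsafe impl Persist for [u8; 32] {}
// Note: no impl for &T, Box<T>, etc. -- those would smuggle in pointers.

/// A store handle, not a raw pointer, so it stays meaningful across
/// restarts of the process.
struct PRef<T: Persist> {
    slot: usize,
    _marker: PhantomData<T>,
}

struct Store; // mmap'd file, allocator, etc. elided

impl Store {
    fn put<T: Persist>(&mut self, _value: T) -> PRef<T> {
        PRef { slot: 0, _marker: PhantomData } // real allocation elided
    }
}

fn main() {
    let mut store = Store;
    let r: PRef<u64> = store.put(7);
    println!("stored at slot {}", r.slot);
    // let bad = store.put(String::new()); // rejected: String is not Persist
}
```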

@vertigo @theruran I think if you really want to prototype the computing system of the future, though, you should be using a VM and not limiting yourself to the interface imposed by the need of the hardware to support C and SMT. I don't know how hard it would be to create a new Rust backend to support a VM with more constraints than WASM, but I imagine it would be a hell of a lot easier than doing the same with C++.

@vertigo @theruran WASM doesn't support SMT, but it does support C, which I think makes it unsuitable for something like this. Ideally your VM will enforce memory safety and prevent construction of arbitrary pointers.

@freakazoid @vertigo @theruran A primary disadvantage of WASM currently, afaik, is its linear memory management, which makes it impossible to return memory to the host (although there are currently drafts to allow reclaiming unused memory at the end of the address space, and maybe something like madvise(MADV_DONTNEED) in the future)

@zseri @vertigo @theruran Eh, plenty of UNIX systems and software have had no way to return memory to the system. For a long time all you had to get more memory was sbrk, and I'm pretty sure plenty of systems did not support negative arguments. And even when you could use a negative argument a lot of programs never did that.

@freakazoid @theruran

I think if you really want to prototype the computing system of the future, though, you should be using a VM and not limiting yourself to the interface imposed by the need of the hardware to support C and SMT.

My thoughts were abstract; I mention C for several reasons. It is sufficiently close to Oberon and Forth that it is relevant as a stand-in for both. A colossal amount of software exists in C, and it's not going away just because I wish it to.

That said, I agree; all future platforms should be VMs of some kind, even if that VM is a model of another piece of physical hardware. Cf. my thoughts about using Rust to compile to the RV32 or RV64 instruction set and "run it" on my ForthBox computer via a RISC-V emulator of some sort.

Performance will truly be dire, given that the host CPU is a 65816 at 4 MHz. But it should be good enough to get a basic runtime environment up and running, I'd imagine. I doubt it'd be much slower than running BCPL code under a port of Tripos.

@freakazoid @theruran DETS looks like Berkeley DB from my point of view, which is also a good analog for what GEOS VLIR files were like (although VLIR files were structurally much simpler; keys were always restricted to the integers 1..127, in part due to the limitations of a 170 KB capacity disk).

@theruran @freakazoid I, too, have come to prefer the term operator, at least in my documentation. Colloquially, I still tend to use the word user, although it is a bad habit I seek to break some day.

My rationale for doing it, interestingly, is both historical and social. Historical because that was the preferred term used in most historical mainframe documentation. If I'm not mistaken, the System/360 Principles of Operation manual is where the term "system operator", and thus "sysop", comes from.

Social, because there was a document online somewhere, which I'm ill-equipped to locate right now as I'm sitting at a red light, which mentioned that the computer industry and the drug industry are more similar than most people realize: they are the only industries that refer to their customers as "users".

@vertigo @theruran "Operator" dates from a time when "users" didn't even touch the machine, even through terminals. They'd drop of card decks at the machine room and then come back later to retrieve their printouts.

Later when computers started supporting interactive use and time sharing and users started getting terminals in their offices, the operator role continued, because they were required for the purpose of mounting and unmounting tapes, retrieving printouts and maintaining the printers, booting the machine, etc.

"Operator" seems entirely appropriate when talking about a personal computer. Even though I'm partial to TRON's use of the term "user", that's because I was at a good age when it came out. It'll be ambiguous to anyone else. Whereas the term "operator" sort of jolts the mind out of the standard mode of thinking.

@freakazoid The impenetrability of mainstream programming paradigms fairly doomed us to generate a scribe class.

@freakazoid @jalefkowit the programmers in Tron are hiding from their creations AND their users.

@tewha @jalefkowit Perhaps the only actual programmer is Dillinger. Who died recently, I'm afraid.
