Some of these shortcomings are legitimate bugs. Some of them are bafflingly short-sighted or poorly considered architectural decisions. Just as many are cases of a divergence between the needs of the user and the abilities of a program. Modern programs are often feature-incomplete, poorly supported, and difficult or impossible to customize. Modern computers are often slow and cranky. I'm responsible for handling the fallout of this unfortunate situation.
I've seen how revolutionary a computer can be when it is designed with the needs of the user in mind, and how disastrous the same can be when it is not. I've seen computers used to empower people, and used to oppress. I've seen computers be Good, and the consequences when they are not.
So that's who I am, and my experience with computers so far. Those are my credentials, and my qualifications.
The Computer Chronicles was a TV show that ran from the early 80s through the early 00s. Over its nearly 20-year run, The Computer Chronicles covered nearly every facet of the newly developing computer industry. It was hosted by people with Opinions.
The guests were, frequently, people who were proud of the things they made, or the software they represented.
Watching the developer of CP/M and DR DOS talk to a mainframe engineer who worked at IBM in the 50s about the future of computers as seen from the 1980s was eye-opening.
On the one hand, this show serves as an excellent introduction to, or reminder of, the capabilities of computers 35 years ago. It helps us see how far we've come in terms of miniaturization, while also demonstrating again that, in many ways, there is nothing new under the sun.
Before the advent of the internet, reporters were writing their stories on laptops and sending them in over phone lines. Twenty-five years before the release of the iPhone, HP released a computer with a touchscreen. Three years before Microsoft released the first version of Windows, Apple and VisiCorp demonstrated GUIs with features that Windows wouldn't be able to approach for another 9+ years.
And, of course, I'm reminded again of Douglas Engelbart's 1968 "Mother of all Demos", in which he demonstrated the mouse, the GUI, instant messaging, networked gaming, and basically every other important development of the following 50 years.
It took 5 years for Xerox to refine and miniaturize Engelbart's ideas to the point that they thought they could market them, and another 10 years before Apple refined and further miniaturized the same ideas and brought us the Mac.
Nothing is ever new.
There were others working around the same time on similar ideas, or at least from a similar philosophy. Working to make computers, if not intuitive, at least comprehensible. I think this is a noble goal.
The computer is often thought of as a tool, but it is more like a tool shed, in which we store a collection of tools, a source of power, and a workspace.
That is to say, in the 60s and 70s, computers were weak and slow, and computer users were also computer programmers. A small, tight-knit circle of developers and computer scientists were responsible for the bulk of the progress made in that time, and the idea of designing tools for non-technical users was never considered.
Computers became more affordable, slowly. Affordable computers became more powerful, quickly. Within 10 years, non-technical users were interacting with computers on a daily basis. It was against the beginnings of this backdrop that the phrases I mentioned earlier were coined: "Human Literate Computers" and "Human Centered Computing."
Ease of Use was the holy grail for a lot of computer companies: a computer so easy to use that they could sell it to grandma. But, to me at least, Human Literate and Easy to Use are distinct ideas. Many modern applications are Easy to Use. Netflix is Easy to Use. Facebook is, for all its faults, pretty easy to use. The iPhone, the iPad, and ChromeOS are super easy to use.
Well, they are easy to use as long as you use them in the prescribed way. As long as you let them tell you what you want to do, instead of the other way around.
That, IMO, is the distinction.
I think that many of the steps towards demystifying the computer of the 80s and 90s did good work, but ultimately, the computer industry left the whole idea behind, in favor of making some tasks Very Easy while making other tasks Practically Impossible, and turning everything into a surveillance device.
When I was a kid I was brought up with computers that showed you how they worked.
You booted into a command prompt or a programming language, or you could get to one if you wanted to.
I got to play with GW-BASIC and QBasic and, a little, with HyperCard.
I got to take apart software and put it back together and make things that made people happy.
I often wonder why HyperCard had to die.
It was because Jobs wanted the Computer to be an Appliance. A thing only used in prescribed ways.
Letting people build their own tools means letting people control their own destiny.
If I can make what I want, or if someone else can make what they want, and then I can take it apart and improve it, why would I pay for an upgrade? Why would I pay you to build something that doesn't meet my needs?
HyperCard, if you're unfamiliar, is PowerPoint + instructions.
Here's a great introduction/example: http://www.loper-os.org/?p=568
The author walks you through building a calculator app in about 5 minutes, step by step.
Warning: There's a bit of ableist language tossed around in the last paragraph. Skip it, there's nothing worth reading there anyway.
You use the same kinds of tools you would use to build a slideshow, but you couple them with links, multimedia, and scripting.
Want a visual interface for your database of client data? Great! Slap together a Rolodex card, and drop in a search function.
Go from concept to presentation-ready in an hour or two (or less, if you've done this before!)
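To make that concrete, here's a minimal sketch of the kind of script HyperCard let you attach to a button. It's not taken from the linked tutorial; it's a hypothetical "Find" button for that Rolodex-style card, written in HyperTalk, and it assumes a card field named "Name":

    on mouseUp
      -- ask for a search term; HyperTalk puts the reply in the variable "it"
      ask "Find which client?"
      if it is not empty then
        -- locate the next card whose "Name" field contains the text
        find it in field "Name"
      end if
    end mouseUp

That's the entire program. Paste it into a button's script, and clicking the button searches your stack. No compiler, no project files, no distribution hurdles.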
My nephew has an iPad.
He asked his dad how to write games. His dad didn't know. His dad asked me how to write games on an iPad. I told him not to bother.
My nephew asked me how to learn to write games.
I gave him a Raspberry Pi and a copy of PICO-8.
Now he writes computer games.
He couldn't do that on his iPad.
In the first episode of Computer Chronicles (https://www.youtube.com/watch?v=wpXnqBfgvPM), the mainframe guy is really adamant that mainframes are good and micros are bad.
The host, a microcomputer legend, disagrees pretty strongly.
Later, when they talk about the future of networking, the mainframe guy talks about it as a return to mainframes. The micro guy talks about BBSs and peer-to-peer networks.
The mainframe guys are winning.
Lots of people in the replies to my original thread had lots of very negative comments about computer users vs computer programmers, and some of them seem to think that every human alive (excepting themselves) is some kind of half creature, incapable and undeserving of tools designed to meet their needs.
This is the technoelitism I mentioned earlier.
I'm done having that conversation. If you wanna talk about how computers should remain complicated, or how people should just learn to use the tools we foist on them, go somewhere else.
I'm not really even interested in talking about programming tools beyond lamenting the loss of programming as a fundamental part of the computing experience, rather than a niche secondary thing.
My core point here is mostly about the ways computing has gotten worse with its most recent evolution, and what we can do about that.
There is no easy answer.
There is no single answer.
Anyone who claims otherwise is either selling something, or misunderstands the fundamental issue: people are unique, and therefore solutions must also be unique.
The computers of my childhood and my early teen years afforded me the same or greater utility than the computers I use today in all but one respect: communication.
I like a lot of parts of the internet. I like that it enables me to download software, to access media, to research, and to talk with people.
The internet does good, valuable things. Wifi and cellular data are revolutionary.
Part of the answer is going to mean exposing more of the underlying complexity of the computer to the users (when they want to see it).
Part of it is going to mean redefining our networks, our relationship to networks, and our tolerance for surveillance.
Things that should be peer-to-peer must be allowed to be peer-to-peer. Federated systems must rise.
Tangentially related: https://boingboing.net/2018/04/19/post-internet-lament.html
A former resident of the USSR draws parallels between the Russian Revolution and the modern internet.