So web browsers are bad, right?
And web browsers being bad is making the internet bad, right?
Or maybe the internet being bad is making web browsers bad.
The upshot is that we should stop using bad web browsers recreationally, and stop using services that can only be accessed from bad web browsers.
And when that isn't possible, build alternatives that work from not bad browsers.
That's why I'm so happy that Brutaldon exists.
So, what are the core features a good web browser should have?
What shouldn't it have?
@ajroach42 I guess the question is, how would you *split up* the web, so that applications that really do need the abused functionality went off into their own space (perhaps with its own protocol), while the pieces we like would stay in their own space in which annoyances are relatively difficult to implement.
@freakazoid Right. I'm not suggesting that we try to replace the web entirely. It is very useful, as much as it is a giant problem.
I'm wondering aloud what the core functionality of a modern document delivery platform should look like.
A thing that does what the web was supposed to do, rather than what the web does.
This deeply ties to: Most of my computer use is offline interaction with documents, or interaction with knowledge bases I would prefer to cache my segment of and use offline. There is no sense in hitting the net again just because I needed a refresh on the definition of Witt vectors for the fifth time this week.
With regard to formatting -- well, a subset of html might do, but maybe markdown would be better. Give the user complete control over fonts, sizes, and colors. Eliminate scripting entirely.
IPFS delivers the guarantees that web tech conceptually depends upon but cannot enforce & has never attempted to implement. I'd like actual permanent addressing, sure, but it's not necessary for a web replacement.
the browser is an application environment and it doesn't have to be a bad one. when we visit URLs that serve JS we are downloading software; it should be permissioned and constrained, but there's nothing fundamentally wrong with downloading software or executing it on your machine. major browsers over-privilege JS apps in order to favor surveillance and ads.
- The web that exists should stay, and we should work to improve it. I am not trying to replace it, but augment it.
- A subset of the web that exists + other stuff that exists outside the web should be made available through a protocol/in a format that resists the problems with the web that exists, while also limiting its functionality.
@ajroach42 @aeonofdiscord @enkiv2 @chuck yes, SSB and its ilk are really fascinating but are still dealing with critical standards and implementation issues (no deletes, etc). beaker and the dat ecosystem are muddling through tooling hell (hashbase, etc). all the pieces of a better web are coming together, and they improve with time
@garbados @ajroach42 @aeonofdiscord @enkiv2 @chuck ability to delete/be forgotten is as important in tech as it is IRL. Until IPFS etc have privacy and deletability (within the realms of what's practical, mind) then it's just a really cool name with a bunch of smart people working very hard on something that is not widely applicable.
That's exactly my line of thinking.
Even things that seem like they need networking don't necessarily. For instance, a social network 'app' could be offline-first (like ssb is) -- a daemon syncs posts periodically, and the 'app' just renders & allows you to drop hints for things the daemon should post next time it syncs.
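As a sketch of that app/daemon split (all names here are hypothetical, and the daemon side is reduced to a single drain call):

```python
import json
import os

class Outbox:
    """Sketch of the offline-first split described above: the 'app' only
    appends hints to a local queue, and a sync daemon drains it whenever
    the network is actually available. All names are made up."""
    def __init__(self, path):
        self.path = path

    def enqueue(self, post):
        # The app side: no network needed, just append to local storage.
        with open(self.path, "a") as f:
            f.write(json.dumps(post) + "\n")

    def drain(self, publish):
        # The daemon side: publish everything queued, then clear the queue.
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            posts = [json.loads(line) for line in f if line.strip()]
        for post in posts:
            publish(post)
        os.remove(self.path)
        return len(posts)
```

The app never blocks on the network; the daemon decides when syncing is worth doing.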
This would allow devs to share layouts for common design patterns, or users could modify existing page layouts to increase accessibility. And I would want it so different pieces of page content could be user signed, for federated identity.
I have this dream of a browser with basic flex layouts, svg-inspired styling, and lua for lightweight scripting.
But LaTeX is focused around good text layout by default, and has every tool you'll need for that without style sheets or other garbage.
@ajroach42 @enkiv2 @freakazoid But a lot of the default assumptions in LaTeX would work well for the web: You float figures to the place they fit, rather than trying to put them EXACTLY where you want most of the time.
It figures out exactly how the text flows, based on the size of the page, etc.
You'd have to modify it to not be turing complete, but from a conceptual perspective it seems a good fit.
But what LaTeX compiles to was always meant to be generic. These days, if you removed some of the bits that make it possible to program in it, and a few of the slower bits, you could easily have it compile to the dimensions of the browser page, either as a single massive page or with some sort of breaks in it.
Basic text formatting, fine. Blockquotes, tables, lists, good. Images with captions, but letting the UI choose the dimensions for displaying them, and whether to put them into a gallery.
Instead of header and footer and sidebar, there's just a <navigation> element. Maybe several, to represent levels of a site or document. UI can choose how to display it (header, floating sidebar...)
@Canageek @enkiv2 @ajroach42 At the other end, there's the criticism (from Alan Kay?) of the fact that we've essentially replicated paper books on computers. So maybe we're taking too narrow of a view and over-simplifying. Perhaps we're limiting ourselves too much by trying to make annoyances impossible; maybe that's a problem to be solved socially instead of technologically, except perhaps for the elimination of 3rd party content (or at least cookies).
@ajroach42 @enkiv2 @freakazoid I just thought of something: Browsers are going to have to help control text width if it isn't specified in the document. Ever tried to read a raw text file on a wide monitor? Once you are over a few inches across it's just unworkable.
But I don't want to have to resize my browser constantly.
On the other hand, if there are apps and everything uses the same formatting then one window size would be fine? But if I hit maximize getting it back might be a pain.
Wrap is a solved problem for plaintext. Even word wrap: backtrack to word boundaries unless the token is longer than the line, in which case switch to character wrap.
This mechanism works so long as you don't switch text directions in the middle of a line & don't try to apply restrictions like non-breaking spaces to character wrap.
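That greedy scheme can be sketched in a few lines (Python here purely for illustration; it assumes left-to-right text with no special breaking rules, exactly the caveats above):

```python
def wrap(text, width):
    """Greedy word wrap: break at word boundaries, falling back to
    character wrap for any token longer than the line width."""
    lines, line = [], ""
    for word in text.split():
        # Character wrap: chop oversized tokens into width-sized pieces.
        while len(word) > width:
            if line:
                lines.append(line)
                line = ""
            lines.append(word[:width])
            word = word[width:]
        if not line:
            line = word
        elif len(line) + 1 + len(word) <= width:
            line += " " + word
        else:
            lines.append(line)
            line = word
    if line:
        lines.append(line)
    return lines
```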
@enkiv2 @ajroach42 @freakazoid 1) I mean, how do you pick how wide a column to show? In HTML either it is as wide as the window (Fine when we used 800x600 monitors, not fine at 1920x1280!) or the document specifies a width.
If the document doesn't specify a width, and we don't want it full window width wide, browsers are going to have to handle that.
@enkiv2 @ajroach42 @freakazoid 2) That algorithm should have been cast into a fire years ago. Knuth wrote a better algorithm in 1978, and it's been possible to run it in real time for quite a while https://en.wikipedia.org/wiki/TeX#Hyphenation_and_justification (I've heard that you CAN use this in browsers these days, just no one does)
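The core idea can be sketched with a small dynamic program that minimizes total raggedness over the whole paragraph instead of breaking greedily line by line. This is only a simplified cousin of Knuth & Plass 1978: real TeX adds hyphenation and stretchable/shrinkable glue, which this omits, and it assumes no single word exceeds the line width:

```python
def break_lines(words, width):
    """Choose break points minimizing the sum of squared trailing
    whitespace over all lines (the last line is exempt)."""
    n = len(words)
    INF = float("inf")
    # best[i] = (cost, next break) for typesetting words[i:]
    best = [(INF, n)] * n + [(0, n)]
    for i in range(n - 1, -1, -1):
        length = -1  # cancels the space counted before the first word
        for j in range(i + 1, n + 1):
            length += len(words[j - 1]) + 1
            if length > width:
                break
            # No raggedness penalty for the final line of the paragraph.
            penalty = 0 if j == n else (width - length) ** 2
            cost = penalty + best[j][0]
            if cost < best[i][0]:
                best[i] = (cost, j)
    lines, i = [], 0
    while i < n:
        j = best[i][1]
        lines.append(" ".join(words[i:j]))
        i = j
    return lines
```

On "aaa bb cc ddddd" at width 6, the greedy breaker produces a very short middle line ("aaa bb" / "cc" / "ddddd"), while the global optimum spreads the slack more evenly ("aaa" / "bb cc" / "ddddd") — which is exactly the quality difference the post is talking about.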
Would tabs still be the best approach if you aren't going to be using all the space at the sides? I've thought they should move UI elements to the left and right sides of the screen for a while.
Or would it be better to go back to a multiwindow model so the OS can do nice layout things?
Would it be better to have split windows inside the browser, or open two browser windows, etc?
I like the idea of pulling in content from multiple sources in to multiple columns on one screen, personally. But I suspect there is no “right answer” here.
Which brings up an interesting point: applications are hobbling themselves by being cross-platform. They're stuck either not integrating in any interesting way or doing their own bespoke internal integrations that don't match anything else on the platform.
@Canageek @enkiv2 @ajroach42 I have tried dozens of window managers, including many that few here will have ever heard of (GWM, for example), and I think i3 is the best one, hands down. I tried Awesome and Ratpoison when I was looking for a tiling window manager, and I found Awesome too inflexible and Ratpoison too minimalist. i3 is fully controllable via CLI (dbus, actually), and its configuration language is powerful without being overly complex.
I liked how KDE2 was like, if Windows just threw in every Linux feature they could think of, without breaking the basic design. Window bar at the bottom, start menu, but also able to pin windows to the top, snap them to each other, and so on.
@Canageek @ajroach42 @enkiv2 I seem to recall GNOME going back and forth on focus-follows-mouse. I've been using X11 since 1995, with my first window manager being TVTWM. I used that one for a long time before switching to FVWM, which I probably used longer than I've used any other manager. i3 will probably surpass it soon in terms of longevity.
A bit after that, KDE went WIDGETS, WIDGETS EVERYWHERE, and then it was too slow to run on the Pentium 4 I was using at work.
@enkiv2 @ajroach42 @Canageek Even X11 apps used to use multiple windows quite extensively. Not sure what led to the switch. We're not quite back to MDI, but the main difference between the current approach and MDI is that we have little to no control of the layout of subwindows within the main window.
SDL doesn't have a native widget mechanism because SDL doesn't have widgets.
I'm thinking of the equivalent of mmtk for Tk. Swing has one, whose name I've forgotten. GTK & Qt have them, but I never knew their names in the first place. It's a mechanism to skin toolkit widgets based on current OS themes & make them behave like native widgets (sometimes by actually turning them into native widgets).
Both Tk & Swing actually have this mechanism (sort of) out of the box, in that both can simulate sets of native widgets through a built-in config setting for a certain set of styles -- notably Motif. mmtk seems not to be just a wrapper over this that identifies which style is appropriate: it recreates even the elaborate OS X translucent scrollbars and such.
For example, in LaTeX the standard way to size an image is as a fraction of \textwidth (or \pagewidth). (Though you can do it in cm; that could be removed.)
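The usual graphicx idiom looks like this (the figure name is just a placeholder):

```latex
% Width relative to the current text width, not an absolute unit:
\includegraphics[width=0.5\textwidth]{diagram}
```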
Markdown is fine if you're (a) expecting it and already familiar, and (b) talking about programming stuff, but that cuts out a LOT of people.
(... also (c) just... don't use asterisks a lot I guess, like gosh DARNIT it trips me up every time.)
@ajroach42 No JS. No transclusion of third-party content. Cookies, but no third-party cookies (which are meaningless without third-party content anyway). Little to no control over layout by the author, with all formatting being based on semantics. Images can be embedded, but they need to be literally embedded in the document.
@ajroach42 I'd build a web without imperative clientside code.
Everything would be declarative!
And everything that sent an HTTP request to a non-origin location would be behind a permission dialog:
* Always allow [domain]
* Always allow [domain] from [origin]
* Allow [domain] from [origin] this session
(+ disallow variants of the above options)
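That decision table could be sketched like this (all names hypothetical; `ask` stands in for the permission dialog, and the disallow variants are modeled as a "no-" prefix on the answer):

```python
class RequestGate:
    """Hypothetical permission store for non-origin requests, keyed the
    way the options above describe: always, per-origin, or per-origin
    for this session only."""
    def __init__(self):
        self.always = {}      # domain -> allowed?
        self.per_origin = {}  # (domain, origin) -> allowed?
        self.session = {}     # (domain, origin) -> allowed?, cleared each session

    def decide(self, domain, origin, ask):
        """Return True iff a request from `origin` to `domain` may go out.
        `ask(domain, origin)` stands in for the dialog and returns one of
        "always", "origin", "session", or the same with a "no-" prefix."""
        key = (domain, origin)
        if key in self.session:
            return self.session[key]
        if key in self.per_origin:
            return self.per_origin[key]
        if domain in self.always:
            return self.always[domain]
        choice = ask(domain, origin)
        allow = not choice.startswith("no-")
        scope = choice[3:] if choice.startswith("no-") else choice
        if scope == "always":
            self.always[domain] = allow
        elif scope == "origin":
            self.per_origin[key] = allow
        else:  # "session"
            self.session[key] = allow
        return allow
```

The point of the structure is that the dialog fires at most once per (domain, origin) pair per scope; after that, every request is answered from the stored decision.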
This is one of the reasons I love #gopher.
The Gopher protocol, bolted on to a basic rendering engine, could actually do something pretty special.
I like inline images, for example. I'd love to be able to render documents with a stylesheet of my choosing. I'd like to see some very basic markup, and hypertext linking.
But that's it.
No client-side code. I'm okay with first-party cookies, because they enable some good behavior, but they should be transparent and obvious.
In my head, a gopher browser with a markdown rendering engine, with support for inline images and in document hyperlinks would be something close to an ideal balance.
Not perfect, but close.
@ajroach42 so i don't understand most of what you've said in this here toot thread but i can tell you i measure response times in chrome in the seconds or tens of seconds, to say nothing of the minutes for my laptop outside of chrome.
it stinks and i would indeed like an alternative.
@ajroach42 I've just started a project to provide my answer to this question: The "Memex" browser engine.
It'll take care not to have any Turing completeness on the client side. You'll be informed of any network requests that'll happen before you trigger them, and I'm looking into a cookies alternative so you can be better informed of those too.
And if you want a site to update live, send the site your location, etc., you'd have to explicitly go and turn that on.
@ajroach42 For blogs, journalism and articles I want text (a single column preferably), light / dark theming support, pictures and videos. For many websites, this is enough. Perhaps some navigation stuff but keep everything as simple as possible. Because I want the focus to be about the content not the site. However, sites for other purposes (eBay, email, peertube etc) will need different styles. But simplicity, imo, is key.
Everything except client-side scripting (and stuff that is only useful with it), automatic loading of pictures or other embedded elements unless hosted on the same server as the page, and the expectation that the client faithfully follows the style sheet.
The problem with the modern web browser is that it's become a software runtime. This is handy for writing lightweight(ish) clients and games but 95% of the web is just documents.
So every document is potentially a program without looking like one and a huge amount of effort goes into (semi-successfully) trying to keep those secret programs from harming us.
So I say cut the Gordian Knot: use a document reader when reading web pages and the app runner ONLY on websites you trust.
Server-side objects can provide hints that determine what data to send to represent each object, and can also kick off the same process for whole new lists of objects.
Remember, plain HTML has forms too. They can only respond with a whole new page, or into a frame. So I'm thinking of extending the above to forms as well, but filling in just part of the page.