And welp. Looks like that's exactly what's happening!
I can only gasp in awe at how skillfully Microsoft did this. Longtime GitHub hater here; it was proprietary even before it was Microsoft, and we had all already been burned twice by SourceForge and Google Code, so I never understood why everyone was so eager to slam their necks into this new guillotine.
@natecull never mind; I finished reading the article and the rape joke and the irony of publishing it on actual Medium got a bit too much.
@Sandra My reaction exactly.
My problem with compilation as a separate step is that the compiler is typically built to *run outside of the runtime environment*,
and as such it often requires special permission to run, and even specialer permission to modify.
Permissions that increasingly aren't even offered to ordinary users. "No binaries for you, and compilers are hacking tools!" is becoming the norm in corporate environments, and consumer devices are often just as locked down via app stores.
If the only execution environment a user is offered is a single managed one, then the two processes of:
1. "software development" and
2. "compiler development"
MUST be able to take place WITHIN this single managed environment - not outside it - if the user is to have any kind of freedom to modify and trust their personal computational environment.
I keep repeating this because it seems both necessarily true, and yet so often forgotten by developers who have special dev rights.
Increasingly, those special dev rights (the right to compile/package/publish code, and the even stronger right to modify and develop new compilers and languages) are going to be tied VERY strongly to being vetted members of a corporation, or an institution like GitHub.
The "trusted compiler" is extending softly to become "the trusted organization" and even the open-source world isn't quite noticing that this is happening.
But it was obvious that it would happen from the start.
If I want to create an android app, I can just do that and put it on f-droid. And I can still create binaries for macos and linux, and last time I checked (about two years ago), I could still build on windows too. I'll need administrator access to install the initial binary, be it compiler, interpreter or VM. But home users have that kind of access on their machines.
So clearly I am missing something here, what is it?
This is a sort of parallel dev universe though. Most Android users don't know f-droid exists.
Breaking into vetted spaces gets harder as vetting gets more common for "security" purposes, & then that vetting gets applied in different circumstances. (None of my projects work on iPhone because I'm not paying $90/year for the privilege to be rejected by their app store.)
Yep. I have F-Droid on my Android, but I'm pretty bleak in my expectation that it will be around forever.
All it will take is one "industry-wide best practices" update and the next phone I buy won't be able to install F-Droid "because security/privacy risk" or something.
Source: My lived experience this year, seeing Work From Home + ransomware push organizations into MASSIVELY locking down all their machines and "security consultants" whipping them to do it faster.
Seriously, before this year I hadn't realised that the tide could turn so swiftly to "literally being able to run a non-corporate-approved EXE is Evil Hacking" but we're here now, it's happening.
We already had Administrator rights removed from all users for years before this. But this year, "running an EXE" is now considered Malware. And running a compiler is doubly so, "because it can allow running non-signed EXEs"
Compilers are generally EXEs.
I suspect most individuals and businesses who want a laptop that locked down will get Chromebooks. Though maybe MS and Apple will try to compete in the Chromebook niche.
I feel like my original point, that the issue is not if a language is compiled or interpreted, has gotten lost along the line.
What is described in the previous posts is control over execution on the CPU.
What we tend to call an interpreter is almost always a compiler combined with a VM, and more often than not the VM incorporates a just-in-time compiler.
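To make the "interpreter = compiler + VM" point concrete, here's a toy sketch in Python (all names invented for the example): a tiny compiler turns an expression tree into flat bytecode, and a separate stack VM executes it, even though the whole thing looks "interpreted" from the outside.

```python
# Toy illustration: what we call an "interpreter" is often a small
# compiler that emits bytecode, plus a VM that executes it.

def compile_expr(expr):
    """Compile a nested ("+" / "*", left, right) tuple or a plain
    number into a flat list of stack-machine instructions."""
    if isinstance(expr, (int, float)):
        return [("PUSH", expr)]
    op, left, right = expr
    return compile_expr(left) + compile_expr(right) + [(op, None)]

def run_vm(code):
    """A minimal stack VM for the bytecode above."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# (2 + 3) * 4 — compiled once, then executed by the VM
bytecode = compile_expr(("*", ("+", 2, 3), 4))
print(run_vm(bytecode))  # → 20
```

A JIT would be a third stage bolted onto the same pipeline, translating hot bytecode to machine code; the user-facing workflow is unchanged.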
I'm not sure "almost always" is true. A JIT is pretty tough to write (and until LLVM, if you wanted one you'd have to roll your own) so only quite established languages tended to get one until about 10 years ago. Most interpreted languages, including quite popular ones like perl & Lua, either don't have one or only have a third party JIT that is used by a minority of projects.
Anyhow, the context here is how users can be locked out of general purpose computing -- and how a separate compile step supports keeping dev tools out of the hands of users.
The important thing here is that REPL access gives you general purpose computing (whether an interpreter or a JIT is used).
With desktop OSes following in the footsteps of mobile ones (even windows and osx now prefer you get binaries from their pre-approved list in their 'app store' & warning you about unapproved apps at launch, & some Linux distros doing the same) we're headed for a time when writing your own code is way less accessible.
Consider the state of console dev for, e.g., Nintendo. You pay to apply for access to a dev kit, then pay to apply for official approval to release the finished game, which is manufactured & distributed by the console manufacturer. Copy protection mechanisms also prevent unauthorized releases. The console manufacturer enforces content restrictions, rejects games that compete with their internal dev groups or preferred 3rd parties.
There are plenty of programming environments on iOS, from Apple's own Shortcuts (née Workflow) and Playgrounds (Swift, so it's slow but it works) to tons of 3rd party programs like Pythonista, Koder, Replete, Hotpaw BASIC; Coda is defunct for financial reasons, but still works if you have it. Or use any of dozens of ssh apps (I use Termius, Prompt is also nice) to run make on a real UNIX.
On iOS, bypassing the app store entirely?
This is not the sniping argument between Apple fans and Android fans that you seem to want it to be. We're talking about a general trend that affects all popular platforms, and one that Apple's PR department was vital in articulating in the early to mid 80s.
If computers don't come with development environments, then only self-identified programmers learn to code.
@enkiv2 @wim_v12e @freakazoid @Sandra You miss the point entirely. iOS devices have easy access to development apps. Macs have every dev app, Xcode is pushed heavily by Apple. Anyone can easily write iOS software on a Mac, and deploy it themself, doesn't go thru the store.
Your paranoid belief that Apple's taking away "developer freedom" is INSANE. It does not match reality in any way. It is a lie that linux fanboys tell.
You can sync files in Files; it's complicated whether or not Dropbox, git, SFTP, etc. are allowed, but most of the code editors do it. Generally you can download any file and "action" it into an editor.
Frustratingly, Editorial still has no trivial way to import/export except by email or copy-paste encoded data URLs.
It's a sandbox, you can't h4xx0r the OS, but works well enough, it's just kind of silly since you can use a good editor instead.
I've used it before to youtube-dl a video and got it out to Files where I could watch it.
It means >50%, which is extremely dubious. There are *maybe* hundreds of languages with JIT support -- a tiny fraction of the total programming language landscape.
It's also dramatically missing the point of the original post to focus on JIT vs traditional interpreter rather than focusing on whether or not end users can edit code.
The original poster said
"I have always been suspicious of Typescript just because it's *compiled*."
My reply to that was this:
Nate's been posting about these issues on the fediverse for several years now, & that post should be understood in the context of the whole history of discussion that's been happening.
A JIT may *technically* be a compiler, but from the point of view of a developer working in the language, it is an optimization for a language and tool chain that acts as though it's an interpreter.
There's a meaningful accessibility distinction to be made between code that takes an additional compilation step and code that doesn't, because code that takes an additional compilation step can be distributed in compiled form without a compiler.
It's perfectly reasonable to assume MS is trying something shady here.
Thanks for your thoughtful response!
But I'm still not a huge fan because you lose information when you compile.
We went through this whole thing decades ago with "fourth-generation languages" and preprocessors. Source code generated by machine is not human-friendly at all.
I'm sure we could build this on WASM.
All source code is generally kept in files.
But browsers often don't give you files.
This MIGHT be about to change / in process of changing now that the File System Access API has been approved and is in the most recent Firefox (not yet in long-term support but I think next month).
Anyway, what I want is for the whole "source, version control, build, compile, verify, package, sign" toolchain to be available at runtime, for every user, within every deployed endpoint.
Generally I think that means the toolchain should be small not big.
Some form of "signing" or secure attestation might well be an important basic primitive to have in any distributed code execution environment. Especially if users are going to construct things like "types" at runtime and send them over a wire to untrusted distant machines. That's a thing I think we need so we might as well assume that we need it.
But that kind of signing is gonna have to not be based on X.509 certificates and PKI.
Yep, object capabilities does seem like a good way forward. Maybe ocaps don't actually need public key cryptography, though I think mutable objects like in Goblins probably do. That's the sort of 'signing' I mean. It needs to be small and fast and not involve a third party, and you probably wouldn't even use it locally, within your own trusted hard drive or LAN.
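As a sketch of what "small, fast, no third party" signing could look like (all names invented, and this is one possible scheme, not a claim about how Goblins or any ocap system actually does it): a machine-local secret plus an HMAC gives attestation that *your own* instance produced a blob, verifiable by any of your devices holding the same secret, with no certificate authority anywhere.

```python
# Sketch: attestation without PKI. A secret shared only among your own
# devices, plus an HMAC tag per blob. Fast, no X.509, no third party.
# It only convinces holders of the secret — which is the point.
import hmac, hashlib, secrets

LOCAL_SECRET = secrets.token_bytes(32)  # lives on your machines only

def attest(blob: bytes) -> bytes:
    """Produce a small tag vouching that we produced/stored this blob."""
    return hmac.new(LOCAL_SECRET, blob, hashlib.sha256).digest()

def verify(blob: bytes, tag: bytes) -> bool:
    """Constant-time comparison, resistant to timing attacks."""
    return hmac.compare_digest(attest(blob), tag)

obj = b'{"kind": "note", "text": "hello"}'
tag = attest(obj)
assert verify(obj, tag)
assert not verify(b'{"kind": "note", "text": "tampered"}', tag)
```

Public-key signatures would be needed the moment you want strangers to verify without sharing the secret; within your own trusted LAN, this symmetric version is much cheaper.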
(I'm not sure to what extent we can trust hard drives and LANs, especially removable media).
I would love for us to have some kind of universal "trusted persistent storage" where you could store very simple object-type structures where each object is guaranteed to meet some contract/type/criteria/function. Preferably in as simple and general a way as possible.
Filesystems don't get us this. Databases sort of do, but they're chunky and it's very hard to update the schema. "Object databases" tried to do this but failed - I think they embedded too much code.
Some people are advocating SQLite for this kind of personal data store. I still think SQL is too slow and chunky for this, but maybe it's okay.
But whatever we used, it would have to deal with "what happens if someone imports a foreign disk / SD card / zipfile / SQLite file that's been constructed/edited by an adversary so the attestations it makes about contracts/schemas/types are deliberately evil and broken, how do we validate it fast and safely?"
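One defensible answer to the foreign-media question (sketched here in Python; contract names and record shapes are invented): never trust the attestations embedded in the imported file at all, and instead re-check every object against your *local* contract predicates on import, quarantining anything that fails.

```python
# Sketch: importing from untrusted media (foreign disk / SD card /
# zipfile / SQLite file). The file's own claims about which contract
# each object satisfies are ignored; we re-validate locally.

CONTRACTS = {
    # contract name -> predicate the object must satisfy
    "note": lambda o: isinstance(o.get("text"), str) and len(o["text"]) < 10_000,
    "counter": lambda o: isinstance(o.get("value"), int) and o["value"] >= 0,
}

def import_foreign(records):
    """records: iterable of (claimed_contract, obj) pairs from untrusted media."""
    accepted, quarantined = [], []
    for claimed, obj in records:
        check = CONTRACTS.get(claimed)
        if check and check(obj):   # our check, not the foreign file's word
            accepted.append((claimed, obj))
        else:
            quarantined.append((claimed, obj))
    return accepted, quarantined

ok, bad = import_foreign([
    ("note", {"text": "hi"}),
    ("counter", {"value": -5}),    # claims the contract but violates it
])
```

The cost is that validation is O(data) on every import, which is why people reach for cryptographic attestation from a *trusted* signer instead; but for adversarial media, re-checking is the only thing that's actually safe.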
Interesting thread, bookmarked.
I don't understand exactly what you are trying to do, but:
1. We've used sqlite a lot and it has been pretty bulletproof.
2. Depending on what you want to validate, some structure to the data (which a relational database gives you) can help.
3. It's not the only way to structure the data, though.
(Or 90% of it anyway, if you do it right)
So you'd want to roll your own that lets you fragment out shared keys and shared values with references to other objects or the same object at different times while being fast to index & very small -- which an RDBMS won't do by itself for objects & can't be reliably made to do.
Naturally this means rewriting the JS implementation entirely too. (And breaking compatibility to add those features you want)
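The "fragment out shared keys and shared values" idea can be sketched as a content-addressed store (all names invented; this is one way to do it, not a description of any existing system): every value is stored once under the hash of its content, and an object is just a map from keys to value-hashes, so two versions of an object share storage for everything that didn't change.

```python
# Sketch: structural sharing via content addressing. Shared values
# collapse to a single stored copy; an object "at different times"
# is different maps over mostly the same hashes.
import hashlib, json

store = {}  # hash -> canonical JSON bytes

def put(value) -> str:
    data = json.dumps(value, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    store[h] = data  # duplicates collapse to one entry
    return h

def put_object(obj: dict) -> str:
    # an object is itself a value: a map from keys to value-hashes
    return put({k: put(v) for k, v in obj.items()})

a = put_object({"title": "hello", "body": "x" * 1000})
b = put_object({"title": "hello!", "body": "x" * 1000})  # only the title changed
# the big shared body is stored exactly once, under one hash
```

Indexing fast on top of this is the part an RDBMS gives you for free and a hand-rolled store has to earn.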
@enkiv2 @wim_v12e @bhaugen @Sandra @mathew @natecull Yeah I don’t know where this whole notion of SQL being slow and clunky or bloated comes from. SQLite, PostgreSQL, and MySQL/MariaDB have all had engineer-centuries put into them, and they’re rock solid, high-performance databases. They’re incredibly hard to beat. Most “schemaless” databases are extremely immature by comparison.
@bhaugen @wim_v12e @Sandra @mathew @natecull @enkiv2 I should say they’re incredibly hard to beat in any practical application. The NoSQL snake oil salesmen love to show how they beat them on narrow benchmarks or specific applications. But even if you have an application for which some other DB is faster, what about tooling? How do you back up? Can you take a consistent snapshot? How do you restore?
@bhaugen @mathew @natecull @enkiv2 @Sandra @wim_v12e For schema migration, your best bet is usually to stick an application-specific API in front of the database both to centralize all your data access code (including caching) and handle everything related to migration, including double-writing and falling back when a record isn’t in the new table yet.
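The double-write/fall-back pattern described above can be sketched like this (table and column names are invented for the example):

```python
# Sketch of the "API in front of the DB" migration pattern: write to
# both old and new tables, read from new, fall back to old when a row
# hasn't been migrated yet.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users_v1 (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

def save_user(uid, name, email=None):
    # double-write during the migration window
    db.execute("INSERT OR REPLACE INTO users_v1 VALUES (?, ?)", (uid, name))
    db.execute("INSERT OR REPLACE INTO users_v2 VALUES (?, ?, ?)", (uid, name, email))

def load_user(uid):
    row = db.execute(
        "SELECT id, name, email FROM users_v2 WHERE id = ?", (uid,)).fetchone()
    if row:
        return {"id": row[0], "name": row[1], "email": row[2]}
    row = db.execute(
        "SELECT id, name FROM users_v1 WHERE id = ?", (uid,)).fetchone()
    if row:  # not migrated yet: fall back to the old shape
        return {"id": row[0], "name": row[1], "email": None}
    return None

db.execute("INSERT INTO users_v1 VALUES (1, 'legacy')")  # pre-migration row
save_user(2, "newcomer", "n@example.com")                # written to both
```

A background job then backfills `users_v1` rows into `users_v2` until the fallback branch goes cold and the old table can be dropped.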
Right, but if you are storing arbitrary objects, you literally don't have a schema. (Or, your "schema" is so granular that the DB is useless.)
@wim_v12e @Sandra @bhaugen @mathew @enkiv2 @natecull Then the question becomes what it means to store “arbitrary objects”. Why are you doing it and what do you expect the database to do with their structure? Do you expect indexed search on every field an object might have? What do you plan to do with that?
OpenStreetMap’s tags are kinda like this, but they work fine with SQL.
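For the curious, OSM-style free-form tags really do fit a plain relational table: one row per (element, key, value), indexed on key. A minimal sketch (schema invented for the example):

```python
# Sketch: arbitrary key/value tags in SQL, OSM-style. No schema change
# is ever needed to add a new tag key.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tags (
    element_id INTEGER, key TEXT, value TEXT,
    PRIMARY KEY (element_id, key))""")
db.execute("CREATE INDEX tags_by_key ON tags (key, value)")

db.executemany("INSERT INTO tags VALUES (?, ?, ?)", [
    (1, "amenity", "cafe"),
    (1, "name", "Corner Cafe"),
    (2, "amenity", "library"),
])

# "find every element tagged amenity=cafe" is one indexed lookup
cafes = db.execute(
    "SELECT element_id FROM tags WHERE key = ? AND value = ?",
    ("amenity", "cafe")).fetchall()
```

The trade-off is that reassembling a whole object means one row per field, which is exactly the "too granular to be useful" worry raised above.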
@freakazoid virtually every “nosql” database I’ve used in a production environment winds-up reimplementing SQL & RDBMS features*, and in all cases the results are worse than a mature RDBMS.
That’s not to say I don’t love k:v stores, object warehouses, etc. these things are awesome when used for the right applications.
Where it goes to shit is when something needs a database and the implementers choose NoSQL not because they make a conscious compromise on features, but because they don't understand the problem sufficiently (or just don't want to) to design the database properly from the beginning.
(* Redis may be the only exception to this, which is why it’s my favorite.)
You have to remember that originally it was The Company Database which ran on The Company Mainframe. At google scale that didn't work, hence NoSQL.
But since then this whole ecosystem has grown up around selling developers this dream, while in fact the value they're providing is making it so app devs don't need to learn SQL.
"I don’t know where this whole notion of SQL being slow and clunky or bloated comes from."
By "chunky" I don't *necessarily* mean "slow" - but rather that SQL is an extremely high-ceremony wrapper around "creating or looking up an object", if you wanted to use it to, e.g., persist in-memory objects of unknown shape.
In Lisp, an object creation is CONS and an object lookup is CAR/CDR.
SQL is *quite a bit* chunkier than that, no?
Also, there's no direct connection (except in some exotic very platform-specific frameworks, like I think .NET has) between defining a type in a programming language and defining a schema in a SQL database.
Despite types and schemas both being ways of describing the "shape" of objects.
It's this amount of high ceremony, where you have to commit to a schema before you store data, that makes SQL not a good fit for exploratory programming.
IF you've got a LARGE AMOUNT of EXTREMELY REGULARLY SHAPED objects, and you need the accesses to be fast, and you aren't going to be moving them between systems very much, and you're comfortable with there being quite an involved schema-definition / table-creation process that might not be very easy to just pick up and move elsewhere...
... then sure, SQL, okay.
I *guess* you could maybe use it for bulk save/load of in-RAM objects?
But (with the possible exception of SQLite) I kinda distrust SQL databases in the "app running on my personal desktop/phone" sense, because a facility I REALLY REALLY want there is the ability to, in an emergency situation and under time pressure, just be able to instantly copy off "my data" to another machine and bam, instantly be up and running, zero data loss.
Migrating data with zero loss is traditionally quite dangerous in SQL.
A social network for the 19A0s.