One tool that I have played with in the past is Kiwix.
Kiwix is an offline wikipedia reader, and it's pretty useful, especially when paired with Kiwix serve.
It seems like you should be able to extend it into a full-fledged LAN-to-internet bridge, leveraging RSS support and maybe a couple of hacky hotsync jobs to download content when you have a network connection and then sync it with your LAN machine when you're back on your LAN.
This doesn't work, though.
It has a few pretty severe limitations that relegate it to the world of Just Wikipedia rather than anything more generally useful for me, and I always forget about that until I try to build my own content for it.
Basically, it uses the ZIM file format, maintained by the openZIM project. It's really not great at serving content from multiple websites at one time (I can make it work, but it's needlessly complicated), and building content for it is an absolute nightmare.
So I'm looking for something better.
I want to cobble together a LAN internet in a box.
Something like the piratebox project, but that enables me to sync in content from outside my network.
I have some options.
Dat/Beaker always looks like it would be a solution to this problem, but it's Node-based, and the last time I used it the command-line functionality was limited. I'll re-investigate it.
More importantly, Dat only solves the *sync*, not the capture or the serve.
Which brings me to the reason I started this thread in the first place:
What are your favorite LAN-first or offline first applications for desktops, servers, and mobile devices?
I would prefer to hear about applications with which you have firsthand experience, but I also wanna know about cool stuff that you've just heard about in passing.
Assume storage is free, assume local bandwidth is very high, assume that internet access is unavailable most of the time, but occasionally Very Fast.
Looks like the pirate box project is officially dead.
LibraryBox is still around, but hasn't seen a software update since 2015.
http://nethood.org/links/ has some info on software services, but they're mostly ... I mean, just not useful for my needs?
This article https://changelog.complete.org/archives/10219-a-simple-delay-tolerant-offline-capable-mesh-network-with-syncthing-optional-nncp proposes using syncthing + nncp + some custom scripts to do more or less what I'm imagining.
Maybe I'll try that this week.
That linked article (by @jgoerzen) lays out some strong methods for using syncthing and NNCP to do a lot of the things that I've done in other, worse ways in the past.
It makes the case for a solid foundation for a delay tolerant store and forward network, and I think I can use those building blocks to build one for myself.
It does not appear that anyone has defined the userland layer for this delay tolerant network in a way that supports interoperability.
There are a lot of features exposed by these tools, and these features can be composed to do exactly what I want (this is good) but it will be an entirely bespoke solution (this is less good!)
I think this basic premise could be made useful and viable for ... everyone
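A minimal sketch of how those building blocks might compose: a local outbox directory (synced however you like) gets flushed into NNCP's queue whenever there's a connection. The node name `lanbox` and the outbox layout are assumptions for illustration, not anything from the article.

```python
# Sketch: queue every file dropped in a local outbox for NNCP delivery.
# "lanbox" is a hypothetical, already-configured NNCP node name.
import subprocess
from pathlib import Path

def nncp_send_cmd(path: Path, node: str = "lanbox") -> list[str]:
    """Build the nncp-file invocation for one outgoing file."""
    return ["nncp-file", str(path), f"{node}:"]

def flush_outbox(outbox: Path, runner=subprocess.run) -> int:
    """Queue each file in the outbox for delivery, then remove it."""
    sent = 0
    for f in sorted(outbox.iterdir()):
        if f.is_file():
            runner(nncp_send_cmd(f), check=True)
            f.unlink()
            sent += 1
    return sent
```

The `runner` parameter is just there so the queueing logic can be exercised without NNCP installed; in real use it would shell out to `nncp-file`.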
When I had considered nncp originally, the thing that stopped me from using it was that it expects (demands?) a level of planning. Nodes have to be known. It's all point to point.
I saw this as a drawback at the time, but I can't remember why?
It seems apparent to me in this moment that this is the only way for a network like this to work, and that we would want any new BBS citizens to introduce themselves and be known, rather than to appear (and lurk or troll).
Yeah, both syncthing and nncp have a requirement to explicitly add new machines.
I can think of a few ways that this kind of certificate/authentication management would be super annoying, but that's what support software is for. This is the #mBBS equivalent of a DNS registrar.
Requiring node operators to authenticate their transport peers is probably excessive in a world in which all the transported data is e2ee, but it's also probably fine.
@bremner I'm not familiar with it, but offline search for email seems pretty great.
I'll definitely take a closer look.
@ajroach42 Full disclosure: I'm chief cat-herder for the notmuch project. "mu" is another similar one.
@ajroach42 afaik SSB still has the fatal flaw of requiring bug-for-bug identical JSON serialization to Node.js; very frustrating that this wasn't addressed early on in its life, because it could be so promising if it were a protocol with multiple feasible implementations.
@ajroach42 IIRC dat is just an application that's built on the SSB protocol; and the node problem is a protocol-level flaw, so I would figure yes
@ajroach42 I was getting dat confused with beaker. looking at it again, those two don't seem to be interoperable with SSB/patchwork?
@technomancy I don't think they're actually interoperable, no.
SSB was based on some ideas from Dat, IIRC, but I think they both have the "implementation depends on undocumented bugs" problem.
@ajroach42 what makes this even more frustrating is how obvious this problem should have been up front. bittorrent designed an entire encoding format specifically to get around this problem; did they seriously design a whole distributed content-addressed protocol without looking at any prior art? they could have just used bencode; problem solved. ಠ_ಠ
@technomancy @ajroach42 (2) Here was a nice post on the Dat blog showing how the different protocols could interoperate while remaining distinct: https://blog.datproject.org/three-protocols-and-a-future-of-the-decentralized-internet/
@ajroach42 Thanks to the weird Mastodon threading, I didn't see you'd mentioned this until after I linked you to it. Hah. I do have email and backups flowing across it.
@jgoerzen I appreciate you sharing, and the detailed answer. You've got some great tips in there.
I have traditionally approached this problem from another direction entirely (that is to say, how to link up a bunch of servers with no dedicated network infra between them) rather than focusing on specific usecases.
I think a combination of syncthing and NNCP will get me to where I want to be, and even allow me to do the #mBBS thing the way I want to do it, instead of the bad way we do it now.
@ajroach42 Followed the #mBBS hashtag to some posts on your blog. You and I are definitely interested in some similar things. (I ran a #BBS back in the day, on both #FidoNet and #UUCP, so this stuff all rings true). If you ever used UUCP, NNCP is basically modern UUCP... or sort of UUCP+ssh+tor in a way. It's the ssh of async, a generic building block. Heck, you can nncp-exec to sh if you really want, maybe with a little wrapper to send you back your results.
@ajroach42 Python has much better support for crypto and for building large-scale applications. PHP's crypto APIs are idiotic, for example leading to Nextcloud "accidentally" base64ing all your files if you use server-side crypto. They couldn't even get MD5 right the first time around.
@ajroach42 I don't really have a good idea of how you pick what kind of data to send around in an amorphous network like that. With Usenet and FidoNet you subscribe to newsgroups/SIGs, but the connections are still static, so you can make sure you connect to one or more nodes that carry the groups you want. It seems like your best bet with the network you're talking about would be to flood everything to everyone.
@ajroach42 If it's generally the same group of nodes that talk to one another, I guess you could have nodes ask each other about specific topics, and then the node could be configured to either automatically request topics it's been asked for or to present the list of new topics to the operator and let them subscribe if they want. Or it could act like PIM dense mode and nodes could get every new topic unless/until the operator unsubscribes.
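Those two forwarding policies could be sketched as a per-node rule. Everything here (the class, the mode names) is illustrative scaffolding, not from any real tool:

```python
# Sketch of the two forwarding policies described above.
# "sparse": a peer gets a topic only if it asked for it.
# "dense":  a peer gets every topic unless it explicitly opted out,
#           like PIM dense mode's flood-and-prune behavior.
class Node:
    def __init__(self, mode: str = "sparse"):
        self.mode = mode
        self.subscribed: set[str] = set()  # sparse-mode requests
        self.pruned: set[str] = set()      # dense-mode opt-outs

    def subscribe(self, topic: str):
        self.subscribed.add(topic)

    def prune(self, topic: str):
        self.pruned.add(topic)

    def wants(self, topic: str) -> bool:
        """Should a peer forward this topic to us?"""
        if self.mode == "dense":
            return topic not in self.pruned
        return topic in self.subscribed
```

A forwarding node would then consult `peer.wants(topic)` before relaying each new topic.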
@ajroach42 PHP is fine; once it works it will be relatively simple to port to other stacks if appropriate and desired
@ajroach42 (because this is one of the edges where ESP32 and other IoT type devices start to have interesting potential too)
@jgoerzen I did a lot of this groundwork five years ago, so right now I'm just revisiting old ideas and concepts and seeing what I can simplify.
@ajroach42 You're right. Let me add a little bit of detail. NNCP requires that a node knows about the peers it directly communicates with, and, if it originates a packet, every node on the chain to the destination. The intermediate nodes don't have to know about either the source or the destination, only about their own peers. So a UUCPnet-style network is indeed possible (with UUCP, you had to know node names; with NNCP, you have to know their public keys). Application-level relaying can remove even more of these requirements.
@ajroach42 I should also clarify that nncp-caller/nncp-daemon authenticate peers. But nncp-xfer and nncp-bundle don't (and can't, since they don't have a live connection to them), relying on E2EE as you say.
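One way to picture that routing rule is as nested envelopes: the source wraps the payload once per hop, and each relay can only peel its own layer, learning nothing but the next hop. This is a toy model with plain dicts standing in for NNCP's real per-node encryption:

```python
# Toy model of source-routed relaying. Each "onion" layer names only
# the next hop; intermediates never see source or final destination.
# Dicts stand in for real encryption -- illustration only.
def wrap(payload: str, route: list[str]) -> dict:
    """The source builds one layer per hop, innermost layer last."""
    packet = {"deliver_to": route[-1], "payload": payload}
    for hop in reversed(route[:-1]):
        packet = {"deliver_to": hop, "payload": packet}
    return packet

def relay(packet: dict):
    """A node peels one layer: next hop, plus an (opaque) inner blob."""
    return packet["deliver_to"], packet["payload"]
```

With real crypto, each inner blob would be ciphertext readable only by the named hop, which is what lets intermediates stay ignorant of the endpoints.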
@dheadshot @ajroach42 I tried out #SSB for a while. My observations: 1) Every client is quite buggy. The main desktop one is usable, if quirky; the mobile ones are just too buggy to be useful. 2) The append-only protocol is going to get old real fast. 3) It can achieve offline social media. Though if we take an expansive view, Usenet and mailing lists got there a few decades earlier, albeit with the need for a central server. 4) SSB still needs central serverish things
@soapdog @dheadshot @ajroach42 Hi! My experience was that a pub was necessary to get connected to anyone. I even had to pay for a couple to get invites. The list at https://github.com/ssbc/ssb-server/wiki/Pub-Servers got me started but many there were inoperative.
A pub is just a simple way to get a relationship graph started. You don't need one, but it helps if you're onboarding and don't know anyone.
Most pubs are run by volunteers, and we're encouraging people to migrate to rooms instead of pubs since they provide more controls and don't affect replication.
You might want to check rooms out, this is a nice one:
There is a nice getting started guide at:
Also, just adding a bit more context. What a pub does is provide you with an internet-accessible peer. Since the pub autofollows people, once you add a pub, you and the pub are friends.
SSB by default replicates friends and "friends of friends", so you end up downloading all data from everyone who added the pub as well.
Rooms don't work like that. All they do is provide a tunnel so you can see other people in the room; they don't affect the friendship graph.
there is no blockchain. The data is stored as an append-only log on your machine. Every message is signed so that it can be verified that it comes from who it says it comes from.
There is no proof-of-work, no consensus, no intensive wasteful computation like blockchains, no financial incentives either.
Isn't there "wasteful computation" built into the protocol, where the signed portion of the JSON structure includes the signature key itself, meaning more computation is necessary than should be? Or did I completely misunderstand that? I do feel that requiring Node.js as the only implementation that works with it is an issue...
@ajroach42 All my suggestions would be either ancient UNIX (ed, troff), or 8-bit (Tiny BASIC, Turbo BASIC XL, LDOS, VisiCalc), or Mac (BBEdit, maybe Pages, Numbers, Keynote, GarageBand even as fat as they are). All the mobile stuff I use is Internet-based, even Drafts exists to move text between devices.
LAN software kinda lived and died by Novell back in the day, and now I dunno what you'd use to expose that functionality. TWiki's a good wiki/fileserver on corp intranets.
the inevitable sundog rambling non-answer
this is an interesting problem space to approach from a green field angle as well.
how would you ideally like to interact with this tool or toolset?
like, it'd be cool to have a li'l daemon and associated userland cli app that would take arbitrary URLs and add them to a queue (as complicated as you like - add queueing priority levels, separate queues by tag, whatever). the daemon would have a registered set of fetch handlers it uses to iterate over the queue when network conditions meet certain criteria, and maybe a notification when various queues are processed successfully or have newly available items. maybe also configurable pre-fetching settings on a per-protocol-and-host basis, so one could queue, say, a wikipedia url and it would automatically also fetch any other wikipedia articles referenced by that url, but only one level deep.
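the core of that daemon could be sketched as a priority queue plus a per-scheme handler registry (all the names here are hypothetical scaffolding, not an existing tool):

```python
# Sketch: queue URLs while offline, drain them through registered
# per-scheme fetch handlers when the network comes back.
import heapq
from urllib.parse import urlparse

class FetchQueue:
    def __init__(self):
        self._heap = []      # (priority, seq, url); lower = sooner
        self._seq = 0        # tiebreaker preserving enqueue order
        self._handlers = {}  # scheme -> callable(url) -> result

    def register(self, scheme, handler):
        self._handlers[scheme] = handler

    def enqueue(self, url, priority=5):
        heapq.heappush(self._heap, (priority, self._seq, url))
        self._seq += 1

    def drain(self, network_up: bool):
        """Process the queue in priority order; return fetch results."""
        results = []
        while network_up and self._heap:
            _, _, url = heapq.heappop(self._heap)
            handler = self._handlers.get(urlparse(url).scheme)
            if handler:
                results.append(handler(url))
        return results
```

tag-separated queues, pre-fetch depth, and notifications would layer on top of this, but the queue-plus-handlers shape is the skeleton.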
but at the same time, it'd be cool to have a kernel-level shunt that intercepted tcp/ip and turned it into a batch of store-and-forward requests, satisfying the tcp/ip client with some sort of placeholder content that would get refreshed once the fetch had actually had a chance to happen. an invisible conversion to non-realtime networking would be slick as shit if it were done well
anyway, as usual, I have lots more ideas than solutions, but I'll keep my eyes open and maybe go spelunking a bit this week
re: the inevitable sundog rambling non-answer
from a different angle, one could assume that only one solid and well implemented store and forward protocol for requesting and syncing data would be necessary, and all other requests could be handled by gateways into and out of that store and forward protocol.
for instance, email is pretty store and forward resilient by design and implementation. so a web-by-email gateway that took input from emails sent to a specific address (maybe as simply as the body of the email being a curl command) and returned output as web content (warc?) to the requestor would make the web store and forward. could have a similar torrent-by-email gateway, activitypub-by-email gateway, rss-by-email, etc. as a pluggable ecosystem that'd be pretty shiny to a certain set of users...
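the request-parsing half of such a gateway is tiny. a sketch, treating the first line of the mail body as the URL to fetch (header names are standard; the WARC-attachment reply shape is just one possible convention):

```python
# Sketch of a web-by-email gateway's request/reply plumbing:
# the body of an incoming mail is treated as a single URL to fetch,
# and the fetched content comes back as an attachment.
from email import message_from_string
from email.message import EmailMessage

def extract_request(raw_mail: str):
    """Return (reply_to, requested_url) from a raw RFC 822 message."""
    msg = message_from_string(raw_mail)
    url = msg.get_payload().strip().splitlines()[0]
    return msg["From"], url

def build_reply(to_addr: str, url: str, content: bytes) -> EmailMessage:
    """Package fetched content (e.g. a WARC) as the reply message."""
    reply = EmailMessage()
    reply["To"] = to_addr
    reply["Subject"] = f"fetched: {url}"
    reply.set_content("Your page is attached.")
    reply.add_attachment(content, maintype="application",
                         subtype="octet-stream", filename="page.warc")
    return reply
```

the actual fetch (curl, wget, a warc writer) slots in between those two functions, and the same shape generalizes to the torrent-by-email and rss-by-email gateways.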
re: the inevitable sundog rambling non-answer
@djsundog I'm not opposed to kludging something together on top of email or NNTP, but I have to imagine there's a more elegant solution.
A social network for the 19A0s.