The thing that intrigues me the most about this is the concept of delay tolerant networking (in general) and the idea of developing secure, delay tolerant analogs for existing networking bits.

Email and RSS are great examples of the potential for this. We currently do both of these things through a web service, and in the vision of delay tolerant networking presented in the article we could continue to do so, or we could replace the web service with a local application.

(Thread, of course)

Email can already be delay tolerant if we use POP3 and SMTP instead of IMAP.

Blogs, vlogs, podcasts, etc can already be delay tolerant if we use RSS.
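(As a concrete illustration of the RSS case: the whole "fetch while connected, read later" loop fits in a few lines. This is just a sketch; it assumes Python with the third-party feedparser library, and the feed URLs and cache layout are placeholders.)

```python
# Minimal sketch: pull a few RSS feeds while a connection is available and
# store each entry locally so it can be read later, offline.
import json
import pathlib

import feedparser  # third-party; pip install feedparser

FEEDS = ["https://example.com/feed.xml"]  # placeholder feed list
CACHE = pathlib.Path("feed-cache")
CACHE.mkdir(exist_ok=True)

for url in FEEDS:
    parsed = feedparser.parse(url)
    for n, entry in enumerate(parsed.entries):
        # Store each entry as a small JSON file; re-running just tops up the cache.
        item = {
            "feed": url,
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "summary": entry.get("summary", ""),
        }
        out = CACHE / f"{abs(hash(url))}-{n}.json"
        out.write_text(json.dumps(item, indent=2))
```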

Message boards can already be delay tolerant, using NNTP/usenet protocols.

Social media has a few delay tolerant options, including SSB (Secure Scuttlebutt). More are possible, if we build up to them.

That leaves us with a few of the more interactive tasks that are far less delay tolerant.

Media streaming (aside from, as mentioned earlier, podcasting) tends to be an on-the-fly activity. If we designed our systems to do so, there's no reason (other than copyright, I suppose) that streamed media couldn't be stored locally so long as there's room (Google Play Music does this to great effect).

Once stored locally, that same media could be shared with others within range (Dat or peertube style)

This kind of thing isn't exactly suited for 100% asynchronous networks, but if you're in a situation with intermittent connectivity it works well.

Research is another activity that functions better with a synchronous connection.

But it's not entirely impossible with an asynchronous connection, depending on how you go about it.

For a while, when I had a portable computing device but not mobile internet (a Palm Pilot), I used a piece of software that would automatically fetch web pages, format them for the Palm, and sync them. I set it to fetch every 1st and 2nd degree link from a page (so every link on that page, and every link on those pages), with some other tolerances to keep the resultant bundle of files from getting too huge (I only had 1GB of storage available for web pages, after all).

The software would strip out the images (unless I told it not to) and reformat the page for the smaller screen of the Palm.

I could set the system up so that pages I wanted to visit that weren't available would be requested the next time I connected.
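(Something similar could be thrown together today. Here's a rough sketch of the same idea: fetch a page, every link on it, and every link on those pages, strip the images, and save it all for offline reading. It assumes Python with the requests and BeautifulSoup libraries; the start URL and size cap are placeholders, not a reconstruction of the original Palm software.)

```python
# Rough sketch of a two-level offline crawl: the start page, its links,
# and the links on those pages, with images stripped and a size budget.
import pathlib
from urllib.parse import urljoin

import requests                     # third-party
from bs4 import BeautifulSoup       # third-party (beautifulsoup4)

START = "https://example.com/index.html"   # placeholder start page
OUT = pathlib.Path("offline-bundle")
OUT.mkdir(exist_ok=True)
MAX_BYTES = 1_000_000_000                  # rough cap, like the old 1GB card
seen = set()

def save(url, depth, budget):
    if url in seen or depth > 2 or budget[0] <= 0:
        return
    seen.add(url)
    try:
        resp = requests.get(url, timeout=10)
    except requests.RequestException:
        return
    soup = BeautifulSoup(resp.text, "html.parser")
    for img in soup.find_all("img"):        # strip images, like the Palm tool did
        img.decompose()
    text = soup.prettify()
    budget[0] -= len(text)
    (OUT / (str(abs(hash(url))) + ".html")).write_text(text)
    for a in soup.find_all("a", href=True):
        save(urljoin(url, a["href"]), depth + 1, budget)

save(START, 0, [MAX_BYTES])
```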

While it was primitive by modern standards, and occasionally frustrating, it was actually pretty damn functional. I used this as my primary means of web browsing throughout high school and well into college.

Now, this makes browsing very difficult, unless the things you're browsing are well indexed, with good metadata.

In my case, most of what I was browsing was from Archive.org or Project Gutenberg, so I was fortunate enough to be able to sync a single page that was a large index of content, and then to queue up all the content I wanted to request.

This also wasn't mission critical for me. I did research as entertainment, so the delay did not cause problems.

So, I guess what I'm saying is that many internet activities were originally designed to work asynchronously, and many more can be made to work asynchronously.

If we make delay tolerant networking a priority, the end result could probably be pretty fucking useful, especially as traditional online spaces get less trustworthy.

So my ideal implementation of something like the data mules (I hate this phrase, I'm going to say data trafficker, even though that's not much better) described in the article would look something like this:

1 - In a high traffic, relatively central area there is a wireless node that can be connected to from PCs, and that will automatically connect to passing data traffickers.

2 - This node serves as a local intranet for anyone in range (via long range wifi or local wifi) and works like an old school BBS. (ajroach42.com/a-modern-bbs/)

2.5 - It's almost certainly powered by something like Dat or IPFS (ajroach42.com/steps-towards-a-) and probably gives users a local home page to serve as a jumping off point, as well as having an option for local real-time services and storing asynchronous requests to pass along to the data trafficker when the time comes.

3 - If I make a request for a piece of content that isn't stored locally, that request gets forwarded to the data trafficker when they are in range. If they have the data, they send it. If they don't, they store the request until they are in range of an internet connection or a connection to a larger network.

4 - Until a request is fulfilled, it's sent out to everyone that passes through. When/if someone passes back through with the data, the request is cleared, and that data is stored locally until the original requester picks it back up (there's a rough sketch of this bookkeeping after the list).

5 - Since most of what we are doing is being done on local machines with fault tolerant applications, the nodes don't have to be particularly high powered, which means they can survive on alternative power sources, and it's acceptable if they are not always available (as a result of weather, or general impermanence, or whatever.)

6 - Whenever possible, the data we're passing around is cryptographically signed, so that we can be reasonably certain it hasn't been tampered with (this is why I like Dat or IPFS).

7 - Any machine participating in the network can be/is a gateway node, and a data trafficker.
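(To make steps 3 and 4 a little more concrete, here's a minimal sketch of the gateway-side bookkeeping: serve what's cached, queue what isn't, hand the open requests to every passing trafficker, and clear them when someone comes back with the data. The names and on-disk layout are illustrative assumptions, not an existing protocol.)

```python
# Minimal sketch of a gateway node's request queue for steps 3 and 4.
import pathlib
import time

class Gateway:
    def __init__(self, store_dir="store"):
        self.store = pathlib.Path(store_dir)
        self.store.mkdir(exist_ok=True)
        self.pending = {}                      # content_id -> time of first request

    def request(self, content_id):
        """A local user asks for something; serve it from cache or queue it."""
        path = self.store / content_id
        if path.exists():
            return path.read_bytes()
        self.pending.setdefault(content_id, time.time())
        return None                            # caller checks back later

    def outgoing_manifest(self):
        """Handed to every data trafficker that passes through (step 4)."""
        return sorted(self.pending)

    def receive(self, content_id, data):
        """A trafficker came back with the data: store it, clear the request."""
        (self.store / content_id).write_bytes(data)
        self.pending.pop(content_id, None)
```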

The biggest upshot of doing things this way, IMO, is that it eliminates the problem of discovery.

That is to say, in a synchronous system you can quickly look for a thing and, if you don't find it, modify your parameters through several quick permutations until you do.

Having these gateway nodes with relatively large caches of local content can eliminate the hassle of dealing with the *very* high latency of your queries, by providing indexes of known (even if not currently available) content, and by providing faster feedback to initial queries.

Executed correctly, and carefully, this kind of not-really-the-internet asynchronous network could provide a viable alternative to traditional connectivity in rural areas and poor communities (that is to say that I fully expect the system I described here to be more useful and much cheaper to implement and maintain than traditional internet in rural parts of, for example, the south-eastern US where I live.)

@ajroach42 Not sure if you're aware, but there is a whole high-latency ham radio hardware project: FaradayRF. They're implementing parts of your idea. Might be a cool starting point if you haven't seen them before?

@Elucidating I'm not familiar, no.

I was under the impression that any kind of crypto over ham radio was strictly illegal in the US, and without at least cryptographic signatures, I'm not sure I could trust the system.

@ajroach42 No one actually listens to that. Your body is almost certainly saturated with encrypted waves.

The FCC doesn't even have a dozen people there.

And I agree that the protocol level should be where encryption lives. You're allowed to do signatures (as: "checksums") and that's all you need to ensure you know which node you're talking to.

@ajroach42 Anyways: yes. Delay tolerant networking with ham radio as a carrier is an arduino-like product you can get today. It's cheap, the specs are open, the people seem cool, and the battery life is pretty good.

@Elucidating Neat. I'd still need a HAM license though, right?

@ajroach42 @Elucidating Yes, you would.

And you'd be required to transmit your callsign, which is tied to a public listing of your name and mailing address in the FCC's Universal Licensing System database. (The mailing address need only be one that you can receive mail at, however.)

Zero anonymity on amateur radio, mandatory legal name policy by law.

@bhtooefr @ajroach42 @Elucidating As an amateur radio operator I can tell you that nobody who's part of the mainstream amateur community ignores the restrictions on encryption, and many of them will report those who do.

If you're going to ignore part 97 anyway you're almost certainly better off *not* getting an amateur radio license, since you are probably violating more laws by doing it as a licensed amateur than as a random member of the public.

@seanl @ajroach42 @Elucidating Yeah, even if the FCC isn't heavily staffed, they don't need many staff.

There are a lot of people who treat amateur radio as a homeowner's association, and they take it on themselves to hound the FCC until they deal with a perceived violation. Actually transmitting encryption on amateur radio is one violation that the FCC could actually care about.

@seanl @ajroach42 @Elucidating Ultimately, the better bet is to do this kind of thing in Part 15 territory, between the licensing requirement (I mean, a Technician license isn't hard to get), the content restrictions, and the crypto restrictions.

Or, licensed commercial bands (things like the 3.65 GHz 802.11y band), for higher power operation. Those can carry basically whatever you want legally.

@bhtooefr @ajroach42 @Elucidating If licensing were the only problem with amateur radio we could just get node operators who run long distance links to get licenses and then use wifi for the last 100 meters. But the crypto & content restrictions have basically made amateur radio irrelevant for all but pie-in-the-sky experimentation & talking about your radio unfortunately.

Plus, with part 15 you don't have busybody hams reporting you to the FCC, since that's how part 97 tends to be enforced.

@seanl @Elucidating @bhtooefr

That was basically the conclusion I came to years ago when I first started working on this.

@ajroach42 @bhtooefr @Elucidating I was just reading the FaradayRF page and it looks like while it's intended for amateur radio it can probably be used under part 15. Part 15 technically requires type certification but they've been pretty lax about that, and that applies to manufacturers anyway. I think the reason they're selling this under part 97 is because it's easily modifiable by the user.

@ajroach42 @bhtooefr It's not clear to me what @Elucidating meant by "high latency" though. I can't find anything about that on the site.

@ajroach42 @bhtooefr @Elucidating 915 MHz is shared between part 15 and part 97 users, so I have no idea how anyone would even know you were using it unless it's using a recognizable protocol or you were interfering with someone.

915 MHz can also tolerate not having line of sight better than 2.4 or 5.8 GHz. It was used for the link between Ricochet modems and the base stations, while they used 2.4 GHz with line-of-sight for the uplink from the base stations.

@ajroach42 @bhtooefr @Elucidating That's also the band used by the GoTenna and similar products to get reasonable range without needing line of sight. Line of sight is still better because you're relying on the waves bending around or reflecting off of objects rather than penetrating them, of course.

@seanl @Elucidating @bhtooefr I’ve looked at some other ~900 MHz radios in the past, but I’ve been scared off by the amount of work that would be left for me to do on implementing the stack (balanced against my available free time), and the relatively low bandwidth of the radios.

This, or something like it, would work very well for sending text over reasonably long distances, but we’d still need the sneakernet/data traffickers.

@seanl @ajroach42 @bhtooefr they're working on a software package that isn't TCP/IP that can support infinite latency and forwarded messages.

This is a pretty decent idea. It's not like folks are getting a lot out of conventional network protocols over the air anyways.

@Elucidating @seanl @ajroach42 I mean, there's existing protocols that do this, too - the MBL/RLI and FBB protocols are store-and-forward protocols used pretty widely for packet BBSes and for e-mail. They're typically carried over 300-1200 bps AX.25, although 9600 bps links can be done with good equipment (emphasis on good).

@seanl @Elucidating @bhtooefr Yeah, I'm pretty uninterested in pissing off the FCC, so I'll likely continue to avoid amateur radio for my purposes.

@ajroach42 @Elucidating Messages encoded for the purposes of obscuring their meaning are prohibited (with certain exceptions, IIRC spacecraft related).

Cryptographic signatures are not meant to obscure meaning, and therefore are legal.

@ajroach42 @Elucidating That said, amateur radio has other content restrictions: law.cornell.edu/cfr/text/47/97

The big ones that would make it awkward for current Internet usage models are commercial communications, obscene/indecent communications (AFAIK that may be unconstitutional), and music.

@ajroach42 @Elucidating It is. At the very least, it'll cause a turkey hunt. Same with binary protocols that cannot be identified as documented.

Years ago, we asked the ARRL about cryptographic signatures on broadcasts, and they said those fell into the "Hell, no" category as well.

@drwho @ajroach42 @Elucidating Well, /broadcasts/ fall under the "hell no" category, almost always.

Cryptographic signatures on transmissions, though? The ARRL isn't the ultimate authority on amateur radio. There's plenty of groups doing signatures over amateur bands publicly, and I've never heard of anyone getting in trouble for it.

@bhtooefr @ajroach42 @Elucidating I think what happened was that we specifically asked about it, after asking some fairly detailed questions about use of encryption over packet radio, and they basically had enough of us pestering them. We spoke to our lawyer about it and she advised us to let the matter drop.

@drwho @ajroach42 @Elucidating One thing is that general best practice appears to be, don't ask the FCC for a ruling on whether something's OK, because you might get a bad ruling.

Maybe your lawyer was trying to avoid that scenario.

@drwho @bhtooefr @ajroach42 @Elucidating

over here (UK) encryption on ham bands is only allowed if assisting Emergency Services.

it is allowed on commercial PMR systems (I have a personal license for these and look after two more for work) but these have limited range and bandwidth. Long range wifi links/community wifi are allowed here and only lightly regulated.

For USA I remember reading about a unique allocation (not amateur) somewhere in the GHz range, another possibility maybe?

@vfrmedia @bhtooefr @ajroach42 @Elucidating The only thing that comes to mind at the moment is WiFi in the US.

@drwho @bhtooefr
@Elucidating

alas I cannot find the PDF; it was on some random electronics link on the Internet Archive, maybe even something @ajroach42 linked to a while back, possibly a section of L-band, but it might be old info, and since then the FCC may have sold off commercially what bits aren't already used by the Pentagon (and the company which was /supposed/ to use it for some "medical telemetry" went bust anyway...)

@vertigo @drwho @vfrmedia @ajroach42 @Elucidating FRS (and GMRS) is not in the GHz range, and has heavily restricted emission types though.

@bhtooefr @vertigo @drwho @ajroach42 @Elucidating

The allocation was way higher up the band than FRS, possibly in the L-band.

I was looking through all FCC bandplans at the weekend and couldn't find it again, I vaguely remember the info I saw about it being from late 1990s/early 2000s.

I suspect it was quietly sold off to commercial operators around the time USA telly went digital (and may be planned for use for internet in rural areas, as an alternative to satellite (which is laggy!))

@vfrmedia @drwho @bhtooefr @ajroach42 @Elucidating There are several ISM bands above 1 GHz, but the higher you go the more line-of-sight the links become. There are also ranges where you have to deal with water and molecular oxygen absorption. I'm not sure anything other than 900 MHz, 2.4 GHz, and 5.8 GHz are worth considering for RF links. Free space optical links are an interesting option for short range (a few km) and high speed.

@vfrmedia @drwho @bhtooefr @ajroach42 @Elucidating Oh, I forgot 433 MHz. You can use the LoRa radios on 433 and 915 MHz. They can be set for up to 300 kbps but I don't know what kind of link margin you'd need for that. adafruit.com/product/3073

@seanl @Elucidating @bhtooefr @vfrmedia @drwho

Have any of you done anything with XBee/ZigBee and the Raspberry Pi?

Or, better, do you know any XBee/ZigBee folks that might be able to give me some pointers?

@seanl @Elucidating @bhtooefr @drwho @vfrmedia I’ve looked at LoRa and XBee in the past. Probably going to buy some of each after the move.

Where our new home is, the only options for traditional connectivity are satellite (high latency, low bandwidth, expensive) and cellular (low latency, high bandwidth, intermittent connectivity due to signal, data caps, very expensive.)

I will personally be installing a cell extender and getting my internet through the Calyx Institute's cellular connection, but that's not an option for everyone, and it's still pretty pricey.

This kind of system, if we can support it with the right software, has a real potential to revolutionize and revitalize some communities.

And I, for one, couldn't be more excited.

@ajroach42 For someone who doesn't need a range extender it's pretty cheap compared to any other data plan I know of. $42/mo?

@ajroach42 And that includes DSL, though I have never tried to measure the bandwidth of mine. Speeds seem pretty competitive at least in the SF Bay Area though.

@seanl As far as I know, they don't offer a monthly package? At least all I could find online was something like $500 for the first year, and $400 for each subsequent year.

It's super reasonable compared to traditional cellular, and when you break it down into a monthly rate it's way less than I currently pay for internet.

When I say that it's still pretty pricey I'm coming from the perspective of a town with a median household income of $25k.

@Shamar It seems like this conversation is mostly just people trading petty jabs over nothing.

I have no real desire to read pages of that.

:-\

@ajroach42 well, maybe... the point was exploring answers to this question: can we build a better internet over a partition tolerant protocol? And what would such a protocol look like? What's the prior art?

It's pretty similar to what you call asynchronous don't you think?

Indeed while reading your thread I was surprised by the similarities...

@Shamar I’ll give it another shot, maybe it gets more interesting further down the thread. Most of the stuff at the top just seemed like people fighting, and I don’t really want to get in the middle of someone else’s arguments.

So, how do we get started on a delay tolerant network?

Do we start with hardware or software? What existing software can be used or repurposed? What software needs to be abandoned and replaced?

What software solutions already exist for Store and Forward? UUCP? NNTP? BBS?

What hardware already exists? How can we use existing hardware? Wifi? Xbee? LoRa?

@ajroach42 the interplanetary internet is real hardware & designed to be delay tolerant.

@ajroach42 Hm, to hear Vint Cerf talk about it is to hear that it's a lot further along than Wikipedia describes it as. What the...

@ajroach42 Well, back in the day FIDONET was extremely delay tolerant; it had to be.

en.wikipedia.org/wiki/FidoNet

Study that.

Also, NASA's DSN is pretty darn cool, too: en.wikipedia.org/wiki/NASA_Dee

@profoundlynerdy FidoNet is a large part of my inspiration.

@ajroach42 Epidemic gossip protocols are a good choice for this kind of network

@ajroach42 these are protocols that communicate by exchanging information with your "neighbors", a subset of the network. Secure Scuttlebutt uses one as its basis.
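(A toy version of the idea, to show how little machinery it takes: every round, each node swaps its set of known message IDs with one random neighbor, and new items spread across the network with no central coordination. This is only an illustration; real systems like SSB exchange signed append-only logs, not bare sets of IDs.)

```python
# Toy epidemic gossip: nodes merge their knowledge with one random neighbor per round.
import random

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.known = set()

    def gossip_round(self):
        if not self.neighbors:
            return
        peer = random.choice(self.neighbors)
        merged = self.known | peer.known       # exchange and merge both ways
        self.known = merged
        peer.known = merged

# Tiny line topology: a <-> b <-> c
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.known.add("post-1")

for _ in range(5):                             # a few rounds is plenty here
    for node in (a, b, c):
        node.gossip_round()

print(c.known)                                 # "post-1" has reached c
```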

@meff Ah, I imagine something similar is at the core of Dat, but I haven't looked that closely.

@ajroach42 IIRC Dat uses what IPFS uses, which is a DHT. Peers request pieces on this DHT to fill in the file that they request

So, if you were going to design your own Delay Tolerant/asynchronous/store-and-forward network application, what would it be?

@ajroach42 not really an answer to your question, but have you seen nncpgo?

@ajroach42 "NNCP (Node to Node copy) is a collection of utilities simplifying secure store-and-forward files and mail exchanging."
www.nncpgo.org

@ajroach42 Music. Pandora has an offline mode that I've never found to be reliable. If I had intermittent internet, I'd want batches of fresh music. It's a relatively easy lift, but important. There could be some interesting licensing issues.

@ajroach42 maybe this will be how the rise of the high latency mix net is demanded by the every day user..... :D

@ajroach42 I think IPFS will be a useful tool, as it'll minimize how far out on the Internet you need to go in order to fetch a webpage.

@alcinnz I'm expecting to use Dat for that, rather than IPFS, but I could be persuaded otherwise.

@emsenn @alcinnz I'm far from an expert, but I'll try?

Dat tracks changes in files, so it can be used as part of a version control system. Syncing Dat is faster than syncing IPFS. Dat is specifically designed to do sync, where IPFS isn't.

IPFS must use the IPFS network. Dat doesn't care what network stack it uses.

@alcinnz @emsenn I go into some detail about how I want to use Dat here. ajroach42.com/steps-towards-a-

If you're familiar with IPFS, you'll probably see some of the key differences illustrated there. I'm less familiar with IPFS than I am with Dat, because my initial research made Dat appear to be the better choice.

The JS/Electron-only aspect is worrisome, but I hope that's temporary.

@ajroach42 @emsenn The first point is pretty much correct (if it mattered you could make IPFS sync your files just as efficiently, but Dat requires data to be structured for fast sync).

But I do want to correct the latter point. IPFS does a lot of work to be independent of the underlying networking and hashing algorithms.

@ajroach42 @emsenn Not that I have any allegiance to IPFS.

If anything, what I want is for there to be a bridge between IPFS and Dat so this choice becomes irrelevant.

@ajroach42 it's always been a thing, it's just a matter of what kind of delays you want to tolerate.

FPS games tolerate delays of a few hundred ms max but can recover from larger delays because the context is very short term. But low latency on average is vital or it's worthless.

Email can handle very long delays and an email server can keep trying to send a message for days, but there's no bound on the delay; it's often a few minutes in practice.

This is not a new topic.

@ajroach42 HTTP has headers for cache control. These tell the client and server, "this is how long a delay may be before you should assume something has changed." DNS as well. Anything that deals with caching, which these days is anything big, has either a timeout, a refresh notification of some kind, or some system to turn stale data into fresh data or die trying. Even CPUs, with memory fences and such.
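(For example, a client that remembers ETags and revalidates with If-None-Match turns "is this still fresh?" into one tiny round trip instead of a full re-download. A minimal sketch, assuming Python with the requests library and a placeholder URL:)

```python
# Minimal conditional-request cache: reuse a stored copy when the server says 304.
import requests  # third-party

etags = {}      # url -> last ETag we saw
bodies = {}     # url -> cached body

def fetch(url):
    headers = {}
    if url in etags:
        headers["If-None-Match"] = etags[url]
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:                # unchanged: reuse the cached copy
        return bodies[url]
    if "ETag" in resp.headers:
        etags[url] = resp.headers["ETag"]
    bodies[url] = resp.text
    return resp.text

page = fetch("https://example.com/")           # first call downloads the page
page = fetch("https://example.com/")           # later calls may get a cheap 304
```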

@icefox Sure, but have you tried browsing the web at 56k or 1200 baud recently?

It's essentially impossible to do even basic tasks. This is what I'm talking about.

@ajroach42 There are multiple reasons for this. They come down to how resources are managed but there's several orthogonal resources in play here. Bandwidth, latency, ease of creation and how data is organized for human attention are orthogonal. Optimizing for one tends to have costs for the others, and technology changes fast enough that you have order-of-magnitude differences in relatively short time frames.

@icefox Are you building to a point, or just telling me not to bother?

Because if you're building to a point, I'll wait until you've made it.

But if you're just telling me that this line of reasoning is a waste of time, how about don't do that, and instead do something else?

@ajroach42 I guess the point is that it is not a problem with a single solution.

@ajroach42 You gotta understand what tradeoffs existing systems make and why, then decide what tradeoffs you want to make and figure out how.

@ajroach42 Sorry I didn't really mean to be a grumpy know-it-all. X_x I guess I'm just good at it.

@icefox Yeah, unfortunately that's kind of all I got out of the discussion so far.

I figured that maybe you just didn't read all the thread, and that's why you were making points that had already been made, but that didn't explain the combativeness.

I guess everyone has cranky days. Who am I to judge?

@icefox couldn't agree more.

That's partially because it's not a single problem. It's a collection of problems, and we'll need to solve each problem in multiple ways.

But also because people's needs are different. What I need out of my network may be vastly different than what you need, so the solutions will necessarily be different.

So there's plenty of room for experimentation.

@ajroach42 The dichotomy you consider is not delay, it is remote vs local. There are programs now that can retrieve remote websites. It's just that a lot of websites also assume the existence of other remote services which you cannot easily retrieve (because they're database lookups or something). Indexing is the same; do you retrieve remote sites and index them yourself or let google do it for you?

It's not just a matter of latency because it also becomes a matter of control.

@icefox I agree that this is not a new topic, and I talk about several ways that it has been done in the past over the course of the thread.

Most modern software, even software that should otherwise be very delay tolerant and compatible with the kind of store-and-forward protocols mentioned here, is written with the expectation that it will have an always available internet connection.

This isn't great for situations where an internet connection is intermittent or flaky.

@ajroach42 Most of what you described is like reverse Secure Scuttlebutt -- it's got the same properties (delay tolerant, intermittent WiFi is good enough to exchange packets). As it supports custom applications built on top of it, it might be a good starting point. IIRC everything is Node.js and it's pretty heavy on storage and compute though -- not a lightweight protocol, but literally good enough for people who are sailing to exchange messages when near land WiFi

@insom Yeah, SSB is probably worth exploring in more detail.

Dat is conceptually very similar to SSB, the overlap between the two is why I'm leaning towards Dat over IPFS right now.

But that's just a piece of the puzzle, and a young, cumbersome, and heavy one at that.

Lots of other pieces also need to be solved, and SSB might fit in to the mix somewhere.

@ajroach42 @insom How about ZeroNet?

The internet as we know it is profoundly unsustainable from an energy and resource consumption standpoint.

What's a valid "Plan B" for keeping us connected?

There's also reddit.com/r/DarkNetPlan

@brianpoe @insom I'm not familiar with Zeronet.

What's that?

I saw some references to r/DarkNetPlan in my research on this a few years ago, but I haven't checked in recently.

Have they made any progress?

@brianpoe @insom I’m leery of anything working from cryptocurrency as the base, as a large part of my personal goals involve removing my reliance on the energy grid, and cryptocurrency solutions tend to be power-hungry.

But I’ll explore it in more detail later.

@ajroach42 @insom Strongly agree with seeking a low electrical power requirement.

Perhaps something that could be run on Raspberry Pi, with meshnet, USB drive "sneakernet", and/or serial ham radio interconnectivity.

@brianpoe @insom Ham is off the table because of content restrictions, but everything else you said is pretty much what we’re shooting for.

@ajroach42 @insom @brianpoe You can get online via HAM. It takes work, and you're likely only going to hit 300 baud, but it will work.

There is an entire class A IP block reserved for HAM.

@profoundlynerdy @brianpoe @insom we discussed the various merits of this at several points in the thread yesterday.

I recognize that it is possible. I further recognize that it will not suit my needs due to FCC regulations.

Plain-text-only transmissions, with legitimate restrictions on the contents of your transmissions, make ham a no-go for me.

@ajroach42 One of the hackathon projects I participated in at Facebook was an attempt to send Wikipedia over a one-way, lossy broadcast channel using erasure coding. We were able to get it working pretty well in simulations, though it required tuning the code rate to get it right.

One could use a rateless code like online codes to do this sort of thing without the tuning. They could also replace the "rarest first" part of BitTorrent with "every seed block is useful" en.wikipedia.org/wiki/Online_c
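(A toy version of the rateless idea: the sender keeps emitting random XOR combinations of the source blocks, and the receiver can rebuild the file from any large-enough sample of them, which is what makes every received block useful. This sketch uses a deliberately naive degree distribution; real online/LT codes tune it carefully.)

```python
# Toy rateless ("fountain") code: random XOR combinations plus a peeling decoder.
import random

def encode_symbol(blocks):
    """Emit one encoded symbol: a random subset of source blocks, XORed together."""
    k = len(blocks)
    degree = random.randint(1, max(1, k // 2))          # naive degree distribution
    idxs = frozenset(random.sample(range(k), degree))
    payload = bytes(len(blocks[0]))                     # all-zero starting point
    for i in idxs:
        payload = bytes(a ^ b for a, b in zip(payload, blocks[i]))
    return idxs, payload

def decode(symbols, k):
    """Peeling decoder: solve any symbol with exactly one unknown block, repeat."""
    recovered = [None] * k
    progress = True
    while progress:
        progress = False
        for idxs, payload in symbols:
            unknown = [i for i in idxs if recovered[i] is None]
            if len(unknown) == 1:
                target = unknown[0]
                value = payload
                for j in idxs:
                    if j != target:                     # XOR out the known blocks
                        value = bytes(a ^ b for a, b in zip(value, recovered[j]))
                recovered[target] = value
                progress = True
    return recovered

blocks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
symbols = [encode_symbol(blocks) for _ in range(20)]    # any big-enough sample will do
print(decode(symbols, len(blocks)))
```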

@seanl That's really neat.

I was unfamiliar with online codes prior to this, I'll have to do some research.

@ajroach42 Given the huge size of media compared to text, it seems like you could probably mirror a pretty large fraction of the text out there whose license allows you to do that. For media you have on-demand downloads and then just have a list of everything sitting in the local cache, since often people aren't sure what they want anyway and will go for what's already available. This approach works for anything you download on request if you restrict sources and don't worry too much about privacy.

@seanl That's pretty much how I see it working.

I figure that storage is cheap. I can grab a few 4-6TB external drives, and then I could rehost a sizable portion of the text of the internet and still have plenty of room left over for a library of PD and CC media.

@ajroach42 Another area I'd like to explore/see explored is broadcast. WiFi sends broadcast packets once, so it saves bandwidth if there's more than one listener. A lot of the congested links are point-to-point, but from reading that article medium range multipoint links don't seem uncommon. Could do both realtime broadcast for community programming and sending of requested files over PGM so any interested party can grab them.
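(A bare-bones illustration of the broadcast side, using plain IP multicast rather than PGM: one sender, any number of listeners on the same network segment, and the packet only crosses the air once. The group address and port are arbitrary picks.)

```python
# One UDP multicast sender, any number of local listeners; the datagram is sent once.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # arbitrary multicast group and port

def send(message: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(message, (GROUP, PORT))

def listen():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group so the kernel delivers the group's traffic to us.
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    while True:
        data, sender = sock.recvfrom(65535)
        print(sender, data)
```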

@ajroach42 *that* part sounds a lot like git annex

@ajroach42 This is sounding more and more like what we implemented for Project Byzantium.

@drwho Every time I've researched this in the past I've come across references to Byzantium, and I think we may have discussed it some a while ago, but I can't remember many details on the project, and I've never explored it.

@ajroach42 I can answer whatever questions you may have (after a nap - I just got home from the hospital).

@drwho Hope everything is okay.

And if you wanna hit me with the high points, we can go from there?

Isn't there a big risk of filling up with redundant traffic though? Sending out a request hits a power ratio, and multiple responses become more likely the longer the initial wait. Either you then need to send a 'cancel' command or some limit on repeat levels. And difficulty with cancel is hitting same mules, so unlikely. But interesting concept.

@alisonw

The bandwidth of your average point to point wireless connection is pretty high, so I'm not worried about the redundant traffic being a problem from the gateway to the trafficker.

Having multiple traffickers request the same data could result in a bunch of redundant requests, especially in a situation where lots of traffickers only pass through an area occasionally.

But some smart defaults on rate limits and storage periods could minimize that, I'd imagine.
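(Roughly what I mean by smart defaults: a little log of which requests have already gone out, how recently, and when to give up on them. The time limits here are placeholder assumptions to tune per deployment.)

```python
# Sketch of request rate limiting: re-send a pending request at most daily,
# and drop it entirely if nobody has filled it within a month.
import time

REPEAT_AFTER = 60 * 60 * 24        # placeholder: hand a request out at most once a day
GIVE_UP_AFTER = 60 * 60 * 24 * 30  # placeholder: drop requests unfilled after a month

class RequestLog:
    def __init__(self):
        self.entries = {}          # content_id -> (first_seen, last_sent)

    def should_send(self, content_id, now=None):
        now = now if now is not None else time.time()
        first_seen, last_sent = self.entries.get(content_id, (now, 0.0))
        if now - first_seen > GIVE_UP_AFTER:
            self.entries.pop(content_id, None)   # stale: stop forwarding it
            return False
        if now - last_sent < REPEAT_AFTER:
            return False                         # sent recently: hold off
        self.entries[content_id] = (first_seen, now)
        return True
```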

What do you think?

I was more thinking about the data storage element, as you're basically describing a store-and-forward connection series. Although I usually carry a thumb drive everywhere, it would have to be continuously powered and seeking/available for arbitrary connections, making power a possible issue unless you only offer link-up at set times or locations.

@alisonw Ah.

I'm less worried about that. We'd need reasonable defaults on the software that the traffickers used, but storage space is getting cheaper and more abundant all the time, and the gateways could reasonably be outfitted with terabytes of storage.

Indeed. Fixed locations are always going to be easier to spec. It's the transfer outside the local range (wifi, smoke signals, whatever) where the issues will appear. Mesh networks can provide endless hours of fun*.
*fsvo 😎

@ajroach42 Does PirateBox / LibraryBox have chatrooms? The websites look like they only do filesharing. #piratebox

Also, your blog is phenomenal. 👍

@hinterwaeldler Thanks!

And PirateBox has a chatroom and a message board, and a UPnP streamer.

@ajroach42 Since I switched my phone from T-Mobile to Ting, I've come back more to this way of thinking. For example, setting my podcast app to download a bunch of stuff for me on WiFi at night, rather than streaming stuff using phone data.

(in case you were wondering, $40-45 a month for my wife's phone and my own together, compared to $90+ for just my phone on T-Mobile. Really not much of a burden, for saving half a grand a year minimum)

@ifixcoinops My only concern with this is that phone storage is too limited, and so many phones are shipping without user expandable storage.

If I was really committing to the local storage game, I'd want 256+ GB of storage for my mobile device, and a more reliable and configurable system for syncing than just a podcast app and an RSS reader.

I could probably get some kind of sync system I was happy with if I had a few days, but I don't know of an Android device with the kind of storage I need.

@ajroach42 This is why I get my phones from the past. Never gonna buy anything without an SD card slot or removable battery, just gonna wait 'til that whole ridiculous fad passes by before I get anything newer than a couple years old.

I use Foldersync on an OwnCloud instance - it's alright, so long as I don't work on the same file on my laptop and phone on the same day. If I do, I have to remember to sync in between devices.

@ifixcoinops I don’t know if it will be any good, but I bought a Gemini. My hope is to use it in much the same way I used my palm pilot or my HP LX.

Foldersync from owncloud would probably work pretty well, if I coupled it with some cron jobs to automatically fetch new content and expire some old stuff.

@ajroach42 I mean, email and social media that's not feed based works asynch, and so does RSS and other stuff

@meff Yep, that's what this thread started with.

@ajroach42 very much also applies to '3rd world countries' or other places where internet access isn't 24/7 or always fast enough.

@joop For sure. I'm approaching this from the perspective of someone living in a rural area in the US, where high speed connectivity doesn't exist or is very expensive, because this is where I have the most experience.

But a lot of the same lessons translate over to places outside the US, for sure.

@ajroach42 I read through this chunk of toots and wow, you're around my age and feeling around the same thoughts about the Internet as me. Was spooky.