I finished reading World Wide Waste by Gerry McGovern. I'd consider it essential reading for anyone working with computers!

gerrymcgovern.com/books/world-

It's well cited (though I still need to check those citations) & uses maths effectively to make its point.

Namely, that computers + (surveillance) capitalism are actually worse for the environment than the predigital era was. And that we can and must move slow and fix things, and fund that vital work directly.

Don't get me wrong, computers could absolutely be helping us regain our environmental efficiency. They just *aren't*.

Not as long as we're:
* constantly syncing everything to the cloud,
* expecting same-hour delivery,
* funding our clickbait via surveillance advertising,
* buying a new phone every year,
* using AIs because they're cool rather than useful,
* running bloated software & webpages,
* buying into "big data",
* etc

Computing is environmentally cheap, but it rapidly adds up!
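
A rough back-of-envelope shows how fast it adds up (every figure below is an assumption for illustration; published per-GB energy estimates vary widely):

```python
# Back-of-envelope: tiny per-view costs times web scale.
# Every figure here is an illustrative assumption, not a measurement.
page_weight_gb = 4e-3        # assume a ~4 MB page
energy_per_gb_kwh = 0.06     # assumed end-to-end energy per GB transferred
views_per_day = 1e9          # assume a billion views/day across a big site

kwh_per_day = page_weight_gb * energy_per_gb_kwh * views_per_day
print(f"{kwh_per_day:,.0f} kWh/day")  # 240,000 kWh/day, ~10 MW continuous
```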


@alcinnz I do find it interesting that even as it has become cheaper and more efficient than ever to have local storage and computation, we're centralizing it more and more heavily.

But I think Rob Pike had a point when he said he wants no local storage anywhere near him except maybe caches. Managing redundancy and backups is *hard*. And any p2p storage system that a) I would trust and b) mere mortals could be comfortable with, may not be very efficient energy-wise.

@alcinnz Large datacenters are incredibly efficient, energy-wise, not just because the bigger processors are more efficient but because when you have that much to work with in terms of workload, you can engage in a lot of neat tricks like shutting off unused machines or running batch workloads in the unused capacity. And with the PCIe fabrics the datacenters are deploying now, you can even do the same tricks with individual cards.
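
A toy sketch of that backfilling trick (machine sizes and job names are made up; real cluster schedulers are vastly more sophisticated):

```python
# Toy best-fit backfill: pack batch jobs into spare cores, then power
# off any machine left completely idle. (All names/sizes hypothetical.)
machines = {"m1": 32, "m2": 32, "m3": 32}   # spare cores per machine
batch_jobs = [("reindex", 24), ("logs", 16), ("backup", 8)]

placements = {}
for name, cores in sorted(batch_jobs, key=lambda j: -j[1]):
    fits = [m for m, free in machines.items() if free >= cores]
    if fits:
        host = min(fits, key=machines.get)  # best fit packs tightly
        machines[host] -= cores
        placements[name] = host

power_off = [m for m, free in machines.items() if free == 32]
print(placements, "power off:", power_off)
# {'reindex': 'm1', 'logs': 'm2', 'backup': 'm1'} power off: ['m3']
```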

@freakazoid @alcinnz you know, I was surprised to learn in Dercuano that there's very little efficiency difference between different processors. Check out notes/keyboard-powered-computers.html in Dercuano; everything is within an order of magnitude of 1 nJ per instruction. 64-bit instructions do more actual computation than 8-bit instructions, it's true... but that's still usually wasted. I'd be interested to learn there's something I overlooked!
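
Taking that ~1 nJ/instruction figure at face value, the arithmetic looks like this (a rough sketch, not a measurement):

```python
# Rough arithmetic on the ~1 nJ/instruction figure (order of magnitude).
nj_per_insn = 1e-9   # joules per instruction
ips = 1e9            # assume a core retiring a billion instructions/sec

core_watts = nj_per_insn * ips        # -> 1.0 W of core power
insns_per_kwh = 3.6e6 / nj_per_insn   # 1 kWh = 3.6e6 J -> 3.6e15 instructions
print(core_watts, f"{insns_per_kwh:.1e}")
```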

@kragen @alcinnz The range on the Green500 list is almost 2 orders of magnitude. And the range of what can be done in a single instruction is well beyond the difference between an 8 bit and a 64 bit computer when you start talking about SIMD, VLIW, etc. While it's true that there are inefficiencies at scale, modern CPUs can also dynamically scale clocks independently on different cores, shut down unused units, etc.
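
As a loose illustration of the SIMD point (a Python analogy, not a power measurement): one wide operation can cover what would otherwise be a million separate instructions.

```python
# Python analogy for the SIMD point (illustrative, not a power test):
# one vectorized operation retires a million adds, the way one wide
# SIMD instruction retires many lanes at once.
import numpy as np

a = np.arange(1_000_000, dtype=np.float32)
b = np.arange(1_000_000, dtype=np.float32)

scalar = [x + y for x, y in zip(a, b)]  # ~1e6 adds, driven one at a time
vectorized = a + b                      # same adds, dispatched as wide ops
assert np.allclose(scalar, vectorized)
```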

@kragen @alcinnz I think it's absolutely true that modern *software* is not particularly efficient, though even there larger scale means you can do more. For example, MapReduce is highly efficient through the whole memory/storage hierarchy, because it moves the code to where the data is instead of vice versa, and it has very high locality of reference.
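
The word-count shape of that idea, condensed to a single-process sketch (real MapReduce distributes the map calls to wherever the data chunks are stored):

```python
# Single-process sketch of the MapReduce shape: map runs "next to" each
# chunk of data; only small (key, count) pairs move to the reducers.
from collections import defaultdict

def map_phase(chunk):                 # runs where the data lives
    for word in chunk.split():
        yield word, 1

def reduce_phase(key, values):        # sees only the shuffled pairs
    return key, sum(values)

chunks = ["the cat sat", "the cat ran", "a dog ran"]
shuffled = defaultdict(list)
for chunk in chunks:
    for key, value in map_phase(chunk):
        shuffled[key].append(value)

print(dict(reduce_phase(k, v) for k, v in shuffled.items()))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 2, 'a': 1, 'dog': 1}
```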

@kragen @alcinnz Not having to move that data in the first place, because you're working with a small dataset, would also be efficient, but it's unlikely your data processing needs are going to exactly fit your machine at that level.

So I think putting the same effort into making software more efficient is just going to have a much larger return at larger scale.

@kragen @alcinnz And it's not like Google engineers are looking at marketing brochures for CPUs and getting fooled. They have a bottom line. They are constantly testing with different kinds of hardware and actually measuring power consumption using real (and by real I mean production) workloads. My team was involved in just such a project, in fact. Boring, though, because we were using these newfangled CPUs to serve up ads :)

@kragen @alcinnz There's also the fact that an ever-increasing amount of computing isn't even being done on microprocessors anymore but on GPUs and TPUs. And the energy savings in moving model execution to a TPU is pretty enormous. And model execution was something like half or more of the total computing workload back when it was all running on CPUs.

@freakazoid @kragen I'm thinking of applying the GPU to process richer DGGS data, once I've got a use case or API to build against...

I was discussing GPUs just the other day, as a simplified example of the performance challenges CPUs face!

@alcinnz @kragen Are you thinking of representing, say, roads, as a contiguous series of hexagonal cells, essentially a hierarchical raster representation with hexagonal pixels, instead of as paths?

@freakazoid @kragen For road maps it would probably make sense to store it in vector form. Which is related to other important operations that need to be supported!

I'm currently thinking a divide-and-conquer lookup table is the way to go for that...
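
I'm guessing at the details, but a divide-and-conquer lookup keyed by hierarchical cell IDs might look something like this sketch (the ID scheme and payloads are hypothetical):

```python
# Hypothetical sketch of a divide-and-conquer lookup table keyed by
# hierarchical DGGS cell IDs (each extra digit = one finer subdivision).
def build(table, cell_id, payload):
    node = table
    for digit in cell_id:                 # descend, creating levels as needed
        node = node.setdefault(digit, {})
    node.setdefault("features", []).append(payload)

def query(table, cell_id):
    """Collect features on the path from the root cell down to cell_id."""
    node, found = table, []
    for digit in cell_id:
        if "features" in node:
            found.extend(node["features"])
        if digit not in node:
            return found                  # no finer data under this cell
        node = node[digit]
    found.extend(node.get("features", []))
    return found

table = {}
build(table, "402", "road: A1 segment")   # coarse cell
build(table, "4021", "junction 12")       # one level finer
print(query(table, "40213"))              # ['road: A1 segment', 'junction 12']
```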

@alcinnz @kragen Now you've got me thinking I should use NVIDIA Jetson Nanos as additional compute nodes for my Kubernetes cluster rather than Odroid C2s! 128 cores anyone?


@alcinnz @kragen You know, something just occurred to me. I think the thing a Kardashev type 1 (and even a type 2) civilization is most likely to use the vast majority of its available energy for is computation. Even if that's not the case, at least at the moment the amount of computation we do is limited primarily by the cost of the power it uses. So increasing its efficiency won't reduce the total energy devoted to computing; if anything, it's likely to increase it (Jevons paradox).

@alcinnz @kragen Oh, and something I forgot to mention earlier: the place where I think computing has most failed to live up to its potential to reduce our ecological footprint is remote work, mostly due to the amount of organizational inertia among employers. Imagine if everyone whose job didn't require actually being in an office stopped commuting! I did, a few months before lockdown.
