@kragen@nerdculture.de @alcinnz The range on the Green500 list is almost 2 orders of magnitude. And the range of what can be done in a single instruction is well beyond the difference between an 8 bit and a 64 bit computer when you start talking about SIMD, VLIW, etc. While it's true that there are inefficiencies at scale, modern CPUs can also dynamically scale clocks independently on different cores, shut down unused units, etc.

@kragen@nerdculture.de @alcinnz I think it's absolutely true that modern *software* is not particularly efficient, though even there larger scale means you can do more. For example, MapReduce is highly efficient through the whole memory/storage hierarchy, because it moves the code to where the data is instead of vice versa, and it has very high locality of reference.
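To make the locality point concrete, here's a minimal single-machine sketch of the MapReduce pattern (not Google's actual implementation, just the shape of it): map emits (key, value) pairs, a shuffle groups them by key, and reduce folds each group. In a real cluster the map tasks are scheduled onto the nodes that already hold the data, which is the "move the code to the data" win described above.

```python
from collections import defaultdict

def map_phase(docs):
    # Emit (word, 1) for every word; in a cluster this runs where the data lives.
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Fold each group; here, a simple word count.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["to be or not to be"])))
# counts == {"to": 2, "be": 2, "or": 1, "not": 1}
```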

@kragen@nerdculture.de @alcinnz Not having to move that data in the first place, because you're working with a small dataset, would also be efficient, but it's unlikely your data processing needs are going to exactly fit your machine at that level.

So I think putting the same effort into making software more efficient is just going to have a much larger return at larger scale.

@kragen@nerdculture.de @alcinnz And it's not like Google engineers are looking at marketing brochures for CPUs and getting fooled. They have a bottom line. They are constantly testing with different kinds of hardware and actually measuring power consumption using real (and by real I mean production) workloads. My team was involved in just such a project, in fact. Boring, though, because we were using these newfangled CPUs to serve up ads :)



@kragen@nerdculture.de @alcinnz There's also the fact that an ever-increasing amount of computing isn't even being done on microprocessors anymore but on GPUs and TPUs. And the energy savings from moving model execution to a TPU are pretty enormous. And we're talking about something like half or more of the total computing workload back when it was all executing on CPUs.


@freakazoid @kragen I'm thinking of applying the GPU to process richer DGGS data, once I've got a usecase or API to build against...

I was discussing GPUs just the other day, as a simplified example of the performance challenges CPUs face!

@alcinnz @kragen@nerdculture.de Are you thinking of representing, say, roads, as a contiguous series of hexagonal cells, essentially a hierarchical raster representation with hexagonal pixels, instead of as paths?

@freakazoid @kragen For road maps it would probably make sense to store it in vector form. Which is related to other important operations that need to be supported!

I'm currently thinking a divide-and-conquer lookup table is the way to go for that...
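As a guess at the shape of such a lookup table (this is my sketch, not the author's design, and it uses a square quadtree subdivision for simplicity — a hexagonal DGGS subdivides differently, but the divide-and-conquer narrowing works the same way): at each level you decide which child cell the point falls in, cutting the search space by a constant factor per step.

```python
def cell_id(x, y, depth):
    """Divide-and-conquer lookup: for a point in the unit square,
    descend `depth` levels, picking one of four quadrants each time.
    The result is a hierarchical cell id whose prefix identifies the
    coarser cells containing the point."""
    cid = 0
    for _ in range(depth):
        x *= 2
        y *= 2
        qx, qy = int(x), int(y)      # which quadrant at this level
        cid = cid * 4 + qy * 2 + qx  # append two bits per level
        x -= qx
        y -= qy
    return cid

# Points in the same quadrant share an id prefix:
cell_id(0.1, 0.1, 1)  # 0 (lower-left quadrant)
cell_id(0.9, 0.9, 1)  # 3 (upper-right quadrant)
```

The nice property for GPU work is that the loop is branch-free and fixed-length, so it maps well onto doing many lookups in parallel.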

@alcinnz @kragen@nerdculture.de Now you've got me thinking I should use nVidia Jetson Nanos as additional compute nodes for my Kubernetes cluster rather than Odroid C2s! 128 cores anyone?

@alcinnz @kragen@nerdculture.de You know, something just occurred to me. I think what a Kardashev type 1 or even type 2 civilization is most likely to use the vast majority of its available energy for is computation. Even if that's not the case, at least at the moment the amount of computation we do is limited primarily by the cost of the power it uses. So increasing its efficiency won't reduce the total energy devoted to computing. In fact, it's likely to increase it.

@alcinnz @kragen@nerdculture.de Oh, and something I forgot to mention earlier: the place where I think computing has most failed to live up to its potential to reduce our ecological footprint is remote work, mostly due to the amount of organizational inertia among employers. Imagine if everyone whose job didn't require actually being in an office stopped commuting! I did, a few months before lockdown.
