So, while so many of the staff members are out this week, I'm taking the opportunity to train up interns and other junior devs with some semblance of "calmness" around the remote office.
So without any further context:
What's the general consensus on monorepos 'round these parts?
We have a diverse git layout of repositories that are all interlinked, and the distributed nature of the config / app code / etc. is startling to these jr devs.
Is a monorepo a good fix?
@chuck Monorepos have one use case, which in the experience of many developers is the only relevant one: when you just want a modular structure in your code, but effectively deliver a single monolithic product.
There are many, many situations in which that's not true.
And then, this is also highly dependent on your implementation language and build/packaging system. Some languages make modules and packages largely equivalent; a monorepo is actually overkill there.
All I'm trying to get at is that mentioning an unusual counter example or two doesn't substantially alter the rule. But I'm also not going to pretend that this rule must absolutely apply everywhere, just... as a rule.
Google AFAIK doesn't actually have everything in a huge monorepo. I have never worked there, but I guess I could ask. I wonder how representative the Android (AOSP) setup is.
Here, they use https://android.googlesource.com/tools/repo ...
I guess from a usage point of view, it acts like a monorepo, but it still allows composition of individual repos into one such virtual monorepo.
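For anyone who hasn't used `repo`: a checkout is driven by a manifest file that stitches individual git repos into one working tree. This is an illustrative fragment, not actual AOSP content; the project names and paths are made up:

```
<!-- Illustrative repo manifest (default.xml); project names and paths
     are hypothetical, not real AOSP entries. -->
<manifest>
  <remote name="origin" fetch="https://android.googlesource.com/" />
  <default remote="origin" revision="main" />
  <!-- Each <project> is its own git repo, checked out at the given path,
       so the combined working tree behaves like one virtual monorepo. -->
  <project name="platform/example/base" path="example/base" />
  <project name="tools/example" path="tools/example" />
</manifest>
```

The composition is one-way: you get monorepo-like ergonomics for checkout and cross-repo work, while each project keeps its own history and release cadence.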
I really wonder how representative it is. Gotta ask my insiders 🤔
@freakazoid @chuck Anyway, completely unrelated to what they *actually* do, it's also a fair view IMHO that every product team within Google effectively delivers a single monolithic product. A bit of an oversimplification for sure.
The main point being, if you have some core tech used across many different products that release each at their own pace, an actual monorepo is more likely to hurt than help.
@jens @chuck I think it's also very different between software that's running on your own infrastructure and software that's going to be downloaded onto a device. When it's easy to release updates, the potential for accidentally incorporating a bug in a dependency isn't a big deal compared to the advantages for testing and not having to version your dependencies.
Google uses a "build horizon" instead of versions: anything running in prod must have been built within the past 3 months.
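The build-horizon idea from the post can be sketched as a simple age check; the 3-month figure is from the post, but where the build timestamp comes from (artifact metadata, a deploy database, ...) is hypothetical here:

```shell
#!/bin/sh
# Sketch of a "build horizon" check: anything in prod must have been
# built within the last ~3 months. Metadata plumbing is assumed; only
# the age comparison is shown.
HORIZON=$((90 * 24 * 3600))   # ~3 months, in seconds

check_horizon() {
  build_epoch=$1                       # would come from artifact metadata
  age=$(( $(date +%s) - build_epoch ))
  if [ "$age" -gt "$HORIZON" ]; then
    echo "REBUILD (age ${age}s exceeds horizon)"
  else
    echo "OK"
  fi
}

check_horizon "$(date +%s)"                        # fresh build → OK
check_horizon "$(( $(date +%s) - HORIZON - 100 ))" # stale build → REBUILD
```

The nice property is that the horizon replaces version pinning entirely: instead of asking "which version of the dependency is this built against," you only ask "is this artifact fresh enough."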
@jens @chuck I actually have a client who ended up paying a fairly big price for splitting up their monorepo and would have been far better off just doing the work to deal with the problems they'd been having with it that led them to split it up in the first place. They still haven't solved dependency versioning.
I suspect this is a case of the general problem of people underestimating the work to make a big change compared to the work they know they have to do in order to not make the change. This leads to misguided rewrites all the time.
This sort of thing is often spearheaded by newcomers who never bother learning the old system very well and end up with similar or worse problems.
@jens @chuck The model I have in my head of the monorepo vs microrepo decision makes me think that they are isomorphic in some way. For each problem one has, the other seems to have a conjugate version of the same problem. Internal dependency versioning versus third party dependencies. Dependencies for testing versus needing to use a monolithic build system.
It seems like we need a meta-build system of some kind. And I don't mean a CI system but a declarative meta-build language.
Nix and Guix seem to at least have gone the right direction. It seems like a lot of build systems exist because people couldn't be bothered to learn Make and Autoconf, and thus they failed to learn any actual lessons before creating systems that have many of the problems Make and Autoconf solve.
@freakazoid @chuck What I was actually thinking of was more of a meta build system plus a toolkit of utilities. As in, the only thing we really need to know about a dependency is where to find its files (and dependencies of its own). In particular, we don't need to know anything about how it arrives there.
So the trigger should be that the expected files aren't to be found, and the action to fix that... a script. It might invoke git or curl to get sources, or RPM for...
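A minimal sketch of that trigger/action idea, with hypothetical names throughout; the point is that the build only knows which files it expects, not how they get there:

```shell
#!/bin/sh
# Low-effort dependency trigger: if the expected files are missing,
# run a script that produces them. All paths/names are hypothetical.
DEP_DIR="deps/libfoo"

fetch_libfoo() {
  # Stand-in for `git clone`, `curl`, `rpm -i`, or whatever the
  # dependency's native tooling is -- the caller only cares that the
  # expected files end up under $DEP_DIR.
  mkdir -p "$DEP_DIR/include"
  printf '/* fetched placeholder */\n' > "$DEP_DIR/include/foo.h"
}

# Trigger: expected file absent -> run the script; present -> no-op.
[ -f "$DEP_DIR/include/foo.h" ] || fetch_libfoo
```

Because the action is an opaque script, each dependency can keep its own build system and distribution method without the meta layer needing to understand any of them.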
@freakazoid @chuck Mostly I figured that this low-effort approach actually provides you with what you need, especially when each of your dependencies uses a different build system, distribution method, etc.
In that sense it's not significantly different from a packaging system. Might even zip up the final products with a manifest.
@freakazoid @chuck I've dabbled with things like that before. It's always a mess when you try to be smart, because there is always going to be that one package that uses its build tooling in a weird and unpredictable way. Like, don't expect that "configure" is actually autotools-based. Don't expect that DESTDIR is a thing. Etc.
Also, there's no point in supporting the notion of fetch/build/install phases, with optional patching thrown in. It's going to break somewhere.