Hot take on economics
I have heard people talking about "economics" and "economic systems" as if they're this unnatural thing imposed on innocent humans. But the fact that we use the same word for both the thing being studied and the science of studying it is just an unfortunate word choice.
"Economics" is really just the study of a particular class of human activity that pre-dates agriculture. An "economic system" is what you have any time you have a complex society.
So "not having economics" just means ceasing to care about a particular class of human activity. It doesn't make that activity stop. And having an "economic system" isn't optional; it's just a question of whether you pay any attention to what that system actually is and how it behaves.
@o I suspect it's a similar issue to the widely varied definitions of "markets" and "capitalism" that people use. They might have been talking about the combination of macroeconomics and finance, in which case there's a reasonable chance I at least partially agree with them.
@freakazoid So ...
Back on Google+, there was a set of self-described Libertarians (and @woozle will remember some of these conversations -- they're not one, but were a fellow participant) whom I'd occasionally engage with, mostly to try to understand what the hell they were on about.
This included a few rounds trying to suss out just what they meant by terms such as "markets" and "capitalism", in particular.
It's one thing to disagree with someone.
@freakazoid ... tend to go quite poorly.
This includes reading and referencing their generally preferred sources (if you can even wrest these from them at all). Stuff like Hazlitt, Rothbard, and von Mises, if you're lucky.
There was one YT vid in particular @woozle had turned up at one point -- I _think_ it was "Objectivist Girl" -- discussing von Mises and "praxeology". Which we realised was basically word salad. Let me see if I can find that...
@dredmorbius @o @woozle I've read Economics in One Lesson. I've also read Economics for Real People (Gene Callahan's intro to Austrian economics), Free to Choose, and Machinery of Freedom. I think a big issue with libertarians is that they don't realize just how much the framework in which markets exist matters. There's no such thing as a "free market".
@freakazoid For a long time my trite dismissal of Libertarianism was: "It's a fundamental inability to understand or acknowledge that wealth is in fact power."
When I finally started reading Adam Smith and found his "Wealth, as Mr Hobbes says, is power", I was gobsmacked. That had been a principal gripe of mine about economics (Libertarian or otherwise), and Smith directly confronted and acknowledged it.
There's another error I see now that is deeper: Weber & NAP.
@freakazoid Akerlof's "Market for Lemons" addresses a *part* of this, but only in part, and only in some cases.
The dynamic ends up being a validation for establishing minimum standards which *must* be met for market entry in many instances. Ergo: fully unregulated markets fail, and spectacularly.
(So do *badly* regulated ones, but that's another issue.)
And that's not the only problem, but I'll stop there for now.
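The adverse-selection dynamic in "Market for Lemons" can be sketched numerically. (All the valuations below are made-up illustrations, not Akerlof's own figures.)

```python
# A minimal numeric sketch of the "Market for Lemons" adverse-selection
# dynamic: buyers can't tell good cars from lemons, so they bid only the
# expected value across the pool, which drives good-car sellers out.

def market_round(sellers, buyer_values):
    """One round: buyers bid the average value of cars still on offer;
    sellers whose reservation price exceeds the bid withdraw."""
    if not sellers:
        return [], 0.0
    bid = sum(buyer_values[quality] for quality, _ in sellers) / len(sellers)
    remaining = [(q, r) for q, r in sellers if r <= bid]
    return remaining, bid

# What buyers would pay if they could observe quality (illustrative numbers)
buyer_values = {"good": 10000, "lemon": 5000}
# (quality, seller's reservation price)
sellers = [("good", 8000), ("lemon", 4000)]

# Buyers bid the average of 10000 and 5000 = 7500, which is below the
# good-car seller's 8000 reservation, so only the lemon stays on the market.
sellers, bid = market_round(sellers, buyer_values)
print(bid, sellers)  # 7500.0 [('lemon', 4000)]
```

With more quality tiers the same rule iterates: each round the best remaining cars exit, and the market can unravel entirely.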
@dredmorbius @o @woozle That's specifically about situations where there's heterogeneous quality and no reputation for the seller, though. People certainly sell high-quality used cars, and you can get used cars with warranties from dealers. I think we do see this phenomenon with some consumer electronics. Often it's not asymmetric information so much as lazy consumers, though.
@woozle @o @dredmorbius It's a little different from Gresham's Law because in that case the "price" is dictated to be the same for all coins of a given denomination by law, whereas prices of goods in the coinage assume the coins with the lowest actual precious-metal content.
(Now that I say this I realize Gresham's Law has nothing to do with non-use of cryptocurrencies, since relative prices can adjust.)
There's fiat or imposed value, as with coin. Also with transjurisdictional standards, such as divorce law and shipping registries ("flags of convenience"). Whatever the *minimum* acceptable *somewhere* is, is acceptable *everywhere*.
There's effective perceived value -- Mencken's "Brayard", or consumer technologies, or bicycles.
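The Gresham's Law dynamic above reduces to a simple decision rule. (The coin contents below are illustrative assumptions, not historical data.)

```python
# A toy sketch of Gresham's Law: two coins share one legal face value, so a
# rational spender always pays with the coin whose actual metal content is
# lower, and hoards (or melts, or exports) the other.

FACE_VALUE = 1.0  # dictated by law, identical for both coins

# coin name -> actual precious-metal content, as a fraction of face value
wallet = {"full-weight": 0.98, "debased": 0.60}

def coin_to_spend(wallet):
    """Spend the coin that sacrifices the least metal per unit of face value."""
    return min(wallet, key=wallet.get)

print(coin_to_spend(wallet))  # "debased": bad money circulates, good is hoarded
```

The fixed face value is the whole trick: once relative prices can adjust (as with floating currencies), the hoarding incentive disappears.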
@dredmorbius @woozle @o These fall into a few different possibly overlapping categories: implicit bias, laziness or ignorance (because the information is available but people don't bother to look or don't know it's there), and places where it's genuinely hard to know, like interviewing and managing (though there's a lot we do know about management and interviewing so laziness and ignorance applies there).
@o @woozle @dredmorbius Volume also contributes to this a lot: for cheap things, the cost of research can be a significant fraction of the cost of actually buying it. This is probably why for many things there's not much of a "middle ground", just super cheap and super expensive things.
You can also get seemingly paradoxical effects where the brand with the better reputation has lower quality at a higher price point. I've noticed in general an inverse correlation between marketing and quality.
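The cost-of-research point can be sketched as a threshold rule. (The cost and gain figures are assumptions, purely for illustration.)

```python
# A hedged sketch of the research-cost point: if comparison shopping costs a
# roughly fixed amount, it only pays off when the price is high enough that
# the expected quality gain exceeds that fixed cost.

RESEARCH_COST = 20.0  # assumed fixed cost of researching a purchase
QUALITY_GAIN = 0.25   # assumed fraction of the price recovered via research

def worth_researching(price):
    """Research pays off only when the expected gain beats its fixed cost."""
    return price * QUALITY_GAIN > RESEARCH_COST

for price in (10, 50, 200, 1000):
    print(price, worth_researching(price))
# Cheap goods get bought blind; only pricier goods reward research --
# one mechanism that can hollow out the "middle ground".
```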
@freakazoid Totally agreed on the cost-of-research factor.
The fact that this is very frequently *excluded* from economic analyses is one huge source of the fallacy of the sunk cost fallacy. That is: the *putative* sunk cost excludes a tremendous number of actual, but non-apparent costs. Which provide benefit, and must be re-invested when switching to another option.
Another case of the #manifestations problem: information that's not manifestly evident.
@dredmorbius @o @woozle Can you give an example of where it's excluded to the detriment of the conclusion? Since the sunk costs fallacy is widely known in econ circles, I would be surprised if it's being excluded in places where there's a significant effect.
One thing I don't generally hear when people are talking about human cognitive biases w.r.t. markets is how government avoids being affected by the same bias. The sunk costs fallacy, for example, is manifest in every large project.
@woozle @o @dredmorbius That's pretty interesting. It seems like the sunk costs fallacy might make more sense as part of public choice theory than behavioral economics. Speaking of which, I wonder to what extent public choice theory explains decisionmaking inside corporations and other organizations where there aren't prices?
Mature markets tend to end up with two market leaders and a bunch of also-rans. In that kind of market, the #1 is often complacent and of poor quality, but the #2 tends to be better because it wants to knock the leader off the top spot.
e.g. VHS vs Betamax, Windows vs macOS, VW vs Toyota for cars, etc.
(Obviously there are counterexamples, and I think the trend is becoming less clear as markets fragment.)
@mathew @o @woozle @dredmorbius Two of the three examples you cite have strong network effects, where that's certainly true. But car manufacturers don't have this problem. Globally, in 2014 (the year I can easily find data for), the number 8 automaker by number of cars (Honda) sold almost 43% of the number of cars of the number one (Toyota). In the US, the number 7 manufacturer, Kia, sold 43% as many passenger cars as the top manufacturer, GM. And number 3, Toyota, has almost 83% of GM's sales.
@dredmorbius @woozle @o @mathew Actually, now that I think about it, VHS vs Betamax happened in a market that wasn't remotely mature, and it was a competition among standards, not companies. There were plenty of manufacturers of both tapes and players.
I'm having difficulty coming up with an example in any situation where there aren't strong network effects, at least in the US.
@mathew @o @woozle @dredmorbius During my orientation at Google, when they were talking about the datacenters, someone asked if they ever planned to open source the designs like Facebook had. The speaker replied, "Open source is what the company in second place does." That wasn't the only thing in orientation that made me think about just walking out.
@dredmorbius @woozle @o @mathew Maybe commercial airline manufacture is a good example? Boeing and Airbus are definitely the top two, and Bombardier is the only other manufacturer I can even think of, but they only do regional jets AFAIK. But I'm not sure either Boeing or Airbus ever really acts like they're either especially comfortable or hungry; the competition seems to keep both companies on their toes pretty well.
@freakazoid China and Russia both have indigenous aircraft industries, and there's Embraer of Brazil, though theirs are also largely regional / corporate jets.
There are more small- and mid-sized aircraft manufacturers.
The industry as a whole is *extremely* conservative, almost wholly governed by engineering and aeronautical constraints (there are only so many arrangements of sausages, engines, and lifting surfaces).
Plus insurance risks and regulation.
@freakazoid An interesting parallel is actually cargo ship design and use in the 13th / 14th centuries, about the time the lateen rig was adopted by Europeans, a millennium or more after its appearance on the Indian Ocean and Arabia.
The problem was insurers.
Shipping is high-risk, and voyages were insured individually, as separate ventures. Insuring syndicates wouldn't take risks on new-fangled tech like lateen rigs.
As a consequence, European ships could ...
commercial airliners
Bombardier tried to break into the low end of the narrow-body market, scared Boeing, and got slapped with nasty tariffs, at which point Airbus swooped in, bought a majority stake in the project, and moved production from Canada to the US; the resulting plane is now called the Airbus A220.
Bombardier’s also selling their turboprop commercial airliners (the Q400) to Viking Air (who already owned the rights to the predecessors to them), and their regional jets to Mitsubishi (who has their own homegrown design that will effectively replace the CRJ line, they’re just in it for the CRJ sales and service infrastructure as I understand).
And there’s also Embraer, who does regional jets and some small short-range narrow-bodies. (And Boeing’s buying into a joint venture with them.)
And there’s ATR doing turboprop airliners, although they’re a joint venture between Airbus and Leonardo.
@freakazoid There are some interesting exceptions, yes, but most of them show strong evidence of forces encouraging regionalisation.
The film industry is a key case in point. Reels of film, or now, digital streams or recordings, can be transmitted virtually effortlessly worldwide. The *fixed* infrastructure of film development is largely the support industry: carpenters, casting agencies, caterers, coaches, costume & set designers, electricians.
@freakazoid So centres of specialisation appear.
But you also have *globalised* centres, especially in India, China, Japan, and multiple European countries.
Most of that is language, though culture also plays a major role, as do government programmes specifically encouraging and supporting an indigenous film industry -- a powerful propaganda and cultural tool, kept under local control.
The auto industry is similar, in respects. Not because it's projectable...
@freakazoid ... though cars ship easily, factories don't, so it centralises, at least within countries.
(JIT and improved transport networks are changing that somewhat. Factories are more distributed in the US than they were in Detroit's heyday, but still cluster somewhat.)
But: there are both regional taste differences and economics, as well as national interests involved.
Building cars and military vehicles shares much in common, and military manufacture is ...
@dredmorbius @mathew @o @woozle Is physical colocation a problem? It's generally centralization of control or coordination that somehow discourages defection (cartels have tended to disintegrate rapidly historically) that is the problem, right? And often when a company does manage to dominate that's because new entrants have to face regulatory barriers to entry it didn't, like Amazon with sales taxes.
@freakazoid Colocation used to be highly important because there was a lot of interplay between the auto manufacturers themselves and supplier pipelines. Sometimes meeting F2F and getting your mitts on metal is the best way to resolve stuff.
That's either not so much the case, or other factors matter more, but you still have forms of clustering which matter, ranging from support industries to education and infrastructure.
Early colo was driven by bulk materials.
@freakazoid Detroit was where raw iron ore and steel could be directly offloaded via ship and cars shipped by rail to mostly Eastern markets.
Interestingly, Los Angeles once featured pretty much the largest of every factory plant *outside* the primary core group, within the US. Which is to say: at LA's distance from the Rust Belt, 2ndary localisation made sense.
@freakazoid ... of strategic interest. So countries otherwise not particularly vested in car manufacturing sustain it.
Local tastes and regulations vary, so cars get built for specific regulatory and cultural markets, as well as price points -- both inputs and consumers. Hence: much more variance *between* national markets, but typically little *within* them.
Aircraft are somewhat similar, though more constrained.
@freakazoid @dredmorbius @woozle @o Cars may not be a two-player market, but I still maintain that VW has gotten lazy (and indeed downright criminal), let its quality slip, and failed to invest in new tech, while Toyota has focused on making better cars, even if they did make a disastrously bad move betting on hydrogen rather than battery storage. (There's probably an interesting case study there on why they went the way they did.)
VW's failure to invest in new tech is the case with car makers across the board. Their cheating was to try to avoid losing a bunch of car sales as diesel was essentially getting regulated out of business. Which IMO was a stupid move on the government's part since diesel has lower CO2 emissions than gasoline.
@dredmorbius @woozle @o @mathew Actually I should qualify that - it has lower emission not because its specific CO2 is lower but because diesel engines have higher compression ratios so tend to be more efficient. You can also get more of it from oil without having to resort to cracking. But hybrids are better, so probably not stupid to regulate its emissions, really.
@freakazoid Diesel fuel itself has a slightly higher energy content than petrol/gasoline, the engines run at higher compression ratios, and at higher temperatures (Carnot efficiency), all of which net more mileage and lower CO2 emissions.
Emissions of *particulates* (especially PM2.5, v. bad for lungs and health) and of NOx (nitrogen oxidising at high temps and pressures) are *worse* for diesel than petrol engines.
Also possibly sulfur and other sour-crude contaminants.
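The per-litre vs per-km distinction can be made concrete with rough arithmetic. (The consumption figures below are assumed for illustration; the per-litre CO2 values are approximate public figures.)

```python
# Rough illustrative arithmetic for the diesel-vs-petrol CO2 point: diesel
# emits more CO2 per litre burned, but higher compression ratios make the
# engine more efficient, so CO2 per km can still come out lower.

CO2_PER_LITRE = {"petrol": 2.31, "diesel": 2.68}   # kg CO2/litre, approximate
LITRES_PER_100KM = {"petrol": 7.0, "diesel": 5.5}  # assumed consumption figures

def co2_per_km(fuel):
    """kg of CO2 emitted per km driven, given the assumptions above."""
    return CO2_PER_LITRE[fuel] * LITRES_PER_100KM[fuel] / 100.0

for fuel in ("petrol", "diesel"):
    print(fuel, round(co2_per_km(fuel), 3), "kg CO2/km")
# With these assumptions, diesel comes out lower per km despite the
# higher per-litre figure.
```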
@freakazoid Incidentally, two cases of dynamics I've been describing:
Toyota's foray into hydrogen fuel cells is based on government policies and incentives, creating a localised specialisation.
Volkswagen's diesel emissions fraud is a #GreshamsLaw dynamic: trying to substitute a lower-value quality for a higher-value one, through fraud.
@dredmorbius @mathew @o @woozle It seems like this is also the case with software. People pick software on the basis of features or price, because they have no idea how to measure quality. So there's no market for high-quality software.
A "Consumer Reports for software" might help. It could track historical bugs, usability/accessibility problems, vulnerabilities, attacks, and the maker's response to them, etc.
@freakazoid "The Tyranny of the Minimum Viable User"
Since users' _capabilities_ also vary strongly, the problem goes beyond this.
You see similar types of dynamics in, e.g., "audiophile" gear, much of which seems principally engineered to separate rich idiots from their lucre.
A better comparison might be precision or highly-skilled equipment, also somewhat affected.
@freakazoid P.T. Barnum's dictum isn't an absolute universal, but it's close.
You can swim upstream, but you're going to find yourself in niche space. That *may* be a *profitable* niche, but it's still a niche.
The useful thing to do is look for cases of exceptions to the rule -- where is complex, respectful, high-information-density content (or products or services) found?
Quality literature, news, education, music, information gear, etc.
@woozle @o @mathew @dredmorbius To expand on the "You already know how to use it" example, it could be that pedagogical Apple failed because they didn't have much business sense, and dumbed-down Jobs Apple succeeded not because of Jobs's dumbing-down of their products but because he understood business and marketing.
@freakazoid The winner-take-all dynamic of many tech-based products (hardware, OS, software, services, social media) makes attribution highly risk prone: success succeeds, failure fails. Survivor bias is manifest.
But having witnessed enough cases directly, and studied numerous others, the general rule of "don't outsmart your market" seems to hold.
Apple's big success is smartphones. Mac is a fairly small share of their market. Though they seem to be catering to it again.
@freakazoid I was looking at the specs for the upcoming Mac Pro release. It's mind-boggling.
Base model: $4000. Top of the line is 28 cores and 1.5 TB RAM, 4 TB SSD, 4xGPU. Speculation is that this will run north of $35,000, and I suspect that's low. This is a supercomputer in a mesh cage.
(I'm wondering what Linux or Window equivalents there might be.)
Definitely drool-worthy: https://apple.com/mac-pro
@freakazoid Mostly it's cases where one of several conditions is met:
1. The good is a signalling mechanism. I can advertise my own capabilities in a space by using (or producing) the good. Uni education especially.
2. Direct beneficial use. If the good provides a _direct_ and _quantifiable_ or _perceptible_ benefit, it may find a niche. That will by definition be limited, and faces challenges by imitative competitors and measurement difficulty/costs. Examples ...
@freakazoid ... include business/financial news, policy news, etc.
3. Quality professional gear: audio, photo, video gear. Linux/BSD vs. Windows/Mac, Mac vs. Windows. Small, focused, niche audiences.
4. Regulatory quality floor on goods *or* users. Commercial and civil aviation vs. automobiles. Any idiot can (and does) drive a car. Pilots are licensed. Commercial pilots are certified to specific aircraft. There is a very strong quality floor.
@freakazoid 5. With some limits: self-use. Especially where tools are mutually developed by specialists within a craft. Linux *used* to occupy this space, it's drifting from it. Whether there's a replacement isn't yet clear. The death of the desktop, may, paradoxically, save Linux, if the idiots all use smartphones instead.
There are some parameters that may influence this. The scope of network effects especially. If intelligence counters network, then a ...
@woozle @mathew @o @dredmorbius Linux has dramatically dumbed down over time. I wouldn't really call it "self-teaching" at any point, but it used to be that, *if you used Linux*, you made heavy use of man pages and documentation that was included with Linux. So simply having sufficient interest in using Linux to get you over the hurdles would have left you significantly more competent in using Linux than it does today.
@dredmorbius @o @mathew @woozle Today people's response to some random thing breaking in GNOME 3 or KDE (let's ignore Android and Chrome OS) seems to be about the same as it is if something breaks in Windows: format and reinstall.
Some of that is just an increase in accessibility. But it's specifically an increase in accessibility gained by dumbing down the system instead of by improving the system's self-documentation/self-teaching.
@starbreaker Debian still makes the effort. Not having manpages remains a bug (though not a release-critical one).
Red Hat has almost always been far less useful -- missing manpages and even /usr/share/doc/<packagename> entries.
Debian's dwww is hugely useful.
FreeBSD / OpenBSD manpage quality is typically higher, though they use the wrong utilities.
GNU's insistence on info is a fucking brain disease.
@woozle Not so much that, as "developed principally by its own users", much as early Unix had been (1970 - 1990 or so).
That is, "users" weren't a separate class, they were "us", from the developers standpoint.
Today, you've got a much larger nontechnical userbase. The total installed base hasn't changed much by _percentage_ but it's vastly greater by _number_ than in the late 1990s.
Self-documentation through code, manpages, info docs, & HOWTOs has varied.
@freakazoid ... quality product stands a far better chance. If production can be readily distributed and decentralised, similarly. Open source software seems vastly more tractable than open source hardware. Fabrication, logistics, and distribution are far harder for physical commodities.
In particular, if there's no way to impose some kind of effective floor (as with pilots/aircraft, certified industrial equipment, etc.), the market will seek the minimum viable user.
@freakazoid To counter that, you've got to raise the bound on that minimum.
You can gatekeep the users (certification). Or you can make sufficient degrees of incompetence nonviable -- harms or at least does not help the incompetent user is one route. This will still limit the scope of the market, but at least won't dilute the product. Call it a talent bar.
This also means a noneconomic motivation. You're not profit-maximising, but maximising for individual benefit.
@dredmorbius @mathew @o @woozle I don't agree with that, because it assumes that the reason for incompetence is lack of ability or desire to become competent. If it's lack of desire, let's exclude them not just from products, but from the planet, since they're ruining humanity. And I suspect lack of ability represents only a tiny fraction of the population.
But I think the real answer is that our system selects for people who are shitty at teaching.
@freakazoid So ... well, current use of idiots notwithstanding, I try to avoid prejudiced language, and the whole long first part of the Reddit essay goes into detail about why simple tools are often a net win.
The problem is where the dynamic directly impedes development of useful tools, systems, goods, services, etc.
And I really _don't_ think it's something you can chalk up only to pedagogy. Put another way: we're at the end of a phenomenal 300 yr ramp up in literacy.
@freakazoid Which includes a hell of a lot of pedagogical reforms (the history's interesting, esp. for trivia, or trivium, fans).
Literacy ~1700 was ~10%. By 1900 in US/Europe, 90%+
HS graduation in the US: 6% in 1900, 90% by 1950. Bachelor's is now 30%+ and PhD > 8%. There are more PhDs in the US now than HS grads 120 years ago.
But: the quality of that HS education is also, in some measures, much lower: lower language/logic skills, better scientific knowledge.
@freakazoid And the general informational tools we have *are* better than 300 years ago. (In part: they deliver us those 300 year old works instantly.) But they're far short of their potential.
And I'm trying to suss out *why*.
Teaching/training is _part_ of it, and yes, the education system likely doesn't meet its potential either, but it does a tremendous amount, for a tremendous number. And hasn't _worsened_ appreciably since the 1950s.
Most variation is ...
@freakazoid ... actually, if you look at it, changes in either who's included in classes or testing. Increased access => falling test scores. Rising test scores => falling access. That points to some population-level intractability.
(With exceptions. "Stand and Deliver".)
But trying to make all the children above average is a Sisyphean task, and a doomed premise for progress. You've got to work with the talent you've got.
My point is to not get in its way.
@dredmorbius @mathew @o @woozle Replying mid-thread because I think a lot of your reasoning farther down hinges on what I believe to be a mistake in this post. The fall in test scores from "increased access" is not necessarily because the larger group is not learning as well, but because the test wasn't actually testing how effectively the students were being taught. Most of our standardized tests are really indirect tests of socioeconomic class, not of how much students are learning.
@woozle @o @mathew @dredmorbius I think that the real problem is that "education systems" are super bad at educating. They can take a subset of students who have the right background and right set of parents and get them to do well on standardized tests, but they cannot take a random person out of a population of, say, English speakers, and on net provide them significant benefit.
@dredmorbius @mathew @o @woozle The reason students at "elite schools" tend to do better is that the school only allows in students who are going to be successful no matter what. They're *filtering*, not teaching. But they're not really filtering for innate skill. They're filtering for what the student has already absorbed from the world, largely due to the circumstances of their birth.
@dredmorbius @mathew @o @woozle The fundamental problem IMO is that almost all societies treat teaching and learning as just one function among many, and something that's confined to particular institutions and particular phases of a person's life.
IOW it's not just Americans who are anti-intellectual but most of human society. And the reason is that we have entrenched groups who have a vested interest in a stupid population.
@woozle @o @mathew @dredmorbius Politicians don't want an educated population because they want people to be swayed by their emotional arguments. Pretty much every skilled profession has a vested interest in everyone else being stupid (or at least not knowing THEIR skill) because that's how they make their money. And the victims of this "uneducation system" want everyone else to be stupid because otherwise THEY feel stupid.
@freakazoid I'd mentioned the long history of pedagogical evolution. This is a huge part of it.
In particular, there's a long-standing divide between "liberal" and "technical" education. Politicians (and employers) want a _skilled_ but _pliant_ public.
The "servile arts" is another term for technical arts. Technical / polytechnic schools specifically excluded much of the liberal education, whose heart is the Seven Liberal Arts, the Trivium + Quadrivium previously hinted.
@freakazoid There's a huge (if obscure) literature on this, stretching to mediaeval and ancient times. I've touched on it occasionally, see Hans Jenson on John Stuart Mill, 1860s England:
Effectively, there are forces working for and against this.
Of late, high-tech skills have been in need, but also tend to create hugely intelligent people with inconvenient consciences: Einstein, Oppenheimer, Chomsky, Ellsberg.
A dilemma for oligarchs.
@freakazoid I hadn't even realised that.
I was temporising and didn't think through the list at length. I did think of adding Edward Snowden (not Jewish). I'm trying to think of other dissident scientists and engineers ... and several of the obvious contenders are _also_ Jewish.
The Jewish religious Talmudic tradition is one that treats many questions as _not_ answered, but as _subject to inquiry_. So there's a call for questioning concepts.
The lack (until lately) of a nation state in which a uniform orthodoxy might be imposed (and in which that seems to be happening) is also notable. Prior to 1948, any Jew anywhere was part of an internal minority, often at least discriminated against, if not actively oppressed. The WWII oppression was only the latest in a very long history of similar such actions.
What effect that has had culturally or otherwise I don't know.
@dredmorbius @woozle @o @mathew Snowden is not an idiot but he's not in the same category as any of your other examples. He did, for example, misunderstand how the Prism program functioned, thinking it required the cooperation of the companies involved when even just looking at their own slides it was clear it did not - otherwise reverse engineering the protocols involved wouldn't have been necessary.
I don't know what impact the existence of Israel has on people's general competence, but I think it could as easily be negative as positive.
Current implementations are tilted heavily (if not always obviously) towards protecting existing power-structures.
It's pretty easy to imagine small improvements that would significantly undercut this tendency, and not hard to design improvements that would do more than that.
@woozle @o @mathew @dredmorbius @feonixrift I think the big problem with democracy is that we treat it as if it has some magical power. Instead of fighting for human rights and rule of law, we fight for "democracy", even though the only rights democracy can protect by itself are the rights of the majority of the enfranchised, i.e. the people who least need it, and people will almost always vote *against* rule of law as long as it gets them what they want.
@feonixrift @dredmorbius @mathew @o @woozle I see democracy as a tool for implementing rule of law, because it allows succession of governments in a way that's always governed by law, whereas hereditary succession has always ended up with edge cases and conflicts. And of course can produce rulers who are so incompetent or evil that it causes revolutions where none were necessary (cf. the French and American revolutions).
@woozle @o @mathew @dredmorbius @feonixrift Randomly selecting representatives from among the population may well produce as good of or better results than having the population at large vote. You couldn't pick someone with as much power as the US President has that way, but those randomly selected representatives could certainly choose someone. And they'd probably do a better job since they'd be more visible and have some feeling of responsibility for their choice.
@freakazoid Again, agreed on multiple roles (teaching is often ~= daycare) and anti-intellectualism. I've addressed the latter as urban-rural / complex-simple divide:
@freakazoid I agree.
Teaching works best where:
1. The students are roughly equivalent in skill.
2. The teacher can work 1:1 with students (often described as "the gold standard" -- direct tutoring -- very expensive BTW), to see _how the lesson is being assimilated_.
#2 is a failure of technical teaching "solutions". Teachers aren't merely information delivery systems, they are *guides*, who see where students stumble, and can give learning cues, not just repeat rote.
@freakazoid That's a really good point.
It's also directly testable: _if_ we can agree on a set of non-socially-discriminatory test criteria, we can apply those to a wide sample of subjects and see how well they do.
At the same time, this problem highlights a fundamental dynamic of ToTMVU: that *assessing quality is itself difficult, expensive, and not generally agreed upon*. How do you find what is best *if you cannot even agree on what is "best"?*
@freakazoid The other issue is that as more things get more complicated, you've got more to teach, to more people, but only so much time, bandwidth, and effectiveness with which to do it.
Civilisations progress based on intergenerational knowledge transfer. *That* is based on both explicit (linguistic) and tacit (experiential) knowledge. We can bump up explicit knowledge transfer fairly effectively. Tacit not so much. Students need to DO, under direct guidance.
@freakazoid So you either stop teaching some things, or you split up what you teach to whom.
The first leads to overall knowledge loss. And that happens anyway.
The second leads to a separation of cultures, and ultimately literally social tribes who do not understand each other. C.P. Snow's Two Cultures, but multiplied.
How easy is it for you to describe what you do professionally to someone who doesn't do it? Your parents, family, friends, strangers?
@freakazoid Hell, I've struggled to explain sysadmin + data hat to other (skilled, recognised) tech types. Or even myself.
(A partial realisation is that the knowledge transfer and pioneering I've been describing is actually a large part of the job, though never a part of job descriptions.)
What happens when major, *mutually critical* parts of society and technological workforces cannot understand *or trust* one another?
(This goes beyond shitty products, ICYMI.)
@freakazoid But, pulling this train back on the rails: if you have worlds where people don't understand each other fundamentally, they're not going to understand their tools, the work requirements, workflows, habits, etc.
And if you put management in the hands of yet another class or classes (PMs, sales, finance, business), that translation gets all the worse.
There's a type of language that emerges between multiple independent cultures in close contact: pidgin.
@freakazoid Pidgin languages share some characteristics:
1. They're not native tongues (those are creoles).
2. They draw on multiple sources, extracting what they need.
3. They're limited to the interface requirements only. Trading cultures are especially prone to pidgin tongues. Not the production, transit, consumption, or use, but the _exchange_.
4. Of necessity, pidgins are simplifications of their source languages.
And the kicker: ...
@freakazoid 5. Pidgin is the linguistic equivalent of a Tyranny of the Minimum Viable User. Pidgin languages evolve under the same requirements as technical products discussed earlier. They have to be sufficient to the task, but understandable to the least capable beneficially-contributing members of interactions.
Also: I'm not saying pidgins are lesser or not useful. They're shaped by demands and environments. But those influences have consequences.
@freakazoid And back to ToTMVU and products: there are a minimal number of ways out.
You can try educating users. If that's _everyone_, you've got a big problem. Advertising is one answer to that challenge. I'm not sure that's a net positive. "High touch" sales are another.
(The prevalence of sales-guy-as-tech-CEO may derive from this, a recent realisation.)
You can limit your market through regulation and/or licensing. That keeps standards up, but size down.
@freakazoid You can go mass-market. You'll run square into the ToTMVU here. The firm that caters to this more viably, modulo other influences, has a higher likelihood of winning. Trying to chase the market up-skill generally loses: Compaq, SGI, Sun, IBM. Bicycles. Fine foods. Quality TV programming. Especially with network effects.
You can exit the market. Survive on government grants, patronage, subscribers. Small-scale, may work.
That's what I've got.
@freakazoid @woozle @o @dredmorbius Oh yes, the "people choose software for features, not quality" thing is something I've been yelling about since the 90s. It's why we have people insist on using Word when they would have an easier time with Markdown, insist on Excel when all they need is a table with no computation, etc.
@freakazoid Largely agreed, though I'll take exception to "laziness".
Case in point: my professional career as a sysadmin / data / ops / dev mutt. I've struggled to answer "what do you do", because much of it is a rather ill-defined "figuring out where the fucking plane is flying", where the plane isn't the system under my control, but the larger technical landscape.
What tools, methods, formats, applications, languages, devices, regulatory changes, market pressures, etc.,...
@freakazoid ... are coming down the pike, and what *minuscule* fraction of those will actually matter?
It's a case of overwhelming information, both in quantity *and* complexity. And I landed in the spot *not* from Great Skill In Predicting the Next 30 Years of IT Infrastructure, but dumb young luck: I liked Unix.
After your first few years in the field (5, 10, 15, 20, and in my case, closer to the latter), your luck in being where the firehose happened to be squirting...
@freakazoid ... tends to run out, and you've got to get back in front of it, or bail out.
I've spent most of the past 5 years trying to decide which I want to, or *can*, do -- the process is tremendously exhausting.
Trying to suss out complex consumer options, and not just tech: investments, job/career, medical advice, services, and devices devices devices devices is HARD.
For all intents and purposes, *NOBODY* can keep up with all of it. It's not laziness, it's overload.
@freakazoid So you fall back on heuristics.
A heuristic is a cheap (that is, low-cost, low-effort) way of discarding virtually all information in the process of coming to a decision.
Even *totally random* heuristics can be useful, in fact, *more* useful than biased ones, *if they prevent systemic error*.
(In exams, I once took out and flipped a coin for questions where I didn't know the specific answer but had ruled some options out. Effective. And intimidating to others.)
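The coin-flip tactic is easy to sanity-check with a quick simulation. A minimal sketch (the function name and question counts are illustrative, not from the thread): guessing blind among four options gets you ~25%, while flipping a coin after ruling two out gets you ~50% -- the randomness is fine, because it carries no systematic bias.

```python
import random

def guess_accuracy(n_questions=10_000, options=4, ruled_out=0, seed=42):
    """Simulate random guessing on multiple-choice questions.

    For each question, `ruled_out` wrong options have already been
    eliminated, and we pick uniformly among the rest. Returns the
    fraction answered correctly.
    """
    rng = random.Random(seed)
    remaining = options - ruled_out
    # Treat index 0 as the correct answer among the remaining options.
    correct = sum(rng.randrange(remaining) == 0 for _ in range(n_questions))
    return correct / n_questions

blind = guess_accuracy(ruled_out=0)     # ~0.25: pure chance on 4 options
informed = guess_accuracy(ruled_out=2)  # ~0.50: coin flip on the 2 left
```

Any partial elimination plus an unbiased random pick strictly beats a biased hunch that systematically favors the wrong answer.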
@freakazoid Cargo-culting is heuristics gone bad.
Though in many cases, with some luck, *it can actually work*, if the facsimile is sufficiently close to the original, and/or you happen to be opportunistically in the cargo stream.
Not always, but just by odds, a decent bet.
So: yes, there's laziness. But there's also too much to know.
And: in a prescientific culture, virtually all human practice was cargo-culting, generally following tradition or myth.