5 Comments
Thomas Parker

I think you see what you're talking about in movies, specifically in musicals. When the genre - and the studio system that made it viable - died (killed off by Hello Dolly and the like), no more people came up through the system learning how to make those kinds of movies. No more Arthur Freeds, no more Busby Berkeleys, no more Stanley Donens or Vincente Minnellis, no more Fred Astaires or Cyd Charisses or Gene Kellys. So now, when a musical is occasionally made (La La Land), something is missing. It's thin somehow, lacking all that "thick" experience that was built up over decades. And it can't be recovered or learned on the fly - there's no longer anyone alive qualified to teach it.

And don't even get me started on what CGI has done to stunt driving!

Patrick Watson

I teach machine learning so I'm acutely aware of the skills gap you point to here. Yet I tend to think of it as a *revealed* skills gap rather than a *developing* skills gap.

Before G.AI, creative writing and visual art were the go-to examples of something robots didn't have the proper souls to excel at. Before symbolic AI, chess and mathematics were understood as uniquely dependent on human insight. In both cases, this turned out to be wildly inaccurate.

To me, this seems like pretty strong evidence that we have remarkably poor insight into what our "uniquely human" capabilities actually are. Thus, we should be considerably more cautious with AI prognostications. It might even be wise to bet *against* any scenario someone can describe, since the reality has consistently violated our stereotypes.

There might be some narrow insights available about AI's trajectory (e.g., I suspect sophisticated, flexible motor movements are going to be tricky because there's no analogue to smartphones for collecting large samples of mechanosensory data). But hey, you should probably bet against me! The larger socioeconomic impacts are so intrinsically unpredictable that it's best to treat any forecast as mostly bull.

dp
May 11 (edited)

G.AI poses two threats to human capacity. First is the threat to human creativity as a practice and discipline (vs. Eureka moments), which you describe here.

Second is the loss of the practical, system-level knowledge that QA staff / managers / overseers / editors must hold to identify plausible but incorrect G.AI output. In a business context, the AI advocates reassure us that "humans in the loop" will check the machine outputs. But humans with that contextual depth of knowledge are nurtured over years of "apprenticeship" working lower-level jobs in the overall business process.

Said another way, a manager's value is in their experience-based contextual knowledge (to spot when something is amiss) and their intuitions about how to respond to anomalies. If we no longer have "minor league team" managers-in-waiting rising through the ranks of business operational work, because those roles have been replaced by AI agents, then we face a coming managerial / oversight talent shortage.

I wrote about this using some basic BLS statistics on my substack. https://businesstechnologyvalue.substack.com/p/genais-near-futures-senior-talent

throwaway

I've been pointing this out for almost two years now in various places, glad some people are starting to wake up.

In effect, this is a problem of cascade failures. There are certain problem types that nearly everyone, with only a few exceptions, cannot recognize without domain-specific knowledge as a reference. These are the rules painted in blood.

When engineers started building dams, they didn't pay much attention to minor cracking of the materials; they used greater margins for error, but that wasn't enough in areas where they didn't account for the dynamics properly.

It took the first few catastrophic failures, with fatalities, seemingly happening all at once, for anyone to notice; but that was only because we as a people weren't paying attention, and because these are the types of problems humanity is really bad at spotting up front with no reference.

What happens when the time value of your labor goes to zero is simple. When there is no economic benefit, people don't build the skills. The intelligent don't spend time on it; they go where there is a return on their investment. If the environment changes, they are often the first to leave, able to predict it accurately beforehand, or with enough warning that they get out relatively unscathed compared to others.

We've already seen the problem you describe happen with vacuum tube technologies and manufacturing, which were superseded by modern transistors and have become effectively lost technologies, kind of like the sodium vapor lamps Disney used to have, which showed up prominently in the animated sections of Mary Poppins.

It's a 10-year problem, like a Ponzi scheme. You get short, front-loaded benefits, then diminishing returns, then outflows exceed inflows, at which point it completely collapses. With broad-scale AI, that means socio-economic collapse of the sort Catton describes in extending Malthus, where the food supply depends on maintaining order, and that dependency fails in a mathematically chaotic way.
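The shape of that curve is easy to see in a toy simulation. This is just a sketch with made-up numbers, not a forecast; the function name and parameters (ponzi_flows, growth_decay, payout_rate) are my own invented illustrations:

def ponzi_flows(periods=20, recruits=100.0, growth_decay=0.7, payout_rate=0.15):
    # Toy model: inflows come from new recruitment that shrinks each period,
    # while payout obligations compound on everything raised so far.
    total_raised = 0.0
    for t in range(periods):
        inflow = recruits * (growth_decay ** t)  # front-loaded new money
        total_raised += inflow
        outflow = payout_rate * total_raised     # obligations grow with the base
        print(f"t={t:2d}  inflow={inflow:7.2f}  outflow={outflow:7.2f}  net={inflow - outflow:7.2f}")
        if inflow < outflow:
            print("outflows exceed inflows -> collapse")
            break

ponzi_flows()

With these invented parameters, the net flow is strongly positive at first, shrinks each period, and goes negative within a few periods: front-loaded benefits, diminishing returns, collapse.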

There are requirements that must be met for a division-of-labor society to function. Those in the factor markets must be compensated sufficiently to support themselves, a wife, and three children, such that at least one child makes it to having children themselves. Producers must also make a profit, and both of these are in terms of purchasing power.

It goes negative because of bad investments driven by money printing: you get a sieving of wealth into few hands, where the rich win so much that they lose... everything.

Money also requires specific component properties, and those fail too. Mises actually wrote quite a lot about the various types of failure in centralized hierarchical systems back in the 1930s: six problems that are impossible to solve once an economy becomes so socialized that it decays into non-market socialism, followed by chaos as a direct consequence of money printing and the chaotic distortion driven by the failure of economic calculation. Price discovery has already failed when 25% of the market is invisible, and Wall Street now does more than 50% of its volume in the dark. There are also the banking entities that meet the requirements of FASAB S-56 and Basel 3, which are grounded in objective value and fiat (when value, rationally, is subjective).

So instead of just one problem happening in a limited scope, you get chaos where it's happening everywhere at the same time, and there's nothing to be done. This is one of the dangers of contagion. Unintelligent but educated people, blind in their specialties, have left us where our survival may very well depend on solving what we have known for almost a century to be impossible, intractable, unsolvable problems. Hysteresis requiring foresight is just one of those six problems. This is the filter that happens when a society marches toward totalism and total control: people blind themselves. Then Darwinian fitness fails.

Mark P

RE: "The next generation of engineers will grow up using A.I. to solve basic problems all the time—they’ll never have to solve those problems without its help. As a result, though, they’ll never really have their hands in the guts of what they are making, and they won’t really understand, at a tactile level, how those guts work." -- Isn't that what most skilled jobs are like? No single person knows how to build an automobile or jet aircraft from scratch. Specialists come together to build something, one part at a time. And we're already highly reliant on computers to calculate for us.
