I have occasionally joked that it's very easy to tell generative neural networks (genAI) are not actually worth very much to the average company. After all, if they were, these companies would have already been paying through the nose for 'synthesists': people who are very good at sitting in one place and doing nothing but responding to random requests for info from their coworkers, skimming data and rapidly kludging together summaries and associations between concepts in human-readable reports.
Of course, this is an oversimplified joke. We do have 'consultants', after all, a nebulous group of overpaid people who float between offices and provide 'advice' of various kinds, but that's not the same thing as sticking a neurodivergent person in a cubicle on a full-time salary and leaning on them to sort through all the stuff you don't want to.
Where this sort of role does exist, it isn't highly paid, much like the other jobs genAI is apparently poised to automate. Office assistants (a job that generally demands a lot of rapid synthesis), call center and other 'customer service' workers, non-fiction 'content generators', and freelance designers are not paid very well.
As such, the potential savings are not as dramatic as they might seem, even if these roles are ever totally replaced. Part of the problem, of course, is that these are rarely jobs that generate explicit profits; they just solve a wide array of smaller problems and make things more productive overall.
I feel quite validated in my little joke and half-reasoned analysis, though, because the money-minded bastards at Goldman Sachs agree with me. According to their latest public-facing report on global tech, genAI can't replace shit-all right now (on an industry-wide scale), won't be able to for at least the next decade, and if it ever really does, it's still not going to be all that wild in terms of savings.
"My sense is that such cost savings won’t translate to more complex, open-ended tasks like summarizing texts, where more than one right answer exists."
—Daron Acemoglu, Institute Professor at MIT, in Goldman Sachs' Global Macro Research, Issue 129
I like reading this kinda report, in part because I know its purpose. I used to help write ones just like it: they exist to make your clients and prospective clients trust you and let you make decisions on their behalf. So, if the report says something positive, it's probably because there's money in it for someone. Likewise, if you stick a warning or negative outlook in, you'd better have a damn good reason for doing it, because clients don't like being naysaid.
So the big GS saying "nah, shit's fucked, son" is. Kind of a big deal, honestly. Because this is a serious, serious naysaying.
All positive sentiments in the industry, even the comparatively sensible ones in this report, boil down to "technology always gets more efficient over time, it'll be fine". Even the most hardline pro-genAI spruiker seems fundamentally unable to identify how these efficiency gains will translate into capability gains, let alone how any of it might lead us to some kind of genuine artificial intelligence.
What we have, broadly, is the ability to take prior data, churn it through a series of very expensive calculations, and output a statistical regurgitation of that data that is, on average, about 80% correct. To quote a *very* good and disdainful long-form response to this GS report:
"Generative AI at best processes information when it trains on data, but at no point does it "learn" or "understand," because everything it's doing is based on ingesting training data and developing answers based on a mathematical sense or probability rather than any appreciation or comprehension of the material itself. LLMs are entirely different pieces of technology to that of "an artificial intelligence" in the sense that the AI bubble is hyping, and it's disgraceful that the AI industry has taken so much money and attention with such a flagrant, offensive lie."
—Edward Zitron, in 'Pop Culture' on his website, Where's Your Ed At?
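For the morbidly curious, here's roughly what that "mathematical sense of probability" looks like with the trillion dollars stripped out: a toy bigram model that tallies which word follows which in its training text, then generates by sampling from those tallies. This is my own cartoon sketch (the corpus and names are invented for illustration), not how a real transformer works, but the core move is the same: probabilities derived from training data, zero comprehension of it.

```python
import random
from collections import defaultdict

# The "training data": tally which word follows which.
corpus = (
    "the model predicts the next word and the next word is whatever "
    "the training data says the next word usually is"
).split()

follower_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Emit words by weighted random choice over observed followers."""
    word, output = start, [start]
    for _ in range(length):
        followers = follower_counts.get(word)
        if not followers:
            break  # dead end: this word never led anywhere in training
        # Pure frequency, no appreciation or comprehension of the material.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Run it a few times and you get plausible-looking regurgitations of the training text, some more 'correct' than others. Everything past this toy is scale and engineering; none of it is understanding.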
I think it is, probably, not a great sign when large-scale investors like Goldman Sachs (with 150 billion USD in market capitalization) start agreeing with a negative outlook on your product that industry outsiders have held for years. In fact, I think it's just not great when industry outsiders consistently have negative outlooks on your product, full stop. It's very easy to go "well, I'm an industry insider, so obviously I have the unique knowledge and skills necessary to be sure I'm right". Unfortunately, the view from inside a ten-foot-deep hole also counts as a 'unique' view, and you shouldn't be surprised when people aren't clamouring to hop in and join you.
The long and short of it is that the burgeoning genAI industry is set to dump one trillion (with a 't') USD into development over the next five or so years, despite the technology's total inability to generate returns. Actually, it's worse than that: nobody can articulate how it might ever feasibly generate returns.
Straight-up, nobody has any idea. Every CEO champing at the bit to cram genAI into their products (such as Zoom CEO Eric Yuan, who envisions "AI clones" going to meetings and handling 70% of the average workload) has, without exception, totally failed to articulate how their vision is going to come to pass on a technical or even workflow level, let alone why, if it does, their company would be better placed to rent-seek from the massive productivity boost.
Delightfully, GS also uses the same "picks and shovels" analogy I'm so fond of: as it goes, during gold rushes, the people who reliably make the most money are the ones selling picks and shovels to hopeful miners. Here, as in many bubble/grift industries, the people making money are the people providing the infrastructure for all these neural networks: selling hardware, in particular. Everyone else is either running massive risks or being conned.
Even large companies, or 'hyperscalers' of this technology, like Google and Amazon, are only set to earn "incremental" revenue from it. This is not, to put it mildly, what was advertised. Demos are faked, model performance is overhyped, the impact of gaining additional training data is overstated ad nauseam, and the industry churns along on hype so empty it eclipses the 'Dot Com' bubble.
In his piece, Zitron also makes the astute observation that the "un-anticipated" use cases of many prior technologies were, in fact, anticipated by the people developing them into a final product for market. Not just anticipated, but roadmapped, thoroughly and plausibly. GenAI doesn't come close to clearing that hurdle. As such, its success so far at hoovering up investor money is... grotesque.
"It's genuinely remarkable how many people have been won over by this remarkable con — this unscrupulous manipulation of capital markets, the media and brainless executives disconnected from production — all thanks to a tech industry that's disconnected itself from building useful technology."
—Edward Zitron, again in 'Pop Culture' on his website, Where's Your Ed At?
One trillion dollars. I can't get over that figure. The sheer, spectacular waste of it. One trillion dollars of labour and hardware and electricity, crapped into the void because big hyper-dimensional clouds of statistical associations can rapidly approximate the text output of a confidently wrong human.
The entire tech industry has failed the Turing test in the most expensive and embarrassing fashion possible. Your power bill will probably go up as a result. My recommendation is to find out where your local AI server farm stores the water it uses for its cooling system and piss in the storage tanks. I don't think it'll help, but it will be cathartic.