Really enjoyed the section about Mastercard and Visa. I see no scenario in which these companies can survive once AI agents can use crypto, like SOL-based stablecoins, to pay each other instantaneously for near-zero fees, a point I had not reckoned with much before.
One thing I’m unsure about in this scenario is how much broad-based price deflation shows up as abundance. If AI drives the marginal cost of many services toward zero, shouldn’t a lot of the consumer basket get cheaper, raising real purchasing power for anyone with income or transfers? It feels like the memo assumes prices stay sticky while incomes fall, rather than prices falling fast enough to partially offset the demand shock. While indeed debt holders will struggle, everything in nominal terms should in theory get cheaper.
However, I guess this holds only if those cost savings don't go straight back into financing GPUs, with AIs consuming their own cost savings. Hopefully we'll all still be here in 2030, as our ASI overlords start terraforming the earth, to see how all this shakes out.
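The price-versus-income offset described above is just a ratio of nominal changes. A toy sketch, with purely illustrative numbers (not from the piece):

```python
def real_income_change(nominal_change, price_change):
    """Fractional change in real purchasing power, given fractional
    changes in nominal income and the price level (e.g. -0.20 = -20%)."""
    return (1 + nominal_change) / (1 + price_change) - 1

# If nominal incomes fall 20% but AI-driven deflation cuts prices 30%,
# real purchasing power actually rises by roughly 14%:
print(real_income_change(-0.20, -0.30))
```

So whether the demand shock dominates comes down to whether prices fall faster than incomes, which is exactly the sticky-prices assumption in question.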
It’s funny to consider that much of the economy exists because of inefficiency and friction. Wealth distribution itself may largely be a byproduct of those frictions like information gaps, transaction costs, regulatory barriers, and coordination problems.
If AI meaningfully reduces those frictions, competition should intensify, driving costs down and benefiting consumers. That same dynamic could reshape the labor market: some roles will compress, but entirely new industries, previously economically infeasible, could emerge as viable, which would also help the labor market. Lower input costs and higher productivity expand what's possible.
From a global perspective, countries that fail to adopt AI aggressively risk losing competitiveness. If productivity gains concentrate in AI-forward economies, capital and economic influence may flow toward them. Nations that hesitate out of fear could fall structurally behind, while countries like the U.S. could disproportionately benefit from absorbing global inefficiencies.
At a high level, it’s difficult to accept that doing more with less would leave society worse off overall.
It does if all the wealth creation and capital flowing into the U.S. ends up in a few pockets while millions are displaced and see their lives completely destroyed. I've always been a capitalist / libertarian. But the scenario described in the piece is one of the few that could change my mind, given a structural issue that might not find a way to self-correct.
The social contract has been completely obliterated. Grow up, work hard, get good grades, major in something valuable, network, get a job adding value, buy a house, have children, save, invest, retire. Millions who did the right thing and held up their end of the bargain are now at risk of losing everything they worked for.
I use it every day to augment my work, but from day one I have believed society would be much better off in 2050 without AI than we will be with it.
The movement will not succeed but there will be an ever growing roar, louder and louder, from society to put the genie back in the bottle. When that movement decides to become violent is when the real collapse will take place.
I'll take the other side of this... Yes, existing software companies will be pressured to at least some degree. Some may even go to zero. That will likely pressure areas of the financial systems to some, albeit highly uncertain, degree in the very short term.
What's missing in the proposed scenario is the unstated assumption that what is currently available is all that gets built, or all that humans actually want. At the core of this article's argument is the observation that "intelligence was scarce". If intelligence is/was scarce, then the scope of what that scarce intelligence could be directed at was also necessarily scarce.
Therefore if intelligence is no longer scarce then the scope of what it can be directed at must go up by a proportional amount. Assuming, of course, that we don't think DoorDash was the peak of human desire, that Salesforce isn't the answer to the mysteries of the universe, that this is the healthiest humans could ever be.
Is deflation in software/white collar tasks actually a downdraft or were the costs of those tasks the bottleneck preventing a much higher level of prosperity achieved through a much wider offering of products and services?
Capital has a way of turning itself into more capital. As possibilities open up elsewhere and (potentially) shrink in software, it would be prudent to assume it will move to areas where returns become apparent because they've become possible.
Do I suggest this and also think there will be no bumps, no casualties along the way? I do not. But underestimating the human ability to adapt, and adapt rather quickly, has always been a losing bet in all but the short time frame.
'By early 2027, LLM usage had become default. People were using AI agents who didn’t even know what an AI agent was, in the same way people who never learned what “cloud computing” was used streaming services.'
I highly doubt this will happen. There is just not enough energy/compute available for this to happen.
I work in Big Tech, in one of the Mag7, in the Cloud and AI division. I've generally found Citrini's understanding of AI and its capabilities poor. What AI can do in software engineering, useful as it is proving, is nowhere near the current hype. Why that is would be a much longer discussion, and it should be noted that "nowhere near the current hype" is still not nothing.
However one thing that has become apparent from all this is that a lot of investors and management are working off very simplistic models of how things work, and genuinely seem to think AI is magic. And something I genuinely do find interesting and valuable in this piece is the idea that it doesn't really matter if it is feasible or not to reimplement the systems a SaaS is providing you as long as you can persuade the salesperson that you genuinely think that you can. Even if it would be a devastatingly bad decision for you, it's a lost sale and that's a strong negotiating chip.
And so I think there's definitely something there that's deeply toxic to SaaS margins even if they continue to be the dominant solution in their niche. I'm not convinced it will necessarily be so forever, but it sets the scene for a while. There's probably a whole host of interesting effects like this caused by beliefs around the technology that don't really require the technology's assistance to have very real economic impacts. I'm going to have to go away and consider what these might be and how they might be tradable.
“The gains from the productivity boom accruing almost entirely to the owners of compute and the shareholders of the labs that ran on it has magnified US inequality to unprecedented levels.”
The solution to the inequality problem is to make more people shareholders of the companies making and selling the machines that are doing the work.
Discretionary spending doesn't collapse if everyone is a shareholder of the AI companies, receiving dividends and loans backed by shares.
The problem is many of these companies are staying private. Big investors are jostling to get a piece of the AI land grab and paying a pretty penny.
The scary outcome is a tiny percentage of people owning the rights to the profits of the machines that humans can’t compete against.
InvestAmerica is a small step by the government to make more Americans shareholders. But not enough. Rather than heavy taxation and $ redistribution to people made obsolete, it’s better to help workers become shareholders before they are made obsolete.
Also, if the AI is extremely capable, it should be able to help train and retrain humans to be useful as the in-demand skill set shifts.
We’ll see if Super-Intelligence on tap can resolve the quandaries it creates
A few people, whom no one asked anything of, are leading and cheerleading the building of a technology that no one asked for and no one even needed. The result? A few gain infinite wealth while the lives that hundreds of millions worked hard to build evaporate before their eyes.
Net benefit? Not a chance. A society with 20-30% unemployment is a terrible place to live. A society where people are paid transfers to sit around all day with free time and no purpose, even worse.
There was zero logical explanation or reason why humanity needed AI. At some point the majority will realize that. They're not going to respond well. The civil unrest will likely turn hot.
The world would be a much better place if AI was treated the same as nuclear weapons, non-proliferation. It will never happen, but one can dream.
Excellent piece, thanks. And wow, this is scary, especially from here in Europe. Europe doesn't want to build AI models: energy costs are prohibitive, and the "best" AI regulation worldwide makes investing in AI unattractive in Europe. So Europe will stay dependent on either US or Chinese models and thus won't be able to tax AI...
where's the basket? ;)
This is one of the most thought provoking pieces I have ever read. Great work guys!
Are we short white shirt factories?
10:30pm UK time - am I supposed to shit my underwear this late at night?
Fantastic article as always Citrini and team.
P.S: Go long $HY9H SK Hynix
Best near future I have read!!!
This hits hard. Great stuff.
Thought provoking but infinitely depressing.
AI use will not be free. therein lies the rub.
Wow, this is scary. Excellent piece, thanks. I wonder how this scenario would affect the US elections five months later.