Carney and Trump Look for Common Ground While the World Bids Up AI Stocks
Happy Thanksgiving, everyone,
If you don’t have any video of your family spending time together over the Canadian Thanksgiving holiday, give Sora a try.
Listen on Apple, Spotify, or Google Podcasts.
Market Update📈📉
Okay, so while I’m having my philosophical moment about AI, the markets have been telling their own, more straightforward story. Let’s look at the numbers for the last quarter.
September was a pretty good month for stocks, but you had to be in the right places. The S&P 500 was up about three and a half percent. But get this—nearly 70% of that gain came from a handful of Big Tech companies. So if you weren’t in tech, you probably felt a little left out.
The other interesting trend is that capital seems to be flowing into Emerging Markets, which had a fantastic month, up over 7%. At the same time, money is trickling out of Europe. It’s a bit of a rotation we’re keeping our eye on.
And for the first time in a while, US small-cap stocks actually beat large-caps in the third quarter. That’s a sign that the rally might be broadening out a little bit, which is a healthy thing to see.
What Breaks This Market Rally?
It all comes back to the trillion-dollar AI question: Are we in a bubble?
The giants (Amazon, Google, Microsoft, and Meta) are going to be fine. They're spending an eye-watering amount of money on AI data centers, but they can afford it. It's a "tails I win, heads I don't lose too badly" situation for them. That infrastructure will eventually pay off.
The real risk, if this bubble pops, is for the companies that sell the shovels in this gold rush. Think NVIDIA, Oracle, and all the other companies in the AI supply chain. They would be in for a really tough time.
I’m watching for three things that could be the “uh-oh” moment:
* AI progress hits a wall. We realize AGI isn’t coming next year, and companies ask, “Why are we spending $10 billion for a model that’s only 5% better?”
* Supply finally overtakes demand. For two years we've heard "we can't get enough chips!" What happens when everyone has built their data centers and suddenly there's more than enough supply to go around? That's when the panic buying stops.
* The free money dries up. A few of those AI startups with cool demos but no actual business model run out of cash, and investors get spooked.
Podcast & YouTube Recommendations🎙
Casey Handmer on the Dwarkesh Patel Podcast - I’ve listened 5 times.
Ray Dalio at GEF:
Best Links of the Week🔮
* “OpenAI’s short-form artificial intelligence video app Sora hit 1 million downloads less than five days after its launch in late September... Bill Peebles, head of Sora at OpenAI, shared the milestone in a post on X late Wednesday. He said Sora reached 1 million downloads even faster than ChatGPT, the company’s popular AI chatbot that supports 800 million weekly active users. Sora allows users to generate short videos for free by typing in a prompt.” Source: CNBC
* “China has tightened its controls on rare earth exports ahead of an expected meeting between President Xi Jinping and President Donald Trump. Shares of U.S. rare earth and critical mineral miners surged as the market speculates on further investment in the industry by the White House. The Trump administration has taken equity stakes in several miners this year to stand up a domestic supply chain against China.” Source: CNBC
* Last week also saw some developments on the small language model (SLM) side worth noting. VentureBeat explains in “Samsung AI researcher’s new, open reasoning model TRM outperforms models 10,000X larger — on specific problems”: “The trend of AI researchers developing new, small open source generative models that outperform far larger, proprietary peers continued this week with yet another staggering advancement.”
“Alexia Jolicoeur-Martineau, Senior AI Researcher at Samsung’s Advanced Institute of Technology (SAIT) in Montreal, Canada, has introduced the Tiny Recursion Model (TRM) — a neural network so small it contains just 7 million parameters (internal model settings), yet it competes with or surpasses cutting-edge language models 10,000 times larger in terms of their parameter count, including OpenAI’s o3-mini and Google’s Gemini 2.5 Pro, on some of the toughest reasoning benchmarks in AI research.” - VentureBeat
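To make the "recursion" in the Tiny Recursion Model concrete, here is a minimal sketch of the recursive-refinement idea as the coverage describes it: one tiny network is reused step after step, first updating a latent "scratchpad" state from the input and the current answer, then rewriting the answer from that state. Everything below (layer sizes, step counts, the `TinyRecursionModel` name) is an illustrative assumption on my part, not the actual SAIT implementation.

```python
# Illustrative sketch of recursive answer refinement (not the official TRM code).
# Assumption: a single small block is reused at every step, so the effective
# "reasoning depth" comes from iteration count rather than parameter count.
import torch
import torch.nn as nn

class TinyRecursionModel(nn.Module):
    def __init__(self, dim: int = 128, inner_steps: int = 6, outer_steps: int = 3):
        super().__init__()
        self.inner_steps = inner_steps   # latent-update iterations per refinement
        self.outer_steps = outer_steps   # how many times the answer gets rewritten
        # Two tiny blocks, shared across all steps (this is what keeps params low).
        self.update_z = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.update_y = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.zeros_like(x)  # current answer embedding, refined over time
        z = torch.zeros_like(x)  # latent "scratchpad" state
        for _ in range(self.outer_steps):
            for _ in range(self.inner_steps):
                # Re-read the input and current answer, update the scratchpad.
                z = z + self.update_z(torch.cat([x, y, z], dim=-1))
            # Rewrite the answer from the scratchpad.
            y = y + self.update_y(torch.cat([y, z], dim=-1))
        return y

model = TinyRecursionModel()
print(sum(p.numel() for p in model.parameters()))  # small; depth comes from looping
out = model(torch.randn(4, 128))                   # refined answer embeddings
```

The design point, per the article, is that iterating a small network can stand in for stacking a much deeper one, which is how a 7-million-parameter model can punch so far above its weight on narrow reasoning benchmarks.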