Hey, it's Lucy,

In today's issue:

🔓  Claude's source code leak is leveling the playing field

💸  Token limits, computing power, and the real cost of AI projects

🎬  OpenAI pulls the plug on Sora

📰  The New Yorker's Sam Altman exposé isn’t anything new

🧠 THIS WEEK'S FIX

Last week, Anthropic's source code was accidentally leaked on the internet. The leak was brief and quickly fixed, but what matters is that competitors, and the tools you already use every day like ClickUp, could now replicate what's made Anthropic the only AI lab generating serious revenue. We're talking about a company projected to hit $30B this year, up from $9B last year.

The leak handed over Anthropic's harness: the layer of instructions a company wraps around a model to get it to solve problems. That approach is what makes Claude effective and easy to use as a coding tool, even if the user doesn't know how to write code. That's the secret sauce, and now the world has it.

So if Anthropic accidentally went open source (for a moment), does that mean a rising tide lifts all boats?

Anthropic is rumored to be eyeing an IPO this year, and having its most valuable IP exposed damages its reputation as a safety-first brand. But it's also worth remembering that what leaked is likely an older version of the product, and Anthropic is probably sitting on cutting-edge AI tech that hasn't shipped yet.

Looking for the juiciest AI scoop this week? Subscribe to Spiral TV for my team’s weekly HyperFix Breakdown.

💸 HOT TAKES: THE REAL COST OF AI PROJECTS

Token limits aren't a bug. They're the business model.

If you've noticed you're hitting usage limits sooner, you aren't alone. A lot of us are used to "paid plan" meaning unlimited. Netflix charges you more for more screens, not for more binge-watching. AI is different. Your $20/month isn't a subscription to unlimited access; it's essentially $20 worth of tokens preloaded onto your account. Keep that in mind when planning big projects and ask yourself: how much is this really going to cost?
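To make that "preloaded tokens" framing concrete, here's a minimal back-of-the-envelope sketch. The prices, plan size, and per-request token counts below are hypothetical placeholders, not any provider's real rates; plug in the numbers from your own plan's pricing page.

```python
# Rough sketch: treating a monthly plan as a preloaded token budget.
# All figures below are illustrative assumptions, not real pricing.

def tokens_for_budget(budget_usd: float, price_per_million_usd: float) -> int:
    """Convert a dollar budget into an approximate token allowance."""
    return int(budget_usd / price_per_million_usd * 1_000_000)

def requests_supported(token_allowance: int, tokens_per_request: int) -> int:
    """How many requests of a given size fit inside the allowance."""
    return token_allowance // tokens_per_request

# Assume a $20 plan and a blended price of $10 per million tokens.
allowance = tokens_for_budget(20, 10)
print(allowance)                               # → 2000000

# A big coding prompt that drags in lots of project context might
# consume ~50k tokens per request, so the month supports roughly:
print(requests_supported(allowance, 50_000))   # → 40
```

The takeaway isn't the exact numbers; it's that a context-heavy workflow can exhaust a month's budget in a few dozen requests, which is why the planning habits below matter.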

This week on the HyperFix Breakdown, we got into the weeds on what actually eats your tokens and what you can do about it.

  • Use plan mode first. Before you let an agent run wild on a big project in Claude, put it in plan mode. It reasons through the steps, gives you the roadmap, and lets you course-correct before you've burned anything.

  • Build your context file once, use it everywhere. A strong markdown instructions file means you're not re-explaining the project every time you hit a limit or switch accounts.

  • Computing power matters locally. If your prompts are running slower than a colleague's using the same instructions, it might be your machine. RAM and GPU horsepower affect how fast Claude Code, Claude Work, and similar tools run on your end.

  • Zapier and Make still work. Save your AI budget for complex, judgment-heavy tasks. Simple automations don't need an agent. Keep what works and deploy AI where it makes financial sense. 

Thanks to AI, our job isn't task-based anymore. It's architecting the work: scoping, planning, documenting, and focusing on high-leverage tasks.

🔥 HOT TAKES

Sora is dead. $15M/day was apparently not sustainable.

OpenAI is shutting down Sora. They were reportedly burning $15 million a day keeping it alive for content that's still pretty easy to clock as AI slop. Throw in copyright infringement lawsuits and a just-announced billion-dollar Disney partnership that apparently didn't save it, and here we are.

It's a good reminder: the sheer cost of running these systems is enormous, and not every AI bet pays off. $15M/day is a lot of resources to spend on a product that isn't landing.

📰 WORTH READING

The New Yorker's Sam Altman piece and what's actually new

If you've read Karen Hao's Empire of AI, a lot of The New Yorker's exposé published this week will feel familiar. It's a long read (budget about 90 minutes) that revisits the 2023 ouster and reinstatement, the Thrive investment that had employees' equity on the line, and Sam Altman's well-documented pattern of half-truths.

But here's the newer part of the story that stood out to me, and that I wish they'd gone deeper on: subpoenas targeting AI safety workers.

The piece highlights a 29-year-old lawyer who helped draft California's AI safety bill. OpenAI served him a subpoena demanding access to his private communications, framed as looking for evidence of Elon Musk funding critics. But demanding someone's full private communications goes well beyond that. Other supporters of the bill and critics of OpenAI's nonprofit-to-for-profit restructuring reportedly received similar subpoenas.

Whether this is a legal strategy, intimidation, or both, it's worth watching. It flies directly in the face of OpenAI's founding mission. And if you're not familiar with that backstory, Karen Hao's book is the place to start.

🔥 EXTRA HYPE

• OpenAI proposes to shift the tax burden to companies benefiting from AI, but is it realistic?

• Thousands of 2025 scientific papers were peer-reviewed with fake citations

• MIT challenges AI job disruption narrative 

• Study shows workers are more likely to adopt shadow AI than forced policies 

• Perplexity will give you $1M for your $1B business idea 

• Is Claude getting dumber?

• Pure content monetization is dead, but community experiences are thriving

📨 P.S. If you're building a newsletter and want to eliminate the weekly scramble, that's exactly what we help our customers do.

And if this sparked something, forward it to an AI-curious friend.

Keep Reading