Hey it’s Lucy,

In today’s issue:

  • How much control should you give your agent?

  • What we can learn from big tech’s balance sheets 

  • The human-in-the-loop framework for your newsletter writing prompts

  • The real strategy for getting ahead with AI (it’s not what you think)

  • Become a better storyteller with AI, for free

🧠 THIS WEEK’S FIX

Over the weekend, the internet lost its mind over Moltbook, a Reddit-style platform where thousands of agents began posting, taking on roles, and interacting without human oversight. It sparked conversations about how much access and control we should give agents and whether AGI has arrived.

This is because tools like Clawdbot require unfettered access to local machines, which opens the door to prompt injection, data leakage, and unintended exposure. Right now, this technology is built for people who know how to use these tools safely, and those users rely on strong technical controls: separate computing systems like Mac Minis, throwaway hardware, and strict isolation.

Until Clawdbot can operate with scoped permissions and clear containment, I’m choosing agents with browser-based access only, stronger security hygiene such as two-factor authentication across all platforms, and slower adoption over hype-driven experimentation.

🤖 HOT TAKES

Speaking to a Bloomberg reporter at Davos, Signal president Meredith Whittaker was asked what mistakes we’ll look back on in AI ten years from now. Her answer: money.

She argues that in any other industry, today’s AI balance sheets would be torn apart. But with AI, few people are asking the basic questions: What productivity is actually increasing, and how is it being measured?

Whittaker also called out the hype around AI agents. By accessing things like calendars, credit cards, and sensitive data, agents introduce real security risks. She places responsibility on companies to reckon with those risks before shipping.

My take: AI always involves trade-offs. Use should be guided by values, with a clear-eyed view of risks and benefits. I wouldn’t use an agent to plan something low-stakes, like a birthday party. But I would use one for research and curation if it frees time for higher-leverage work.

🤿 DEEP DIVE

Tired of hearing marketers say you’re going to be left behind if you don’t jump on board with the latest AI tool, agent or framework?

Every week it seems like my team and I are discussing the pros and cons of adding a new AI tool or function to our workflows. 

Ultimately we want to simplify and create a seamless experience that supports our ideas rather than outsources them.

While it’s good to experiment, we’re much more interested in building solid workflows and systems and safeguarding our data before handing agents more access and decision-making power.

If you’re a founder or operator, sorting through the hype can create a heavy cognitive load. Rather than getting lost in a cycle of experimenting with and abandoning new tools, I suggest getting clear on:

  • Your end goal and expected productivity gains

  • Your values

  • Your systems and business infrastructure

This is especially important if you're creating a newsletter. To fully leverage AI, you need a solid foundation: a clear understanding of your audience, the problem your newsletter solves, your format, how you generate ideas, and how you publish and optimize for growth.

AI KOOL-AID
On February 9, 2026, from 6–7 PM EST, I’ll be tuning into The New Landscape of AI-Driven Communication, a free online event hosted by New York University. 

The panel brings together faculty and AI practitioners Stephen McConnell, Boris Kievsky, Libby Clarke, and Ching-Fu Lan, who will share concrete insights on how AI is reshaping media, communication, and storytelling, and how to work with these tools responsibly and creatively.

What I’m most looking forward to is the focus on practical takeaways: how to fine-tune Human + AI workflows, design thoughtful agentic systems, and use AI to become better storytellers, not noisier ones. Register here. 

🗞️NEWSLETTER BIZ LAB
Your reader is savvy. Researchers have found that reader trust drops by 50 percent if content is perceived to be AI generated.

However, if you train LLMs on your content and voice, the results are much more authentic. I still suggest proceeding with care and doing your own drafting and editing. One helpful guide is a framework developed by Harvard Medical School that keeps humans at the center of AI work. It includes the following steps:

Self-Reflect: Before using AI, clarify your goals, teaching philosophy, and perspective.

Prompt with a Model: Use a structured prompting method to provide clear instructions to the AI.

Academic Requirements: Ground the AI with relevant standards and requirements from your field.

Research on Pedagogy: Incorporate established theories and best practices to guide the AI's output.

Critique: Critically evaluate the AI's output against your initial goals and standards. This is the most crucial step.
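To make the steps above concrete, here’s a minimal sketch of how they could be folded into a reusable prompt template. This is my own illustration, not part of the Harvard materials; every field name and string below is a hypothetical example.

```python
# Hypothetical sketch: turning the human-in-the-loop framework into a
# reusable prompt template. Field names and wording are my own, not
# drawn from the Harvard Medical School framework itself.

def build_prompt(goal, voice, requirements, best_practices, task):
    """Assemble a structured prompt that front-loads the human context
    (Self-Reflect), states field requirements and research grounding,
    and ends with the concrete task for the model."""
    sections = [
        ("My goal and perspective", goal),           # Self-Reflect
        ("My voice and style", voice),               # Self-Reflect
        ("Requirements to follow", requirements),    # Academic Requirements
        ("Best practices to apply", best_practices), # Research on Pedagogy
        ("Task", task),                              # Prompt with a Model
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_prompt(
    goal="Help busy founders adopt AI deliberately, not reactively.",
    voice="Direct, skeptical of hype, written in the first person.",
    requirements="Cite no statistics I did not provide. Keep it under 200 words.",
    best_practices="Lead with the reader's problem before the solution.",
    task="Draft an intro paragraph for this week's issue.",
)
```

The final step, Critique, stays outside the code on purpose: the framework’s whole point is that a human evaluates the output against the original goals before anything ships.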

🔥 EXTRA HYPE

• ChatGPT’s language feedback loop is changing how we speak 

• Developers using AI scored 17 percent lower on comprehension tests

• 72 percent of teens report adopting AI companions 

• X acquires Elon’s xAI, creating world’s most valuable private company

• Ring unveils AI “lost dog” feature to find your pup

• Substack launches video streaming app for your TV

• Researchers use AI to stop algae blooms from taking over coastlines

• Google’s AI system perplexes its own CEO

• Best tools for building your newsletter’s tech stack

📨 P.S. If you are building a newsletter and want to eliminate the weekly scramble, that is exactly what we help our customers do.

👉 Book a consult with our team

And if this sparked something, forward it to an AI-curious friend.
