Hey, it's Lucy,

In today's issue:

  • The inputs you need to stop creating AI slop

  • AI is taking AND making new jobs 

  • AI water usage data gets challenged 

  • Cornell Professor takes on AI with typewriters

As people become more hypersensitive to the perception of AI slop, I’m witnessing a trend of more companies and business owners asking: Can you use AI but not make it sound like AI?

That brings us to an interesting intersection. I’m all for outsourcing tedious tasks to AI, but there’s still no shortcut for writing well online without doing at least some heavy lifting at the outset. 

That means actually taking the time to define a brand voice, tone and value proposition. I often say that copywriting is 80 percent research and 20 percent writing. AI is great for assisting with that research. The brand research that used to take me weeks can now be done in a few days.

But that research still requires inputs from stakeholders, deep thinking and iterating. Actually knowing what you want to say and standing behind it takes conviction and depth. So when I hear founders say “I don’t want it to sound like AI slop,” what they are really saying is: I want to create something that is authentic and real. I want to be respected in my industry and by my peers. I want to create thought leadership that creates change, not noise. 

At a minimum, that foundation includes:

  • A clear audience (not broad—real people, real examples)

  • The core problem you solve (what’s actually at stake)

  • Use cases and transformation (before → after)

  • Competitor awareness (how you’re different)

  • Values that show up in decisions, not just words

  • A defined tone and voice that can be replicated

I don’t believe brand messaging is dead. I think it’s more important than ever. AI is only as good as your inputs. So if you aren’t able to provide in-depth, insightful and rich ideas, you’ll end up sounding generic.

We’re diving into the components of brand messaging and how to create inputs that get better results in this week’s breakdown of the HyperFix. 

🧠 THIS WEEK’S FIX

Yesterday’s news that the tech company Oracle laid off between 10,000 and 30,000 employees via e-mail has me revisiting the whole “AI is going to take all our jobs” debate. I’m skeptical when companies themselves cite AI as a reason for layoffs (or a “change in business model” after employees have trained AI systems). Considering that Oracle and OpenAI have ramped up spending for a $300 billion deal to expand OpenAI’s infrastructure next year, it’s clear the company’s priorities are going in a different direction.

It’s not entirely clear yet what roles these employees had, but it is clear that pressure from investors + stock prices dipping + debt expenditures for AI future gains = a deprioritization of human workers.

I believe we can hold two opposing truths: Yes, AI is taking jobs AND it’s creating jobs.

Let’s call this new class of workers the fixers. The software developers who have to clean up broken vibe-coded apps. Or the copywriters who are turning AI slop into something worth reading. There’s a need to keep humans in the loop, especially when the AI glow fades and things break.

I’ve pulled a handy list of jobs that will likely be in demand in the near future in case you want to beef up your skills or consider how you might adapt to the current job market.

Emerging Roles:

  1. AI Prompt Engineers: Specialists who know how to effectively instruct AI models to produce quality output

  2. AI Content Editors: Professionals who humanize and refine AI-generated content

  3. Brand Strategy Consultants: Strategic thinkers who develop authentic messaging (not just writers)

  4. AI Implementation Specialists: Help companies integrate AI tools into workflows effectively

  5. Data Analysts: Extract insights from AI-processed data

  6. AI Ethics and Compliance Officers: Ensure responsible AI use

  7. AI Training Specialists: Teach organizations how to use AI effectively

  8. Content Strategists: Develop comprehensive content strategies that leverage AI as a tool

  9. Customer Insight Researchers: Deep understanding of customer psychology and behavior

  10. AI-Human Workflow Designers: Optimize how humans and AI work together

🤖 HOT TAKES

We’ve all seen those reels on Instagram in which a person pretends to ask ChatGPT or Claude a seemingly innocuous question while pouring an entire bottle of water on the ground. But how much water is AI really using?

One of the most highlighted sources is a November 2025 report by the Brookings Institution, which found that the average data center uses up to 300,000 gallons of water per day (half an Olympic-sized pool), and that larger data centers use up to 5 million gallons (about a day’s water usage for a small town of 15,000-20,000 people). It also put each query at around 500 ml of water (a small water bottle).

Recently, Dr. Len Necefer (Policy Researcher, Carnegie Mellon) conducted his own study of AI’s actual resource consumption, based on his usage over 11 weeks. He says the claim that each AI query uses a bottle of water is a myth. 

His findings: 

• All US data centers combined use 0.3% of total water withdrawals

• 100 Claude conversations over 11 weeks of daily use consumed 5-35 liters of water

• Energy usage: 2-8 kilowatt hours (same as charging a phone 160-180 times, or driving 1-6 miles)

And his outputs, he argued, resulted in things that make a real impact in the world: a power-planning iOS app, environmental policy guides, documentary pitches, grant applications, management plans and infographics, all in service of environmental progress.

So here we are with two different findings and interpretations. I’m not here to convince you which one is true, only to point out that our understanding of AI’s impact is nuanced and changing rapidly.
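If you want to sanity-check the numbers yourself, here’s a rough back-of-envelope sketch. The constants come from the figures above; treating each of Necefer’s conversations as a single query is an assumption that actually understates the gap, since real conversations contain many queries:

```python
# Back-of-envelope check: does "one 500 ml bottle per query" square with
# Necefer's measured usage? All constants are illustrative, from the text above.

BOTTLE_ML_PER_QUERY = 500      # Brookings-cited estimate: ~500 ml per query
CONVERSATIONS = 100            # Necefer's sample over 11 weeks
MEASURED_LITERS = (5, 35)      # Necefer's measured range for those conversations

# If every conversation were just one query at 500 ml each:
implied_liters = CONVERSATIONS * BOTTLE_ML_PER_QUERY / 1000

print(f"Implied by the per-query figure: {implied_liters:.0f} L")
print(f"Measured range: {MEASURED_LITERS[0]}-{MEASURED_LITERS[1]} L")
```

Even under that generous one-query-per-conversation assumption, the bottle-per-query figure implies 50 liters, above the top of the measured 5-35 liter range, which is the arithmetic behind Necefer’s “myth” claim.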

In this week’s HyperFix breakdown on YouTube, we talk about why this is a starting point for sifting through the data, asking more questions and being aware of the sources of certain claims and any agendas or biases behind them. It’s also a reminder that AI does have an environmental cost, and what we make with it matters.

🔥 EXTRA HYPE

• Cornell Professor makes students use typewriters to prevent AI use 

• OpenAI launches historic fundraising round at $122b

• More people are using AI, but they aren’t happy about it

• Stanford exposes AI’s people-pleasing problem  

• Meta’s court losses could result in lack of AI transparency 

• Judge blocks Trump from labeling Anthropic a supply chain risk

📨 P.S. If you are building a newsletter and want to eliminate the weekly scramble, that is exactly what we help our customers do.

And if this sparked something, forward it to an AI-curious friend.

Keep Reading