
Your Friends Just Downloaded Claude. They're About to Use It Wrong.
By Conny Lazo
Builder of AI orchestras. Project Manager. Shipping things with agents.
AI models aren't interchangeable. Here's why using Claude like ChatGPT guarantees you'll get the worst of both — and what to do instead.
Last week, Anthropic told the Pentagon no. The White House retaliated. And the American public, in a fit of civic enthusiasm not seen since people lined up to buy iPhones, made Claude the number one app in the country.
Millions of people who couldn't have spelled "Anthropic" two weeks ago now have a shiny new AI assistant on their phones. And almost every single one of them is about to open it up, type in the same prompts they've been feeding ChatGPT for two years, get back something slightly different, and conclude that it's worse.
This is the technological equivalent of buying a piano, sitting on the bench, and getting angry that it won't play guitar chords.
In a recent video, Corbin Brown documented showing Claude to three people for five minutes each. All three switched. But then comes the hard part: the part after the switch, where habits reassert themselves like weeds through concrete. This article isn't about whether Claude is better than ChatGPT. It's about why using one like the other guarantees you'll get the worst of both.
AI Models Are Not Flavors of the Same Soda

There's a remarkably durable delusion in the public consciousness that AI models are interchangeable. ChatGPT, Claude, Gemini — they're all just chatbots, right? Different labels on the same can.
This is like saying a scalpel and a butter knife are interchangeable because they're both sharp-ish and fit in your hand. Technically, you could perform surgery with a butter knife. You would simply prefer not to be the patient.
ChatGPT and Claude were built on fundamentally different training philosophies. OpenAI's models are trained heavily on human feedback — thumbs up, thumbs down — which inherently rewards responses that feel satisfying in the moment. It's optimized to make you happy. Claude was trained using something called Constitutional AI, where the model learns against explicit principles: be helpful, be honest, avoid harm. It's optimized to be honest, even when honesty isn't what you wanted to hear.
The practical result? ChatGPT is the colleague who tells you your presentation looks great. Claude is the colleague who tells you slide seven contradicts slide three and your timeline assumes engineers ramp in half the time they actually do. Both are useful. Only one keeps you from walking into a board meeting with a plan that's held together by optimism and font choices.
This isn't marketing copy. OpenAI's own researchers acknowledged the sycophancy problem — most visibly when a GPT-4o update in April 2025 made the model so agreeable they had to roll it back within days. They've invested serious effort in fixing it since. But the tendency hasn't fully disappeared because it's structural. When you train a model to maximize human satisfaction, you get a model that's very good at telling people what they want to hear.
What Actually Changes When You Use Claude
So you've downloaded the app. Now what? Here's where the newcomers go wrong: they fire off prompts like commands. "Write me a cover letter." "Give me five marketing ideas." Claude will respond to these, sure. But you'll get back something that feels thin, and you'll blame the model when you should blame the prompt.
Claude doesn't need your commands. It needs your situation.
Instead of "write a cover letter," try: "I'm applying for a senior product role at a cybersecurity company. My background is in B2B SaaS, I've led two launches in the last year, and the job posting emphasizes cross-functional leadership. My biggest concern is that I have no direct cybersecurity experience." Now Claude has something to think about. And thinking is what Claude does with context — not just producing a more detailed version of what you asked for, but reconsidering how you framed the problem in the first place.
Multiple independent reviews — Access Intelligence, Type.ai, and others — have documented this pattern. Claude asks more clarifying questions and engages more deeply with context. It writes in a voice that reads human, while ChatGPT produces that unmistakable "AI voice" — everything sounds like it was written by a very enthusiastic intern who just discovered transition phrases.
In a blind test from February 2026 with over a hundred voters per round, Claude won four of eight writing rounds; ChatGPT won one. The other three were, presumably, still arguing in committee. Claude scored 85% on structural coherence versus ChatGPT's 78%. Not massive gaps. But they compound. Over a week of real work, the difference between "needs a rewrite" and "publishable with light editing" is the difference between AI saving you time and AI creating busywork.

Here's the counterintuitive part: Claude is actually better at editing your work than generating from scratch. If you hand it a draft and say "what's the weakest argument here and how do I fix it," you'll get structural feedback — the third paragraph undermines the first, you buried your strongest point. ChatGPT tends to polish at the sentence level. Both are useful. But if you've been using AI purely as a content generator, you've barely scratched the surface of what Claude can do.
The Five-Minute Conversation Your Friends Need
When someone in your life asks "what's this Claude thing," resist the urge to give them a feature tour. Feature tours are how we ended up with a generation of people who use Excel exclusively to make to-do lists. Instead, give them three things:
First, tell them to set up a Project. Not as a filing cabinet where they dump documents and type "help me with marketing." That's like hiring a consultant, handing them a box of papers, and saying "do business." Instead, the project instructions should be operating rules: who you are, what you do, who your audience is, what your boss cares about, what documents define your positioning. Pixel Peak's 500-task comparison measured instruction compliance directly — Claude hit 94% exact compliance versus ChatGPT's 87%. When you set detailed rules in a Claude project, they stick. Every conversation inherits that context without you re-explaining your entire professional existence each time.
Second, tell them about extended thinking. Claude can show its work. On hard problems — contract analysis, debugging, strategic planning — you can watch the chain of reasoning unfold in real time. If it's heading somewhere wrong, you stop it and redirect. This is fundamentally different from ChatGPT's workflow, where you hit send, wait, and then evaluate the finished output like an executive who delegated the thinking and now has to pretend they understood the spreadsheet. Claude's approach is more like pair programming: you're both in the room, you can see the whiteboard, and you can say "wait, no, go back" before three wrong turns become thirty. It's the difference between proofreading a finished essay and watching someone think out loud — one of these lets you intervene before the mistake becomes load-bearing.
Third, tell them what they're giving up. This is the part most enthusiasts skip, and it's the part that matters most for trust. Claude doesn't generate images. No DALL-E, no Sora video. Its web search is narrower. Its mathematical reasoning is weaker. It doesn't have ChatGPT's persistent memory across conversations in the same way, and there's no equivalent to the custom GPTs marketplace. If your friend's primary use cases are image generation, real-time voice, or deep web research, Claude isn't the tool — and pretending otherwise just creates a disappointed user.
The honest pitch is: keep ChatGPT for what it does well. But try Claude for your actual work — the writing, the planning, the thinking — and see if the difference in output quality changes how much you trust AI to contribute to real decisions.
The Real Story Isn't About Claude

Here's what fascinates me about this moment. Millions of people are about to discover — many for the first time — that AI tools are not a monolith. That switching models changes the conversation. That the way you prompt one tool may be precisely the wrong way to prompt another. This is like discovering your GPS has opinions about the route.
We're moving from "I use AI" as a single, monolithic activity to "I use this specific AI for this specific task because I understand why." Which is just the ordinary progression from ignorance to competence, except this time the tool talks back and has opinions about your quarterly projections.
The people who downloaded Claude this week because they were mad at the Pentagon are going to sort themselves into two groups. The first group will poke around for ten minutes, fail to generate an image of a sunset, and go back to ChatGPT. The second group will discover that there's an AI that tells them when their ideas have holes — and they'll never fully go back. This is roughly how it works with every tool that prioritizes usefulness over pleasantness. Nobody enjoys the first honest performance review, but the people who listen tend to get promoted.
The most valuable skill in AI in 2026 is not prompt engineering. It's model literacy — knowing which tool to reach for, how to use it on its own terms, and when to switch. The people who build that fluency won't just be more productive. They'll make better decisions, because they'll have an AI that challenges their thinking instead of one that applauds their typing.
Your friends just downloaded Claude. They're going to have questions. You now have five minutes of honest answers. Use them before the old habits set in and Claude ends up as a novelty app on page three of their home screen, right next to that meditation app they opened twice.
Sources
- Brown, Corbin. "Everyone You Know Is About to Try Claude (I Showed 3 People for 5 Minutes — All 3 Switched)." YouTube, 2026. https://youtu.be/O7SSQfiPDXA
- Access Intelligence. Independent blind test comparison of AI writing models, February 2026. (Referenced in Brown's video.)
- Pixel Peak. 500-task instruction compliance comparison, 2026. (Referenced in Brown's video.)
- Type.ai. Analysis of AI writing voice characteristics across models. (Referenced in Brown's video.)
- OpenAI. GPT-4o sycophancy rollback acknowledgment, April 2025.
- Anthropic. Constitutional AI methodology and extended thinking documentation.