I read the 3 biggest prompt guides so you don’t have to

Hey, Joey here.
Most people still treat prompting like magic.
Ask a question. Pray for a good answer. Hope it sounds smart.
But prompting well is not magic…
It’s a skill. And it’s 90% preparation.
This week, I read the official guides from OpenAI, Claude, and Gemini so you don’t have to.
Here’s what you’re getting:
📌 A dead-simple format to structure any prompt
📌 How to get better results without writing better prompts
📌 Recipes for image and video generation
Let’s make your prompts work harder than you do.
WEEKLY PICKS
🗞️ Official Prompting Guides:
👀 Video: I Do $5,000 Worth of Upwork AI Project For Free

DEEP DIVE
I Read The 3 Biggest Prompt Guides So You Don’t Have To
Anthropic. OpenAI. Google Gemini.
Three companies building the most advanced LLMs out there.
Three guides on how to get the most out of them.
Zero chance most people actually read them.
So I did.
Here’s what to know if you want to stop typing half-baked prompts and start getting work done with AI that sounds like you.
Note: if you’re in a rush, you can also simply use a prompt generator.
Before you prompt: prep like a pro
Most people dump their questions into ChatGPT and hope for a miracle.
That’s fine for one-off tasks. But if you’re using AI consistently across your workflow, you need a better approach.
Here’s a smarter way to prep your prompt, straight from Claude’s docs (with a few additions of my own):
Define what success looks like (what makes a “good” answer?)
Know how you’ll test that output (what would you tweak?)
Bring examples (past work, tone of voice, desired formats)
Load relevant background info (briefs, notes, docs, etc.)
It’s not overkill. It’s just giving the model the same context you’d give a junior team member.
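If you find yourself re-typing the same prep, here’s a minimal sketch (plain Python; the function name and example values are mine, not from any of the guides) of bundling those four ingredients into one reusable brief:

```python
from textwrap import dedent

def build_brief(success_criteria, test_plan, examples, background):
    """Hypothetical helper: bundle the four prep ingredients into one block of context."""
    return dedent(f"""\
        What a good answer looks like: {success_criteria}
        How I'll judge and tweak the output: {test_plan}
        Examples of the tone and format I want: {examples}
        Background you should use: {background}
    """)

# Example values for illustration only.
brief = build_brief(
    success_criteria="a 150-word LinkedIn post in my voice",
    test_plan="I'll compare tone against my last 3 posts",
    examples="the two posts pasted below",
    background="the product launch notes pasted below",
)
```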
Prompt formatting: use structure, not magic words
Let me give it to you straight.
This is the best default format I’ve seen across all 3 guides:
<instructions>
You are a [role] helping me with [task].
</instructions>
<context>
Here’s everything you need to know: [paste doc or background]
</context>
<examples>
Here are 2 examples of what good output looks like: [example A, B]
</examples>
<formatting>
Respond in: [email, JSON, tweet, markdown table, etc.]
</formatting>
Notice the fancy XML tags with “<>”.
This is how you talk to the machine to break down the different things you’re giving it.
This is your best chance at getting a one-shot perfect output. Especially if we’re talking about getting a “right” answer back.
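If you work through the API instead of the chat window, here’s a minimal sketch of sending that exact template with Anthropic’s Python SDK (the model name and placeholder content are my assumptions, not from the guides):

```python
import anthropic

# Assemble the XML-tagged template from above into one prompt string.
prompt = """
<instructions>
You are a copywriter helping me with a LinkedIn post.
</instructions>
<context>
Here's everything you need to know: [paste doc or background]
</context>
<examples>
Here are 2 examples of what good output looks like: [example A, B]
</examples>
<formatting>
Respond in: plain-text LinkedIn post
</formatting>
"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumption: swap in whichever model you use
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```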
But for everything that’s more subjective, context is what will matter more than prompt formatting.
Context is everything
Google nailed it in one line:
“Generative language models work like an advanced autocomplete tool.”
Which means: it only completes as well as what you start with.
That’s why I rarely use LLM routers like OpenRouter: jumping between models means resetting context every time.
If you’re sticking with Claude, Gemini, or ChatGPT for your core workflows, here’s how to use context to your advantage:
Project memory
Claude’s “Projects” and ChatGPT’s “Custom GPTs + Memory” let you build long-term memory
You stop repeating your job, tone, goals, and constraints every time
Over time, the model starts to feel like it gets you

Document-first prompts
Paste long documents before the prompt itself.
This ensures it’s taken into account in the context window.
The best outputs usually come from great inputs, not clever phrasing.
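As a rough sketch (OpenAI’s Python SDK here, but the same idea applies anywhere), “document first” just means the doc goes above the ask in the same message; the file name and question below are made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = open("brief.md").read()  # hypothetical long background doc
question = "Summarize the three biggest risks in this brief, in plain English."

# Document first, question last, all in one user message.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever model you're on
    messages=[{"role": "user", "content": f"{document}\n\n---\n\n{question}"}],
)
print(response.choices[0].message.content)
```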
Voice works well for context-heavy requests
Voice prompts shine when you’re not after a perfectly formatted output and just need to explain context fast, like you would to a team member.

Give constraints, not just instructions
You’ve probably done this before:
Prompt: “Write a social media post about my coaching program.”
That’s not a prompt. That’s guess-and-pray.
Instead, shape the boundaries:
Prompt: “Write a LinkedIn post under 280 words. Plain English. No hashtags. No bro speak. Don’t start with a question.”
Constraints are underrated. All 3 guides emphasize this.
Tell it what not to do.
Prompting for images or videos (Gemini + GPT-4o)
Prompting visuals isn’t about word count. It’s about precision.
Here’s how the pros do it:
Image prompt structure:
Style (analog photo, Pixar 3D, line art, etc.)
Subject (what you want to see)
Context (where it’s happening, mood)
Background (ambiance, colors)

Video prompt structure:
Basics: Subject, Context, Style, Action
Optional: Camera motion, Composition, Ambiance
Prompt: “Close-up of melting icicles on a cliff, zoomed in as droplets fall, with cool blue tones and slow-motion playback.”
Most visual-generation tools (RunwayML, GPT-4o, Gemini Pro Vision) understand these structures.
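If you generate images through an API rather than a chat window, the same structure maps onto one precise prompt string. A minimal sketch with the OpenAI Python SDK (the model name, field values, and join logic are my assumptions):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Style + subject + context + background, joined into one precise prompt.
parts = {
    "style": "analog photo",
    "subject": "a lighthouse keeper reading by lamplight",
    "context": "inside the lantern room during a storm",
    "background": "rain-streaked glass, warm amber light against a dark blue sky",
}
prompt = ", ".join(parts.values())

image = client.images.generate(
    model="dall-e-3",  # assumption: swap for the image model you have access to
    prompt=prompt,
    size="1024x1024",
)
print(image.data[0].url)
```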
After prompting: how to refine
Don’t delete and start over when a response is off.
Use the “quote and improve” method:
Quote: “Our mission is to enable optimal client outcomes…”
Reply Prompt: “Can you rewrite this part to sound less formal?”
In most LLMs, you can do this by highlighting the part you want to rework.

The Canvas tool is also a good way to do this.
You can do this multiple times in a thread. It’s especially useful in long copy, proposals, or anything where tone needs work.
Then: Save the thread.
Reuse it next time by saying “do this again for [X client / Y campaign].”
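Through the API, “quote and improve” is just appending a follow-up message to the same thread instead of starting over. A rough sketch with the OpenAI Python SDK (the model name and draft request are assumptions; the quoted sentence is the example above):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
model = "gpt-4o"   # assumption: use whichever model you're on

# The running thread: keep appending messages instead of starting over.
messages = [{"role": "user", "content": "Write a short proposal intro for my coaching program."}]
draft = client.chat.completions.create(model=model, messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Quote the part that's off and ask for a targeted rewrite.
messages.append({
    "role": "user",
    "content": 'Rewrite this part to sound less formal: "Our mission is to enable optimal client outcomes…"',
})
revision = client.chat.completions.create(model=model, messages=messages)
print(revision.choices[0].message.content)
```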
TL;DR Prompting Cheat Sheet
✅ Use XML-style tags to organize complex prompts
✅ Paste docs before your question
✅ Include role, tone, and formatting rules
✅ Use memory features like Projects and CustomGPTs
✅ Add “what NOT to do” constraints
✅ Use structure for visuals: subject + style + action
✅ Save and reuse your best threads

THAT’S A WRAP
Before you go: Here’s how I can help
1) Sponsor Us — Reach 250,000+ AI enthusiasts, developers, and entrepreneurs monthly. Let’s collaborate →
2) The AI Content Machine Challenge — Join our 28-Day Generative AI Mastery Course. Master ChatGPT, Midjourney, and more with 5-minute daily challenges. Start creating 10x faster →
See you next week,
— Joey Mazars, Online Education & AI Expert 🥐
PS: Forward this to a friend who’s curious about AI. They’ll thank you (and so will I).
What'd you think of today's email?