ICYMI - Here’s the July Wildcard recap

How to approach AI the right way, plus how to mix models for better outcomes.

Everything you missed at our FIFTH Wildcard Wednesday!

Hi friends,

This month we cracked open the black box behind AI. We looked at how today’s large language models (LLMs) really work (spoiler: it’s not magic), the importance of prompt engineering, how to pick the right model for the job, and the risks of trusting AI too much.

Here’s your full recap, plus real tools and tricks you can use today.

🎥 The Video Replay (available for a limited time)

Autogenerated English subtitles are available

🧠 How LLMs Actually Work

What we discussed:

LLMs don’t “know” facts; they spot patterns. These systems digest enormous amounts of content and respond by predicting what text statistically should come next. It’s all math, not magic.

Why it matters:

Understanding the limits of these tools is key to avoiding hallucinations, bad decisions, or unintended bias.

What you can do:

  • Don’t confuse confidence with correctness. Always fact-check

  • Remember: outside-the-norm requests = higher risk of gibberish

  • Think of LLMs like interns with perfect memory but no common sense
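
If you like seeing the idea in code, here’s a toy sketch in Python (a tiny word-count table, nothing like a real LLM) of what “predicting what comes next” means, and why outside-the-norm requests go off the rails:

```python
# Toy illustration only: a tiny "what usually comes next?" table.
# Real LLMs do this over tokens with billions of parameters, but the
# spirit is the same -- statistics, not understanding.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows each word in the text we've "read".
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Guess the most statistically likely next word; '?' if we've never seen it."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("the"))   # -> 'cat' (the most common follower in this corpus)
print(predict_next("moon"))  # -> '?'  (outside the norm, no reliable prediction)
```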

💬 Prompting = Telling Your AI to Dress for the Job

What we discussed:

Good prompts are like giving your AI instructions before it starts work. You can get wildly different results by changing how you phrase a request, even working within the same model and conversation.

Why it matters:

The difference between a boring answer and a breakthrough often comes down to the prompt.

What you can do:

  • Don’t just “ask a question” - assign AI a role: “act as a tutor,” “act as a journalist,” etc.

  • Let one LLM help you write prompts for another (“prompt the prompt”)

  • Use separators like """ or +++ when pasting content to clarify what's input vs instruction
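
Here’s what the role and separator tips can look like when you assemble a prompt yourself, as a minimal Python sketch (the article text is a placeholder; paste the result into whichever chat tool or API you actually use):

```python
# Minimal sketch: assign a role up front, then fence off pasted content
# with separators so the model can't confuse your instructions with the input.
pasted_article = "…text you copied from elsewhere…"  # placeholder

prompt = f'''Act as a patient tutor for a non-technical reader.

Summarize the article between the triple quotes in three short bullet points.

"""
{pasted_article}
"""'''

print(prompt)
```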

🧪 Try Multiple Models: Claude, GPT, Gemini & Perplexity

What we discussed:

Each AI has strengths. GPT is great for chat. Claude excels at long, human-style reasoning. Gemini offers powerful integrations. Perplexity is your go-to for sourced research.

Why it matters:

You wouldn’t hire a rocket scientist to fetch the mail, or a mail carrier to design rockets. Choose the right tool for the job.

What you can do:

  • Use Claude for rewriting dry content into more human language

  • Use GPT to brainstorm images, outline content, or draft letters

  • Use Perplexity to generate annotated research reports

  • Test different models when you get weird or generic answers

🧰 LLM Toolchain: Playing Them Off Each Other

What we discussed:

We explored workflows where one LLM improves the output of another. Use this technique when writing prompts, rewriting summaries, generating visuals, or even composing from scratch with a given tone or goal.

Why it matters:

Combining models (e.g., GPT → Claude → Flux) gives you better results than sticking with just one.

What you can do:

  • Use GPT to draft content → Claude to improve tone → Flux to generate visuals (a code sketch of this handoff follows this list)

  • If you’re stuck, ask one LLM to write the prompt for another

  • Try voice input on mobile for natural conversation-style prompting
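
Below is a minimal sketch of the first two hops of that chain (draft with GPT, then hand the draft to Claude for tone), assuming the official openai and anthropic Python SDKs with API keys set in your environment; the model names are illustrative and may need updating, and the Flux image step is left to whichever image tool you use:

```python
# Sketch of a two-model chain: GPT drafts, Claude rewrites the tone.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in your environment.
from openai import OpenAI
import anthropic

gpt = OpenAI()
claude = anthropic.Anthropic()

# Hop 1: ask GPT for a rough draft.
draft = gpt.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Draft a 100-word note inviting colleagues to a monthly tech Q&A."}],
).choices[0].message.content

# Hop 2: ask Claude to improve the tone of GPT's draft.
rewrite = claude.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=500,
    messages=[{"role": "user", "content": f"Rewrite this in a warmer, more human tone:\n\n{draft}"}],
)

print(rewrite.content[0].text)
```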

🚨 Security: Jailbreaks, Poisoning, and Deepfakes

What we discussed:

We covered growing concerns around AI misuse: from LLMs being tricked into producing harmful content to nation-states flooding the internet with fake data to bias future models.

Why it matters:

AI isn’t inherently evil, but it can easily be misled. Understanding these risks is step one to protecting yourself.

What you can do:

  • Never paste sensitive info into cloud-based AIs

  • Be cautious of AI-generated “facts” without source citations

  • Watch for subtle shifts in bias; fake content can end up baked into future model weights

  • Avoid being part of the training data by limiting what you share publicly

📚 Prompt Libraries & System Prompts

What we discussed:

We looked at great examples of “super prompts” from Anthropic and Wharton. These show how to set up detailed roles like tutors, coaches, or research assistants.

Why it matters:

Having a strong system prompt upfront leads to more useful, less generic results.

What you can do:

  • Bookmark the Anthropic prompt engineering guide

  • Steal the tutor prompts from Wharton’s business school research

  • Ask any LLM: “Generate a system prompt to act as a ___”
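
As a concrete example of putting a “super prompt” in the system slot, here’s a minimal sketch assuming the official openai Python SDK (the tutor wording and model name are just illustrative):

```python
# Minimal sketch: a reusable system prompt that sets up a detailed role,
# kept separate from the user's actual question. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

system_prompt = (
    "Act as a friendly tutor. Ask one question at a time, check my "
    "understanding before moving on, and never just hand me the answer."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Help me understand how LLMs predict text."},
    ],
)

print(response.choices[0].message.content)
```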

🧰 Tools & Mentions from the Session

Here’s what came up in this month’s conversation:

🔁 Wildcard Wednesday returns next month

No slides, no sales pitch. Just real talk on what’s changing in tech, what you can do about it, and how to stay sane and secure.

📆 Mark your calendars for high noon Pacific, the second Wednesday of every month!

You never know what we’ll get into next. But you will walk away smarter.

👉 Got a topic or question you want to bring up next time? Just reply and let me know.

In the meantime, if you need a hand or want to explore any of these topics further, you know where to reach me. 😉

Founder, Provocateur of Prompts