AI, Revisited: what changed in four months and what actually matters now
Four months is an eternity in AI time. In this Wildcard Wednesday session, we reset the board on artificial intelligence: what's new in 2026, which models are worth your attention, and how agentic workflows are changing what AI can do on your behalf.
All the tips, plus the video replay (for a limited time)

🎥 The Video Replay (available for a limited time)
Autogenerated English subtitles are available
Key takeaways - your TL;DR checklist
AI is probabilistic, not deterministic. Verify critical outputs.
Agents can act autonomously but carry significant security risks.
Paid tiers ($20/month) unlock model selection, memory, and deeper thinking.
Confer offers encrypted AI for sensitive conversations.
Use Claude for writing and code, Perplexity for research, GPT for general tasks.
System instructions and temperature settings improve prompt quality.
AI-written content has tells: em dashes, uniform sentences, repetitive structures.
Your digital exhaust is more dangerous now that AI can analyze it at scale.
Tools and links mentioned
Confer (encrypted AI by Moxie Marlinspike): https://confer.to
Perplexity (AI-first search with citations): https://www.perplexity.ai
Claude (Anthropic): https://claude.ai
ChatGPT (OpenAI): https://chat.openai.com
Gemini (Google): https://gemini.google.com
Flux (image generation by Black Forest Labs): https://blackforestlabs.ai
Pangram (AI detection tool): https://pangram.com
OpenClaw (agentic AI framework for Mac)
🤖 LLM fundamentals: how they work and where they fail
What we discussed: Large language models are probabilistic, not deterministic. They predict what comes next based on statistical patterns, not logic. This means they excel at typical cases but struggle with edge cases, leading to hallucinations when they lack sufficient data. With trillions of parameters and billions of activations per query, even engineers cannot fully trace how answers are generated.
Why it matters: Understanding this limits over-reliance and helps you know when to verify output.
What you can do:
Treat AI as a research assistant, not an oracle
Verify critical outputs before sharing or acting on them
Use AI for brainstorming, drafting, and pattern recognition, not final decisions
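The "probabilistic, not deterministic" point above can be sketched in a few lines. A language model assigns probabilities to candidate next tokens and samples from them, so even a heavily favored answer isn't guaranteed every time. The tokens and probabilities below are a made-up toy distribution, purely for illustration:

```python
import random

# Toy next-token distribution for the prompt "The capital of France is"
# (hypothetical probabilities, for illustration only).
next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "a": 0.04}

def sample_next_token(probs, rng):
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
samples = [sample_next_token(next_token_probs, rng) for _ in range(100)]

# Most draws are "Paris", but not necessarily all of them. That gap between
# "usually right" and "always right" is why you verify critical outputs.
print(samples.count("Paris"), "out of", len(samples))
```

Real models do this over tens of thousands of tokens at every step, which is why the same question can produce different answers on different days.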
🧠 Agentic AI: when AI acts on your behalf
What we discussed: Agents go beyond chat. They plan, execute, and can act autonomously. OpenClaw and similar frameworks let AI access your system to complete multi-step tasks like booking travel or writing code. However, security risks are significant. Hidden instructions in files or images could redirect agent behavior, potentially exposing sensitive data.
Why it matters: Agents multiply productivity but also multiply risk if given too much autonomy.
What you can do:
Use agents for contained tasks in virtual environments, not on your main system
Set clear spending limits and permissions before enabling autonomous actions
Monitor agent activity closely and review all outputs before finalizing
💳 Paid vs free models: the $20/month difference
What we discussed: Free versions of AI tools offer less than 10% of what paid tiers provide. Paid models unlock model selection, deeper thinking modes, memory features, and saved conversations. The going rate is around $20/month across most platforms.
Why it matters: Paid tiers dramatically improve reliability, speed, and customization.
What you can do:
Try a paid tier for one month on your most-used platform
Use memory features to let the AI learn your preferences over time
Save useful chats as reusable workbenches for recurring tasks
🔐 Confer: private, encrypted AI
What we discussed: Confer is built by Moxie Marlinspike, founder of Signal. It offers end-to-end encrypted AI conversations where each chat is individually encrypted. This addresses privacy concerns with cloud-based models like GPT and Gemini, where conversations may be stored or used for training.
Why it matters: Sensitive topics deserve private channels, even with AI.
What you can do:
Consider Confer for confidential or sensitive conversations
Understand trade-offs: privacy features may limit some capabilities like image analysis
Migrate important chats from public platforms to encrypted alternatives
🎯 Model landscape: who leads in 2026
What we discussed:
Claude 3.5 Sonnet: Best for writing, reasoning, and coding
GPT-4o/4.5: Strong for general use and multimodal tasks, but performance varies at lower tiers
Gemini: Improved Google Workspace integration, still inconsistent
Perplexity: Best for sourced research with annotated results
Confer: Best for private, encrypted conversations
Grok: Tightly integrated with X, limited access, ethical concerns noted
Why it matters: Different models excel at different tasks. Choosing wisely saves time and improves results.
What you can do:
Use Claude for long-form writing and code
Use Perplexity for research that requires citations
Use Confer for sensitive topics
Play models off each other: draft in one, refine in another
✍️ Prompting in 2026: system instructions and temperature
What we discussed: Prompting has evolved beyond simple questions. System instructions set behavior before a conversation starts. Temperature controls creativity: low for accuracy, high for brainstorming. You can also ask an AI to write its own optimal prompt for a given task.
Why it matters: Better prompts mean better outputs with less iteration.
What you can do:
Ask your AI: "How should I prompt you for best results?"
Use system instructions to lock in roles or workflows
Adjust temperature based on task: low for facts, high for ideas
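For readers who use an API rather than a chat window, here is a minimal sketch of where system instructions and temperature live in an OpenAI-style chat request. The model name is a placeholder and the exact parameter names vary by provider, so treat this as the shape of the idea, not a definitive payload:

```python
# A minimal sketch of an OpenAI-style chat request body. "your-model-here"
# is a placeholder; check your provider's docs for exact field names.
def build_request(system_instruction, user_prompt, temperature):
    return {
        "model": "your-model-here",  # placeholder, not a real model name
        "temperature": temperature,  # low (~0.0-0.2) for facts, high (~0.8+) for ideas
        "messages": [
            # The system message sets behavior before the conversation starts.
            {"role": "system", "content": system_instruction},
            {"role": "user", "content": user_prompt},
        ],
    }

factual = build_request(
    "You are a careful research assistant. Cite sources and say 'unsure' when unsure.",
    "Summarize the trade-offs of end-to-end encrypted AI chat.",
    temperature=0.1,
)
```

The same structure with `temperature=0.9` and a looser system instruction turns the model into a brainstorming partner instead of a fact-checker.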
🔍 Perplexity demo: AI-first research
What we discussed: Perplexity combines AI with live web search, returning sourced answers with citations. Deep Research mode runs multi-step investigations and returns executive-level reports. You can also compare models side by side within Perplexity, testing GPT, Claude, Gemini, and others on the same query.
Why it matters: Research that used to take hours now takes minutes with verifiable sources.
What you can do:
Use Perplexity for customer support queries instead of digging through help docs
Try Deep Research for complex topics requiring multiple sources
Compare model outputs within Perplexity to find the best fit for your task
🕵️ Can you spot AI-written content?
What we discussed: Yes. Common tells include em dashes (—), uniform sentence length, and repetitive structures like "not only this, but also this." Detection tools can estimate AI probability, but humans can also learn to spot patterns. You can also ask AI to rewrite content more naturally.
Why it matters: Transparency matters. Flooding the world with AI slop reduces trust and value.
What you can do:
Review AI output for tells before publishing
Adjust sentence length and structure to sound more human
Avoid publishing copious low-value content just because AI made it fast
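Two of the tells above, em dashes and uniform sentence length, are easy to check mechanically. The sketch below is a rough heuristic, not a detector like Pangram; the thresholds you'd act on are a judgment call:

```python
import re
import statistics

def ai_tell_report(text):
    """Heuristic checks for two common AI 'tells': em-dash count and
    how uniform the sentence lengths are. Illustrative, not a detector."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        "em_dashes": text.count("\u2014"),  # count of em-dash characters
        "avg_sentence_len": statistics.mean(lengths) if lengths else 0,
        # A low standard deviation means very uniform sentence lengths,
        # which reads as machine-written.
        "len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
    }

sample = ("The model is fast\u2014very fast. It writes well. It edits well. "
          "It plans well.")
print(ai_tell_report(sample))
```

A report with several em dashes and a near-zero length spread is a cue to revise before publishing, exactly the manual review step recommended above.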
📡 Digital exhaust: surveillance in the age of AI
What we discussed: Years of tracking data (clicks, location, device IDs) now sit in databases. AI excels at finding patterns in unstructured data. This combination makes surveillance easier and doxxing more accessible. Your digital exhaust trail is more dangerous now than ever.
Why it matters: AI amplifies existing privacy risks from the surveillance economy.
What you can do:
Delete unused apps that send tracking pings
Opt out of data brokers where possible
Use private DNS and router-level ad blockers to reduce data leakage
Wildcard Wednesday returns next month
Second Wednesday at 12:00 PM Pacific. No slides. No fluff. Just real talk on what’s changing - and what you can do about it.
📆 Mark your calendars for high noon Pacific, the second Wednesday of every month!
You never know what we’ll get into next. But you will walk away smarter.
👉 Got a topic or question you want to bring up next time? Just reply and let me know.
In the meantime, if you need a hand or want to explore any of these topics further, you know where to reach me. 😉

Founder, Minister of Model Clarity 🪄

