
AI Model Switcher

Stop juggling Claude and Codex by hand. One file holds your current preference, your agent reads it, you switch in plain English. Hand the kit to your AI assistant — Claude Code, Codex, Cursor, anything — and it installs and adapts to your harness, your binaries, your workflow. The current session keeps running on its model; the change applies on next launch (intentional: mid-session switches break the prompt cache). After setup you type "wiz" in any terminal to land in your current preferred model, type "wiz-switch codex" to change the preference, or just say "switch to opus" inside any agent session.

This is the same pattern that has been running my own setup since Opus 4.7 shipped: distilled, stripped of project-specific paths, and packaged to work on a fresh machine in under five minutes.

Paid Digital Thoughts subscriber?

Yearly subscribers: all products free ($1072.95+ value)

Monthly subscribers: 1 free product per month — subscribe from $29.99/mo

Claim your free copy →
$19.99
One-time purchase. Lifetime updates. MIT license.

What you get

+Hand it to your AI agent: AGENT-INSTRUCTIONS.md is self-contained, the agent runs discovery, asks 5 questions about your setup, installs only what fits
+Single-file Python switcher (no dependencies, ~250 lines) — read it in 10 minutes, modify it for your setup
+Natural-language parser: "switch to codex", "use opus from now on", "set codex model to gpt-5-codex" all just work
+Wrappers around claude and codex auto-read the preference and launch with the right flags
+Claude Code skill + optional UserPromptSubmit hook so switching works inside any session
+Codex AGENTS.md snippet so Codex reads/writes the same preference file
+"any harness" pattern — same two-command contract works with Cursor, Aider, custom CLIs
+Optional multi-machine sync via iCloud / Dropbox / Syncthing (one-line setup) or git
+No daemon, no cloud account, no proxy — your traffic still goes direct to Anthropic / OpenAI / wherever
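To make the wrapper pattern concrete, here is a minimal sketch of how a launcher like "wiz" could read the preference file and pick the right binary. The path, the JSON field names ("provider", "model"), and the fallback behavior are all illustrative assumptions, not the kit's actual implementation; the real wrapper would `exec` the binary rather than echo the command.

```shell
#!/bin/sh
# Sketch of the "read preference, launch the right binary" pattern.
# AI_SWITCHER_PREFS, the default path, and the field names are assumptions.
PREFS="${AI_SWITCHER_PREFS:-$HOME/.config/ai-model-switcher/preferences.json}"

resolve() {
  # Print "provider model" from the preference file; fall back to claude
  # with no pinned model if the file is missing.
  python3 - "$PREFS" <<'PY'
import json, sys
try:
    prefs = json.load(open(sys.argv[1]))
except OSError:
    prefs = {}
print(prefs.get("provider", "claude"), prefs.get("model", ""))
PY
}

set -- $(resolve)
provider=$1; model=${2:-}

# The real wrapper would `exec "$provider" ${model:+--model "$model"}` here;
# echoing keeps the sketch inspectable.
echo "would launch: $provider${model:+ --model $model}"
```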

Package includes

  • START-HERE.html: friendly entry-point file with four numbered steps, for non-technical users
  • AGENT-INSTRUCTIONS.md (11 KB): self-contained brief telling your AI agent how to discover, install, test, and adapt for your machine
  • README.md: human-facing pitch and architecture overview
  • QUICKSTART.md: 10-minute manual install for the DIY path
  • guide.md: full guide with sync, harness wiring, routing patterns, troubleshooting
  • cheatsheet.md: one-page command reference + common fixes
  • scripts/switcher.py: the core (no deps, single file)
  • scripts/route.sh: the launcher router that picks the right binary
  • scripts/install.sh: idempotent installer (drops scripts in ~/.local/bin)
  • scripts/claude-prompt-hook.sh: Claude Code UserPromptSubmit hook
  • harness/claude-code-skill.md: drop-in skill for ~/.claude/skills/switch-model/
  • harness/codex-AGENTS.md.snippet: append to your Codex AGENTS.md
  • harness/any-harness.md: pattern for wiring into Cursor, Aider, anything
  • examples/routing-rules.md: my actual task → model routing rules from April 2026
  • examples/cost-notes.md: what shifted in monthly spend after splitting providers
  • templates/preferences.json: schema reference
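For orientation, a preference file under this design might look something like the following. The field names here are illustrative guesses, not the kit's actual schema (templates/preferences.json in the package is the authoritative reference); the model names match the examples used elsewhere on this page.

```json
{
  "provider": "claude",
  "models": {
    "claude": "opus",
    "codex": "gpt-5-codex"
  },
  "updated_at": "2026-04-01T12:00:00Z"
}
```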

FAQ

Do I need to be a developer to use this?

No. The whole point is that you hand the zip to your AI agent (Claude Code, Codex, Cursor) and tell it to read AGENT-INSTRUCTIONS.md. The agent does discovery, asks you a few plain-English questions about your setup, installs and tests. You never touch the code unless you want to.

Why does the model not change immediately when I say "switch"?

Because mid-session switches break your prompt cache and confuse the agent. The preference file updates instantly, but it applies on the next launch. The 30 seconds you save by not relaunching costs you 10 minutes of re-explaining context to a confused model. After months of running this, I'm convinced durable-but-deferred is the right tradeoff.

Does this work with Cursor / Aider / my custom CLI?

Yes. The contract is two shell commands ("apply-text" to update from natural language, "resolve --shell" to read current preference). harness/any-harness.md shows the pattern. Your agent will adapt it — usually 10 lines of glue.
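As a rough sketch of what that glue looks like: the harness runs the two contract commands quoted above, then launches its own tool. The stand-in function below fakes the "resolve --shell" output (assumed here to be shell-evaluable assignments) so the shape is runnable without the kit installed; the real commands live in scripts/switcher.py.

```shell
#!/bin/sh
# Hypothetical harness glue following the two-command contract.
resolve_shell() {
  # Real kit (assumed invocation): python3 ~/.local/bin/switcher.py resolve --shell
  printf 'PROVIDER=%s\nMODEL=%s\n' claude opus
}

# Step 1 (not simulated here): update the preference from natural language,
# e.g. python3 switcher.py apply-text "use opus from now on"

# Step 2: read the current preference into the environment.
eval "$(resolve_shell)"

# Step 3: launch your own tool with whatever resolved.
echo "my-custom-cli --model $MODEL (provider: $PROVIDER)"
```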

How does it sync across machines?

The preference is a single JSON file. Easiest sync: symlink ~/.config/ai-model-switcher to an iCloud / Dropbox / Syncthing folder. Done in 30 seconds. For audit trails, drop it in a git dotfiles repo with a 5-minute cron pull. Real-time sync needs ~80 lines on top. Pattern provided in the guide.
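The symlink setup can be sketched as below. The iCloud path is only an example (substitute your Dropbox or Syncthing folder), and the config directory name is an assumption based on this page, not a confirmed path from the kit.

```shell
#!/bin/sh
# Link the local config directory to a cloud-synced folder.
SYNC_DIR="${SYNC_DIR:-$HOME/Library/Mobile Documents/com~apple~CloudDocs/ai-model-switcher}"
CONF_DIR="${CONF_DIR:-$HOME/.config/ai-model-switcher}"

mkdir -p "$SYNC_DIR"
mkdir -p "$(dirname "$CONF_DIR")"

# If a real local config directory exists, move its contents into the
# synced folder before replacing it with a symlink.
if [ -d "$CONF_DIR" ] && [ ! -L "$CONF_DIR" ]; then
  mv "$CONF_DIR"/* "$SYNC_DIR"/ 2>/dev/null
  rmdir "$CONF_DIR" 2>/dev/null || true
fi

ln -sfn "$SYNC_DIR" "$CONF_DIR"
```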

Is this a proxy or does it route my traffic?

Neither. It does not see your prompts and does not route your API traffic. It is purely a preference store + launcher wrapper. Your traffic goes direct to Anthropic / OpenAI / your local model exactly as it would without the kit.

How is this different from just typing "claude --model opus"?

Because you are juggling two providers, multiple harnesses, and you keep forgetting which terminal is running what. This kit makes the preference durable across sessions and visible to every harness, so you can switch by thinking instead of by remembering CLI flags.

Secure checkout by Stripe. Instant download + guided Claude Code setup.