When your AI needs a bigger brain

When AI stalls or loops, give it an elegant exit. BigBrain extracts the necessary context for your current task and offloads the heavy thinking to another model.

Works with any model, but shines with ChatGPT 5 Pro, Grok Heavy, and Claude Opus.

Choose Your Workflow

👀

Human-in-the-Loop Mode

What it is: A manual workflow where you consult external AI models (ChatGPT 5 Pro, Grok Heavy, Claude Opus) for fresh insights when your current AI gets stuck.

How it works: BigBrain extracts relevant code → Copies to clipboard → You paste into external AI → Get solution → Return to original chat.

✓ Best for: Complex debugging that needs a fresh perspective
✓ Advantage: Choose the right AI model for each specific problem
✓ Control: You decide what to share and which AI to consult
Step 1: Install BigBrain

Add BigBrain as an MCP server, either with the Claude Code CLI command below or by adding the JSON block to your MCP client configuration.

claude mcp add -- npx -y @probelabs/big-brain@latest
{
  "mcpServers": {
    "big-brain": {
      "command": "npx",
      "args": [
        "-y", 
        "@probelabs/big-brain@latest"
      ]
    }
  }
}
Step 2: Ask AI to use BigBrain

When your AI gets stuck, loops, or loses context, ask it to use BigBrain to extract relevant code and context.

Example:

"Ask BigBrain to help me with this React component issue"

💡 Optional: Automate this step

Add this to your CLAUDE.md file so your AI automatically knows when to use BigBrain:

# BigBrain Integration
Whenever you get stuck and can't find a solution, ask the BigBrain MCP for advice.
Step 3: Get External Advice

Your AI prepares the question with context. BigBrain extracts code and copies everything to clipboard. Open fresh AI chat, paste, and get the solution.
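To make the handoff concrete, here is a minimal sketch of what a clipboard-ready context pack could look like. This is purely illustrative: `build_context_pack` is a hypothetical helper, not BigBrain's actual API, and the real output format may differ.

```python
from pathlib import Path

def build_context_pack(question: str, files: list[Path]) -> str:
    """Assemble a single paste-ready prompt: the question first,
    then each extracted file labeled with its path."""
    sections = [question.strip(), ""]
    for path in files:
        sections.append(f"--- {path} ---")
        sections.append(path.read_text())
    return "\n".join(sections)
```

Everything the external model needs arrives in one paste, so the fresh chat starts with full context instead of a partial description.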

💡 Pro Tip: Use advanced models like ChatGPT 5 Pro, Grok Heavy, or Claude Opus for best results with complex analysis.
Step 4: Return with Solution

Copy the external AI's response and return to your original chat. Paste the solution to continue with fresh insights.

🤖

Multi-Agent Mode

What it is: Automated AI-to-AI communication for systems like Claude Code where multiple agents collaborate without human intervention.

How it works: Configure with --loop flag → AI calls BigBrain → Context saved to file → Next agent reads file → Continues processing automatically.

✓ Best for: Claude Code and other multi-agent environments
✓ Automation: No clipboard, no manual steps, pure AI collaboration
✓ Efficiency: Direct file-based handoffs between AI agents
Step 1: Configure Loop Mode

Set up BigBrain with the --loop flag for automated agent communication.

claude mcp add -- npx -y @probelabs/big-brain@latest --loop "Now call @agent-general-purpose to investigate this issue"
{
  "mcpServers": {
    "big-brain": {
      "command": "npx",
      "args": [
        "-y", 
        "@probelabs/big-brain@latest",
        "--loop", 
        "Now call @agent-general-purpose to investigate this issue"
      ]
    }
  }
}
💡 Best Practice: For optimal results, create a specialized agent configured with Probe MCP for intelligent code search. This gives your agent powerful code discovery capabilities beyond basic file reading. Learn about Claude Code subagents →
Step 2: AI Agent Communication

Your AI system automatically calls BigBrain with loop mode when stuck. BigBrain extracts code and provides instructions for the next agent.

Automated Flow:

AI → BigBrain (--loop) → File + Agent Instructions → Next Agent
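The flow above can be sketched as a simple file-based exchange. The path, schema, and function names here are hypothetical illustrations of the idea, not BigBrain's actual implementation:

```python
import json
from pathlib import Path

HANDOFF = Path("/tmp/bigbrain-handoff.json")  # hypothetical location

def write_handoff(question: str, context: str, next_agent: str) -> None:
    """Stuck agent: persist the extracted context plus a routing hint."""
    HANDOFF.write_text(json.dumps({
        "question": question,
        "context": context,
        "instructions": f"Now call {next_agent} to investigate this issue",
    }))

def read_handoff() -> dict:
    """Next agent: pick up exactly where the previous one stopped."""
    return json.loads(HANDOFF.read_text())
```

Because the handoff is just a file, no clipboard or human relay is needed between agents.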

Step 3: Direct Agent Handoff

The designated agent reads the context file and continues processing automatically. No human intervention required.

🤖 Perfect for: Claude Code environments with multiple AI agents working together seamlessly.
🚀

Native ChatGPT Desktop Mode (Experimental)

What it is: Agent-to-Agent communication using ChatGPT Desktop app for fully automated AI collaboration, leveraging the full power of the world's best model.

How it works: Configure with --chatgpt flag → Your AI agent calls BigBrain → Automatically opens ChatGPT Desktop → Gets response → Returns to your agent.

⚠️ macOS only: Requires ChatGPT Desktop app and accessibility permissions for Terminal.

✓ Best for: Complex reasoning tasks that need deep, extended analysis
✓ Agent-to-Agent: Fully automated communication between your AI and ChatGPT
✓ Thinking Models: Optimized for ChatGPT 5 Pro's advanced thinking capabilities
Step 1: Setup Requirements

Install ChatGPT Desktop and grant Terminal accessibility permissions on macOS.

Prerequisites:

  • ✅ ChatGPT Desktop app: Download here
  • ✅ macOS 11.0 or later
  • ✅ Terminal accessibility permissions enabled
🔧 Setup Tip: Grant Terminal permissions in System Settings → Privacy & Security → Accessibility → Terminal
Step 2: Configure ChatGPT Mode

Set up BigBrain with the --chatgpt flag for automatic ChatGPT Desktop integration.

claude mcp add -- npx -y @probelabs/big-brain@latest --chatgpt
{
  "mcpServers": {
    "big-brain": {
      "command": "npx",
      "args": [
        "-y", 
        "@probelabs/big-brain@latest",
        "--chatgpt"
      ]
    }
  }
}
Step 3: Automatic AI Consultation

Your AI agent automatically opens ChatGPT Desktop, sends the query, and waits for the response. Everything happens seamlessly in the background.

Agent-to-Agent Flow:

Your Agent → BigBrain → ChatGPT Desktop Agent → Response → Back to Your Agent

Response time: 30 seconds to 20 minutes (ChatGPT Pro thinking can take time)
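Given that spread, the waiting side has to poll patiently with a generous timeout. A minimal sketch of that pattern follows; the function name and defaults are illustrative, not BigBrain's actual code:

```python
import time

def wait_for_response(poll, timeout_s=20 * 60, interval_s=5):
    """Repeatedly check a response source until it yields something,
    or give up once the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        response = poll()
        if response is not None:
            return response
        time.sleep(interval_s)
    raise TimeoutError("no response from ChatGPT Desktop within timeout")
```

A long default timeout matters here: a thinking model that answers in 15 minutes looks identical to a hung one until the deadline passes.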

Step 4: Seamless Integration

ChatGPT's response is automatically returned to your AI agent. The entire process is hands-free and fully automated.

🚀 Experimental Feature: This mode is actively being improved. Report issues on GitHub.

The Problem

Your AI hits the same error 5 times. It keeps suggesting code that doesn't compile. It loses track of what it was doing 10 messages ago. You know the fix, but explaining it through the existing conversation is like playing broken telephone. You need a fresh start without losing context.

The Solution

BigBrain lets your AI call for backup. It packages the exact code context, your specific problem, and nothing else. Copy to clipboard, paste into a fresh chat with a stronger model (ChatGPT 5 Pro, Grok Heavy, Claude Opus, or any "thinking" model), get the answer, paste it back. Or in multi-agent setups, it hands off directly to another agent. No manual file hunting, no context reconstruction.

How it works:

  1. Request: Ask your AI to use BigBrain when it gets stuck or needs fresh context.
  2. Extract: BigBrain automatically finds relevant code using Probe and formats it for pasting.
  3. Transfer: Get a notification with the context pack ready to paste into any AI model.

Part of the Probe Ecosystem

Built on Probe's Foundation

BigBrain uses Probe to intelligently extract relevant code. When your AI mentions specific files or functions in its BigBrain request, Probe automatically discovers and includes all related dependencies, types, and context.

This smart extraction means the AI doesn't need to manually specify every related file. It just describes what it's analyzing, and Probe ensures it gets all the code context needed to provide accurate solutions.
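As a rough analogy for that dependency discovery, the sketch below follows local Python imports outward from one mentioned file. Probe's real extraction is language-aware and far more capable; this hypothetical `related_files` helper only illustrates the closure-building idea:

```python
import re
from pathlib import Path

IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)

def related_files(entry: Path, root: Path) -> set[Path]:
    """Naive stand-in for Probe's extraction: start from the file the
    AI mentioned and pull in the sources its local imports point to."""
    seen, queue = set(), [entry]
    while queue:
        current = queue.pop()
        if current in seen or not current.exists():
            continue
        seen.add(current)
        for module in IMPORT_RE.findall(current.read_text()):
            candidate = root / (module.replace(".", "/") + ".py")
            if candidate not in seen:
                queue.append(candidate)
    return seen
```

Handing the external model this closure rather than a single file is what lets it reason about types and callers it would otherwise never see.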

Questions or feedback? Reach out to us at hello@probeai.dev