# Understanding Dual-Transport MCP
Why AuditSwarm supports both HTTP and STDIO transports for the MCP server.
## The Challenge
Different AI platforms require different connection methods:
| Platform | Required Transport | Location |
|---|---|---|
| ChatGPT | HTTP (remote server) | Cloud |
| Claude Desktop | STDIO (local process) | Local machine |
| Microsoft Copilot | HTTP (remote server) | Cloud |
| Google Agent Builder | HTTP (remote server) | Cloud |
**Problem:** Building a separate MCP server for each platform would duplicate code and create a maintenance burden.
## The Solution: Dual-Transport Architecture
AuditSwarm's MCP server supports both transports using shared code:
```
┌─────────────────────────────────────────┐
│         MCPServer (Core Logic)          │
│  • Tools (suggest_change, query_data)   │
│  • GraphQL client                       │
│  • Suggestions enforcement              │
└──────────────┬──────────────────────────┘
               │
       ┌───────┴───────┐
       │               │
HTTP Transport    STDIO Transport
(Cloud Agents)    (Claude Desktop)
       │               │
   ┌───▼───┐       ┌───▼───┐
   │ChatGPT│       │Claude │
   │Copilot│       │Desktop│
   │ Gemini│       └───────┘
   └───────┘
```
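The seam between the shared core and the two wrappers can be pictured as a small transport interface. The following is a hypothetical sketch: the `Transport` interface and envelope types are illustrative, not names from the AuditSwarm codebase.

```typescript
// Hypothetical sketch of the core/transport seam; these type and
// interface names are illustrative, not the actual AuditSwarm code.
interface JsonRpcRequest {
  jsonrpc: '2.0';
  method: string;
  params: { name: string; arguments: Record<string, unknown> };
  id: number | string;
}

interface JsonRpcResponse {
  jsonrpc: '2.0';
  result?: unknown;
  error?: { code: number; message: string };
  id: number | string;
}

// A transport only moves JSON-RPC envelopes in and out;
// all tool logic stays in the shared core.
interface Transport {
  start(handle: (req: JsonRpcRequest) => Promise<JsonRpcResponse>): void;
}
```

Each wrapper shown later in this document is, in effect, one small implementation of this seam.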
## Transport Comparison

### HTTP Transport (Remote)

**Use Case:** Web-based AI agents (ChatGPT, Copilot, Gemini)
**Protocol:** JSON-RPC over HTTP POST requests
**Authentication:** OAuth 2.1 (JWT bearer tokens)
**Deployment:** Cloud Run (GCP), App Service (Azure), ECS (AWS)

**Pros:**
- ✅ Works with any web-based agent
- ✅ Centralized deployment
- ✅ Standard OAuth security
- ✅ Easy to scale
**Cons:**
- ❌ Requires internet connection
- ❌ Small latency overhead (~50-100ms)
- ❌ Hosting costs (~$15-30/month)
**Example Request:**

```bash
curl -X POST https://mcp.auditswarm.com/mcp \
  -H "Authorization: Bearer JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "suggest_change",
      "arguments": { ... }
    },
    "id": 1
  }'
```
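For comparison, here is a minimal sketch of the same kind of call from TypeScript; the endpoint, token variable, and `query_data` arguments are placeholders:

```typescript
// Minimal sketch of a JSON-RPC tool call over the HTTP transport.
// Endpoint, token, and arguments are placeholders. Runs in any
// modern Node ESM context (top-level await, global fetch).
const res = await fetch('https://mcp.auditswarm.com/mcp', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.MCP_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    jsonrpc: '2.0',
    method: 'tools/call',
    params: { name: 'query_data', arguments: {} },
    id: 1,
  }),
});

// The response is a standard JSON-RPC 2.0 envelope:
// { "jsonrpc": "2.0", "result": ..., "id": 1 }
const { result } = await res.json();
```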
### STDIO Transport (Local)

**Use Case:** Desktop applications (Claude Desktop, local AI tools)
**Protocol:** Newline-delimited JSON-RPC over stdin/stdout
**Authentication:** Local user (trusted process)
**Deployment:** Local machine (spawned by the AI app)

**Pros:**
- ✅ Zero hosting cost (runs locally)
- ✅ No network latency
- ✅ Works offline
- ✅ Simpler auth (local user)
**Cons:**
- ❌ Only works on local machine
- ❌ Requires local database access
- ❌ User must install/configure
- ❌ Not suitable for team deployments
**Example Configuration:**

```json
// ~/.config/claude/claude_desktop_config.json
{
  "mcpServers": {
    "auditswarms": {
      "command": "npx",
      "args": ["tsx", "/path/to/stdio-server.ts"],
      "env": {
        "DATABASE_URL": "postgresql://localhost/auditswarms",
        "MCP_LOCAL_USER_EMAIL": "user@example.com"
      }
    }
  }
}
```
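Under the hood, the host app spawns the configured command and exchanges one JSON-RPC message per line over the process's pipes. A rough sketch of that exchange, assuming the configuration above (the file path and arguments are placeholders):

```typescript
// Sketch of what an MCP host does with a STDIO server: spawn it and
// speak newline-delimited JSON-RPC over its stdin/stdout.
import { spawn } from 'node:child_process';
import readline from 'node:readline';

const child = spawn('npx', ['tsx', '/path/to/stdio-server.ts'], {
  env: { ...process.env, MCP_LOCAL_USER_EMAIL: 'user@example.com' },
});

// One JSON-RPC response per line on the child's stdout
const rl = readline.createInterface({ input: child.stdout });
rl.on('line', (line) => console.log('response:', JSON.parse(line)));

// One JSON-RPC request per line on the child's stdin
child.stdin.write(
  JSON.stringify({
    jsonrpc: '2.0',
    method: 'tools/call',
    params: { name: 'query_data', arguments: {} },
    id: 1,
  }) + '\n'
);
```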
## Implementation Strategy

### Shared Core Logic

Both transports use the same `MCPServer` class:
```typescript
// src/lib/mcp/server.ts
export class MCPServer {
  constructor(
    private graphqlClient: GraphQLClient,
    private userId: string
  ) {}

  async handleToolCall(toolName: string, args: Record<string, unknown>) {
    // Shared logic for all transports
    switch (toolName) {
      case 'suggest_change':
        return this.suggestChange(args);
      case 'query_data':
        return this.queryData(args);
      // ...
      default:
        throw new Error(`Unknown tool: ${toolName}`);
    }
  }
}
```
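Each tool method is then a thin wrapper over the GraphQL client. As a hedged illustration, `suggestChange` might look something like this; the mutation name, input shape, and `request` call follow the common `graphql-request` style and are assumptions, not the actual AuditSwarm schema:

```typescript
// Inside the MCPServer class. Hypothetical sketch: the mutation and
// input shape are illustrative, not the actual AuditSwarm schema.
private async suggestChange(args: Record<string, unknown>) {
  const mutation = `
    mutation SuggestChange($input: SuggestChangeInput!) {
      suggestChange(input: $input) { id status }
    }
  `;
  // Writes go through the suggestions workflow and are attributed to
  // the authenticated user, regardless of which transport called in.
  return this.graphqlClient.request(mutation, {
    input: { ...args, suggestedBy: this.userId },
  });
}
```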
### HTTP Transport Wrapper
```typescript
// src/lib/mcp/start-server.ts
import express from 'express';
import { MCPServer } from './server';

const app = express();
app.use(express.json()); // parse JSON-RPC request bodies

app.post('/mcp', async (req, res) => {
  // Verify the OAuth token and resolve the calling user
  const userId = await authenticateJWT(req.headers.authorization);
  const server = new MCPServer(graphqlClient, userId);
  const result = await server.handleToolCall(
    req.body.params.name,
    req.body.params.arguments
  );
  res.json({ jsonrpc: '2.0', result, id: req.body.id });
});

app.listen(3001);
```
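`authenticateJWT` is referenced above but not shown. One plausible implementation verifies the bearer token against the issuer's JWKS; this sketch uses the `jose` library, and the issuer URL, audience, and use of the `sub` claim as the user id are assumptions:

```typescript
// Plausible sketch of the token check using the `jose` library.
// Issuer, audience, and the sub-claim-as-user-id are assumptions.
import { createRemoteJWKSet, jwtVerify } from 'jose';

const jwks = createRemoteJWKSet(
  new URL('https://auth.auditswarm.com/.well-known/jwks.json')
);

async function authenticateJWT(header: string | undefined): Promise<string> {
  if (!header?.startsWith('Bearer ')) {
    throw new Error('Missing bearer token');
  }
  const { payload } = await jwtVerify(header.slice('Bearer '.length), jwks, {
    issuer: 'https://auth.auditswarm.com',
    audience: 'https://mcp.auditswarm.com',
  });
  return payload.sub as string; // caller's user id
}
```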
### STDIO Transport Wrapper
```typescript
// src/lib/mcp/stdio-server.ts
import readline from 'node:readline';
import { MCPServer } from './server';

const userId = process.env.MCP_LOCAL_USER_EMAIL!;
const server = new MCPServer(graphqlClient, userId);

// stdin is a raw stream, so read it line by line via readline;
// each line carries one JSON-RPC request
const rl = readline.createInterface({ input: process.stdin });

rl.on('line', async (line) => {
  const request = JSON.parse(line);
  const result = await server.handleToolCall(
    request.params.name,
    request.params.arguments
  );
  process.stdout.write(
    JSON.stringify({ jsonrpc: '2.0', result, id: request.id }) + '\n'
  );
});
```
## Code Reuse Metrics

**Total lines of code:**
- Shared core: ~400 lines
- HTTP wrapper: ~50 lines
- STDIO wrapper: ~30 lines

**Code reuse:** ~83% shared, ~17% transport-specific
**Maintenance:** Single codebase, minimal duplication
## When to Use Each Transport

### Use HTTP Transport When:
- ✅ Deploying for a team (multiple users)
- ✅ Using web-based AI agents (ChatGPT, Copilot)
- ✅ Need centralized deployment
- ✅ Want standard OAuth security
### Use STDIO Transport When:
- ✅ Single-user development environment
- ✅ Using Claude Desktop locally
- ✅ Want zero hosting costs
- ✅ Working offline
### Use Both When:
- ✅ Development (STDIO) + Production (HTTP)
- ✅ Supporting both web agents and desktop apps
## Real-World Example

**Scenario:** A developer using AuditSwarm.

**Development Setup (STDIO):**
```bash
# Local database on laptop
npm run dev        # App on http://localhost:3000
npm run mcp:stdio  # MCP server via STDIO

# Claude Desktop connects via STDIO
# → Zero latency, zero cost
```
**Production Setup (HTTP):**
```bash
# Deployed to GCP Cloud Run
# MCP server: https://mcp.auditswarm.com

# ChatGPT connects via HTTP + OAuth
# → Team access, centralized data
```
**Result:** The best of both worlds.
## Design Decision: Why Not Just HTTP?

**Could we use only HTTP?** Yes, but:
- ❌ Claude Desktop would need to proxy through a remote server (added latency)
- ❌ Development would require deploying MCP server (slower iteration)
- ❌ Offline development impossible
**Why not just STDIO?**
- ❌ Web-based agents (ChatGPT, Copilot) don't support STDIO
- ❌ Not suitable for team deployments
- ❌ Each user needs local installation
**Dual-transport gives us:**
- ✅ Fast local development (STDIO)
- ✅ Production team deployment (HTTP)
- ✅ Support for all AI platforms
- ✅ Minimal code duplication
## Trade-offs

### Complexity
- **Added:** Transport abstraction layer (+80 lines)
- **Saved:** No duplicate MCP implementations (thousands of lines)
- **Net:** Significant reduction in complexity

### Maintenance
- **Added:** Both transports must be tested
- **Saved:** Single codebase; bug fixes benefit both transports
- **Net:** Easier maintenance

### Performance
- **HTTP:** +50-100ms network latency
- **STDIO:** <5ms local latency
- **Trade-off:** Acceptable for both use cases
## Conclusion

The dual-transport architecture provides:

- **Universal Compatibility**: works with all AI platforms
- **Developer Experience**: fast local development
- **Production Ready**: scalable HTTP deployment
- **Code Reuse**: ~83% shared code
- **Future Proof**: new transports are easy to add

**Result:** One MCP server, all AI agents supported.