Give Your AI Agents Code Execution Powers
Launch agents that write and execute code in dedicated, isolated runtimes. Real-time streaming, persistent state, per-agent isolation. Build autonomous AI systems safely.
AI Agents Need Safe Execution
Building AI agents that can execute code is powerful — but risky. Without proper isolation:
- ✗ Agents can access your filesystem, env vars, and secrets
- ✗ Runaway loops or memory leaks crash your system
- ✗ Multi-agent systems interfere with each other
- ✗ No easy way to sandbox without heavy infrastructure
Hopx for AI Agents
- ✓ One sandbox per agent — complete isolation
- ✓ Resource limits prevent runaway execution
- ✓ Persistent state for multi-step agent workflows
- ✓ Stream outputs in real time for interactive UX
Build Any Type of Agent
Code Interpreter Agents
Agents that write and execute code to solve problems
Data Analysis Agents
Agents that explore datasets and generate insights
Research Agents
Autonomous agents that gather and synthesize information
Tool-Using Agents
Agents that interact with external APIs and services
Multi-Agent Systems
Coordinated teams of specialized agents
Self-Improving Agents
Agents that iterate and refine their outputs
Works With Your Favorite Frameworks
Hopx integrates seamlessly with popular agent frameworks and LLM providers.
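For example, here is a minimal sketch of exposing a Hopx sandbox as a LangChain tool (the @tool decorator from langchain_core is assumed; adapt the wrapper to whichever framework you use):

from hopx_ai import Sandbox
from langchain_core.tools import tool

sandbox = Sandbox.create(template="code-interpreter")

@tool
def run_python(code: str) -> str:
    """Execute Python code in an isolated Hopx sandbox and return its output."""
    result = sandbox.run_code(code)
    return result.stdout if result.exit_code == 0 else result.stderr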
Why Hopx for AI Agents
Per-Agent Isolation
Each agent runs in its own micro-VM. No cross-contamination, no shared state, complete security isolation.
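In practice, each agent gets its own Sandbox.create() call. A minimal sketch, using only the calls shown in the full example below:

from hopx_ai import Sandbox

# Each agent gets its own micro-VM; nothing is shared between them
researcher = Sandbox.create(template="code-interpreter")
analyst = Sandbox.create(template="code-interpreter")

# A file written in one sandbox...
researcher.run_code("open('notes.txt', 'w').write('researcher only')")

# ...does not exist in the other: separate filesystems, separate kernels
check = analyst.run_code("import os; print(os.path.exists('notes.txt'))")
print(check.stdout)  # False

researcher.kill()
analyst.kill()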
Real-Time Streaming
Stream stdout, stderr, and execution results in real time via WebSocket. Perfect for interactive agents.
Persistent State
Filesystem and IPython kernel persist across executions. Agents can build on previous work.
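Because the kernel persists, a value computed in one run_code call is still available in the next. A minimal sketch, using only the calls shown in the full example below:

from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

# Step 1: the agent loads some data into the kernel
sandbox.run_code("data = [1, 2, 3, 4, 5]")

# Step 2: a later execution builds on that state directly
result = sandbox.run_code("print(sum(data) / len(data))")
print(result.stdout)  # 3.0

sandbox.kill()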
Multi-Agent Ready
Spin up sandboxes per agent, coordinate via APIs, snapshot for branching. Build complex agent meshes.
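A minimal sketch of the per-agent pattern, with the orchestrator shuttling results between isolated runtimes (snapshotting omitted; only the calls from the full example below are used):

from hopx_ai import Sandbox

# One isolated sandbox per specialist agent
agents = {
    name: Sandbox.create(template="code-interpreter")
    for name in ("collector", "analyst")
}

# The collector produces data in its own runtime...
collected = agents["collector"].run_code("print([x * x for x in range(5)])").stdout

# ...and the orchestrator hands it to the analyst's runtime
summary = agents["analyst"].run_code(f"print(max({collected.strip()}))").stdout
print(summary)  # 16

for sb in agents.values():
    sb.kill()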
Build Agents in Minutes
The pattern is simple: the LLM generates code, Hopx executes it safely, and the results feed back to the model. Works with any LLM provider.
Instant Sandboxes
~100ms to spin up a new agent runtime
Streaming Execution
Show users what the agent is doing in real time
Persistent Context
Agent remembers files and state across executions
from hopx_ai import Sandbox
from openai import OpenAI
import re

client = OpenAI()
sandbox = Sandbox.create(template="code-interpreter")

def extract_code(message: str) -> str:
    """Pull the first ```python block out of the model's reply."""
    match = re.search(r"```python\n(.*?)```", message, re.DOTALL)
    return match.group(1) if match else ""

def run_agent(task: str):
    """AI agent with code execution capabilities"""

    messages = [
        {"role": "system", "content": """You are an AI agent with code execution.
When you need to compute something, write Python code.
Wrap code in ```python blocks. I'll execute it and show results.
When the task is complete, reply with [DONE]."""},
        {"role": "user", "content": task}
    ]

    while True:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=messages
        )

        assistant_message = response.choices[0].message.content
        messages.append({"role": "assistant", "content": assistant_message})

        # Extract and execute code blocks
        if "```python" in assistant_message:
            code = extract_code(assistant_message)

            # Execute in isolated sandbox
            result = sandbox.run_code(code)

            # Feed results back to the agent
            execution_result = f"""
Code executed. Results:
stdout: {result.stdout}
stderr: {result.stderr}
exit_code: {result.exit_code}
"""
            messages.append({"role": "user", "content": execution_result})

            # On failure, loop again so the agent can see the error and retry
            if result.exit_code != 0:
                continue

        # Stop when the agent signals it is done or has no more code to run
        if "[DONE]" in assistant_message or "```python" not in assistant_message:
            break

    return assistant_message

# Run the agent
result = run_agent(
    "Analyze the top 10 most starred Python repos on GitHub. "
    "Fetch the data, calculate statistics, and create a chart."
)

print(result)
sandbox.kill()