Run Jobs for Hours or Days — No Timeouts
Data processing, scraping, ML training, batch jobs — execute workloads that run for as long as they need. Full state persistence, real-time monitoring, zero server management.
Tired of Timeout Limits?
Serverless platforms are great — until your job needs more than 15 minutes. Then you're stuck with painful workarounds:
- ✗ Breaking jobs into tiny chunks and managing state yourself
- ✗ Spinning up EC2 instances and dealing with DevOps overhead
- ✗ Paying for idle time while waiting for jobs to complete
- ✗ Complex orchestration with Step Functions or Airflow
With Hopx
- ✓ Start a sandbox and let your job run — for minutes, hours, or days
- ✓ Full filesystem for checkpoints and intermediate results
- ✓ Stream logs and monitor progress in real time
- ✓ Pay only for actual compute time — per-second billing
Perfect For Any Long-Running Workload
Data Processing
ETL pipelines, data transformations, batch analytics
Web Scraping
Large-scale crawling, data extraction, monitoring
ML Training
Model training, hyperparameter tuning, experiments
Video Processing
Transcoding, analysis, thumbnail generation
Report Generation
Complex reports, PDF generation, data exports
Database Migrations
Schema changes, data backfills, cleanup jobs
Built for Long-Running Workloads
No Timeout Limits
Run jobs for hours, days, or weeks. No 15-minute Lambda limits, no forced shutdowns, no arbitrary timeouts.
Full State Persistence
Filesystem persists across the job lifetime. Write checkpoints, resume from failures, store intermediate results.
Real-Time Monitoring
Stream logs via WebSocket, track CPU/memory usage, get health metrics — all through simple APIs.
No Server Management
No EC2 instances to provision, no containers to orchestrate. Just start your job and let it run.
Hopx vs. Alternatives
| Feature | Hopx | AWS Lambda | Fargate |
|---|---|---|---|
| Max Runtime | Unlimited (days/weeks) | 15 minutes | Hours (with config) |
| Cold Start | ~100ms | 100ms-10s | 30s-2min |
| State Persistence | Full filesystem | None | EFS (extra cost) |
| Pricing Model | Per-second | Per-request + duration | Per-hour |
| Setup Complexity | Zero | Medium | High |
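To see why the pricing model matters for long-running jobs, here is a back-of-the-envelope comparison of per-second versus whole-hour billing. The rates below are hypothetical placeholders for illustration only, not actual Hopx or cloud-provider prices:

```python
import math

# Hypothetical rates for illustration only -- not real prices.
PER_SECOND_RATE = 0.00002  # $ per second of compute
PER_HOUR_RATE = 0.40       # $ per hour, billed in whole-hour increments

job_seconds = 90 * 60  # a 90-minute batch job

# Per-second billing charges exactly the time used.
per_second_cost = job_seconds * PER_SECOND_RATE

# Whole-hour billing rounds up: 90 minutes is charged as 2 full hours.
per_hour_cost = math.ceil(job_seconds / 3600) * PER_HOUR_RATE
```

With these example rates, the 90-minute job costs $0.108 under per-second billing but $0.80 under hourly billing, because the unused 30 minutes of the second hour are still charged.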
Simple API for Complex Jobs
Start a background job, monitor progress, download results. No infrastructure code, no DevOps complexity.
Background Execution
Start jobs that run independently while you do other work
Live Monitoring
Stream logs and check status without interrupting the job
Checkpoint & Resume
Save progress to filesystem, resume from failures
```python
import time

from hopx_ai import Sandbox

# Create a sandbox for a long-running job
sandbox = Sandbox.create(
    template="code-interpreter",
    timeout=None  # No timeout - run indefinitely
)

# Start a background job
process = sandbox.run_code_background("""
import json

# Your long-running job
for batch in range(1000):
    process_batch(batch)

    # Checkpoint progress
    with open('/workspace/checkpoint.json', 'w') as f:
        json.dump({'batch': batch}, f)

    print(f"Processed batch {batch}/1000")
""")

# Monitor progress
while True:
    status = sandbox.get_process_status(process.id)
    logs = sandbox.get_logs()
    print(logs[-10:])  # Last 10 log lines

    if status.completed:
        break
    time.sleep(60)

# Download results
results = sandbox.files.read("/workspace/results.csv")
sandbox.kill()
```
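The checkpoint file in the example above is what makes resume-after-failure possible. Here is a minimal sketch of that pattern in plain Python, independent of the Hopx SDK; the checkpoint path and helper names are illustrative, not part of any API:

```python
import json
import os

CHECKPOINT = "/tmp/checkpoint.json"  # inside a sandbox this would live under /workspace

def save_checkpoint(path, batch):
    # Write to a temp file, then rename: os.replace is atomic, so a crash
    # mid-write can never leave a half-written checkpoint behind.
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"batch": batch}, f)
    os.replace(tmp, path)

def next_batch(path):
    # Resume after the last completed batch, or start from 0 on a fresh run.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["batch"] + 1
    return 0

# Example: a previous run saved batch 41, so a restart resumes at batch 42.
save_checkpoint(CHECKPOINT, 41)
start = next_batch(CHECKPOINT)
```

Writing the checkpoint atomically is the key design choice: if the job is killed at any moment, the file on disk is always either the old checkpoint or the new one, never a corrupt partial write.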