Background Workers Without the Ops
Run workers, schedulers, and recurring jobs that stay up reliably. Live logs, health monitoring, no server management. Just start your worker and go.
Workers Shouldn't Require DevOps
Running background workers traditionally means infrastructure headaches:
- ✗ Provisioning and managing EC2 instances or Kubernetes
- ✗ Setting up process supervisors (systemd, supervisord)
- ✗ Building monitoring, alerting, and log aggregation
- ✗ Handling restarts, health checks, and recovery
With Hopx
- ✓ Start a worker with one API call — no servers to provision
- ✓ Built-in process management and health monitoring
- ✓ Stream logs and metrics in real-time via API
- ✓ Pay only while your worker is running — per-second billing
Any Type of Background Work
Queue Processors
Process jobs from SQS, Redis, or RabbitMQ queues
Scheduled Tasks
Cron-like jobs that run on schedule
Event Handlers
React to webhooks and events in real-time
Data Pipelines
Continuous data ingestion and transformation
Monitoring Agents
Health checks, alerting, log aggregation
Sync Workers
Keep systems in sync with periodic updates
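At its core, a scheduled task like the ones above is just a loop that runs a job and sleeps until the next tick. This is a minimal generic sketch in plain Python — `run_scheduled` and its parameters are illustrative, not part of the Hopx SDK:

```python
import time

def run_scheduled(job, interval_seconds, iterations=None):
    """Run `job` every `interval_seconds`; loop forever if `iterations` is None."""
    count = 0
    while iterations is None or count < iterations:
        started = time.monotonic()
        job()
        count += 1
        # Sleep only for the remainder of the interval, never a negative amount
        elapsed = time.monotonic() - started
        time.sleep(max(0.0, interval_seconds - elapsed))
    return count

# Example: run a placeholder job 3 times at a 10ms interval
ticks = []
run_scheduled(lambda: ticks.append(time.time()), 0.01, iterations=3)
print(len(ticks))  # → 3
```

Subtracting the job's own runtime from the sleep keeps ticks aligned to the interval rather than drifting by the job's duration each cycle.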
Why Hopx for Background Workers
No Timeout Limits
Workers run as long as needed — hours, days, or weeks. No forced shutdowns or arbitrary limits.
Live Monitoring
Stream logs via WebSocket, check process status, monitor CPU/memory in real-time.
Automatic Recovery
Restart workers automatically on failure. Resume from checkpoints with a persistent filesystem.
Zero Server Management
No EC2 instances, no Kubernetes, no container orchestration. Just start your worker.
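The checkpoint-and-resume pattern mentioned above can be sketched with plain files on the sandbox's persistent filesystem. The checkpoint path, file layout, and work items below are illustrative assumptions, not a Hopx API:

```python
import json
import os
import tempfile

# Hypothetical checkpoint location; on Hopx this would live on the
# sandbox's persistent filesystem so it survives a worker restart.
CHECKPOINT_PATH = os.path.join(tempfile.mkdtemp(), "worker_checkpoint.json")

def load_checkpoint():
    # Resume from the last saved position, or start from zero
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)["next_index"]
    return 0

def save_checkpoint(next_index):
    # Write to a temp file then rename, so a crash mid-write
    # can't leave a corrupt checkpoint behind
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"next_index": next_index}, f)
    os.replace(tmp, CHECKPOINT_PATH)

items = ["a", "b", "c", "d"]  # placeholder work items
start = load_checkpoint()
for i in range(start, len(items)):
    # ... process items[i] here ...
    save_checkpoint(i + 1)

print(load_checkpoint())  # → 4
```

If the worker is restarted after a failure, `load_checkpoint()` returns the index of the first unprocessed item, so already-completed work is not repeated.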
Start Workers in Seconds
Launch background processes, monitor their health, stream logs — all through simple API calls. No infrastructure to manage.
Instant Start
Workers start in ~100ms, ready to process immediately
Live Monitoring
Stream logs, check status, monitor resources in real-time
Fault Tolerance
Detect failures and restart workers automatically
from hopx_ai import Sandbox
import time

# Create sandbox for background worker
sandbox = Sandbox.create(
    template="code-interpreter",
    timeout=None  # No timeout
)

# Start background worker process
process = sandbox.run_code_background("""
import time
import json
import redis

# Connect to your queue
r = redis.Redis(host='your-redis-host', port=6379)

def process_job(job_data):
    # Replace with your actual job logic
    return {'id': job_data['id'], 'status': 'done'}

print("Worker started, listening for jobs...")

while True:
    # Pop job from queue (blocks for up to 30 seconds)
    job = r.blpop('job_queue', timeout=30)

    if job:
        job_data = json.loads(job[1])
        print(f"Processing job: {job_data['id']}")

        try:
            # Process the job
            result = process_job(job_data)

            # Store result
            r.set(f"result:{job_data['id']}", json.dumps(result))
            print(f"Job {job_data['id']} completed")

        except Exception as e:
            print(f"Job {job_data['id']} failed: {e}")
            # Retry logic here

    # Heartbeat
    r.set('worker:heartbeat', time.time())
""")

print(f"Worker started with process ID: {process.id}")

# Monitor worker health
while True:
    # Get worker status
    status = sandbox.get_process_status(process.id)
    metrics = sandbox.get_metrics_snapshot()

    print(f"Status: {status.state}")
    print(f"CPU: {metrics['cpu_percent']}%")
    print(f"Memory: {metrics['memory_mb']}MB")

    # Stream recent logs
    logs = sandbox.get_logs(tail=10)
    for log in logs:
        print(log)

    if not status.running:
        print("Worker stopped, restarting...")
        # Restart logic here
        break

    time.sleep(60)

# Cleanup when done
sandbox.kill()
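The `# Retry logic here` placeholder in the worker above is commonly filled in with exponential backoff: retry the job with increasing delays, and surface the error (for example, to a dead-letter queue) once attempts are exhausted. This is a generic Python sketch, not part of the Hopx SDK:

```python
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.5):
    """Call fn(), retrying on exception with delays of base, 2*base, 4*base, ..."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller handle the failure
            time.sleep(base_delay * (2 ** attempt))

# Example: a job that fails twice with a transient error, then succeeds
calls = {"n": 0}
def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry_with_backoff(flaky_job, base_delay=0.01))  # → ok
```

Doubling the delay on each attempt gives a struggling downstream dependency (a database, an API) time to recover instead of hammering it with immediate retries.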