
How to Run a Full OpenClaw Setup 24/7 for Under £15/Month: Hetzner CX32 + GitHub Copilot Pro
App Web Dev Ltd
23 March 2026
A practical cost breakdown showing how to run OpenClaw cheaply 24/7 using a Hetzner CX32 VPS and GitHub Copilot Pro — full Claude and GPT-5 access for under £15/month.
Most AI assistant platforms want you to pay per token. Every question you ask, every document you summarise, every bit of automation you run — it all ticks quietly up on a meter. At low usage it feels fine. Then you give your assistant real work to do, let it run pipelines overnight, have it monitor emails and draft responses, and suddenly you're staring at a £200 invoice wondering what happened.
There's a better model. Hosting OpenClaw cheaply on a bare VPS, paired with a flat-rate subscription for model access, sidesteps the per-token trap entirely. And once you've done it, the economics are almost embarrassing: a full 24/7 OpenClaw setup with access to Claude Opus 4.6 and GPT-5 for under £15 a month. This post walks through exactly how.
TL;DR — The Monthly Bill
Before diving in, here's the honest summary:
- Hetzner CX32 (4 vCPU, 8 GB RAM, 80 GB SSD, 20 TB traffic): ~£5–£6/month
- GitHub Copilot Pro (individual plan): ~£8/month
- Domain (if you want one): ~£1/month amortised
- Total: roughly £14–£15/month
That's it. No per-token billing. No surprise charges when your agent runs overnight. No throttling at peak hours. You get a Linux box in Hetzner's Finnish or German datacentre, OpenClaw running as a systemd service, and GitHub Copilot Pro routing all model calls through its flat subscription — so Claude and GPT-5 usage costs you nothing extra beyond the £8/month.
For comparison: running Claude Opus 4.6 at moderate usage via direct API (say 2 million input tokens and 500k output per month) would cost well over £50. GPT-5 at similar throughput pushes even higher. The subscription model doesn't just save you a bit — it changes the calculation entirely.
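To make that comparison concrete, here's the arithmetic as a one-off calculation. The per-million-token rates below are illustrative assumptions chosen to match the ballpark above, not published pricing:

```shell
# Hypothetical rates for illustration only (not official pricing)
rate_in=12    # assumed £ per million input tokens
rate_out=60   # assumed £ per million output tokens
input_m=2     # 2M input tokens/month
output_m=0.5  # 500k output tokens/month

api_cost=$(awk -v i="$input_m" -v o="$output_m" -v ri="$rate_in" -v ro="$rate_out" \
  'BEGIN { printf "%.2f", i*ri + o*ro }')
echo "Direct API: £$api_cost/month vs Copilot Pro flat rate: £8.00/month"
```

Under those assumed rates the direct-API figure comes to £54.00 a month for model access alone, before the server is even counted.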
What You Actually Need
The stack is simpler than most tutorials make it look. At its core you need three things: a server, a way to run OpenClaw on it persistently, and a model subscription that doesn't charge per token.
For the server, a small VPS is perfect. OpenClaw itself is not especially resource-hungry — it's a Node.js process handling webhooks and scheduling jobs, not running local inference. 4 vCPUs and 8 GB RAM is more than sufficient for a personal setup or a small agency deployment. You're not serving thousands of concurrent users; you're running a personal AI assistant that happens to never sleep.
For persistence, Docker Compose and systemd are the right tools. Docker keeps the OpenClaw process and its dependencies isolated. Systemd makes sure everything comes back up after a reboot, a crash, or a kernel update without you having to touch anything.
For model access, GitHub Copilot Pro is the key unlock. OpenClaw supports GitHub Copilot as a model provider — meaning all API calls go through Copilot's backend rather than directly to Anthropic or OpenAI. Since Copilot Pro is a flat subscription, you get access to Claude Opus 4.6, GPT-5, and a range of other models without a separate API key or per-token billing.
A domain is optional but worth it if you want proper webhook URLs, Telegram integration with HTTPS callbacks, or a clean URL to reference. Namecheap and Porkbun both have .co.uk domains for under £10/year.
Why Hetzner CX32
The Hetzner CX32 has become the de facto choice for this kind of workload in the European developer community for good reason.
The specs are genuinely solid for the price: 4 vCPUs, 8 GB RAM, 80 GB NVMe SSD, and 20 TB of included monthly traffic. That 20 TB figure matters more than it might seem. Most UK and German VPS providers include 1–2 TB before charging overages. Hetzner's bandwidth is so generous that you would have to be doing something extraordinary to exceed it on an OpenClaw deployment — even if you're processing documents, receiving image uploads, or running agents that fetch web content all day.
Latency from the UK to Hetzner's Finnish (fsn1) or German (nbg1) datacentres is typically 15–35ms. That's fast enough that you won't notice it. Compared to Hostinger's managed OpenClaw plans or BoostedHost's premium tiers, Hetzner gives you better hardware, better network, and lower cost — at the price of setting it up yourself. That's exactly the trade-off worth making if you're comfortable with a Linux terminal.
The alternative UK-based providers (Mythic Beasts, Fasthosts, OVHcloud UK) are fine, but they're more expensive at equivalent specs and their included traffic is far lower. There's no particular advantage to keeping the server in the UK if Hetzner's German and Finnish options are 40% cheaper and faster on benchmarks.
One thing to check before signing up: Hetzner invoices in euros, so your exact GBP cost fluctuates slightly with exchange rates. At the time of writing, the CX32 comes to around £5–£6/month after conversion and VAT.

Setting Up the Server
Create the Hetzner account, add a project, and spin up a CX32 instance running Ubuntu 24.04. Choose the Finnish or German datacentre — both are excellent. During creation, upload your SSH public key so you never have to deal with a password.
Once the server is up, SSH in and run your initial hardening pass:
# Update everything first
apt update && apt upgrade -y
# Install essentials
apt install -y ufw fail2ban docker.io docker-compose-v2 unattended-upgrades
# Allow SSH and web traffic through the firewall, then enable
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
# Enable automatic security updates
dpkg-reconfigure -plow unattended-upgrades
That's the foundation. Fail2ban will automatically block IPs that try to brute-force SSH. Unattended-upgrades handles security patches without you needing to log in. UFW keeps the surface area small — only ports 22, 80, and 443 open.
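Fail2ban works out of the box, but its policy is worth pinning explicitly so it survives package updates and is visible at a glance. A minimal /etc/fail2ban/jail.local override might look like this — the values are common defaults, not requirements:

```ini
# /etc/fail2ban/jail.local -- adjust bantime/maxretry to taste
[sshd]
enabled  = true
maxretry = 4      # failed attempts before a ban
findtime = 10m    # window in which failures are counted
bantime  = 1h     # how long the offending IP stays blocked
```

Reload with fail2ban-client reload and check the active jail with fail2ban-client status sshd.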
For SSH, disable password authentication — but confirm key-based login works in a second terminal session before you apply the change and close your current one:
# In /etc/ssh/sshd_config, set:
# PasswordAuthentication no
# PubkeyAuthentication yes
systemctl restart ssh   # note: the unit is named "ssh" on Ubuntu, not "sshd"
Now pull down the OpenClaw configuration and set up Docker Compose. The official docs have a Hetzner-specific guide, and there's also a community Terraform repo (openclaw-terraform-hetzner on GitHub) if you want infrastructure-as-code from the start. For a single-server personal setup, manually running the commands is straightforward enough and gives you a better feel for what's actually running.
Create a directory for your OpenClaw config:
mkdir -p /opt/openclaw
cd /opt/openclaw
Follow the OpenClaw install docs to pull the official Docker Compose file and create your .env with your Telegram bot token, GitHub Copilot credentials, and workspace path. The key environment variable to set correctly is OPENCLAW_MODEL_PROVIDER=github-copilot — this is what routes all model calls through Copilot rather than direct API keys.
Once your config is ready, bring up the stack:
docker compose up -d
Check that it's running:
docker compose logs -f
You should see OpenClaw start, connect to Telegram, and begin polling for messages.
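While you're in the compose file, a container healthcheck is cheap insurance — Docker will mark the container unhealthy if the process stops responding, which makes failures visible in docker ps. The /health path and port below are assumptions (as is curl being present in the image); check the OpenClaw docs for the endpoint your version actually exposes:

```yaml
services:
  openclaw:
    # ...your existing service config...
    restart: always
    healthcheck:
      # NOTE: endpoint path and port are assumed -- verify against the docs
      test: ["CMD", "curl", "-fsS", "http://localhost:3000/health"]
      interval: 60s
      timeout: 10s
      retries: 3
```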
Making It Persistent with systemd
Docker Compose's restart: always policy handles container restarts, but if the Docker daemon itself has a problem or the machine reboots, you want systemd to bring everything back up cleanly.
Create a systemd service file at /etc/systemd/system/openclaw.service:
[Unit]
Description=OpenClaw AI Assistant
Requires=docker.service
After=docker.service network-online.target
Wants=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/openclaw
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=120
[Install]
WantedBy=multi-user.target
Enable and start it:
systemctl daemon-reload
systemctl enable openclaw
systemctl start openclaw
Now OpenClaw survives reboots, kernel updates, and unexpected crashes. You can check its status at any time with systemctl status openclaw and view logs with journalctl -u openclaw -f.
Configuring GitHub Copilot Pro as Your Model Provider
This is the part most tutorials skip over. GitHub Copilot Pro gives you access to the underlying models (Claude Opus 4.6, GPT-5, and others) through a single £8/month subscription. OpenClaw's GitHub Copilot integration handles authentication and routes your agent's model calls through Copilot's API rather than Anthropic's or OpenAI's direct endpoints.
The practical implication: you can run OpenClaw all day, every day, with agents doing real work — summarising emails, writing code, managing pipelines, running blog automation — and your model costs are fixed. There's no meter running.
To set this up, you need a GitHub account with Copilot Pro enabled. In your OpenClaw .env, set:
OPENCLAW_MODEL_PROVIDER=github-copilot
GITHUB_TOKEN=your_github_pat_here
The GitHub Personal Access Token needs the copilot scope. Generate one at github.com/settings/tokens, tick the Copilot scope, and paste it in. OpenClaw will use this token to authenticate all model requests through Copilot's backend.
One nuance worth knowing: GitHub Copilot does apply some rate limiting at high throughput, which is different from per-token billing but still a consideration if you're planning to run very heavy automated pipelines. For a typical personal or small business setup — agents checking email, running workflows, handling customer queries — you'll never hit it.
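If you do run heavier pipelines and occasionally bump into those limits, a retry-with-exponential-backoff wrapper in your own scripts absorbs the transient failures gracefully. This is a generic sketch for shell scripts that call out to the agent, not part of OpenClaw itself:

```shell
# retry MAX CMD [ARGS...] -- rerun CMD until it succeeds or MAX attempts
# are exhausted, doubling the wait between attempts (1s, 2s, 4s, ...).
retry() {
  local max="$1"; shift
  local delay=1 attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "retry: giving up after $max attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}
```

Usage: retry 5 curl -fsS https://example.com/webhook — the wrapper returns the command's exit status, so it composes cleanly with set -e scripts.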

Security and Ongoing Maintenance
Running a server properly means spending a little time on hygiene. Nothing complicated, but worth doing right the first time.
Backups: Hetzner offers automated snapshots for a small additional cost (typically €0.01/GB/month). For an 80 GB disk that's under £1/month. Set up a daily snapshot and keep three rolling copies. This protects you against catastrophic mistakes — accidentally wiping your workspace, corrupted configuration, or worse. You can also use rsync to periodically back up your OpenClaw workspace to a local machine or an S3-compatible bucket.
Monitoring: UptimeRobot's free tier monitors your server's HTTP endpoint every 5 minutes and emails you if it goes down. Set it up against your OpenClaw health endpoint. Takes two minutes to configure and you'll know within five minutes if something has gone wrong.
Updates: With unattended-upgrades configured, your server handles security patches automatically. You should still log in monthly to do a full apt upgrade and check for any packages held back. Keep an eye on the OpenClaw release notes for application updates too.
Firewall review: Periodically check ufw status verbose to make sure no unexpected ports have been opened. If you add services later (a monitoring dashboard, a web interface), be intentional about what you expose.
The honest truth is that a hardened Hetzner VPS with fail2ban, UFW, and automatic updates in place requires less active management than most people expect. Once it's set up correctly, you can largely leave it alone.
The Full Cost Breakdown
Here's the line-item view for a typical month:
| Item | Monthly Cost |
|---|---|
| Hetzner CX32 VPS | ~£5.50 |
| GitHub Copilot Pro | £8.00 |
| Domain (amortised) | ~£0.80 |
| Hetzner snapshots (optional) | ~£0.50 |
| Total | ~£14.80 |
This assumes you stay within Hetzner's 20 TB bandwidth (almost certain) and that your domain is with a budget registrar. There's no per-token billing, no AI API charges, no separate Anthropic or OpenAI subscription needed.
For comparison, a self-managed setup with direct API access at moderate usage:
| Item | Monthly Cost |
|---|---|
| Hetzner CX32 VPS | ~£5.50 |
| Anthropic Claude Opus 4.6 (moderate usage) | £50–£120 |
| OpenAI GPT-5 (moderate usage) | £30–£80 |
| Total | £85–£205 |
The GitHub Copilot Pro substitution doesn't just save money — it removes the unpredictability entirely. You know what you're spending every month.
A few edge cases to be aware of: if you add Hetzner's load balancer, additional volumes, or floating IPs, costs increase. If your agent processes large files or does heavy web scraping, bandwidth can edge higher. And if GitHub changes Copilot Pro pricing (always possible), your model costs change. But for standard configurations, the under-£15 figure is stable and realistic.
Performance Tips and When to Upgrade
The CX32 is more than capable for most OpenClaw deployments, but there are ways to get more out of it.
Adding swap extends your effective memory headroom for bursty workloads:
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
4 GB of swap on top of 8 GB RAM gives you breathing room when agents are doing heavy parallel work.
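One related tweak: by default the kernel may start swapping well before memory is actually tight. Lowering vm.swappiness keeps swap as emergency headroom rather than a first resort — optional, but a common server-side setting. A drop-in sysctl file persists it across reboots:

```ini
# /etc/sysctl.d/99-swappiness.conf
# Prefer RAM; treat swap as overflow (10 is a common server value)
vm.swappiness = 10
```

Apply it immediately with sysctl --system.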
For Docker, tuning log rotation prevents your disk from slowly filling with container logs:
# In your docker-compose.yml, add to each service:
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "3"
When should you upgrade? If you're running multiple concurrent agents, processing large document volumes, or serving multiple users, the CX32 will eventually show CPU contention. The Hetzner CX42 (8 vCPU, 16 GB RAM) doubles the resources for roughly double the cost — still very reasonable. Hetzner also makes vertical scaling easy: you can snapshot your CX32 and restore it to a larger instance in under ten minutes.
The honest assessment: most individuals and small teams never need to upgrade. The workloads that justify a larger instance are usually workloads that justify a business-tier setup with multiple nodes, at which point the architecture conversation changes entirely.

Terraform Option: Infrastructure as Code
If you want a reproducible setup — useful if you're deploying for clients or want to be able to rebuild from scratch quickly — the community openclaw-terraform-hetzner repository on GitHub gives you a solid starting point. It provisions the CX32, configures the firewall, uploads your SSH key, and outputs the server IP ready for your Ansible playbook or manual Docker Compose setup.
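A terraform.tfvars for a setup like this would contain something along these lines — the variable names here are illustrative, so check the repo's variables.tf for the names it actually expects:

```hcl
# terraform.tfvars -- variable names are assumptions, not verified
hcloud_token = "your-hetzner-api-token"
server_type  = "cx32"
location     = "fsn1"                   # or nbg1 for Nuremberg
ssh_key_path = "~/.ssh/id_ed25519.pub"
```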
The basic usage once you've cloned the repo and filled in your terraform.tfvars:
terraform init
terraform plan
terraform apply
Three minutes later you have a configured server. This is the professional approach for anyone managing more than one deployment or who wants to treat their infrastructure as code from the start.
Putting It All Together
Running OpenClaw cheaply 24/7 is not a compromise. A Hetzner CX32 with GitHub Copilot Pro gives you the same Claude Opus 4.6 and GPT-5 access you'd get from direct API subscriptions — with better reliability, predictable billing, and a server you fully control.
The setup process takes an afternoon the first time. After that, your AI assistant runs itself: automatically restarting on crash, applying security updates, processing tasks around the clock, without ticking up a bill every time it does something useful.
This is the infrastructure we use at App Web Dev Ltd for our own OpenClaw deployment — the same system that manages our blog pipeline, monitors our clients' projects, and handles a chunk of our outreach. It costs less per month than a decent lunch in Manchester, and it never calls in sick.
If you're thinking about setting up OpenClaw for your business — whether that's a personal productivity stack, an automated customer engagement system, or something more ambitious — we'd be glad to talk through the right setup for your use case. Get in touch through appwebdev.co.uk and we can work through the architecture together. The system that wrote this post is the same one we'd help you build.
About App Web Dev Ltd
UK-based AI agency specialising in business automation and intelligent AI solutions