Last updated: February 2, 2026

I've been running self-hosted services for a while now. What started as "can I host my own stuff?" turned into a setup that works for my needs.

The motivation isn't saving money (though that's a bonus). It's about making things work with whatever hardware I can get. I like understanding how the pieces fit together and seeing what I can squeeze out of limited resources.

The Hardware Foundation

My self-hosted setup is spread across four machines, each with its own role:

Raspberry Pi 5

Specs: 8GB RAM, 128GB MicroSD
Role: Running way more stuff than it probably should

This handles all the lightweight services I use daily:

  • Mealie - Cooking recipe management
  • Linkding - Bookmark management with tagging
  • Miniflux - RSS feed aggregation
  • Wakapi - Coding time tracking (WakaTime alternative)
  • Gogs - Lightweight Git hosting
  • Ghostfolio - Portfolio tracking
  • Szurubooru - Imageboard

All of these fit on the Pi with room to spare. Power consumption is minimal, and it's a handy place to trial new services before I decide if they're worth keeping.
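Most of these run as Docker containers. As a rough sketch (the image tags are the official ones, but the ports, volumes, and credentials here are illustrative, not my actual config), a compose file for a couple of them might look like:

```yaml
# Illustrative docker-compose file for two of the Pi's services.
# Ports, volume paths, and the database password are placeholders.
services:
  linkding:
    image: sissbruecker/linkding:latest
    ports:
      - "9090:9090"
    volumes:
      - ./linkding/data:/etc/linkding/data
    restart: unless-stopped

  miniflux:
    image: miniflux/miniflux:latest
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://miniflux:secret@db/miniflux?sslmode=disable
      - RUN_MIGRATIONS=1
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=miniflux
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=miniflux
    volumes:
      - ./miniflux/db:/var/lib/postgresql/data
    restart: unless-stopped
```

One file per machine like this keeps each service trivially restartable with `docker compose up -d`.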

Bmax B2 - The Media Server

Specs: 4GB RAM, 512GB SSD (purchased second-hand)
Role: All the media stuff

This budget mini PC handles everything media-related:

  • Servarr Stack (Radarr, Sonarr, Prowlarr) - Automated media acquisition
  • Jellyfin - Media streaming server
  • Navidrome - Music streaming server
  • Transmission - BitTorrent client

The SSD handles file operations much faster than an SD card would, and there's no SD card corruption to worry about.

TV Set-Top Box for Network Services

Specs: 2GB RAM, 8GB SD card, running Armbian (came pre-flashed)
Role: Network monitoring (for now)

This second-hand TV set-top box cost me about $20 and came pre-flashed with Armbian by the seller. Currently it handles:

  • AdGuard Home - Network-wide DNS filtering and ad blocking
  • Prometheus Exporter - Collecting metrics from my router
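On the Prometheus side, pointing the server at this box is just a scrape job. A hedged sketch (the job name, target address, and port are placeholders for whichever exporter you run):

```yaml
# Fragment of prometheus.yml: scrape the router exporter on the set-top box.
# The target IP and port below are examples, not my actual addresses.
scrape_configs:
  - job_name: "router"
    scrape_interval: 60s
    static_configs:
      - targets: ["192.168.1.1:9100"]
```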

I'm planning to either retire this device or move it to my office for remote network access, since the small storage and outdated OS are becoming a maintenance headache.

DigitalOcean Droplet for Public Services

Specs: 2GB RAM, SGP1 region
Role: Public-facing services

This handles anything that needs to be reachable from outside my network:

  • Traefik - Load balancer and reverse proxy
  • Beszel - System monitoring (way simpler than Prometheus/Grafana)
  • Prometheus + Grafana - Still around for experiments and custom metrics
  • Certbot + Let's Encrypt - Free TLS certificates for everything public-facing
  • VaultWarden - Password manager (Bitwarden but self-hosted)

How It All Talks To Each Other

Tailscale connects all the machines. I use the free tier to create a mesh network where my home hardware can talk to the cloud droplet securely, without port forwarding or hand-rolled VPN configs.
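Joining a new machine to the tailnet takes a couple of commands. A sketch, where the hostname and tailnet name are examples rather than my real ones:

```shell
# Install and bring up Tailscale on a new machine (Debian-based example).
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --hostname=pi5

# From any other machine on the tailnet, reach it by its MagicDNS name:
ping pi5.tailnet-name.ts.net
tailscale status   # list all peers and their tailnet IPs
```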

Here's how the traffic flows:

  1. Local stuff runs on my home hardware (Pi, Bmax, and the set-top box)
  2. Traefik on the droplet handles all the public internet requests
  3. Tailscale creates secure tunnels so everything can talk
  4. Cloudflare handles DNS and provides an extra TLS layer in front

This setup means I can access my home services from anywhere and see how reverse proxies, mesh networks, and DNS routing work together.
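Concretely, Traefik on the droplet just needs a router that forwards a public hostname to a Tailscale address. A hedged sketch using the file provider (the domain, tailnet hostname, and port are made up for illustration):

```yaml
# Traefik dynamic configuration (file provider) sketch.
# "recipes.example.com" and the tailnet hostname are placeholders.
http:
  routers:
    mealie:
      rule: "Host(`recipes.example.com`)"
      entryPoints: ["websecure"]
      tls:
        certResolver: letsencrypt
      service: mealie

  services:
    mealie:
      loadBalancer:
        servers:
          # Traefik reaches the Pi over the Tailscale tunnel.
          - url: "http://pi5.tailnet-name.ts.net:9000"
```

The nice part is that the home services never need a public IP: Traefik terminates TLS on the droplet and proxies over the tailnet.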

Infrastructure as Code

I don't have everything fully automated, but I do use some IaC tools to keep things manageable:

Terraform

One repo that handles the cloud infrastructure:

  • DigitalOcean droplet provisioning
  • Cloudflare DNS records
  • DigitalOcean Spaces storage
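A minimal sketch of what lives in that repo (resource names, the droplet size, and the DNS record are illustrative, not my exact config):

```hcl
# Sketch of the Terraform repo: droplet, DNS record, and Spaces bucket.
resource "digitalocean_droplet" "public" {
  name   = "public-services"
  region = "sgp1"
  size   = "s-1vcpu-2gb"
  image  = "ubuntu-24-04-x64"
}

resource "cloudflare_record" "vault" {
  zone_id = var.cloudflare_zone_id
  name    = "vault"
  type    = "A"
  # "content" in Cloudflare provider v5; older versions call this "value".
  content = digitalocean_droplet.public.ipv4_address
  proxied = true
}

resource "digitalocean_spaces_bucket" "backups" {
  name   = "homelab-backups"
  region = "sgp1"
}
```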

Ansible

Separate repo for service deployment:

  • Installing Docker on machines
  • Deploying services via docker-compose files
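As a sketch of what those playbooks look like (the host group, file paths, and package name are illustrative), a play that installs Docker and ships a compose file might be:

```yaml
# Ansible play sketch: install Docker, copy a compose file, start the stack.
# Host group and paths are placeholders for my actual inventory.
- hosts: homelab
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Copy the service compose file
      ansible.builtin.copy:
        src: files/pi5/docker-compose.yml
        dest: /opt/services/docker-compose.yml

    - name: Start the stack
      community.docker.docker_compose_v2:
        project_src: /opt/services
```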

I used to have Atlantis set up for automated Terraform runs, but nowadays I just experiment and apply changes locally. I make sure to commit and push code frequently to keep everything in version control, even if the deployment process is more manual.

Storage Strategy

I use a hybrid storage approach:

  • Local storage for frequently accessed data and databases
  • DigitalOcean Spaces for backups and large files
  • Kopia for automated backups, using DigitalOcean Spaces as the storage backend via its S3-compatible API

This gives me both performance and durability while keeping costs low. Kopia handles deduplication and encryption, while DigitalOcean Spaces provides reliable, cheap object storage.
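The Kopia-to-Spaces workflow boils down to a few commands. A sketch, where the bucket name, endpoint region, and paths are placeholders and the keys come from the Spaces API settings:

```shell
# Create a Kopia repository backed by Spaces via its S3-compatible API.
kopia repository create s3 \
  --bucket homelab-backups \
  --endpoint sgp1.digitaloceanspaces.com \
  --access-key "$SPACES_KEY" \
  --secret-access-key "$SPACES_SECRET"

# Snapshot a directory; Kopia deduplicates and encrypts by default.
kopia snapshot create /opt/services

# Retention policy, e.g. keep 30 daily snapshots.
kopia policy set /opt/services --keep-daily 30
```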

Lessons Learned

Here's what I've learned along the way:

Start small: I started with just the Pi running one service. Building everything at once would have been overwhelming.

Tailscale is simple: It creates secure tunnels between machines without needing to configure port forwarding or VPNs.

Monitor, but keep it simple: I used to run Prometheus/Grafana for monitoring, but it was overkill for a homelab. I switched to Beszel, which is far simpler. I still keep Prometheus around for custom metrics, but for basic system monitoring, Beszel does the job.

Git helps: All my configs live in git repos. Having the history means I never have to remember how things were configured.

Why I Do This

This isn't about saving money. The droplet costs $17/month, plus power for the home hardware. Managed alternatives would cost more, but that's not the point.

The value is in tinkering and learning. When you work with limited hardware, you figure out how things work. You learn about networking, storage, deployment, and system administration by doing it yourself.

It's not production-grade. It's a playground. If something breaks, I fix it when I get around to it. The point is experimenting with what's possible on minimal resources.

Final Thoughts

I like running my own services and seeing how they work. My bookmarks, RSS feeds, and password manager run on my own hardware.

That said, privacy and de-googling aren't my main concerns, especially for data I can't afford to lose. I still use Google Drive for important documents, Gmail for email, GitHub for code. The point isn't to replace everything. It's to experiment and learn with stuff that's fun to tinker with.

The hybrid setup (local + cloud) works for me. My services run locally, and I can access them remotely. I've picked up networking and infrastructure knowledge along the way.

If you're thinking about self-hosting, just start. Grab a Raspberry Pi, pick one service, and try it. You'll make mistakes, but that's how you learn.

What's Coming Next

Here's what I'm thinking about next:

Hardware Upgrades:

  • Add SSD to the Raspberry Pi - The MicroSD card is the bottleneck now
  • More ARM devices - Maybe another Pi or SBC to run a k3s cluster
  • Dedicated storage - A small NAS setup, maybe with RAID for the media server
  • UPS - Power outages in Indonesia are common, and I'm tired of everything going down randomly

Software Experiments:

  • Kubernetes cluster (k3s) - I've run production Kubernetes at work, but I'm curious about running it on a budget with ARM devices
  • Local LLM - If I get a device with a decent GPU, running local language models could be interesting

None of this needs to work perfectly. It's about figuring out how to make it work and learning from what doesn't.