So I've been running self-hosted services for a while now, and honestly? It's been a blast. What started as "hey, I wonder if I can host my own stuff" has turned into this pretty solid setup that's been serving me well.

The real motivation here isn't saving money (though that's a nice bonus). It's the pure satisfaction of making things work with whatever hardware I can get my hands on. There's something deeply satisfying about squeezing maximum value out of minimal resources and understanding every piece of the puzzle.

The Hardware Foundation

My self-hosted setup is spread across four machines, and each one has its own role:

Raspberry Pi 5

Specs: 8GB RAM, 128GB MicroSD
Role: Running way more stuff than it probably should

This little guy is my workhorse for all the lightweight services I actually use daily:

  • Mealie - Cooking recipe management
  • Linkding - Bookmark management with tagging
  • Miniflux - RSS feed aggregation
  • Wakapi - Coding time tracking (WakaTime alternative)
  • Gogs - Lightweight Git hosting
  • Ghost - Blog publishing platform
  • Ghostfolio - Portfolio tracking
  • Szurubooru - Imageboard

It's pretty impressive that this tiny computer can handle all of this smoothly. The power consumption is minimal, and it's become my go-to for testing new services before I decide if they're worth keeping around.
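Nearly everything here runs from a small docker-compose file (more on how those get deployed in the Infrastructure as Code section below), which is exactly why trying out a new service is so low-friction. As a rough illustration, here's a hypothetical compose file for Linkding; the image tag, port mapping, and volume name are placeholders rather than my exact setup:

    # Hypothetical docker-compose.yml for Linkding on the Pi.
    # Image tag, ports, and volume name are illustrative, not my real config.
    services:
      linkding:
        image: sissbruecker/linkding:latest
        container_name: linkding
        restart: unless-stopped
        ports:
          - "9090:9090"   # Linkding's web UI listens on 9090 inside the container
        volumes:
          - linkding-data:/etc/linkding/data

    volumes:
      linkding-data: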

Bmax B2 - The Media Server

Specs: 4GB RAM, 512GB SSD (purchased second-hand)
Role: All the media stuff

This budget mini PC handles everything media-related quite well for its specs.

  • Servarr Stack (Radarr, Sonarr, Prowlarr) - Automated media acquisition
  • Jellyfin - Media streaming server
  • Navidrome - Music streaming server
  • Transmission - BitTorrent client

The SSD makes such a huge difference here. No more waiting forever for file operations or dealing with SD card corruption. Plus, when Jellyfin needs to transcode something, this box actually has the muscle for it.

STB TV for Network Services

Specs: 2GB RAM, 8GB SD card, running Armbian (came pre-flashed)
Role: DNS filtering and network monitoring (for now)

This second-hand TV set-top box cost me about $20 and came already flashed with Armbian by the seller. Currently it handles:

  • AdGuard Home - Network-wide DNS filtering and ad blocking
  • Prometheus Exporter - Collecting metrics from my router

I'm planning to either retire this device or move it to my office for remote network access, since the small storage and outdated OS are becoming a maintenance headache.

DigitalOcean Droplet for Public Services

Specs: 2GB RAM, SGP1 region
Role: The stuff that needs to be accessible from the scary internet

The cloud piece handles anything that needs to be reachable from outside my network:

  • Traefik - Load balancer and reverse proxy
  • Beszel - System monitoring (way simpler than Prometheus/Grafana)
  • Prometheus + Grafana - Still around for experiments and custom metrics
  • Certbot + Let's Encrypt - SSL certificates because security
  • VaultWarden - Password manager (Bitwarden but self-hosted)

How It All Talks To Each Other

The magic that makes this whole thing work is Tailscale. Seriously, this thing is a game-changer. I'm using their free tier to create this mesh network where all my machines can talk to each other securely, without me having to mess around with port forwarding or VPN configs.

Here's how the traffic flows:

  1. Local stuff runs on my home hardware (Pi, Bmax, and STB TV)
  2. Traefik on the droplet handles all the public internet requests
  3. Tailscale creates secure tunnels so everything can talk
  4. Cloudflare handles DNS and adds an extra layer of TLS in front of it all

It's basically like having the convenience of cloud services but with the control (and cost savings) of running things at home. Best of both worlds, really.
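To make that flow concrete, here's roughly what the routing layer does. This is a hypothetical Traefik dynamic-configuration snippet (file provider); the hostname, entrypoint name, Tailscale address, and port are placeholders, but the idea is that Traefik on the droplet forwards a public hostname to a service running on the Pi over the tailnet:

    # Hypothetical Traefik dynamic config on the droplet (file provider).
    # links.example.com and 100.64.0.10 are placeholders, not my real hostname or tailnet IP.
    http:
      routers:
        linkding:
          rule: "Host(`links.example.com`)"
          entryPoints:
            - websecure   # assumes a "websecure" entrypoint defined in the static config
          service: linkding
      services:
        linkding:
          loadBalancer:
            servers:
              # Tailscale address of the Raspberry Pi; traffic rides the encrypted tunnel home
              - url: "http://100.64.0.10:9090"

Nothing at home is exposed directly; the droplet is the only machine with a public face, and everything behind it is reachable only over Tailscale.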

Infrastructure as Code

I don't have everything fully automated, but I do use some IaC tools to keep things manageable:

Terraform

One repo that handles the cloud infrastructure:

  • DigitalOcean droplet provisioning
  • Cloudflare DNS records
  • DigitalOcean Spaces storage
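For flavor, this is the kind of resource that repo manages. A stripped-down, hypothetical sketch; the names, droplet size, image slug, and DNS record are placeholders, and exact argument names vary a little between provider versions:

    # Hypothetical Terraform sketch: one droplet plus a Cloudflare DNS record pointing at it.
    resource "digitalocean_droplet" "services" {
      name   = "public-services"
      region = "sgp1"
      size   = "s-1vcpu-2gb"
      image  = "ubuntu-24-04-x64"
    }

    resource "cloudflare_record" "home" {
      zone_id = var.cloudflare_zone_id
      name    = "home"
      type    = "A"
      value   = digitalocean_droplet.services.ipv4_address
      proxied = true
    }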

Ansible

Separate repo for service deployment:

  • Installing Docker on machines
  • Deploying services via docker-compose files
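The playbooks themselves are nothing fancy. Here's a hypothetical, trimmed-down task list in that spirit; the package names, paths, and service are placeholders (and the package for the Compose plugin differs between distros):

    # Hypothetical Ansible tasks: install Docker, drop a compose file, bring the stack up.
    - name: Install Docker and the Compose plugin from the distro repos
      ansible.builtin.apt:
        name:
          - docker.io
          - docker-compose-v2
        state: present
        update_cache: true

    - name: Create a directory for this service
      ansible.builtin.file:
        path: /opt/mealie
        state: directory

    - name: Copy the docker-compose file for this service
      ansible.builtin.copy:
        src: files/mealie/docker-compose.yml
        dest: /opt/mealie/docker-compose.yml

    - name: Start (or update) the stack
      ansible.builtin.command:
        cmd: docker compose up -d
        chdir: /opt/mealie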

I used to have Atlantis set up for automated Terraform runs, but nowadays I just experiment and apply changes locally. I make sure to commit and push code frequently to keep everything in version control, even if the deployment process is more manual.

Storage Strategy

I use a hybrid storage approach:

  • Local storage for frequently accessed data and databases
  • DigitalOcean Spaces for backups and large files
  • Kopia for automated backups, using DigitalOcean Spaces as the storage backend via its S3-compatible API

This gives me both performance and durability while keeping costs reasonable: Kopia handles deduplication and encryption, and Spaces provides the cheap, reliable object storage underneath.
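The Kopia side is mostly a one-time setup plus a scheduled snapshot. Here's a hypothetical sketch of what that looks like against Spaces' S3-compatible endpoint; the bucket name, paths, and retention numbers are placeholders, and the credentials are ordinary Spaces access keys:

    # Hypothetical Kopia setup against DigitalOcean Spaces (S3-compatible API).
    kopia repository create s3 \
      --bucket=homelab-backups \
      --endpoint=sgp1.digitaloceanspaces.com \
      --access-key="$SPACES_KEY" \
      --secret-access-key="$SPACES_SECRET"

    # Retention policy for the app data directory
    kopia policy set /opt/appdata --keep-daily=7 --keep-weekly=4

    # Take a snapshot (in practice this runs from a cron job or systemd timer)
    kopia snapshot create /opt/appdata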

Lessons Learned

I learned this stuff the hard way. Here's what I wish someone had told me:

Start small and grow: I started with just the Pi running one service. Thank god I didn't try to build everything at once. I would've given up after the first networking issue.

Tailscale is excellent: Seriously, before this I was doing the whole port forwarding dance and it was painful. Tailscale just works and makes everything more secure with minimal fuss.

Monitor everything, but keep it simple: I used to run this whole Prometheus/Grafana stack for monitoring, but honestly? It was overkill for my setup. Switched to Beszel and it's so much simpler. I still keep Prometheus around for custom metrics and experiments, but for basic system monitoring, Beszel is perfect.

Git is your friend: All my configs live in git repos. When I inevitably break something (usually on a Friday night), I can just roll back to a working state instead of crying into my keyboard.

Why I Actually Do This

Let me be honest, this isn't really about saving money. Sure, the whole setup costs me about $17/month versus what would probably be $100+ for equivalent managed services, but that's not the main point.

The real value is in the tinkering, the learning, and the satisfaction of making things work. When you're constrained by a tiny budget and limited hardware, you get creative. You learn how things actually work under the hood. You develop a deeper understanding of networking, storage, deployment, and system administration.

It's not production-grade or mission-critical; it's a playground. If something breaks, I fix it when I get around to it. The whole point is experimenting with what's possible on minimal resources and having fun doing it.

Wrapping Up

Building this self-hosted setup has been really enjoyable. It's not just about the tech (though that part is fun), it's about having more control over my digital services. My bookmarks, my RSS feeds, even my password manager are all mine, running on hardware I can touch.

That said, privacy and de-googling aren't my main concerns here. For data I can't afford to lose, I still use cloud services whenever it makes sense: Google Drive for important documents, Gmail for email, GitHub for code repositories. The point isn't to replace everything, but to experiment and learn while keeping control over the stuff that's fun to tinker with.

The hybrid setup (local + cloud) strikes a good balance between "I'm in control" and "I can actually access my stuff when I'm not home." Plus, I've learned a lot that makes me better at my day job.

If you're thinking about starting your own self-hosting journey, here's my advice: just start. Grab a Raspberry Pi, pick one service you want to self-host, and go for it. You'll make mistakes (I certainly did), but that's how you learn. Before you know it, you'll be running a dozen services and wondering how you ever lived without them.

What's Coming Next

Because apparently I can't leave well enough alone, I'm always tinkering with new stuff. Here's what's on my radar:

Hardware Upgrades:

  • Add SSD to the Raspberry Pi - The MicroSD card is the bottleneck now, and I want to see how much more I can squeeze out of this little computer
  • More ARM devices - Maybe another Pi or some other SBC to justify running a proper k3s cluster
  • Dedicated storage solution - Thinking about a small NAS setup, maybe with RAID for the media server
  • UPS for power backup - Living in Indonesia means power outages happen, and I'm tired of everything going down randomly

Software Experiments:

  • Kubernetes cluster (k3s) - I've run plenty of production-grade Kubernetes clusters at work, but I'm still curious about running it on a shoestring budget with ARM devices
  • Running my own LLM - If I get a beefier device with a decent GPU, experimenting with local language models could be fun

The best part? None of this needs to work perfectly. It's all about the journey of figuring out how to make it work, learning from the failures, and occasionally being pleasantly surprised when something actually runs smoothly for months.