I built my own CDN with Varnish and Nginx
Kristian Polso • September 11, 2025
I have previously used tons of different commercial CDN services, mostly BlazingCDN and Bunny lately. But having been frustrated with them for a multitude of reasons, I finally decided to bite the bullet and try to build my own tiny CDN service. Here's how it went for me, setting up a global CDN service with several small VPSes.
But why?
I know, building your own CDN may seem like a fool's errand. My main reasons at the time were:
- I wanted more ownership of my websites, without being restricted to the whims of commercial services like Cloudflare or Bunny
- Hosting your own servers can be cheaper than using a commercial service (sometimes; more on that later)
- I wanted to learn more about using the related software. I consider myself pretty knowledgeable in configuring nginx, but I only know the basics of how Varnish operates
- Hosting your own servers means you can fine-tune a ton more things than you can with a commercial CDN service, like WAF rules, rate limiting, etc.
And a big thing I noticed when using commercial providers (especially Bunny) was that no matter what I did, I could not get the cache hit rate to 100%, or sometimes even close to it. I still have one site up on BunnyCDN; I have set up the backend to serve all content with a Cache-Control: public header and an expiry time of 365 days, but the cache hit rate in the BunnyCDN statistics still never goes above 45%. I read online that the reason this happens is that Bunny does not allocate enough space for every user, so their edge servers' caches are getting aggressively flushed all the time. So I wanted a solution where this kind of thing would not be an issue.
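For the record, the backend setup I mean is roughly this; a minimal nginx sketch with an illustrative catch-all location, not my exact config:

```nginx
# Sketch of the backend vhost: mark everything as publicly cacheable
# for a year (31536000 seconds = 365 days). The location is illustrative.
location / {
    add_header Cache-Control "public, max-age=31536000";
    try_files $uri $uri/ =404;
}
```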
Hardware & software
So first, I had to figure out where I would host the network of servers. I looked around at different providers, and I ended up using Leaseweb for my virtual servers. I mostly chose them because they were inexpensive and had a global network of servers available, meaning I wouldn't need to rent servers from several different providers. A bit frustratingly though, Leaseweb seems to invoice the servers through a different company and invoice per continent, so there's a bit of accounting hassle in paying for them monthly.
When picking the virtual servers, I was mostly looking at who provided fast and plentiful storage in their server offerings. CPU and RAM are nice to have, but the most important thing for me was the storage, because that's where the cache will mostly live. Leaseweb has fast NVMe storage on their VPSes, so choosing them was almost a no-brainer.
I then needed to decide on the software to use. For the OS, I decided to use the Debian stable distribution (13, trixie, the latest at the time). For the user-facing web server, I used nginx, since that's the one I personally know best. I also looked into using caddy, but the main problem I had with it was certificate storage (more about that later!); they do have some distributed options for it, but managing them would still have been a hassle.
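To give an idea of the overall shape, here's a minimal sketch of the user-facing nginx vhost on an edge server, terminating TLS and handing everything to Varnish on localhost; the domain, certificate paths and Varnish port are placeholders, not my exact setup:

```nginx
# Sketch of an edge vhost: nginx terminates TLS and proxies to Varnish
# on localhost. Domain, paths and port are placeholders.
server {
    listen 443 ssl;
    server_name mydomain.com;

    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```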
And for the caching part, I chose Varnish, which is an excellent tool for the job. I configured it to use file storage instead of RAM as its storage backend, so the cache can live on the fast NVMe disk while the OS handles keeping the hottest parts of it in RAM.
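The relevant knob is varnishd's -s flag; roughly this kind of invocation, with an illustrative listen port, cache path and size:

```sh
# Sketch: run varnishd with file storage instead of the default malloc.
# The listen port, cache file path and size are illustrative values.
varnishd -a :6081 \
         -f /etc/varnish/default.vcl \
         -s file,/var/lib/varnish/cache.bin,150G
```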
For now, I am only using three servers in three different regions: Frankfurt (Germany), Los Angeles (US) and Singapore. I picked those three locations mostly based on my traffic needs. I am definitely considering purchasing a server on the east coast of the US, like New York or Washington, if the latency to LA proves to be too much.
Setting it up
Setting things up was a breeze, at first. Getting Varnish up and running, caching to disk and handling requests based on some cache-related headers was easy to set up. Soon I had it running and accepting requests locally.
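The VCL involved is roughly this shape; the origin address and the fallback TTL are illustrative assumptions, not my exact configuration:

```vcl
vcl 4.1;

# Hypothetical origin address; open-source Varnish speaks plain HTTP,
# so TLS towards the backend needs an extra hop if you want it.
backend origin {
    .host = "203.0.113.10";
    .port = "80";
}

sub vcl_backend_response {
    # Respect the backend's Cache-Control headers, but give responses
    # that arrive without one a short fallback TTL.
    if (!beresp.http.Cache-Control) {
        set beresp.ttl = 5m;
    }
}
```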
Then I went on to configure nginx, and ran into my first problem.
SSL certificates.
All of my CDN servers should serve the same certificates, since they respond to the same hostnames. I looked around, and there were several options for dealing with this: a "pull" strategy (each server pulls certificates from one central server), a "push" strategy (certificates are pushed to all servers from a central server), and a DNS validation strategy (each server issues and validates its own certificates using a DNS challenge).
I ended up using the push strategy, since it best suited my needs. I could get Let's Encrypt certificates issued easily for all of my sites, since the CDN servers already proxy content via Varnish to my backend server. So on my backend server, I could just run certbot certonly --nginx -d mydomain.com, and the CDN servers would proxy and respond to the Let's Encrypt requests as if they were the backend. Then, once I had those certificates on my backend server, I wrote a script that rsyncs them daily to all the CDN servers. Not too shabby!
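The push script itself is nothing fancy; roughly this shape, with placeholder edge hostnames:

```sh
#!/usr/bin/env bash
# Sketch of the daily certificate push, run from cron on the backend server.
# The edge hostnames are placeholders, not my real ones.
set -euo pipefail

EDGES="cdn-fra.example.com cdn-lax.example.com cdn-sin.example.com"

for host in $EDGES; do
    # Mirror the whole Let's Encrypt state so the live/ symlinks stay valid.
    rsync -az --delete /etc/letsencrypt/ "root@${host}:/etc/letsencrypt/"
    # Reload nginx on the edge so renewed certificates get picked up.
    ssh "root@${host}" "systemctl reload nginx"
done
```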
It's always the DNS
I kind of jumped ahead in the previous chapter regarding DNS. For my CDN servers to serve content globally, they need some kind of geolocation-based DNS functionality. Meaning, when a user requests DNS information for mydomain.com from the US, the DNS server responds with a different CDN server IP than it would for a user from Finland.
There are several ways to go about this. I could host my own DNS servers with something like PowerDNS, which has support for GeoIP-based routing. But I gotta admit, I have not previously operated authoritative DNS servers, and I do not think I want that kind of responsibility right now.
So instead, I went with an easier solution (that I'm not really that proud of). Bunny has a DNS platform with a simple web UI where you can set up geographical routing. That means I'm still somewhat tethered to a commercial provider for my CDN to function, but I'm willing to accept that, at least for now. Besides, their DNS service is not that expensive ($0.30 per million queries, first 1 million free).
So, how is it?
I have now been running it for a while, and things have been good, I gotta say! My websites feel snappy to use, and the caches on the servers are getting used more thoroughly than on Bunny (the cache hit rate is already around 55%, and it's climbing every day).
Now, was it cheaper than using a commercial service? Not really. It varies a lot by the traffic and content you are serving. In my case, I was paying BlazingCDN and Bunny around $100 per month for their CDN service, and right now I'm paying Leaseweb around $90 for the three servers. Plus there is, of course, the additional labor of maintaining the servers, but on the flip side you can modify them to your heart's content. And since my new servers have a higher cache hit rate, the end users of my sites get a better experience browsing them.
Future
In the near future, things I would like to do, or at least consider:
- More servers, more locations (east coast of the US, maybe South America? EMEA?)
- Hosting my own georouting DNS (something with a web UI hopefully)
- Better WAF (I've only installed some basic iptables blacklists, gotta look more into filtering and rate limiting)