Let's Go Further › Deployment and hosting
Chapter 20.

Deployment and hosting

In this final section of the book we’ll look at how to deploy our API application to a production server and expose it on the internet.

Every project and project team will have different technical and business needs in terms of hosting and deployment, so it’s impossible to lay out a one-size-fits-all approach here.

To make the content in this section as widely applicable and portable as possible, we’ll focus on hosting the application on a self-managed Linux server (something provided by a myriad of hosting companies worldwide) and using standard Linux tooling to manage server configuration and deployment.

We’ll also be automating the server configuration and deployment process as much as possible, so that it’s easy to make continuous deployments and possible to replicate the server in the future if you need to.

If you’re planning to follow along, we’ll be using DigitalOcean as the hosting provider in this book. DigitalOcean isn’t free, but it’s good value and the cost of running a server starts at $4 USD per month. If you don’t want to use DigitalOcean, you should be able to follow basically the same approach we outline here with any other Linux hosting provider.

In terms of infrastructure and architecture, we’ll run everything on a single Ubuntu Linux server. Our stack will consist of a PostgreSQL database and the executable binary for our Greenlight API, operating in much the same way that we’ve seen so far in this book. But in addition to this, we’ll also run Caddy as a reverse proxy in front of the Greenlight API.

Using Caddy has a couple of benefits. It will automatically handle and terminate HTTPS connections for us — including automatically generating and managing TLS certificates via Let’s Encrypt and ZeroSSL — and we can also use Caddy to easily restrict internet access to our metrics endpoint.
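To make this concrete, here is a sketch of what such a Caddy configuration might look like. This is an illustrative Caddyfile, not the one we'll build later: it assumes a hypothetical domain `example.com` pointing at the server, the API listening on `localhost:4000`, and the metrics exposed at a `/debug/vars` path.

```caddyfile
# Hypothetical Caddyfile sketch. Assumes the Greenlight API is listening
# on localhost:4000 and that example.com resolves to this server.
example.com {
    # Deny internet access to the metrics endpoint (assumed here to be
    # /debug/vars) using a named matcher.
    @metrics path /debug/vars
    respond @metrics "Not Permitted" 403

    # Proxy all other requests to the API. Because a domain is specified
    # in the site address, Caddy automatically obtains and renews a TLS
    # certificate for it and terminates HTTPS connections.
    reverse_proxy localhost:4000
}
```

With a configuration along these lines, clients only ever talk HTTPS to Caddy on ports 80/443, while the API itself can listen on plain HTTP on a local port.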

In this section you’ll learn how to: