Microservices are all the rage in software architecture — and for good reason. They help break down large, complex applications into smaller, manageable, and independently deployable services. While languages like Python, Node.js, and Go often dominate this space, Rust is definitely making waves. Let’s be honest — setting up multiple services, each with its own dependencies, can be a bit of a headache. That’s where Docker comes in. Docker lets us containerize each service, ensuring consistency across development, testing, and production. And with Docker Compose, we can orchestrate these services to work together seamlessly. In this blog post, I’ll show you how to build a simple, scalable microservice architecture using Rust and Docker. We’ll create two services — a basic API service and an authentication service — and tie them together with Docker Compose. Let’s get started.
We’ll begin by setting up the project infrastructure. Let’s create a simple microservices architecture with two services:
- `api-service`: A basic API that handles “product” data. Think of it as the backend for an e-commerce store.
- `auth-service`: A simple authentication service, validating logins and managing sessions.

The directory structure looks like this:
```
/rust-microservices
  /api-service
  /auth-service
  docker-compose.yml
```
Each service will be a separate Rust project with its own dependencies. The `docker-compose.yml` file will tie everything together, allowing us to run all the services with a single command.
To build our services, we’re going to use Actix Web, a powerful, lightweight framework for building web applications in Rust.
Let’s build out the `api-service` first. Start by creating a new Rust project by running `cargo new api-service`. Now, let’s add some dependencies. In `api-service/Cargo.toml`, add the following:
```toml
[dependencies]
actix-web = "4"
serde = { version = "1.0", features = ["derive"] }
```
A quick explanation of those dependencies:

- `actix-web`: the web framework we’ll use to define routes and run the HTTP server.
- `serde`: a serialization library; its `derive` feature lets us derive traits like `Deserialize` for our own structs.
Now let’s write the actual service. Your `api-service/src/main.rs` should look like this:
```rust
// api-service/src/main.rs
use actix_web::{get, App, HttpServer, Responder, HttpResponse};

#[get("/products")]
async fn get_products() -> impl Responder {
    HttpResponse::Ok().json(vec!["Laptop", "Smartphone", "Tablet"])
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(get_products))
        .bind("0.0.0.0:8000")?
        .run()
        .await
}
```
This is an extremely simple service. When you hit `/products`, you get a JSON response of `["Laptop", "Smartphone", "Tablet"]`. Not the fanciest service in the world, but it works well for demonstration purposes. We’re binding the server to `0.0.0.0:8000`, which means it will be accessible from outside the container when we Dockerize it. You can run this with `cargo run`, then verify your results using curl: `curl http://localhost:8000/products`.
Now we need to build out the basic authentication service. Because this will be very basic and naive, I shouldn’t have to tell you not to handle authentication this way in a real application. I’m just using it for a simple example. To begin, create the project: `cargo new auth-service`. Your `auth-service/Cargo.toml` file should be set up like this:
```toml
[dependencies]
actix-web = "4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
```
And your `auth-service/src/main.rs` will look something like this:
```rust
// auth-service/src/main.rs
use actix_web::{post, App, HttpServer, Responder, HttpResponse, web};
use serde::Deserialize;

#[derive(Deserialize)]
struct Credentials {
    username: String,
    password: String,
}

#[post("/login")]
async fn login(credentials: web::Json<Credentials>) -> impl Responder {
    if credentials.username == "admin" && credentials.password == "password" {
        HttpResponse::Ok().body("Login successful!")
    } else {
        HttpResponse::Unauthorized().body("Invalid credentials.")
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(login))
        .bind("0.0.0.0:8001")?
        .run()
        .await
}
```
This service checks if the username and password are `admin` and `password`, respectively. If they are, it returns a success message; otherwise, it returns an unauthorized error. You can run this locally with `cargo run`, then test it with curl:

```shell
curl -X POST http://localhost:8001/login \
  -H "Content-Type: application/json" \
  -d '{"username": "admin", "password": "password"}'
```

With this, you should see `Login successful!`.
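As an aside, the hard-coded check inside the handler can be factored into a plain function, which makes the logic unit-testable without spinning up a server. Here’s a minimal sketch; the `credentials_valid` function name is my own, not part of the service above:

```rust
// Hypothetical helper: the same naive credential check as the handler,
// pulled out into a plain function so it can be tested in isolation.
fn credentials_valid(username: &str, password: &str) -> bool {
    // Hard-coded credentials mirror the example above; never do this in production.
    username == "admin" && password == "password"
}

fn main() {
    // A couple of quick sanity checks.
    assert!(credentials_valid("admin", "password"));
    assert!(!credentials_valid("admin", "wrong"));
    println!("credential checks passed");
}
```

In a real project this function would live next to the handler and be exercised by `#[cfg(test)]` unit tests, while the handler itself stays a thin wrapper.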
Now that we have both services running locally, it’s time to containerize them with Docker. This will allow us to run them consistently across different environments and scale them easily. Rust produces large binaries, and the build process can pull in a lot of unnecessary files. To keep our Docker images small and efficient, we’ll use multi-stage builds. The idea is simple: compile the binary in a full Rust build image, then copy only the finished binary into a slim runtime image, leaving the toolchain and intermediate build artifacts behind.
We’re going to create two Dockerfiles, one in each service.
The Dockerfile for the API service looks like this:
```dockerfile
# api-service/Dockerfile

# Stage 1: Build the application
FROM rust:1.73 AS builder
WORKDIR /usr/src/api-service
COPY . .
RUN cargo build --release

# Stage 2: Create a lightweight runtime image
FROM debian:bookworm-slim
COPY --from=builder /usr/src/api-service/target/release/api-service /usr/local/bin/api-service
CMD ["api-service"]
```
And the Dockerfile for the authentication service looks like this:
```dockerfile
# auth-service/Dockerfile

# Stage 1: Build the application
FROM rust:1.73 AS builder
WORKDIR /usr/src/auth-service
COPY . .
RUN cargo build --release

# Stage 2: Create a lightweight runtime image
FROM debian:bookworm-slim
COPY --from=builder /usr/src/auth-service/target/release/auth-service /usr/local/bin/auth-service
CMD ["auth-service"]
```
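One detail worth making explicit: `COPY . .` copies everything in the service directory into the build context, including any local `target/` directory from your `cargo run` experiments, which can be huge. A `.dockerignore` file in each service directory (my addition, not required for the build to work) keeps the context small and the builds fast:

```
# .dockerignore (hypothetical): keep local build artifacts out of the build context
target/
.git/
```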
Now that both services are containerized, we can tie them together using Docker Compose. Docker Compose lets us define and run multi-container applications, handling networking and dependencies for us. We’re going to create a file called `docker-compose.yml` at the root level of our project:
```yaml
version: '3.8'

services:
  api-service:
    build: ./api-service
    ports:
      - "8000:8000"
    depends_on:
      - auth-service

  auth-service:
    build: ./auth-service
    ports:
      - "8001:8001"
```
If you don’t know how to read a Docker Compose file, let me break this down for you. With this setup, we’re going to:

- Expose `api-service` on port 8000 and `auth-service` on port 8001. You can modify which ports you want by setting the “outside”, or “first”, number to something different. For example, if you want to expose the `auth-service` on port 8002 instead of 8001, you can change that line in the `ports` section of the compose file to `- "8002:8001"`.
- Make sure `api-service` doesn’t start until `auth-service` is up and running (thanks to `depends_on` — though note it only waits for the container to start, not for the service inside it to be ready).

We can run everything at once with `docker-compose up --build`.
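To make the port-remapping idea concrete, here is what the `auth-service` block of the compose file would look like after that change; only the host-side number differs from the original:

```yaml
  auth-service:
    build: ./auth-service
    ports:
      - "8002:8001"
```

The service still listens on 8001 inside the container; only the port you hit from the host machine changes, so you’d test it with `curl http://localhost:8002/login` instead.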
. Now, you can visit http://localhost:8000/products to see the product list, and test the login at http://localhost:8001/login as we did before.
One of the biggest advantages of Docker is how easy it makes scaling services. Let’s say our `api-service` is handling a lot of traffic, and we need to scale it horizontally. We can spin up multiple instances of `api-service` with a single command:
```shell
docker-compose up --scale api-service=3 --build
```
This will run three instances of `api-service`. One caveat: with the fixed host mapping `"8000:8000"` in our compose file, only one instance can bind host port 8000, so to scale you’d either map a port range like `"8000-8002:8000"` or drop the host-side port entirely and let a proxy reach the containers over the compose network. For production, you’d use a load balancer (like NGINX) to distribute traffic across these instances.
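As a rough sketch of that load-balancer idea, here is a minimal, assumption-laden NGINX config. It supposes NGINX runs as another service on the same compose network, where Docker’s embedded DNS resolves the `api-service` name across the running replicas; the file name and layout are illustrative, not a production setup:

```nginx
# nginx.conf (hypothetical): front the api-service replicas on the compose network.
events {}

http {
    server {
        listen 80;

        location / {
            # "api-service" resolves via Docker's internal DNS on the compose
            # network, spreading requests across the scaled instances.
            proxy_pass http://api-service:8000;
        }
    }
}
```

You would add an `nginx` service to the compose file, mount this config, and publish only NGINX’s port to the host instead of the API’s.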
And there you have it! We’ve built a simple microservice architecture using Rust and Docker. We containerized each service, orchestrated them with Docker Compose, and even scaled them effortlessly. There’s a lot more you can do from here — adding a database, implementing service discovery, or deploying to Kubernetes for even more scalability. But for now, you’ve got a solid foundation for building scalable Rust microservices.