Running Multiple Laravel Apps on One VPS with Docker Compose
If you’ve ever tried juggling multiple Laravel projects on the same VPS, you know how messy it can get. Different MySQL instances, conflicting PHP versions, and endless NGINX configurations quickly turn into a headache. The solution? Docker Compose. By fully containerizing each app, we can run a clean and isolated setup where every project lives in its own space, yet all share the same VPS.
In this post, I’ll walk you through a working docker-compose.yml that runs:
- A main Laravel project (the root domain, main).
- Three subdomain projects: subdomain1, subdomain2, and subdomain3.
- A shared NGINX container to route requests to the right project.
- Separate MySQL databases for each app, ensuring full isolation.
- A queue worker for background jobs (emails, tasks, etc.).
Architecture Overview
The idea is simple: one VPS, multiple apps. Here’s the breakdown:
- Main app: main runs on the root domain.
- Subdomains: subdomain1.main.local, subdomain2.main.local, and subdomain3.main.local.
- Databases: Each app has its own MySQL container and persistent volume.
- Networking: A custom Docker network (main-net) ties everything together.
This way, you don’t need to clutter your host system with multiple MySQL instances or mess with global PHP versions. Everything runs inside containers, fully isolated.
Main Laravel App (main)
The main project has two services: the application container and its database.
main-admin:
  build:
    args:
      user: admin
      uid: 1000
    context: ./main
    dockerfile: Dockerfile
  container_name: main-admin
  restart: unless-stopped
  working_dir: /var/www/main
  volumes:
    - ./main:/var/www/main
  networks:
    - main-net
This container builds the Laravel app from the local main directory and mounts it as a volume. That means you can edit your code locally and see changes instantly inside the container. The working_dir is set to /var/www/main, which is where Laravel apps commonly live inside Docker.
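Because the code is bind-mounted from the host rather than copied into the image, the vendor/ directory won’t exist on a fresh checkout. One way to handle this is to run Composer inside the running container, a quick sketch assuming the container names from this compose file:
# Install PHP dependencies inside the main app container
docker exec -it main-admin composer install

# Repeat for each subdomain app, e.g.
docker exec -it subdomain1-admin composer install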
Next, the database container:
main-db:
  image: mysql:8-oracle
  container_name: main-db
  restart: unless-stopped
  env_file: ./main/.env
  ports:
    - "3312:3306"
  environment:
    MYSQL_DATABASE: "${MAIN_DB_DATABASE}"
    MYSQL_USER: "${MAIN_DB_USERNAME}"
    MYSQL_PASSWORD: "${MAIN_DB_PASSWORD}"
    MYSQL_ROOT_PASSWORD: "${MAIN_DB_PASSWORD}"
  volumes:
    - main-db-data:/var/lib/mysql
  networks:
    - main-net
Notice that it uses a dedicated host port (3312) and its own volume (main-db-data) for persistence. Because the data lives in a separate volume, your database stays intact even if you destroy and recreate the container.
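Because host port 3312 maps to 3306 inside the container, you can also reach this database from the VPS itself with a normal MySQL client. A quick sketch, assuming the mysql client is installed on the host and the credentials from the root .env shown later:
# Connect to the main app's database through the mapped host port
mysql -h 127.0.0.1 -P 3312 -u admin -p main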
Shared NGINX for All Apps
Instead of spinning up a separate NGINX per app, there’s one shared NGINX container that serves all projects:
main-nginx:
  image: nginx:1.23-alpine
  container_name: main-nginx
  restart: unless-stopped
  ports:
    - "8092:80"
  volumes:
    - ./main:/var/www/main
    - ./subdomain1:/var/www/subdomain1
    - ./subdomain2:/var/www/subdomain2
    - ./subdomain3:/var/www/subdomain3
    - ./docker-compose/nginx/http.conf:/etc/nginx/conf.d/default.conf
  networks:
    - main-net
The trick here is the mounted config file, http.conf. Inside this NGINX config, you define server blocks for each subdomain (subdomain1.main.local, subdomain2.main.local, etc.) and point each one to its respective root directory inside the container. With this approach, NGINX becomes the traffic director for all apps.
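Whenever you tweak http.conf, you can validate and reload it without recreating the container, for example:
# Check the mounted config for syntax errors
docker exec main-nginx nginx -t

# Reload NGINX so the new server blocks take effect
docker exec main-nginx nginx -s reload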
Queue Worker
For background jobs, such as sending emails or processing queued tasks, there’s a dedicated worker container:
main-queue-worker:
  build:
    args:
      user: admin
      uid: 1000
    context: ./main
    dockerfile: Dockerfile
  container_name: main-queue-worker
  restart: unless-stopped
  working_dir: /var/www/
  volumes:
    - ./main:/var/www
  networks:
    - main-net
  depends_on:
    - main-admin
  command: php artisan queue:work --tries=3 --sleep=3
Instead of running php artisan queue:work on your host, this container runs it continuously in the background. That means queued jobs will always be processed, even if you’re not SSH’d into the server.
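To keep an eye on the worker, tail its logs; after deploying new code, restart it so it picks up the changes. For example:
# Follow the worker's output (processed and failed jobs)
docker logs -f main-queue-worker

# Restart the worker after a deploy so it loads the new code
docker compose restart main-queue-worker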
Subdomain Apps
Each subdomain follows the same pattern: one -admin container for the Laravel app and one -db container for its database. For example, here’s subdomain1:
subdomain1-admin:
  build:
    args:
      user: admin
      uid: 1000
    context: ./subdomain1
    dockerfile: Dockerfile
  container_name: subdomain1-admin
  working_dir: /var/www/subdomain1
  volumes:
    - ./subdomain1:/var/www/subdomain1
  networks:
    - main-net

subdomain1-db:
  image: mysql:8-oracle
  container_name: subdomain1-db
  restart: unless-stopped
  env_file: ./subdomain1/.env
  ports:
    - "3313:3306"
  environment:
    MYSQL_DATABASE: "${SUBDOMAIN1_DB_DATABASE}"
    MYSQL_USER: "${SUBDOMAIN1_DB_USERNAME}"
    MYSQL_PASSWORD: "${SUBDOMAIN1_DB_PASSWORD}"
    MYSQL_ROOT_PASSWORD: "${SUBDOMAIN1_DB_PASSWORD}"
  volumes:
    - subdomain1-db-data:/var/lib/mysql
  networks:
    - main-net
The same pattern is repeated for subdomain2 and subdomain3, each with its own codebase, environment variables, ports, and volumes. This ensures true isolation: no database collisions, no port conflicts, no shared state.
Persistent Volumes and Networking
At the bottom of the file, Docker Compose defines all volumes and the shared network:
volumes:
  main-db-data:
  subdomain1-db-data:
  subdomain2-db-data:
  subdomain3-db-data:

networks:
  main-net:
    driver: bridge
Volumes guarantee that database data persists even if you remove a container. Networks ensure that all containers can talk to each other, but remain isolated from the host machine’s default network unless explicitly exposed.
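If you want to verify what Compose actually created, you can list the volumes and inspect the shared network to see which containers are attached. Note that Compose prefixes resource names with the project (folder) name, so the examples below assume a project called laravel-app:
# List the named volumes holding the MySQL data (prefixed with the project name)
docker volume ls

# See which containers are attached to the shared network
docker network inspect laravel-app_main-net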
Some Notes on This Setup
- Clean isolation: Each app and DB runs in its own container, so no more global conflicts.
- Easy management: Spin everything up with docker compose up -d.
- Perfect for side projects: Run multiple Laravel apps on a single affordable VPS.
- Future-proof: Scaling to production just means tweaking configs, not rebuilding from scratch.
Full Config Files + GitHub Repo
You can find all project files here.
docker-compose.yml
services:
  main-admin:
    build:
      args:
        user: admin
        uid: 1000
      context: ./main
      dockerfile: Dockerfile
    container_name: main-admin
    restart: unless-stopped
    working_dir: /var/www/main
    volumes:
      - ./main:/var/www/main
    networks:
      - main-net

  # Shared NGINX
  main-nginx:
    image: nginx:1.23-alpine
    container_name: main-nginx
    restart: unless-stopped
    ports:
      - "8092:80"
    volumes:
      - ./main:/var/www/main
      - ./subdomain1:/var/www/subdomain1
      - ./subdomain2:/var/www/subdomain2
      - ./subdomain3:/var/www/subdomain3
      - ./docker-compose/nginx/http.conf:/etc/nginx/conf.d/default.conf
    networks:
      - main-net

  main-queue-worker:
    build:
      args:
        user: admin
        uid: 1000
      context: ./main
      dockerfile: Dockerfile
    container_name: main-queue-worker
    restart: unless-stopped
    working_dir: /var/www/
    volumes:
      - ./main:/var/www
    networks:
      - main-net
    depends_on:
      - main-admin
    command: php artisan queue:work --tries=3 --sleep=3

  main-db:
    image: mysql:8-oracle
    container_name: main-db
    restart: unless-stopped
    env_file: ./main/.env
    ports:
      - "3312:3306"
    environment:
      MYSQL_DATABASE: "${MAIN_DB_DATABASE}"
      MYSQL_USER: "${MAIN_DB_USERNAME}"
      MYSQL_PASSWORD: "${MAIN_DB_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${MAIN_DB_PASSWORD}"
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - main-db-data:/var/lib/mysql
    networks:
      - main-net

  ##### Subdomain App (subdomain1.main.dev) #####
  subdomain1-admin:
    build:
      args:
        user: admin
        uid: 1000
      context: ./subdomain1
      dockerfile: Dockerfile
    container_name: subdomain1-admin
    working_dir: /var/www/subdomain1
    volumes:
      - ./subdomain1:/var/www/subdomain1
    networks:
      - main-net

  subdomain1-db:
    image: mysql:8-oracle
    container_name: subdomain1-db
    restart: unless-stopped
    env_file: ./subdomain1/.env
    ports:
      - "3313:3306"
    environment:
      MYSQL_DATABASE: "${SUBDOMAIN1_DB_DATABASE}"
      MYSQL_USER: "${SUBDOMAIN1_DB_USERNAME}"
      MYSQL_PASSWORD: "${SUBDOMAIN1_DB_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${SUBDOMAIN1_DB_PASSWORD}"
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - subdomain1-db-data:/var/lib/mysql
    networks:
      - main-net

  ##### Subdomain App (subdomain2.main.dev) #####
  subdomain2-admin:
    build:
      args:
        user: admin
        uid: 1000
      context: "${PWD}/subdomain2"
      dockerfile: Dockerfile
    container_name: subdomain2-admin
    working_dir: /var/www/subdomain2
    volumes:
      - ./subdomain2:/var/www/subdomain2
    networks:
      - main-net

  subdomain2-db:
    image: mysql:8-oracle
    container_name: subdomain2-db
    restart: unless-stopped
    env_file: ./subdomain2/.env
    ports:
      - "3314:3306"
    environment:
      MYSQL_DATABASE: "${SUBDOMAIN2_DB_DATABASE}"
      MYSQL_USER: "${SUBDOMAIN2_DB_USERNAME}"
      MYSQL_PASSWORD: "${SUBDOMAIN2_DB_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${SUBDOMAIN2_DB_PASSWORD}"
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - subdomain2-db-data:/var/lib/mysql
    networks:
      - main-net

  ##### Subdomain App (subdomain3.main.dev) #####
  subdomain3-admin:
    build:
      args:
        user: admin
        uid: 1000
      context: ./subdomain3
      dockerfile: Dockerfile
    container_name: subdomain3-admin
    working_dir: /var/www/subdomain3
    volumes:
      - ./subdomain3:/var/www/subdomain3
    networks:
      - main-net

  subdomain3-db:
    image: mysql:8-oracle
    container_name: subdomain3-db
    restart: unless-stopped
    env_file: ./subdomain3/.env
    ports:
      - "3315:3306"
    environment:
      MYSQL_DATABASE: "${SUBDOMAIN3_DB_DATABASE}"
      MYSQL_USER: "${SUBDOMAIN3_DB_USERNAME}"
      MYSQL_PASSWORD: "${SUBDOMAIN3_DB_PASSWORD}"
      MYSQL_ROOT_PASSWORD: "${SUBDOMAIN3_DB_PASSWORD}"
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    volumes:
      - subdomain3-db-data:/var/lib/mysql
    networks:
      - main-net

volumes:
  main-db-data:
  subdomain1-db-data:
  subdomain2-db-data:
  subdomain3-db-data:

networks:
  main-net:
    driver: bridge
http.conf
# Main Domain
server {
    listen 80;
    server_name main.local;
    root /var/www/main/public;
    index index.php;

    # Allow larger file uploads (adjust size as needed)
    client_max_body_size 10M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass main-admin:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

# Subdomain1
server {
    listen 80;
    server_name subdomain1.main.local;
    root /var/www/subdomain1/public;
    index index.php;

    # Allow larger file uploads (adjust size as needed)
    client_max_body_size 10M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass subdomain1-admin:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

# Subdomain2
server {
    listen 80;
    server_name subdomain2.main.local;
    root /var/www/subdomain2/public;
    index index.php;

    # Allow larger file uploads (adjust size as needed)
    client_max_body_size 10M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass subdomain2-admin:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

# Subdomain3
server {
    listen 80;
    server_name subdomain3.main.local;
    root /var/www/subdomain3/public;
    index index.php;

    # Allow larger file uploads (adjust size as needed)
    client_max_body_size 10M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        fastcgi_pass subdomain3-admin:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
Dockerfile
Note: this file is the same for all apps; it is listed here for reference.
FROM php:8.2.0-fpm

# Arguments defined in docker-compose.yml
ARG user=admin
ARG uid=1001

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    curl \
    libpng-dev \
    libonig-dev \
    libxml2-dev \
    zip \
    vim \
    procps \
    unzip \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd

# Get Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Create system user
RUN useradd -G www-data,root -u $uid -d /home/$user $user \
    && mkdir -p /home/$user/.composer \
    && chown -R $user:$user /home/$user

# Set working directory
WORKDIR /var/www
RUN chown -R $user:$user /var/www

# Use the non-root user
USER $user
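The uid build argument exists so the container user matches your host user, which keeps file permissions on the bind-mounted code sane. A quick way to confirm they line up after building, assuming the uid of 1000 passed in docker-compose.yml:
# uid of your user on the host
id -u

# uid of the user inside the app container (should match)
docker exec -it main-admin id -u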
How do I run this?
First, let’s take a look at the project directory structure.
Project Directory Structure
laravel-app/
├── .env                  # read by Docker Compose for variable substitution when the containers are started
├── docker-compose.yml
├── docker-compose/
│   └── nginx/
│       └── http.conf
├── main/
│   ├── Laravel project files ....
│   ├── .env
│   ├── Dockerfile        # for reference, add or remove packages as needed for each specific use case
├── subdomain1/
│   ├── Laravel project files ....
│   ├── .env
│   ├── Dockerfile
├── subdomain2/
│   ├── Laravel project files ....
│   ├── .env
│   ├── Dockerfile
├── subdomain3/
│   ├── Laravel project files ....
│   ├── .env
│   ├── Dockerfile
This is the main .env file, which Docker Compose reads when the containers start (it is separate from the .env config file inside each app):
MAIN_DB_DATABASE=main
MAIN_DB_USERNAME=admin
MAIN_DB_PASSWORD=secret
SUBDOMAIN1_DB_DATABASE=subdomain1
SUBDOMAIN1_DB_USERNAME=admin
SUBDOMAIN1_DB_PASSWORD=secret
SUBDOMAIN2_DB_DATABASE=subdomain2
SUBDOMAIN2_DB_USERNAME=admin
SUBDOMAIN2_DB_PASSWORD=secret
SUBDOMAIN3_DB_DATABASE=subdomain3
SUBDOMAIN3_DB_USERNAME=admin
SUBDOMAIN3_DB_PASSWORD=secret
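Each Laravel app also needs its own .env pointing at its database container. Inside the Docker network, the host is the service name and the port is the internal 3306, not the host-mapped one. A sketch of the relevant lines for the main app, assuming the credentials from the root .env above:
DB_CONNECTION=mysql
DB_HOST=main-db
DB_PORT=3306
DB_DATABASE=main
DB_USERNAME=admin
DB_PASSWORD=secret
The subdomain apps follow the same pattern, with DB_HOST=subdomain1-db and so on.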
Project Diagram

For more user-friendly URLs for this project, modify your /etc/hosts file and add the following:
127.0.0.1 main.local
127.0.0.1 subdomain1.main.local
127.0.0.1 subdomain2.main.local
127.0.0.1 subdomain3.main.local
Build + Run containers
Start by checking for any running containers
docker ps
Build containers
docker compose build
Start in detached mode
docker compose up -d
Open an interactive bash shell inside a running container, where container-name is the name of a running Docker container
docker exec -it container-name bash
You can find more useful Docker and general development aliases here.
After you open an interactive bash shell, you might want to run your migrations, generate the application key, and run your database seeders (if any).
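For example, inside the main app’s container that could look like this (adjust per app as needed):
# Generate the application key (first run only)
php artisan key:generate

# Run the database migrations
php artisan migrate

# Seed the database, if your app has seeders
php artisan db:seed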
You will then be able to see each of your apps live at the following URLs (check your hosts file and note the port 8092 mapped in docker-compose.yml):
http://main.local:8092/
http://subdomain1.main.local:8092/
http://subdomain2.main.local:8092/
http://subdomain3.main.local:8092/
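A quick way to confirm everything is wired up correctly is to hit each URL from the command line:
# Should return an HTTP response from the main Laravel app
curl -I http://main.local:8092/

# And likewise for a subdomain
curl -I http://subdomain1.main.local:8092/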
What’s Next: Moving to Production
This Compose setup works well for local development and testing, but going live requires a little more muscle. In production, you’ll want things like:
- A Certbot container for automated SSL certificates.
- A dedicated Laravel queue container for handling emails and background tasks reliably.
- Optimized NGINX configs with caching and security hardening.
I’ll cover all of that in the next blog post where we build the production-ready Docker setup.