Dockerize your Laravel app - part 2 : GitLab and multi-stage build

Jul 17, 2020 by Thibault

Laravel PHP Docker GitLab

https://cylab.be/blog/85/dockerize-your-laravel-app-part-2-gitlab-and-multi-stage-build

When dockerizing an application, the main goal is to keep images small. Hence, the build process should be split into two steps:

  1. build the app itself (with composer and npm for example) and
  2. build the Docker image (using the Dockerfile)

To keep your image small, the first step should not be part of the Dockerfile itself: this way the final image does not contain unnecessary development and compilation tools. In our previous blog post we described how to build the Docker image. In this blog post, we will focus on the first step. Generally speaking, there are two ways to build the app:

  • on the host system
  • using multi-stage build

If you wish to use GitLab to automatically build your Docker images, you also have two possibilities to build the app: using GitLab jobs or using Docker multi-stage build. In this blog post we will briefly describe the "GitLab jobs" approach to explain its advantages and drawbacks, then we will focus on the multi-stage build method.

GitLab jobs

To build your app with GitLab jobs, the idea is to:

  • create one or more GitLab jobs to build the app
  • define the produced code as artifacts (namely the directories vendor, public/js and public/css) so they can be used by other jobs
  • create a GitLab job to build the Docker image, using the produced artifacts

The main advantage of this method is that you can use the GitLab cache for the downloaded composer and npm dependencies, so once it is working correctly, the build is efficient and fast.

The drawback is that you will end up with a lot of commands in your .gitlab-ci.yml, which can be complicated to test and debug locally.
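To illustrate, a minimal sketch of this approach could look like the following .gitlab-ci.yml (job names, images, cache keys and paths are illustrative, not the exact configuration from this blog):

```yaml
stages:
  - build_app
  - build_image

# build the PHP dependencies, cache and export the vendor directory
composer:
  stage: build_app
  image: composer:latest
  cache:
    key: composer
    paths:
      - vendor/
  script:
    - composer install --no-dev --optimize-autoloader
  artifacts:
    paths:
      - vendor/

# build the front-end assets, cache node_modules, export the compiled assets
npm:
  stage: build_app
  image: node:latest
  cache:
    key: npm
    paths:
      - node_modules/
  script:
    - npm install
    - npm run prod
  artifacts:
    paths:
      - public/js/
      - public/css/

# build the Docker image using the artifacts produced above
docker:
  stage: build_image
  image: docker:19.03.1
  services:
    - docker:19.03.1-dind
  script:
    - docker build -t <myapp> .
```

As you can see, even this bare-bones sketch is already quite verbose, which is exactly the drawback mentioned above.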

Docker multi-stage build

The idea of a Docker multi-stage build is that you can define multiple stages in your Dockerfile. Here is a typical example for a Laravel application:

#### Step 1 : composer

FROM cylab/php72 AS composer

COPY . /var/www/html
WORKDIR /var/www/html
RUN composer install --no-dev --optimize-autoloader

#### Step 2 : node

FROM node AS node

COPY . /var/www/html
WORKDIR /var/www/html
RUN npm --version && npm install && npm run prod

#### Step 3 : the actual docker image

FROM php:7.4-apache

### PHP

# we may need some other php modules, but we can first check the enabled 
# modules with
# docker run -it --rm php:7.4-apache php -m
# RUN docker-php-ext-install mbstring 

### Apache

# change the document root to /var/www/html/public
RUN sed -i -e "s/html/html\/public/g" \
    /etc/apache2/sites-enabled/000-default.conf

# enable apache mod_rewrite
RUN a2enmod rewrite

### Laravel application

# copy source files
COPY . /var/www/html
COPY --from=composer /var/www/html/vendor /var/www/html/vendor
COPY --from=node /var/www/html/public/css /var/www/html/public/css
COPY --from=node /var/www/html/public/js /var/www/html/public/js
COPY --from=node /var/www/html/public/fonts /var/www/html/public/fonts

# these directories need to be writable by Apache
RUN chown -R www-data:www-data /var/www/html/storage \
    /var/www/html/bootstrap/cache

# copy env file for our Docker image
COPY env.docker /var/www/html/.env

# create sqlite db structure
RUN mkdir -p storage/app \
    && touch storage/app/db.sqlite \
    && php artisan migrate

# clear the application cache
RUN php artisan cache:clear

### Docker image metadata

VOLUME ["/var/www/html/storage", "/var/www/html/bootstrap/cache"]

Each stage can use its own base Docker image (with the FROM keyword). In the final stage (which builds the actual image), we can copy any file from the previous stages.

Building the image

We can now build the image as usual:

docker build -t <myapp> ./

and test the resulting image:

docker run --rm -p 8080:80 <myapp>

.dockerignore

As usual, we can use .dockerignore to exclude some files from the image:

.git
node_modules
vendor

bootstrap/cache/*

storage/app
storage/logs
storage/framework/cache/data
storage/framework/sessions/*
storage/framework/testing

## if you are using debugbar
storage/debugbar

GitLab automation

Once your multi-stage build procedure is in place, you can let GitLab run the process for you by adding a single job to your .gitlab-ci.yml. For this job, you will have to use the docker image, and the Docker-in-Docker (dind) service:

build:
  image: docker:19.03.1
  stage: build
  services:
    - docker:19.03.1-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker info
  script:
    - docker build -t <myapp> .

Finally, you can ask GitLab to create tagged Docker images, based on the tags from your repository, and to upload your images automatically to Docker Hub. You must first define your Docker Hub account username and password in Settings > CI/CD > Variables.

Then you can add a job that will only run when you push a new git tag to GitLab:

build:tagged:
  only:
    - tags
  image: docker:19.03.1
  stage: build
  services:
    - docker:19.03.1-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker info
  script:
    - docker build -t <myapp>:$CI_COMMIT_TAG .
    - docker tag <myapp>:$CI_COMMIT_TAG <myapp>:latest
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
    - docker push <myapp>:$CI_COMMIT_TAG
    - docker push <myapp>:latest
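This tagged job is triggered simply by pushing a git tag to GitLab. Below is a minimal local simulation of that workflow (a throwaway bare repository stands in for the GitLab remote, and v1.0.0 is just an example version number):

```shell
set -e

# throwaway demo repository with one commit
demo=$(mktemp -d)
cd "$demo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# a local bare repository stands in for the GitLab remote
git init -q --bare "$demo/gitlab.git"
git remote add origin "$demo/gitlab.git"

# tag the release and push the tag:
# this push is what triggers the build:tagged job on GitLab
git tag -a v1.0.0 -m "release v1.0.0"
git push -q origin v1.0.0

# the tag is now on the remote
git ls-remote --tags origin
```

In the real workflow, "origin" is your GitLab repository, and GitLab sets $CI_COMMIT_TAG to v1.0.0 when the job runs.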
