Jul 17, 2020 by Thibault Debatty
When dockerizing an application, the main goal is to keep images small. Hence the build process should be split in 2 steps:

1. build the app itself (install the required libraries and compile the assets);
2. build the Docker image.
To keep your image small, the first step should not be part of the Dockerfile: this way you avoid shipping unnecessary development and compilation tools in the final image. In our previous blog post we described how to build the Docker image, so in this blog post we will focus on the first step. Generally speaking, there are two ways to build the app: manually on your own machine, or automatically with a CI system.
If you wish to use GitLab to automatically build your Docker images, you also have two possibilities to build the app: using GitLab jobs, or using a Docker multi-stage build. In this blog post we will briefly describe the "GitLab jobs" approach to explain its advantages and drawbacks, then we will focus on the multi-stage build method.
To build your app with GitLab jobs, the idea is to:

1. run `composer install` in a dedicated job;
2. run `npm install` and `npm run prod` in another job;
3. declare the resulting directories as artifacts (like `vendor`, `public/js` and `public/css`) so they can be used by other jobs.
The main advantage of this method is that you can use the GitLab cache for the downloaded composer and npm libraries. So once it is working correctly, it is efficient and fast.
The drawback is that you will end up with quite a lot of commands in your `.gitlab-ci.yml`, which can be complicated to test and debug locally.
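For illustration, the jobs approach could look roughly like the following `.gitlab-ci.yml` fragment. The job names, stage layout and cached paths here are assumptions for the sketch, not part of a tested setup:

```yaml
stages:
  - build

composer:
  image: cylab/php74
  stage: build
  # cache downloaded composer libraries between pipelines
  cache:
    paths:
      - vendor/
  script:
    - composer install --no-dev --optimize-autoloader
  # expose the result to later jobs
  artifacts:
    paths:
      - vendor/

npm:
  image: node
  stage: build
  # cache downloaded npm libraries between pipelines
  cache:
    paths:
      - node_modules/
  script:
    - npm install
    - npm run prod
  artifacts:
    paths:
      - public/css
      - public/js
```

The `cache` entries speed up repeated pipelines, while `artifacts` make the generated files available to the job that builds the final Docker image.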
The idea of a Docker multi-stage build is that you can define multiple steps (stages) in your Dockerfile. Here is a typical example for a Laravel application:
```dockerfile
#### Step 1 : composer
FROM cylab/php74 AS composer

# copy source files to /var/www/html
COPY . /var/www/html
WORKDIR /var/www/html
RUN composer install --no-dev --optimize-autoloader

#### Step 2 : node
FROM node AS node
COPY . /var/www/html
WORKDIR /var/www/html
RUN npm --version && npm install && npm run prod

#### Step 3 : the actual docker image
FROM php:7.4-apache

### PHP
# we may need some other php modules, but we can first check the enabled
# modules with
# docker run -it --rm php:7.4-apache php -m
# RUN docker-php-ext-install mbstring

# if we want to use MySQL database to run the production app
# and opcache for performance
RUN docker-php-ext-install mysqli pdo pdo_mysql opcache

# if we want to use Redis as cache or sessions server
RUN pecl install -o -f redis \
    && rm -rf /tmp/pear \
    && docker-php-ext-enable redis

# from Docker PHP documentation
# https://hub.docker.com/_/php
# use production php.ini
RUN mv "$PHP_INI_DIR/php.ini-production" "$PHP_INI_DIR/php.ini"

### Apache
# change the document root to /var/www/html/public
RUN sed -i -e "s/html/html\/public/g" \
    /etc/apache2/sites-enabled/000-default.conf

# enable apache mod_rewrite
RUN a2enmod rewrite

### Laravel application
# copy source files
COPY . /var/www/html
COPY --from=composer /var/www/html/vendor /var/www/html/vendor
COPY --from=node /var/www/html/public/css /var/www/html/public/css
COPY --from=node /var/www/html/public/js /var/www/html/public/js
COPY --from=node /var/www/html/public/fonts /var/www/html/public/fonts

# copy env file for our Docker image
COPY env.docker /var/www/html/.env

# create sqlite db structure (in case we will use sqlite)
RUN mkdir -p storage/app \
    && touch storage/app/db.sqlite \
    && php artisan migrate

# clear config cache
RUN php artisan cache:clear

# these directories need to be writable by Apache
RUN chown -R www-data:www-data /var/www/html/storage \
    /var/www/html/bootstrap/cache

### Docker image metadata
VOLUME ["/var/www/html/storage", "/var/www/html/bootstrap/cache"]
```
Each step can use its own base Docker image (with the `FROM` keyword). In the final step (which builds our real image), we can copy any file from the previous steps.
We can now build the image as usual:

```bash
docker build -t <myapp> ./
```
and test the resulting image:

```bash
docker run --rm -p 8080:80 <myapp>
```
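With the container running, you can point your browser to http://localhost:8080, or check from the command line that the application responds (assuming the port mapping above):

```bash
# request only the response headers; an HTTP 200 means Apache
# is serving the Laravel app correctly
curl -I http://localhost:8080
```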
As usual, we can use `.dockerignore` to exclude some files from the image:
```
.git
node_modules
vendor
bootstrap/cache/*
storage/app
storage/logs
storage/framework/cache/data
storage/framework/sessions/*
storage/framework/testing

## if you are using debugbar
storage/debugbar
```
Once your multi-stage build procedure is in place, you can let GitLab run the process for you by adding a single job to your `.gitlab-ci.yml`. For this job, you will have to use the `docker` image, and the Docker-in-Docker (dind) service:
```yaml
build:
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  before_script:
    - docker info
  script:
    - docker build -t <myapp> .
```
Finally, you can ask GitLab to create tagged Docker images, based on the tags of your repository, and to upload your images automatically to Docker Hub. You must first define your Docker Hub username and password in Settings > CI/CD > Variables.
Then you can add a job that will only run when you push a new git tag to GitLab:
```yaml
build:tagged:
  only:
    - tags
  image: docker:19.03.12
  stage: build
  services:
    - docker:19.03.12-dind
  before_script:
    - docker info
  script:
    - docker build -t <myapp>:$CI_COMMIT_TAG .
    - docker tag <myapp>:$CI_COMMIT_TAG <myapp>:latest
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
    - docker push <myapp>:$CI_COMMIT_TAG
    - docker push <myapp>:latest
```
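One detail worth noting: passing the password with `-p` exposes it on the command line, and recent Docker versions print a warning about it. A slightly safer variant of the login line reads the password from stdin instead:

```yaml
    # avoids the password appearing in the process list
    - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
```

The `$DOCKER_USERNAME` and `$DOCKER_PASSWORD` variables are the ones defined above in Settings > CI/CD > Variables; marking them as "masked" there also keeps them out of the job logs.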