Setting up Docker


Docker forces you to think hard about which parts of your application are immutable and which are mutable. The immutable parts are built as base images, while the mutable parts run as containers (and can be persisted as images). For instance, you may decide to lock down the OS version and Java version for a particular development release cycle: this is the immutable part, so you construct your application's base image from it. Your application code is then added on top of the base image and run as a container.
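As a rough sketch (the image, tag, and file names here are hypothetical), the split might look like two Dockerfiles, one for the locked-down base and one for the application built on top of it:

# --- Dockerfile.base: the immutable part, rebuilt once per release cycle ---
FROM ubuntu:14.04
# Pin the Java version for this development cycle
RUN apt-get update && apt-get install -y openjdk-7-jdk

# --- Dockerfile: the mutable part, rebuilt on every code change ---
FROM myorg/java-base:dev-cycle-1
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]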

Later, when development and testing are complete and you are ready to go into production, you may need to retest the application against the latest OS patches and Java updates. At this point you start from a new version of the base image and rerun the tests. If they pass, this becomes the new baseline for your builds.
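A minimal sketch of that re-baselining step, reusing the hypothetical names from above:

# Rebuild the base against the latest OS/Java patches
docker build --pull -t myorg/java-base:candidate -f Dockerfile.base .
# Rebuild the app on the candidate base (with the application Dockerfile's
# FROM line pointing at the candidate tag) and rerun the tests
docker build -t myapp:candidate .
docker run --rm myapp:candidate /opt/app/run-tests.sh
# If the tests pass, promote the candidate to the new baseline
docker tag myorg/java-base:candidate myorg/java-base:dev-cycle-2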

Along similar lines, if your database contains a pre-defined schema and/or pre-loaded data (immutable), this can be designed as a data-only volume and mounted read-only on the container. Any updates made to the database during the application test run then remain part of the container's filesystem layer.
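One way to set that up with a data-only container (all names here are illustrative):

# Data-only container holding the immutable seed schema/data
docker create -v /seed --name dbseed mydb-image true
# Mount the seed read-only into the test container; the live database
# files stay in the container's writable layer, so changes made during
# the test run persist with that container
docker run --volumes-from dbseed:ro --name dbtest mydb-image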


There are a lot of questions packed in above. From what I understand, you are trying to create an environment for developers and also to integrate Jenkins with Docker.

Here is what I've followed to deal with the same situation:

1) To start off, we create an image (say, myimage) which includes all the dependencies: the database, Java, etc. This image is our base image and can be reused any number of times by a developer.

2) A developer writes his code and merges it into Git.

3) Create a Jenkins job which produces a snapshot file (e.g. a .zip) that includes all the dependencies, such as JARs and packages.

4) This .zip is moved onto the target server using the SSH plugin in Jenkins.

5) Jenkins then triggers a Docker build whose Dockerfile copies the .zip into an image based on myimage and brings your web app up and running.

6) Keep all your tests in a directory inside the image and have the Dockerfile (or a startup script) trigger them for you.

7) Make sure that when you trigger a new build in Jenkins, the previous Docker container is stopped and removed.

You can use mount points (docker run -v) to move files in and out of the container; a rough sketch of this cycle follows.
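For instance, a Jenkins shell build step tying steps 4-7 together might look roughly like this (the container, image, and path names are placeholders):

# Stop and remove the previous container (step 7)
docker stop webapp || true
docker rm webapp || true
# Build a new image whose Dockerfile copies the .zip into myimage (step 5)
docker build -t webapp:latest .
# Start the web app and run the tests baked into the image (step 6)
docker run -d --name webapp -p 8080:8080 webapp:latest
docker exec webapp /tests/run-all.sh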

Hope this answer helps with whatever you are looking for. This worked for me; let us know if it works for you as well. All the best.


Crash Course

Conceptually you can think of a Docker container as a newly created VM containing the bare essentials of an OS.

A Docker image is like a VM template, and containers are the live instances of the image. We specify how to build an image using a Dockerfile, much like a Vagrantfile. It lists the libraries, programs, and configuration needed to run whatever application we would like to run in a container.

Consider this simplified example from nginx:

# Choose base image (in this case the Ubuntu OS)
FROM dockerfile/ubuntu

# Install nginx
RUN apt-get update && apt-get install -y nginx

# Define the default command to run when the container starts,
# i.e. the nginx webserver
CMD ["nginx"]

# Expose ports, allowing our webserver to be reached from outside the container
EXPOSE 80
EXPOSE 443

The Dockerfile is really simple: a quick installation and some minor configuration. The real nginx Dockerfile has a few more optimizations and configuration steps, like setting permissions, environment variables, etc.
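To try it out, you would build the image and start a container from it; a minimal sketch (the tag name is arbitrary):

docker build -t mynginx .
docker run -d -p 80:80 -p 443:443 mynginx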

Why Are Images Useful?

The usefulness of images/containers is that they can be shared around and deployed on any machine with a running Docker daemon. This is really useful for development workflow: instead of trying to replicate production, staging, and dev environments to reproduce bugs, we can save the container as an image and pass it around.
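For example, you might snapshot a container that reproduces a bug and hand it to a colleague (the names here are made up):

# Snapshot the running container as an image
docker commit buggy-container myteam/bug-repro
# Export it to a tarball and copy it to another machine
docker save -o bug-repro.tar myteam/bug-repro
# On the other machine: load it and poke around
docker load -i bug-repro.tar
docker run -it myteam/bug-repro bash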

JVM stuff

Docker images are like building blocks, sharing the layers that are the same and only adding the bits that are new (which means less disk space usage for us!). If you have multiple applications that require a JVM, you would use a common Java base image. It does mean multiple instances of the JVM end up running, but that is a tradeoff/design decision you make when choosing Docker.
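As a sketch, two applications sharing the same Java base image (image and file names are illustrative) would each have a Dockerfile like:

# Dockerfile for app A
FROM java:7
COPY app-a.jar /app.jar
CMD ["java", "-jar", "/app.jar"]

# Dockerfile for app B -- the java:7 layers are stored on disk only once
FROM java:7
COPY app-b.jar /app.jar
CMD ["java", "-jar", "/app.jar"]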

Data Containers

These are a bit confusing: they basically allow your data to become portable, just like your application containers. They aren't necessary, simply another design decision; you can still export DB data to CSV and use all the usual methods of moving it around from within your application container. I personally don't use data containers in my workflow, as I'm dealing with TBs of data and data portability is not a huge concern. I use volumes instead: you can tell Docker to use a host filesystem directory to store its data in. This way the data is stored persistently on the host, irrespective of the lifetime of the Docker container or image.
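For example, to keep database files on the host so they outlive any container (the host path and image are illustrative):

docker run -d -v /srv/dbdata:/var/lib/postgresql/data postgres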

Build

We'll discuss this first, then the developer workflow will make more sense. There are really two main ways of going about this:

If continuous integration is your goal, I find volumes are the way to go. Your Docker containers would use volumes to mount their application source code from the host filesystem. This way all you'd have to do is pull the source code and restart the container (to ensure the changes to the source code are picked up), then run your tests. The build process is really no different from one without Docker. I prefer this approach because it's fast, and because the application's dependencies, environment, etc. rarely change, so rebuilding the image is overkill. Mounting source code also means you can make changes in place if times are desperate.
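A rough sketch of that cycle, assuming hypothetical names (app-container, myapp-env, /srv/app):

# Start the container once, with the source mounted from the host
docker run -d --name app-container -v /srv/app:/opt/app myapp-env
# On each CI run: pull, restart, test
cd /srv/app && git pull
docker restart app-container
docker exec app-container /opt/app/run-tests.sh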

The slower alternative, like the one you described, is to 'bake' the source code into the image at build time. You would pull new source code, build the image, (optionally) push it to a private Docker registry, deploy the container, and then run your tests. This has the advantage of being totally portable, but the turnaround time of rebuilding and distributing the image for every small code change can be painful.
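A sketch of that 'bake' cycle with placeholder names:

git pull
docker build -t myapp:$(git rev-parse --short HEAD) .
# optional: docker push my-registry.example.com/myapp:<tag>
docker run -d --name myapp-test myapp:$(git rev-parse --short HEAD)
docker exec myapp-test /app/run-tests.sh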

Workflow

Docker's purpose is to specify the environment applications run in. From this perspective, developers should continue to work on application code as normal. If a developer wants to test code in a container, they build an image locally and deploy a container from it. If they want to test against a production or staging image, you can distribute that to them.
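Concretely, a developer's local test loop might look like this (the names are examples):

docker build -t myapp-dev .
docker run --rm -it myapp-dev
# Or pull the staging image that was distributed to the team
docker pull my-registry.example.com/myapp:staging
docker run --rm -it my-registry.example.com/myapp:staging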

Lastly, the simplest pro tip for working with containers :) To log in to a container and explore what's going on, you can run:

docker exec -it container-name bash

Disclaimer

I'm aware of some oversimplifications in my explanations. My goal was to introduce as little confusion and as few new terms as possible; I find that only complicates things, taking away from the core ideas, use cases, etc., which the OP seemed most concerned with.