Packaging your work in a box to ship safely anywhere, a.k.a. Containers
In the shipping industry, it used to take weeks to load and unload ships while trying to make the best use of space, because goods came in uneven shapes and with different handling instructions. This kept the ship and its cargo immobile for weeks and required a lot of manpower, adding extra cost to the industry.
Packaging goods in evenly shaped boxes made loading and unloading much faster and reduced damage during shipping. This practice of packing goods into standard boxes before shipping is called containerisation.
The concept of containerisation in the software industry is beautifully borrowed from the shipping industry: software is packaged into similar kinds of boxes that can be shipped anywhere to run. When you box software, you also package all the required libraries and runtime environments with it, so it can run as an independent, isolated unit.
Benefits of containerisation
- Portability: Containers make deployment easy across different platforms. Deploying is as simple as running a new container and routing users to it, and it can even be automated by orchestration tools. Since a container is packaged with all its required dependencies, it is easy to run on different operating systems as well. Without containers, we have to prepare the hosting environment ourselves: the runtimes, libraries, and OS needed by the application. With containers, the host needs only a single mechanism that can run the containers you provide, no matter what is inside them. You may use .NET Core, Java, Node.js, PHP, Python, or another development tool: it doesn't matter as long as your code is containerised.
- Solves dependency conflicts: When code is deployed in a container, it is isolated from other applications running on the same host. So if two applications require different versions of the JRE or Python, both can be hosted on the same machine. This is a great time-saver whenever applications need different runtime environments.
- Allows easy scaling up: Once an image is created for the code, it can be deployed as multiple instances on the same host to distribute the load, or hosted on different machines. Only a load balancer needs to be configured to route traffic to the new instances.
- Allows seamless upgrades: When we need to upgrade the runtime, it can be done in a phased manner: deploy upgraded images to a few hosts, route traffic there, and then upgrade the rest. Every application can be upgraded independently.
- Enhanced security: Every application runs in its own container and so gets its own layer of isolation.
- Faster experimentation: If we need to explore a new component that is available as a Docker image, experimenting becomes very easy. It can be run as an isolated Docker container with all of its prerequisites already included.
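As a small illustration of the dependency-isolation and experimentation points above, the commands below run two different Python runtimes side by side on one host and spin up a throwaway Redis instance. This is a sketch that assumes a working Docker installation; the image tags are official Docker Hub images, and the container name `scratch-redis` is just an example.

```shell
# Two isolated containers with different Python runtimes on the same host;
# neither needs Python installed on the host itself:
docker run --rm python:3.8-slim python --version
docker run --rm python:3.12-slim python --version

# Try out a new component without installing anything locally,
# e.g. a disposable Redis instance for experimentation:
docker run --rm -d --name scratch-redis -p 6379:6379 redis:7
docker stop scratch-redis   # --rm removes the container once it stops
```

Because each container carries its own runtime, the version conflict that would occur on a shared host simply never arises.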
Phases and tools involved in the process
As per the diagram above, there are 3 main phases from packaging to running:
- Creating images: Using Docker commands, code is packaged into Docker images.
- Publishing images: Before these images can be used anywhere, they need to be published to a central registry, so that they can be pulled from there and run anywhere.
- Running images: There are 2 ways images can be run. One way is to run images independently, without an orchestrator, in which case the system will not scale by itself. The other way is to use orchestration tools like K8s (Kubernetes) or Docker Swarm, where we configure once when the system should scale up or down and the criteria for system health, and the orchestrator takes care of scaling instances as required.
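The three phases above can be sketched with the standard Docker (and, for orchestrated runs, kubectl) commands below. The registry host `myregistry.example.com` and the image name `myapp` are placeholders, and the build step assumes a Dockerfile already exists in the current directory.

```shell
# Phase 1 - Creating an image from the code in the current directory
# (assumes a Dockerfile is present; names/tags here are examples):
docker build -t myregistry.example.com/myapp:1.0 .

# Phase 2 - Publishing the image to a central registry so it can be
# pulled and run anywhere:
docker login myregistry.example.com
docker push myregistry.example.com/myapp:1.0

# Phase 3a - Running the image standalone, without an orchestrator:
docker run -d -p 8080:80 myregistry.example.com/myapp:1.0

# Phase 3b - Or running it under an orchestrator such as Kubernetes,
# which then manages scaling and health for you:
kubectl create deployment myapp --image=myregistry.example.com/myapp:1.0 --replicas=3
```

In the standalone case, scaling means starting more containers yourself and pointing a load balancer at them; in the orchestrated case, changing the replica count (or an autoscaling rule) is enough.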
When not to consider containerisation
- Do not use containers if you need to use a different operating system or kernel: Containers share the host's kernel, so images built for Linux will not run natively on Windows hosts; if the host OS kernel is different, the same image will not work.
- Do not use Docker if you have a lot of valuable data to store: By design, all files created inside a container are stored on its writable container layer. It may be difficult to get that data out of the container if a different process needs it. The writable layer is also tied to the host machine the container is running on, so the data cannot easily be moved elsewhere. Worse, everything stored in that layer is lost once the container is removed, so you have to think of ways to persist your data somewhere else first. To keep data safe in Docker, you need an additional mechanism, Docker volumes. This works, but it is still somewhat clumsy and adds operational overhead you must plan for.
If you know more scenarios where Docker/containers might not be very useful, please add them in the comments.