Let’s say you just started at a new company or you discovered a handy new open source library and you’re excited to get running. You git clone the code, search for install instructions, and come up empty. You ask your co-workers where you can find documentation, and they laugh. “We’re agile, we don’t waste time on documentation.” Everyone remembers that setting things up the first time was painful, a hazing ritual for new hires, but no one really remembers all the steps, and besides, the code has changed and the process is probably different now anyways.
Docker containers start and stop so quickly, and are so lightweight, that you could easily run a dozen of them on your developer workstation (e.g. one for a front-end service, one for a back-end service, one for a database, and so on). But what makes Docker even more powerful is that a Docker image runs exactly the same way no matter where you run it. So once you’ve put in the time to make your code work in a Docker image on your local computer, you can ship that image to any other computer and be confident that your code will still work when it gets there.
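As a sketch of what that packaging step looks like, here is a minimal Dockerfile for a hypothetical Go service (the app name, paths, and port are assumptions for illustration, not from the talk):

```dockerfile
# Hypothetical example: build on the community-maintained golang base image
FROM golang:1.13

# Copy the app source into the image and compile it
WORKDIR /go/src/my-go-app
COPY . .
RUN go build -o /usr/local/bin/my-go-app .

# Run the service when the container starts
EXPOSE 8080
CMD ["my-go-app"]
```

Everything the app needs — OS, toolchain, dependencies — is captured in the image, which is why it behaves the same on any machine that can run Docker.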
Once you get your Docker image working locally, you can share it with others. You can run docker push to publish your Docker images to the public Docker registry or to a private registry within your company. Or better yet, you can check your Dockerfile into source control and let your continuous integration environment build, test, and push the images automatically. Once the image is published, you can use the docker run command to run that image on any computer — another developer’s workstation, a test environment, or production — and you can be sure the app will work exactly the same way everywhere without anyone having to fuss around with dependencies or configuration. Many hosting providers have first-class support for Docker, such as Amazon’s EC2 Container Service and Google Kubernetes Engine (GKE).
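A sketch of that build-push-run workflow with the docker CLI (the registry account and tag are made up for illustration):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t my-go-app .

# Tag it and push it to a registry (Docker Hub or a private one)
docker tag my-go-app mycompany/my-go-app:1.0
docker push mycompany/my-go-app:1.0

# On any other machine, pull and run the exact same image
docker run -p 8080:8080 mycompany/my-go-app:1.0
```

The image tag (here 1.0) pins an exact, immutable artifact, so the bits that ran in CI are the same bits that run in production.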
Once you start using Docker, it’s addictive — it’s liberating to be able to monkey around with different Linux flavors, dependencies, libraries, and configurations, all without leaving your development workstation in a messy state. You can quickly and easily switch from one Docker image to another (e.g. when switching from one project to another), throw an image away if it isn’t working, or use Docker Compose to work with multiple images at the same time (e.g. connect an image that contains a Go app to another image that contains a MySQL database). And you can leverage the thousands of open source images in the public Docker registry. For example, instead of building the my-go-app image from scratch and trying to figure out exactly which combination of libraries makes Go happy, you could use the pre-built golang image, which is maintained and tested by the Docker community.
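A sketch of what that Compose setup could look like — a Go app wired to a MySQL database (service names, ports, and the throwaway password are assumptions for illustration):

```yaml
# docker-compose.yml: a hypothetical Go app connected to MySQL
version: "3"
services:
  app:
    image: mycompany/my-go-app:1.0
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db          # Compose makes each service reachable by its name
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker-compose up starts both containers on a shared network; tearing it all down again is a single docker-compose down, leaving your workstation clean.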
Once you are using Docker containers, the next question is how to start and scale containers across multiple Docker hosts, balancing the load across them. Enter Kubernetes: it adds a higher-level API to define how containers are logically grouped, letting you define pools of containers, load balancing, and affinity.
Kubernetes is an open source project for managing a cluster of Linux containers as a single system: it runs Docker containers across multiple hosts and offers co-location of containers, service discovery, and replication control. It was started by Google and is now supported by Kismatic, Mesosphere, Microsoft, Red Hat, IBM, and Docker, among many others.
Google has been using container technology for over ten years, starting over 2 billion containers per week. With Kubernetes, it shares that container expertise, creating an open platform to run containers at scale.
Kubernetes is an amazing project and a highly promising way to manage Docker deployments across multiple servers and simplify the execution of long-running, distributed Docker containers. By abstracting infrastructure concepts and working with desired state instead of individual processes, it makes clusters easy to define and provides self-healing capabilities out of the box. In short, Kubernetes makes managing fleets of Docker containers easier.
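To make the state-versus-processes point concrete, here is a minimal Kubernetes Deployment (the names and image are illustrative assumptions): you declare a desired state — three replicas of an app — and Kubernetes continuously converges the cluster on it, replacing any container that dies:

```yaml
# deployment.yaml: declare the desired state, not the steps to reach it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-go-app
spec:
  replicas: 3                # Kubernetes keeps 3 copies running at all times
  selector:
    matchLabels:
      app: my-go-app
  template:
    metadata:
      labels:
        app: my-go-app
    spec:
      containers:
        - name: my-go-app
          image: mycompany/my-go-app:1.0
          ports:
            - containerPort: 8080
```

Applying this with kubectl apply -f deployment.yaml is all it takes; if a node or container fails, the replication controller notices the actual state has drifted from the desired state and starts a replacement — the self-healing behavior mentioned above.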
Patrick hopes that in the future, more and more companies will package their tech stacks as Docker images so that the on-boarding process for new hires is reduced to a single docker run or docker-compose up command. Similarly, he hopes that more and more open source projects will be packaged as Docker images, so that instead of following a long series of install instructions in the README, you just run docker run and have the code working in minutes.