11 December 2018

Alain en Gino bezoeken DockerCon in Barcelona

Conference Report: DockerCon Europe 2018, 3-5 December 2018, CCIB Barcelona, Spain

Alain van Hoof - Senior IT Infrastructuur Specialist - Working Spirit ICT BV

Barcelona, Spain: that sounded sweet and sunny when I heard the location of DockerCon Europe 2018. After an easy flight from Rotterdam/The Hague Airport, where waiting lines are unknown, the plane was able to leave 10 minutes early because everyone had boarded.

I arrived in Barcelona on Sunday afternoon and it was sunny indeed. Using public transport to reach the hotel took some time but, again, almost no effort. And there I was at my hotel, looking at a huge DockerCon sign and, behind that, the sea. Without hesitation I put on my running shoes and enjoyed a 5.6K run along the beach in the setting sun. After using advanced internet techniques (WhatsApp) I met up with the Working Spirit colleague who was also visiting DockerCon to do the unavoidable stroll on La Rambla. Taking some extra turns to avoid the tourist traps, we ended up in an Israeli restaurant eating a kosher hamburger while live Spanish guitar music played הבה נגילה (Hava Nagila) and Bamboléo. That last song was composed by the Gipsy Kings, who were born in France; their parents were mostly gitanos, Spanish gypsies who fled Catalonia during the 1930s Spanish Civil War (Wikipedia). After these multicultural impressions it was time to go to sleep, but not before watching part of an American Western on the huge hotel room TV. Luckily my colleague had explained how to use the remote to disable the Spanish dub and enable the original English audio.


- 3 December 2018

09:00 - 10:00 Registration

Being only a one-minute walk from the hotel where I was staying, I was able to arrive at the conference entrance at exactly 09:00. After the electronic registration I was given a badge and a Docker backpack. I soon found out that it had been a very good idea to pre-register for the workshops. Because I did, I was able to attend three fully packed workshops. Others who didn't had to wait in line for last-minute access when seats were free; most of them didn't get in.


10:00 -12:00 Workshop: Docker Application Package

Workshop URL: https://github.com/silvin-lubecki/dap-workshop-dceu18

Just before the workshop started, the displays already showed instructions on how to set up the workshop environment using play-with-docker.com and which Slack channel to use for questions. That way I was already up and running when the workshop started. The workshop was in English but presented by two French and one English Docker employee, so it was sometimes a bit hard to immediately understand what was said. The instructions on GitHub, however, were very clear, so that was no problem. Each exercise started with an introduction and an explanation of the concepts.

To create Docker application packages the presenters have created "docker-app" (https://github.com/docker/app), which basically adds variables to a Docker Compose file. This way a generic docker-compose.yml can be used to create different deployments by changing some parameter/variable values. To add versioning and a description, the third part of a docker-app app is the metadata. Those three parts can either live in one file or in three separate files: the single file is ideally suited for sharing a docker-app app, the three separate ones for versioning and for keeping large setups readable and manageable. Using the options of the tool it is easy to create a docker-app app from a single Compose file, create a running stack using a rendered Compose file (with filled-in variables), or push the app to a registry. The more I learned about docker-app, the more I asked myself: is Docker trying to rebuild Helm charts? After reading the documentation I found the confirmation: "docker-app comes with a few other helpful commands as well, in particular the ability to create Helm Charts from your Docker Applications."
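As a sketch of the idea, a hypothetical single-file docker-app package could look like this: three YAML sections separated by "---" for metadata, the Compose template and the default variable values (names and values here are made up for illustration, and the exact layout may differ between docker-app versions):

```yaml
# hello.dockerapp - illustrative docker-app application package
version: 0.1.0
name: hello
description: "A variable-driven Compose application"
---
# the Compose template, with variables instead of hard-coded values
version: "3.6"
services:
  web:
    image: nginx:${nginx.version}
    ports:
      - "${web.port}:80"
---
# default values for the variables, overridable per deployment
nginx:
  version: latest
web:
  port: 8080
```

Rendering this with different values yields different concrete docker-compose.yml files from the same template.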

13:00 - 15:00 Workshop: Container Storage Concepts and How to Use Them

Workshop URL: https://github.com/donmstewart/docker-storage-workshop

Unlike the previous workshop, we used a Docker EE (Enterprise Edition) version of play-with-docker.com for this one. Again, the basics of Docker/Swarm/Kubernetes were skipped, for this was a workshop for attendees with intermediate Docker knowledge. What surprised me most in this workshop was the use of NFS. As a 20-plus-year UNIX admin I would have guessed NFS was at its end, but Docker is making it trendy again.

We started simple, before working with a shared storage solution: two containers running on one host, both accessing the same locally defined storage volume. A file created in the one container is immediately visible in the other. Then, on two hosts, a local NFS volume is defined pointing to the NFS server running on the master node. When each host runs a container using that volume, a file created on the one host is again immediately visible on the other. Creating volumes on every host by hand is not a scalable solution; using Swarm or Kubernetes to deploy the volumes is. Using a docker-compose file that defines both the container(s) and the volume(s) they need, four web server replicas are deployed, sharing the same NFS server and file path for the index.html. But when the exposed load-balanced service is accessed via HTTP, an error is displayed because no index.html file exists yet. This is solved by starting a deploy container on one of the hosts and mounting the NFS path in that container. Now we can create and edit the index.html file, and yes, a refresh in the browser shows the created web page. Using the Docker EE UCP (Universal Control Plane), the same should be possible with Kubernetes as the scheduler. Unfortunately, when everything is set up, including the volume and the volume claim (the main difference regarding volumes between Swarm and Kubernetes), the web servers fail to spin up. Something to do with tainted nodes, I found out, but the workshop was ending, so there was no time to bug-fix.
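A compose-defined NFS volume shared by replicas, as described above, can be sketched roughly like this (the server address, export path and image are made up for illustration):

```yaml
version: "3.6"
services:
  web:
    image: nginx:alpine          # illustrative web server image
    deploy:
      replicas: 4                # four replicas sharing the same NFS path
    volumes:
      - webdata:/usr/share/nginx/html
volumes:
  webdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"   # hypothetical NFS server on the master node
      device: ":/export/web"             # hypothetical export path
```

Because the volume is defined in the stack file, Swarm creates it on every node that runs a replica, instead of it having to be created on each host by hand.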

15:30 - 17:15 Workshop: Kubernetes Security Workshop

Workshop URL: https://github.com/spiddy/kubernetes-security-workshop

The setup of this workshop went less smoothly than the previous two, to say the least. It makes you wonder what infrastructure play-with-docker.com is built on, as it has some scaling issues. The workshop documentation also explained how to do the exercises with minikube. I had already used minikube on my laptop, so the VirtualBox image was already downloaded; because of this I was set up fast and able to type along with the presented exercises. The basics of Kubernetes were not skipped but explained. I hoped hearing that again was because of the security insights (or lack thereof) we would get from it, but for me it did not add anything extra. The introduction to Istio and Knative that was also given, on the other hand, made a lot more sense to me.

Both kubeadm on play-with-docker.com and minikube create a Kubernetes cluster that secures the communication between the various components using certificates. These certificates can be found and examined in the filesystem. Next, a simple web service deployment was created showing a funny animated GIF. Executing a shell inside the container and changing the URL showed how easy it is to "hack" this deployment. Using the YAML path spec.containers.securityContext it is possible to make things more secure, like running things as a user with id 1000 instead of root. Not mentioned during the workshop, but in my opinion also a good idea: who needs a shell in a container running a web server? Just remove it to lower the attack surface (and, of course, the debugging possibilities). Using Knative as a layer between the containers and Istio, extra security is added.
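A minimal sketch of such a hardened Pod spec, with an illustrative image and the non-root user id mentioned above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-web            # hypothetical Pod name
spec:
  containers:
    - name: web
      image: nginx:alpine       # illustrative image; it would need to listen on a port >1024 to run unprivileged
      securityContext:
        runAsUser: 1000               # run as uid 1000 instead of root
        runAsNonRoot: true            # refuse to start if the image insists on root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true  # shrinks the attack surface further
```

Each of these settings closes off one of the "hacks" demonstrated in the workshop: even with a shell in the container, an attacker is no longer root and cannot modify the filesystem.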

- 4 December 2018

A small walk in the morning sun to the conference center just around the corner reminded me of the 5.6K run I did Sunday evening when the sun was setting, and that made me wonder: is it always sunny in Barcelona? The exhibition area was now open, so I could look at the ecosystem around Docker and get my first stickers.

09:00 -11:00 General Session I

In a room with more than 2000 people, Steve Singh, the CEO of Docker Inc., welcomes us to DockerCon Europe 2018 with a "Hola Barcelona". Next, the obligatory slides with the big numbers appear: 5+ billion downloads, 3+ million developers and more impressive numbers. The marketing doesn't stop there, because of course we want to know how Docker can make your business ready for the challenges of today. Software eats everything, fast time to market, user-centric products and Agile: your possibilities are endless when you use Docker Enterprise. Guest Carlos Gonçalves of Société Générale tells how Docker helped his company accelerate. Then the more technical part starts. Two employees of Docker play a small scenario on stage, but behind their laptops: using Docker, a Windows .NET 3.5 application running on Windows Server 2008 is containerized and ends up running on Windows Server 2016. The new partner of Docker Inc., MuleSoft, is introduced and another scenario is played by two technical chiefs. This time an API is built using MuleSoft products and connected to the .NET application, and an app on an Apple Watch is then connected to that. Chief Product Officer Scott Johnston shows a video of how Desigual was able to build a new point-of-sale system in a few months using Docker EE (Enterprise Edition). The next scenario played is the creation of a Docker App using the tool I was introduced to in my first workshop at DockerCon yesterday. As at all major Docker events, something new is announced. If you want to allow your developers to use Docker Desktop on their workstations/laptops in a controlled way, for example with the proxy already filled in and unchangeable, there is now Docker Desktop Enterprise. The scenario preceding the announcement involves a developer who spends a whole morning installing Docker on her company laptop and then finds out it was already installed by the IT department; clearly the communication between Ops and Dev is not optimal here.
The final scenario is about security. Using the Docker Trusted Registry (DTR), a Docker Enterprise feature, a security issue is fixed on stage. Fixed? No; unfortunately the demo gods are not pleased by the grapes (Las doce uvas de la suerte, https://en.wikipedia.org/wiki/Twelve_Grapes) and the new image does not get deployed. The General Session is closed with an award ceremony. Nice to see that the Dutch Ministry of Justice and Security is one of the runners-up.


11:30 - 12:00 Monitoring Containers in Docker Engine with Swarm

A walk around the exhibition area earlier this morning had led me to the Community Theatre, but now it was fully packed with people and there was no place to sit. That alone would not have been an issue, but the noise of the exhibition area was overwhelming the speaker's voice from where I was standing. I gave up after 5 minutes, considering that the screens were also unreadable from there. So, I missed this talk.

12:00 - 12:40 Container observability with eBPF

The employee of Sysdig who presented this made some statements about monitoring the behavior of containers. He argued against an agent in the container or a sidecar in the pod. All containers on a host share at least one thing: the kernel, and monitoring the kernel is something already available. Started as the Berkeley Packet Filter, BPF was mainly good at monitoring (and filtering) network packets. The extended Berkeley Packet Filter (eBPF) changed that, and a lot more can be monitored. The Sysdig tool controls an eBPF program that outputs a stream of system calls, and monitors those. The last part of the talk introduced us to Falco (http://falco.org/), a Cloud Native Computing Foundation sandbox project for container security that can also inspect Kubernetes audit events.

12:40 - 13:30 Lunch

Very well organized, so only very short waiting lines, and with a great variety of food. This lunch was great and prepared me for my next workshops.


13:30 - 15:30 Workshop: Logging and Monitoring

Workshop URL: https://github.com/56kcloud/Training/blob/master/DockerCon/readme.md

During the introduction of the workshop an overview of monitoring was given, where the big take-away for me was the statement that users don't care about monitoring; they care about availability, latency and reliability. And indeed, those things can be monitored, no, must be monitored, to know their state. An eye-opener was the fact that on top of a snowy mountain a git push/pull is faster than down in the valley; 56KCloud, the company that provided the workshop, is located in Switzerland, which is how they know. The workshop had two parts, as the title implies. Using the experience from the previous workshops I was able to set up a working play-with-docker environment fast and keep up with the workshop pace. In the first part we created an Elasticsearch cluster and used Kibana to inspect the logs that are sent from the containers. While containers are supposed to be ephemeral, logs are not, so it is important to store them somewhere. The ELK stack (Elasticsearch, Logstash and Kibana) is well suited for this and can be built as a Dockerized version on Docker Swarm from a GitHub repo. Using the GELF (Graylog Extended Log Format) driver in Docker, logs can easily be shipped to the ELK stack. With some simple commands that are clearly presented in the workshop material, a Kibana dashboard with information from the logs is the end result. The monitoring in the second part is done using cAdvisor, a tool originating from Google. Although cAdvisor has a dashboard, it only shows real-time information; to have historical information the data needs to be stored somewhere. For this we built a Prometheus stack (again from a Git repo) in Docker Swarm, and to get nice dashboards Grafana was also deployed. The end result was a dashboard with all the low-level information of the containers. We did all this in two hours, starting from scratch; clearly only possible within a container environment.
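Shipping container logs with the GELF driver is configured per service; a minimal compose sketch, with a hypothetical Logstash endpoint standing in for the workshop's ELK stack:

```yaml
version: "3.6"
services:
  app:
    image: alpine                      # illustrative image that just produces log lines
    command: sh -c 'while true; do echo "hello log"; sleep 5; done'
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.local:12201"  # hypothetical GELF input of the ELK stack
        tag: "demo-app"                                     # tag to filter on in Kibana
```

Everything the container writes to stdout/stderr is then shipped to the GELF endpoint, so the logs outlive the ephemeral container.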
Having played with Grafana and Prometheus before, I was able to show the workshop presenters something they didn't know: Grafana also has a /metrics URL, so Prometheus can scrape the performance of the dashboard tool itself. How cool is that: you can monitor the dashboard.
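On the Prometheus side, scraping Grafana's own /metrics endpoint takes only a small fragment in prometheus.yml (the job name and target hostname are illustrative):

```yaml
# prometheus.yml fragment: let Prometheus monitor the dashboard tool itself
scrape_configs:
  - job_name: grafana
    metrics_path: /metrics          # Grafana exposes its internals here
    static_configs:
      - targets: ["grafana:3000"]   # hypothetical service name; 3000 is Grafana's default port
```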

16:00 - 18:00 Workshop: Using Istio

Workshop URL: https://github.com/leecalcote/istio-service-mesh-workshop

I had heard about Istio before but never played with it. The first time I heard about it, about a year ago, someone said: "Keep an eye on this thing Istio, it will become more important." That clearly happened, as there were workshops about it at many DockerCons in 2018, including this one in Europe. Kubernetes has a lot built in, but networking is not one of those things. First we needed to install an overlay network so nodes and Pods can communicate; Weave is used for that. On top of this Istio is deployed: it creates a service mesh with benefits such as observability, security, load balancing and traffic shaping. Like kubectl can be used to control Kubernetes, istioctl can be used to control Istio. And because of its available plugins it allows for extensive monitoring with, for example, Prometheus. In the workshop we started building a complete stack with a lot of plugins enabled, and soon a Grafana dashboard was up and running, monitoring the microservices app included with Istio: Bookinfo. Istio adds a sidecar to the Pods that need to be monitored, a different approach from, for example, Sysdig, who advised avoiding sidecars (see "Container observability with eBPF"). But because Istio is more than just monitoring, it needs to be in control of the network traffic in and out of a Pod, which is easily done with a sidecar. The workshop wanted to cover a bit too much for the two hours: just when I was able to see the APM (Application Performance Monitoring) traces generated by Jaeger, time was up, so the workshop had to end while there were still 4 of the 10 assignments to do. Luckily, like all the workshops, this one is available on GitHub, so "do try this at home" is possible.
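Traffic shaping is the kind of thing the sidecar makes possible. As a sketch, a VirtualService for the Bookinfo reviews service could split traffic between two versions like this (the v1/v2 subsets would be defined in a separate DestinationRule; the weights are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews              # the Bookinfo reviews service
  http:
    - route:
        - destination:
            host: reviews
            subset: v1     # 90% of traffic stays on the old version
          weight: 90
        - destination:
            host: reviews
            subset: v2     # 10% goes to the new version (a canary rollout)
          weight: 10
```

Because every Pod's traffic passes through the Istio sidecar, this weighted routing works without any change to the application itself.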


19:00 - 22:30 DockerCon Party

Four Barcelona beach clubs next to each other were reserved for the "DockerCon Party". Each club had its own type of (live) music and type of food. Besides a Dutch group with someone I knew, I talked with participants from Sweden, Germany and Switzerland, watched some live performances and enjoyed the food. And indeed, for those who are DNS (DevOps Needs Sushi), there was sushi somewhere. Time flies when you're having fun, so the party ended way too soon, and before I knew it I was back at the hotel using the conference-provided buses.

- 5 December 2018

A great idea by the organizers of DockerCon to start the day after the party a bit later. This allowed me to check out the gym in the hotel and work on my core stability before taking my time at the hotel breakfast. The Jamón Ibérico that was available for breakfast was again a major ingredient of my plate. And after the small walk to the CCIB (Centre de Convencions Internacional de Barcelona) I was ready for the last day of DockerCon.

09:30 - 11:00 General Session II

The VP of Product Development at Docker Inc., Banjot Chanana, gives a recap of the general session of the previous day and talks about the freedom of choice that Docker is offering. To me it starts to sound like you can choose any color, as long as it uses Docker. Of course, the more technical details of Docker are not ignored in this second General Session. Two engineers of Docker extend on the scenarios played the previous day: they are going to deploy to production using the CI/CD possibilities of Docker, and because Docker Enterprise now supports both Swarm and Kubernetes on the same stack, either can be used. Unfortunately, the demo does not progress as it should; it looks like the environment variables used are wrong and the app fails to deploy. I remember how, about twenty years ago, using environment variables could have unexpected outcomes when compiling software. Why is this still an issue? Using a prerecorded session of the scenario, the two technicians saved the presentation with a successful ending. Next, two employees of Tele2 tell about their journey to a much faster rollout using Docker. Docker would not have been where it is now without the Docker community: a Senior Program Manager and a Docker Developer Relations manager give away an award to a Docker Captain chosen by the other Docker Captains, and again emphasize the importance of the community around Docker. Kal De, CTO and EVP of Product Development, appears on stage to explain how the roots of Docker lie in open source. The picture shown of the open source products and initiatives that make up the history of Docker clearly shows that though the roots lie in open source, the future is maybe different, in my opinion, especially when the Enterprise products are added to that historical timeline. Docker Enterprise can provide the simplicity, scale and trust that companies need.
The final part of the general session is a scenario that demonstrates the possibilities of Docker App to deploy a cloud native application in a few clicks. 

11:05 - 11:20 Transparent Execution of Scientific Workflows in Docker Containers  

This time I was able to get a good seat in the Community Theatre, where I could read the slides and hear the presentation well. Running applications on a supercomputer like the one in Barcelona, which is housed in a former church, is not an easy task. Experiments are difficult to reproduce and reuse, especially on a different supercomputer; using containers could solve this. For this the Workflows and Distributed Computing team of the Barcelona Supercomputing Center created the COMPSs programming framework. After analyzing the workflow of the application and detecting the parts that execute serially and in parallel, the experiment can be deployed on a supercomputer using Docker Swarm. The framework is not limited to supercomputers: grid and cloud are options too. This short but impressive talk left me with some questions. As at all talks (besides the general sessions), the speaker was very accessible afterwards, so I could ask my MPI and GPU questions directly to Jorge Ejarque. No, Docker is not the answer to those, but Singularity (https://www.sylabs.io/singularity/) is.

12:00 - 12:50 Use Cases and Practical Solutions for Docker Container Storage on Swarm and Kubernetes

The traditional bearded storage guru gives an overview of storage: file, block and object; spinning disk, SSD and NVMe. What are the best use cases for these? There is a lot of fake news about that. Containers are ephemeral and don't have persistent storage; indeed, not in the container, but outside there needs to be, take for example a database running in a container. Again I was surprised by the extensive use of NFS in the solutions. Docker Swarm and Kubernetes have different ways of connecting to the storage: Docker uses the Docker Volume Plugin, Kubernetes has various options. Using Microsoft Azure, a Docker volume mount is demonstrated on a container running MS-SQL; then, using AWS, a Kubernetes volume mount is demonstrated. Note the extra step for Kubernetes, where pods need a Volume Claim on a Persistent Volume.
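That extra Kubernetes step can be sketched as a PersistentVolumeClaim that a pod then mounts by name (the claim name, size and storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data            # hypothetical claim for the MS-SQL demo
spec:
  accessModes:
    - ReadWriteOnce           # one node mounts it read-write, typical for a database
  resources:
    requests:
      storage: 8Gi            # illustrative size
  storageClassName: gp2       # e.g. an AWS EBS storage class; name depends on the cluster
```

The pod spec then references the claim in its volumes section, and Kubernetes binds the claim to a matching Persistent Volume, either pre-created or dynamically provisioned by the storage class.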

12:50 - 14:00 Lunch

Again very well organized and with great food, which included vegetables. The photo I took of the huge number of tables with people eating ended up as a small printout in the Snap-Tag-Grab booth in the exhibition space.

14:00 - 14:40 Mission Critical Migration to Multi-Cluster Kubernetes

Citizens Bank started with one Docker Enterprise cluster to deploy the containers created on the developer desktops. To separate the various environments, more clusters were needed and implemented; in the end, the services supporting the environments were also running in their own Docker cluster. Because of the support for Kubernetes in Docker Enterprise, the transition of applications to the Kubernetes platform could be realized. Using Helm charts (https://helm.sh/) made adding extra services very easy: the Elasticsearch stack, for example, can be deployed with a single command, while a complex docker-compose file would be needed to deploy the same on Docker Swarm. For monitoring on both OS and application level, Instana (www.instana.com) is used.

15:00 - 15:30 Extending Kubernetes, Moving Compose on Kubernetes from a CRD to API aggregation

Compose on Kubernetes allows a docker-compose file to be deployed on a Kubernetes cluster. There are two ways to extend Kubernetes functionality; one of them is a Custom Resource Definition (CRD), and that is what Compose on Kubernetes was using. The preferred way is to aggregate an API with the Kubernetes cluster API, and the transition to this is what the presenters described. Developed within Docker Inc., Compose on Kubernetes is now open source (https://github.com/docker/compose-on-kubernetes), so everyone can take part in its development. But the previous talk I attended made me wonder: do you need Compose on Kubernetes when Helm charts are an easier alternative?
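With Compose on Kubernetes installed, a compose file effectively becomes a Stack resource served by the aggregated API. A rough sketch of what such a resource might look like (the exact spec shape and field names are illustrative, taken from my understanding of the v1beta2 API):

```yaml
# A compose-style stack expressed as a Kubernetes resource
apiVersion: compose.docker.com/v1beta2
kind: Stack
metadata:
  name: hello                  # hypothetical stack name
spec:
  services:
    - name: web
      image: nginx:alpine      # illustrative service
      ports:
        - published: 8080
          target: 80
```

The controller behind the API translates this into native Deployments and Services, which is why the same compose file can target either Swarm or Kubernetes.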

16:00 - 18:00 Workshop: Networking in Swarm and Kubernetes for Docker Enterprise

Workshop URL: https://github.com/GuillaumeMorini/docker-networking-workshop

Starting with an introduction to container networking with very clean and clear diagrams, for the first time it became clear to me how Docker Swarm and Kubernetes differ in the way they approach networking, besides the fact that Kubernetes has no networking by default. And while the hands-on part of the workshop started with exploring the basics, soon I was building Swarm overlay networks and pinging from container to container across different nodes. The Kubernetes part focused on the Docker Enterprise Edition implementation, which uses Calico (https://www.projectcalico.org/). Using Kubernetes NetworkPolicy resources, complete control is possible over the communication within the cluster and in and out of it. This was nicely demonstrated by installing an app that showed the communication paths between various microservices. Using the kubectl command and simple YAML files, the communication rules can be implemented; when doing so, the app showed the new or removed network communication paths between the microservices. Using my previous experience with the workshops, I was able to finish this one on the play-with-docker Enterprise environment. This workshop was a great way to end DockerCon Europe 2018: it gave me technical insights and allowed me to "see things for myself".
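A minimal sketch of such a NetworkPolicy, with hypothetical labels and namespace, allowing only the frontend pods to reach the api pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # hypothetical policy name
  namespace: demo               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                  # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend     # only pods with this label may connect
      ports:
        - protocol: TCP
          port: 8080            # and only on this port
```

Once a pod is selected by any policy, all other ingress traffic to it is denied, which is exactly the kind of path-by-path control the demo app visualized.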

Goodbye Barcelona.

While walking back to the hotel I could see that DockerCon had really ended: the huge DockerCon Europe Barcelona 2018 sign was already being removed and the exhibition area was being moved into trucks. And although the sun had already set, I decided to take another run, 6K this time, along the boulevard. While running I remembered the three days of interesting workshops I had done, the sessions I had attended and the people I had met. When I turned back after 3K I recognized the beach clubs where the DockerCon party of the night before had been. The bus ride back then had made me think it was quite far; the run made me realize it was much closer than expected.

The next day my flight took off around two o'clock in the afternoon. I had an easy, lazy morning before taking the Metro and the bus to the airport, which let me take a look into the life of the average Barcelonan before taking off and seeing sunny Barcelona from above for the last time.

 

Disclaimer: The views and opinions expressed in this conference report are mine alone and do not necessarily reflect the official policy or position of any company.