Devoxx UK 2018 Takeaways – Part 1
Last month, I had the opportunity to attend the Devoxx UK conference in London on 9th–11th May. Attracting around 1,200 developers each year, Devoxx is one of the most popular Java-focused conferences in the UK, but its tracks also cover serverless/cloud, containers and infrastructure technologies, architecture, modern web, big data and AI, security, and future technologies.
Launched in 2001, Devoxx also runs events in Belgium, France, Poland, Morocco, and Ukraine, each organised by local developer groups, truly making it a series of tech events “from developers, for developers”.
The conference gave me great insight into the latest technologies used in the industry, the chance to learn about and experience the topics which interested me the most, and the opportunity to network with various developers and leaders in the Java community.
The event itself is split across three days.
The first day was the “Deep Dive Day”, where a number of experts ran practical hands-on sessions, giving me the chance to delve deeper into a specific technology than is possible during regular conference sessions. I will go into more detail on the two sessions I attended.
The second and third days were the main conference days, providing a variety of talks across the various tracks I mentioned before. I will give a summary, as well as the important points I learned, from each talk I attended.
In the morning, I attended Antonio Goncalves and Roberto Cortez’s session on building, managing and deploying microservices using Java and different frameworks. This involved going through a journey of a microservices architecture, discovering the various problems that can arise, and eventually finding the solution.
Microservices are small, autonomous services which communicate with each other synchronously or asynchronously. A microservice architecture applies the single responsibility principle at the service level.
This type of architecture is useful for two main reasons:
Antonio was building an entire microservice architecture based on a book store, where you can create, view, edit and delete books. We used MicroProfile to develop our microservices, but there are many other tools you can use to develop microservices, including:
So, you’re probably wondering, ‘Why is this session titled “Baking a Microservice PI(e)”?’
We were writing and running this microservice architecture locally. But rather than deploying it to the “expensive cloud”, it was deployed to a cluster of Raspberry Pis (11 in total). This was done by packaging each jar file into a Docker image (built using Maven), and then pushing each Docker image to a Raspberry Pi using Ansible, where the Docker image was then run.
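A Dockerfile for one of these images might look something like the sketch below. This is an assumption on my part, not the demo's actual file: Raspberry Pis are ARM devices, so an ARM base image is needed, and the image name, jar name and port here are hypothetical.

```dockerfile
# Raspberry Pis run ARM CPUs, so an ARM-compatible base image is required.
# Image tag, jar path, and port are illustrative, not the demo's actual values.
FROM arm32v7/openjdk:8-jre
COPY target/book-api.jar /opt/book-api.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/book-api.jar"]
```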
Each Raspberry Pi and Antonio’s MacBook Pro were connected to a router through three switches.
However, I quickly learned how difficult it can be working with distributed systems, as the Raspberry Pis were losing connection through the router.
The first microservice generates random ISBNs (Number API), and the second microservice creates and deletes books (Book API). The Book API microservice injects and then calls the Number API microservice (in order to create a random ISBN for the created book).
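To illustrate what a Number-API-style service actually computes, here is a minimal sketch of random ISBN-13 generation in plain Java. The class and method names are my own, and the real demo's implementation may differ; the check-digit rule (alternating weights of 1 and 3) is the standard ISBN-13 algorithm.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of an ISBN-13 generator: a random 12-digit "978"-prefixed number
// plus the standard check digit. Names are illustrative, not the demo's.
public class IsbnGenerator {

    // ISBN-13 check digit: weight the first 12 digits alternately 1 and 3,
    // then take (10 - sum mod 10) mod 10.
    static int checkDigit(String first12) {
        int sum = 0;
        for (int i = 0; i < 12; i++) {
            int digit = first12.charAt(i) - '0';
            sum += digit * (i % 2 == 0 ? 1 : 3);
        }
        return (10 - sum % 10) % 10;
    }

    static String randomIsbn13() {
        StringBuilder sb = new StringBuilder("978");
        for (int i = 0; i < 9; i++) {
            sb.append(ThreadLocalRandom.current().nextInt(10));
        }
        return sb.append(checkDigit(sb.toString())).toString();
    }

    public static void main(String[] args) {
        System.out.println(randomIsbn13());
    }
}
```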
The frontend is an Angular application using Bootstrap, which invokes the Number API through HTTP.
The book data is stored in H2, a small, very fast, open source Java SQL database that can run entirely in memory.
Monitoring is important for spotting any issues your environments are having, so that your WebOps team does not have to ssh into every node of an environment with problems.
This was demonstrated through a humorous scenario between Antonio (the developer) and Roberto (the web operations engineer).
Antonio tested the application locally, which was then deployed to the Raspberry Pi.
Roberto then tested the application on the Raspberry Pi, where he noticed an issue and was unable to create a book. He used ssh to connect to the Book API node, and noticed in the logs there was a Java connection exception trying to do a GET request on a hardcoded localhost URL, which obviously will not work on the Raspberry Pi environment.
“Never trust a developer who says it works on their machine.”
The ELK stack can be used to centralise the logs sent from all the different nodes and transform them in a single location. Logstash collects and transforms the logs, Elasticsearch stores and indexes them, and Kibana makes them easy to search and visualise.
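A typical Logstash pipeline config conveys the shape of this setup; the port, hosts, and codec below are assumptions for illustration, not the demo's actual configuration.

```
input {
  # nodes ship their logs here, e.g. as JSON over TCP
  tcp { port => 5000 codec => json }
}
filter {
  # parse/enrich entries here, e.g. grok for plain-text log lines
}
output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}
```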
This is one of the most common monitoring stacks, and one I have experience of using in the past, though there are other ways to do monitoring, including:
We have two microservices deployed to specific Raspberry Pis, so each one knows where to find the other and they can communicate.
However, if we deployed our microservices to the cloud, we do not know what server they will be deployed to. So, how do the microservices discover each other?
This is where I learned about the importance of service registration. We have a Raspberry Pi on the system running Consul, which is used to connect and configure services. Whenever we deploy or scale any of our microservices, each instance first communicates with Consul and registers itself by name, so the other services can discover it by name. (You can think of Consul as providing DNS resolution for services.)
We also set up Consul to run a health check, pinging our services every x minutes to confirm they are still up and running (returning HTTP status code 200).
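The registration plus health check described above can be expressed as a Consul service definition file. This is a sketch using hypothetical names, ports and paths, not the demo's actual values:

```json
{
  "service": {
    "name": "number-api",
    "port": 8084,
    "check": {
      "http": "http://localhost:8084/health",
      "interval": "30s"
    }
  }
}
```

With a definition like this loaded by the local Consul agent, other services can resolve the instance by the name `number-api`, and Consul stops routing to it if the health endpoint stops returning 200.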
Other tools which can be used for registering:
I learned about the Open API Initiative (OAI), whose role is to standardise how REST APIs are described/documented. Throughout the codebase, we used Swagger to document our microservices: what endpoints we can call, what parameters we can pass, and what status codes/data are returned to us.
In summary, Swagger is the implementation, Open API is the specification.
We add Swagger annotations to our Java API code, which generates a Swagger contract in JSON format. Then, by using Swagger Code Gen, we can generate code from our Swagger contract including client stubs.
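For a sense of what the generated contract looks like, here is a cut-down Swagger (OpenAPI 2.0) document for a single endpoint. The path and descriptions are hypothetical, but the overall structure (paths, operations, responses) is what the specification defines:

```json
{
  "swagger": "2.0",
  "info": { "title": "Number API", "version": "1.0" },
  "paths": {
    "/numbers/isbn": {
      "get": {
        "summary": "Generates a random ISBN",
        "produces": ["text/plain"],
        "responses": {
          "200": {
            "description": "A random ISBN",
            "schema": { "type": "string" }
          }
        }
      }
    }
  }
}
```

Swagger Code Gen consumes a document like this to produce client stubs, so consumers never have to hand-write the HTTP plumbing.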
You can find out more information about the Swagger Ecosystem here: https://swagger.io/community/
We have a Raspberry Pi on the system running an API Gateway.
The API Gateway sits between the Angular frontend layer and Java API layer, and is the single entry point for all client requests. It is similar to a Proxy; you expose the service (our Book API) endpoints in the Proxy, and your client (our Angular application) calls the service through the Proxy.
It is safe to make the Number API public, along with the HTTP GET endpoint in the Book API. The write operations are another matter: the API Gateway allows us to secure the Book API's HTTP POST / PUT / DELETE endpoints by verifying that the client is authorised to perform the request.
Examples of API Gateways:
For the demo, we used the Tribestream API Gateway. It uses OAuth 2, JWT, and HTTP signatures. It also acts as a load balancer, and provides rate limiting.
So, for users who require access to create and delete books, we create a username and password for them through the API Gateway UI.
Then, when a user logs into the Angular application, an authentication HTTP header is created (containing a JSON web token / Bearer token) by calling the authentication endpoint in the API Gateway.
When this user creates a book, a POST request is made, and the token travels with the request through the API Gateway to the Book API, and on to the Number API, where it is verified.
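To make the bearer token concrete: a JWT is three base64url-encoded parts joined by dots (header.payload.signature). The sketch below builds and decodes an unsigned example in plain Java; the claims and names are hypothetical, and a real gateway like Tribestream cryptographically signs its tokens.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrates the structure of the token in "Authorization: Bearer <token>".
// Claims and signature are fake; a real gateway signs the first two parts.
public class JwtDemo {

    static String encode(String json) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    // Decode the payload (second part), as a service might before
    // inspecting the claims.
    static String decodePayload(String token) {
        String[] parts = token.split("\\.");
        return new String(Base64.getUrlDecoder().decode(parts[1]),
                StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = encode("{\"alg\":\"HS256\",\"typ\":\"JWT\"}");
        String payload = encode("{\"sub\":\"antonio\",\"role\":\"editor\"}");
        String token = header + "." + payload + ".fake-signature";
        System.out.println("Authorization: Bearer " + token);
        System.out.println(decodePayload(token));
    }
}
```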
Things I learned from this session:
If you are interested in playing around with this demo on microservices, check out the source code here: https://github.com/agoncal/baking-microservice-pie
To keep with the theme of microservices, in the afternoon I thought it would be useful to attend Alex Soto and Andy Gumbrecht’s talk on testing strategies for a microservice architecture.
To start, I learned about the anatomy of a microservice (which, by this point, I already had a good understanding of), and the evolution of testing.
With any software we develop, testing is a crucial stage, and we generally follow the same testing plan:
But since a microservice architecture is more complex, it introduces an additional series of testing strategies, which I will explain below.
This was a very hands-on lab on testing microservices. We used two microservices, one called Villains and one called Crimes, both developed using Vert.x. The Villains service is a consumer of (invokes) the Crimes service, so we needed to write a test for the Villains service that verifies this interaction with the Crimes service.
With microservices, this can be complex, since a producer (in our case, the Crimes service) can have many dependencies (such as a database), all of which we need to ensure have started before our tests run.
To get around this, service virtualization can emulate these dependencies.
Service virtualization allows you to simulate an API, i.e. capture, modify, and play back responses from it. This makes testing much faster, and helps reproduce situations that are hard to trigger against a real API, e.g. the API being down, or sending you bad responses (without taking down the real API).
It also allows you to write tests before the actual service has been built, which follows TDD.
For this lab, we used Hoverfly to isolate the Crimes service using service virtualization, specifying the service endpoint to react to, and what response to return.
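To show the idea behind what Hoverfly automates, here is a hand-rolled stub in plain Java: a throwaway HTTP server answers a canned response for a Crimes endpoint, so a consumer can be exercised without the real provider. The endpoint path and payload are hypothetical, and Hoverfly itself configures this via its own DSL and simulation files rather than hand-written servers.

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// A minimal stand-in for service virtualization: serve a canned Crimes
// response from an in-process stub, then call it like a real consumer would.
public class CrimesStub {

    static final String CANNED = "[{\"name\":\"Stealing the Moon\",\"year\":2010}]";

    static String callStub() throws Exception {
        // Port 0 = pick any free port, so tests never clash.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/crimes", exchange -> {
            byte[] body = CANNED.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(
                    "http://localhost:" + server.getAddress().getPort() + "/crimes"))
                    .GET().build();
            return HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString()).body();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callStub());
    }
}
```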
Contract testing is a way to ensure that services, such as a client (our consumer microservice, Villains) and an API (our provider microservice, Crimes), can communicate with each other. Without contract testing, the only way to know that services can communicate is through expensive and brittle integration tests, which is even more difficult in a microservice architecture where multiple consumers and providers communicate with each other.
So how does Contract testing work?
A contract between a consumer and provider is called a pact. Each pact is a collection of interactions, where each interaction describes an expected request from the consumer and the minimal expected response from the provider.
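A generated pact is just a JSON file; the fragment below sketches what one interaction between Villains and Crimes might look like. The description, path, and body are hypothetical, but the consumer/provider/interactions structure is what the Pact format defines:

```json
{
  "consumer": { "name": "villains" },
  "provider": { "name": "crimes" },
  "interactions": [
    {
      "description": "a request for a villain's crimes",
      "request": { "method": "GET", "path": "/crimes/Gru" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": [ { "name": "Stealing the Moon" } ]
      }
    }
  ]
}
```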
I gained experience of how to write contract tests using Pact and Arquillian Algeron for the next two labs.
In the first part of the lab, I wrote a consumer contract test, which generated a contract for the provider to use to validate the consumer/provider interaction. For simplicity, I stored these contracts locally, but in a real-world setup you would store them in a Git repository or a Pact Broker.
The second part of the lab involved writing the provider contract test, which retrieves the contract generated earlier and replays all of its interactions against the provider. If the provider's responses match the expected responses in the contract, we can be sure that the consumer and provider can communicate properly.
With Continuous Delivery, traditionally we develop our feature, verify it with QA, deploy it to a staging environment for further testing, and then finally deploy it to production. Unfortunately, we do not live in a perfect world, and production then explodes. What can we do to avoid this in future?
We can solve this by using a technique called Blue-Green Deployment.
This is where you run two identical production environments, called Blue and Green.
At any time, only one of the environments is live (in this example, Blue), which serves all the production traffic, and the other environment is idle (in this example, Green).
So next time we are doing a software release, we will deploy our new version to the environment that is not live (Green). Once the application is fully tested in Green, and we have ensured there are no problems, we can switch all incoming production traffic to point to Green instead of Blue.
This also eliminates downtime and reduces the risk of production problems (e.g., if there is an unexpected problem with your new version on Green, you can immediately roll back to the last version by switching the production traffic back to Blue).
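One common way to implement the traffic switch is at the reverse proxy. This nginx sketch (hostnames and ports are hypothetical) keeps both environments defined and flips which one the `production` upstream points at:

```nginx
upstream production {
    server blue.internal:8080;       # live today
    # server green.internal:8080;    # swap the comments to cut over to Green
}

server {
    listen 80;
    location / {
        proxy_pass http://production;
    }
}
```

Because the cut-over is a config reload rather than a redeploy, switching back to Blue after a bad release is just as fast as switching to Green was.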
Things I learned from these labs:
If you want to check out these techniques for testing microservices, you can find the lab notes here: https://github.com/lordofthejars/devoxx_uk_testing
This post originally appeared on Medium.
Check out “Devoxx UK 2018 Takeaways – Part 2” here.
Check out “Devoxx UK 2018 Takeaways – Part 3” here.