Since 2003, I have used IntelliJ as my primary tool for developing applications and services. Nineteen years ago, I was impressed both by the small amount of RAM required to use the IDE and the refactoring abilities included with the 1.x release.
Developing in Java does not require the IntelliJ IDEA product. Other developers on my current project use Eclipse or VS Code, and no particular tool is required at all: when you write your components, services, and applications with a simple text editor and a terminal session, you end up with the same compiled Java code.
So, why do I spend the money on IntelliJ every year? Because IntelliJ IDEA is designed to make things easier for developers, which makes me far more productive. For example, right-clicking on a class provides me with the option to relocate the class in a matter of seconds. When IntelliJ finds an opportunity to remove duplicate code, it creates the new shared code and correctly updates the locations that depend on the centralized method. IntelliJ is an excellent source of validation during code reviews too.
While this example might seem a little elementary, I wonder why more software engineers are not taking a similar approach when building their services.
Spring Boot Services and Kubernetes
Let’s assume your feature team has standardized on Spring Boot for API services. As a result of your team’s hard work and detailed design, your APIs are viewed as successful by public consumers. Your organization decides to use Kubernetes for these services.
For each Spring Boot service your team develops, this is what the high-level lifecycle looks like:
A Spring Boot service is initialized, and custom code is added. That service is containerized into a Docker image, which is ultimately deployed into Kubernetes.
By using Kubernetes, we get the following advantages:
A collection of Spring Boot-based Docker containers are placed into a “Pod” to act as a single application. This allows each Spring Boot service to be laser-focused on a given aspect of the resulting API.
One or more Pods can be grouped to form the resulting API service, which can be configured for availability, observability, horizontal scaling, and load balancing.
Rolling updates and canary deployments provide a stable consumer experience while new versions are released.
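As a rough sketch of how those advantages map onto standard Kubernetes objects, consider a Deployment and Service for one of the Spring Boot containers. The names, image reference, and replica count below are hypothetical placeholders, not values from an actual project:

```yaml
# Hypothetical Deployment for one Spring Boot service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-service-1
spec:
  replicas: 3                   # horizontal scaling
  strategy:
    type: RollingUpdate         # replace Pods gradually for a stable consumer experience
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: api-service-1
  template:
    metadata:
      labels:
        app: api-service-1
    spec:
      containers:
        - name: api-service-1
          image: registry.example.com/api-service-1:1.0.0  # the containerized Spring Boot service
          ports:
            - containerPort: 8080
---
# Service that load-balances across the Pods above.
apiVersion: v1
kind: Service
metadata:
  name: api-service-1
spec:
  selector:
    app: api-service-1
  ports:
    - port: 80
      targetPort: 8080
```

The `replicas` and `strategy` fields cover the horizontal scaling and rolling-update behavior, while the Service provides the load balancing across Pods.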
If you are interested in using Kubernetes with Spring Boot, check out the following URL:
Spring Boot Kubernetes
Based on the success of these popular APIs, let’s imagine that a leadership executive would like to monetize the services to gain revenue from the most active consumers. A free tier will still be available, but limits on the API usage will be introduced.
While the team could implement some custom rate-limiting logic at the Spring Boot level, this does not make sense: the same logic would be duplicated in every service. What we need is a centralized way to handle this new requirement.
Kong Ingress Controller to the Rescue
Last year, I started getting familiar with the Kong product suite as part of a long-term solution state for one of my clients. The scenario I described above mirrors situations I have encountered over the last five years: Centralizing common components is key for a successful microservices implementation.
If you want to read more about Kong, check out my publication from last May:
How I Stopped Coding Repetitive Service Components with Kong
For the use case mentioned above, we could handle the following components at the API gateway level: application (API key) registration, rate limiting, and request logging.
Since Kong Gateway is open source and was named a “leader” in the 2021 Gartner Magic Quadrant for Full Lifecycle API Management, it is a safe way to handle these common components. Knowing that Kong also provides an ingress controller for use within Kubernetes further validates the product decision.
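For instance, the application-registration component can be expressed declaratively with KIC’s custom resources. This is only a sketch: the consumer name, Secret name, and key value are hypothetical, and the exact credential fields can vary between KIC versions:

```yaml
# Enable API-key authentication (plugin and names are hypothetical examples).
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: api-key-auth
plugin: key-auth
---
# Register a consumer (an "application") with Kong.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: acme-mobile-app
  annotations:
    kubernetes.io/ingress.class: kong
username: acme-mobile-app
credentials:
  - acme-mobile-app-key
---
# The consumer's API key, stored as a Kubernetes Secret.
apiVersion: v1
kind: Secret
metadata:
  name: acme-mobile-app-key
  labels:
    konghq.com/credential: key-auth
stringData:
  key: hypothetical-api-key-value
```

With this in place, registering a new paying consumer is a matter of applying another KongConsumer and Secret, with no change to any Spring Boot service.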
You can find everything you need to get started with Kong Ingress Controller (KIC) on this GitHub page:
Kong Ingress Controller for Kubernetes
The illustration below shows the desired design for the two services:
Requests arrive at Kubernetes for API Service #1 or #2. The Kong Ingress Controller intercepts the requests and validates the API key provided. Based on that information, the controller determines if the consumer making the request has exceeded its request limit.
If the rate limit has not been exceeded, the request is forwarded to the appropriate service. However, if the rate limit is exceeded, a 429 (Too Many Requests) HTTP response is returned. In all cases, the logging module can be easily configured to track all incoming requests, including all the metadata provided by the API consumer.
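Under KIC, that rate-limiting behavior is configured declaratively rather than coded into each service. A minimal sketch, assuming a hypothetical free-tier limit of 60 requests per minute per consumer, and hypothetical service and Ingress names:

```yaml
# Hypothetical free-tier limit; Kong returns HTTP 429 once it is exceeded.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: free-tier-rate-limit
plugin: rate-limiting
config:
  minute: 60          # allow 60 requests per minute
  policy: local       # counters kept locally on each Kong node
  limit_by: consumer  # count per API key holder, not per IP address
---
# Attach the plugin to an Ingress via annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-service-1
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/plugins: free-tier-rate-limit
spec:
  rules:
    - http:
        paths:
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: api-service-1
                port:
                  number: 80
```

The logging requirement can be satisfied the same way, by attaching one of Kong’s logging plugins through the same `konghq.com/plugins` annotation.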
The Value of This Design
When we take a step back and look at the resulting architecture and design, we quickly see the benefits:
Spring Boot services are true microservices, each laser-focused on a single aspect of the API.
Docker allows the Spring Boot services to be self-contained and distributed at any point within the development lifecycle.
Kubernetes provides the ability to group those purpose-driven Docker images into Pods, which act as a single application. Those Pods are then grouped to surface as API services in our example.
Shared components—like application registration, rate limiting, and logging—exist in one centralized location within Kong Gateway.
Kong Ingress Controller becomes the middleware layer between Kong Gateway and Kubernetes to leverage all shared components.
As a result of this model, feature team developers working on Spring Boot services only need to focus on the work provided by their product owner to improve or extend the service. These developers do not need to worry about API keys, rate limiting, or any other shared component handled elsewhere.
DevOps engineers supporting the Kubernetes implementation do not have to architect any custom design aspects to handle those shared components either. That’s because Kong Gateway is built to accommodate those needs and works well via the Kong Ingress Controller.
Since 2021, I have been trying to live by the following mission statement, which I feel can apply to any IT professional:
“Focus your time on delivering features/functionality which extends the value of your intellectual property. Leverage frameworks, products, and services for everything else.”
– J. Vester
At the start of this post, I talked about how I prefer IntelliJ IDEA over a text editor and terminal session. In reality, the core reason ties directly to my personal mission statement. The IDEA product allows me to focus on the right things; meanwhile, it handles the repetitive tasks related to writing original source code.
Similarly, Spring, Docker, Kubernetes, and Kong have provided solutions and frameworks to work toward the same mission. Every aspect noted above can be traced to a single source of truth for a given item. As a result, there is no duplication of services or functionality across the application landscape.
If you find yourself implementing the same process a second time,
it is certainly time to consider refactoring your design.
If your service tier follows a similar model to the one I’ve discussed here and you are not utilizing Kong Gateway or Kong Ingress Controller, they should certainly be on your shortlist of products to review when you are ready to refine your service design.
If you’re ready to try it out, learn more about setting up KIC in this tutorial.
Have a really great day!