Migration of Microservice Applications – DZone Microservices

The need for an environment in which to put software applications into service is a concept as old as the history of software development itself. As the role of software in business changes, the technologies, CI/CD practices, usage scenarios, and operational expectations that surround it also change and improve, along with the environmental practices that allow the software to serve.

In this article, we will discuss our experience with the seamless migration of Spring Boot microservices (version 2.5.6) from Oracle WebLogic to the Red Hat OpenShift Container Platform. We will also cover the practices that facilitated and ensured parallel operation of the applications on both platforms.

In the initial installation phase, due to operational limitations, we were unable to deploy our microservices on OpenShift. After some time, increased usage in production and the need for a horizontally scalable environment to support this growth led us to address these limitations and migrate our microservices from WebLogic to OpenShift.

Since the process involved migrating live production applications (about 30 microservices at the moment), parallel operation of the WebLogic and OpenShift environments was a must. For this purpose, using profiles and environment variables made things much easier. During parallel operation, network traffic to these environments was managed on Apache and Nginx, which is beyond the scope of this article.

Parallel Operation: Using Profiles

While running our applications in parallel on both Oracle WebLogic and OpenShift, we use the following kinds of profiles:

  • Spring Boot profiles
  • Maven profiles
  • Logback profiles

Spring Profiles

Since the runtime needs of our applications (such as DataSource configurations) are met in different ways on different platforms, Spring Boot profiles solve our problem very well. We use two profiles, openshift and weblogic, representing the OpenShift and WebLogic environments, respectively. In the application (or bootstrap) YAML files, we declare these profiles by separating them with triple dashes (---). Here is a sample application.yaml file:

spring:
  config:
    activate:
      on-profile: weblogic
  application:
    name: my-service
management:
  endpoints:
    web:
      exposure:
        include: "*"
server:
  port: 8082
  servlet:
    context-path: /my-service

---

spring:
  config:
    activate:
      on-profile: openshift
  application:
    name: my-service
management:
  endpoints:
    web:
      exposure:
        include: "*"

We will cover some of the uses of these profiles in the following sections.
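For completeness, a Spring profile still has to be activated per environment at runtime. A minimal sketch of one way to do this on OpenShift follows (container name and structure are illustrative); on WebLogic, the equivalent can be passed in the server start scripts, for example with -Dspring.profiles.active=weblogic:

```yaml
# Illustrative Deployment excerpt: activate the "openshift" Spring profile
# via an environment variable that Spring Boot recognizes out of the box.
containers:
  - name: my-service
    env:
      - name: SPRING_PROFILES_ACTIVE
        value: openshift
```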

Maven Profiles

Like runtime needs, build-time needs also vary from environment to environment. For example, while we deploy WAR files on WebLogic, on OpenShift the application runs as an executable JAR started with java -jar. This difference can be handled by declaring different profiles in pom.xml. For our needs, the different package output formats are declared via the project.packaging property in pom.xml, and the desired profile is selected at build time, e.g., mvn clean package -P openshift. Here is a sample profiles section from pom.xml:

...

<profiles>
    <profile>
        <id>weblogic</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <project.packaging>war</project.packaging>
        </properties>
    </profile>

    <profile>
        <id>openshift</id>
        <properties>
            <project.packaging>jar</project.packaging>
        </properties>
    </profile>
</profiles>

...

For verification purposes, it is handy to use the active-profiles goal of the maven-help-plugin, as in the following pom.xml excerpt:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-help-plugin</artifactId>
    <version>3.2.0</version>
    <executions>
        <execution>
            <id>show-profiles</id>
            <phase>compile</phase>
            <goals>
                <goal>active-profiles</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Logback Profiles

We use Logback for logging. Strictly speaking, there are no "Logback profiles" as such; the springProfile tag provided by Spring Boot is used here to differentiate the logging behavior of the different Spring profiles. In our scenario, the applications' log configurations live in different locations in each environment. This is achieved by placing a logback-spring.xml under the resources directory of each Spring Boot project. Its contents are almost the same for every microservice application:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="3 seconds">
    <springProfile name="weblogic">
        <include file="${DOMAIN_HOME}/config/logconfig/my-service/logback-included.xml"/>
    </springProfile>

    <springProfile name="openshift">
        <include file="${LOG_CONFIG_PATH}/logback-included.xml"/>
    </springProfile>
</configuration>

Here, we include a different file for each Spring profile. With this, hard-coded logging configuration is reduced to a minimum.
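The article does not show the included file itself. A minimal sketch of what such a logback-included.xml might contain is given below; the appender, file names, and pattern are illustrative assumptions, and APP_LOG_PATH matches the environment variable declared in the Deployment later in this article:

```xml
<!-- Hypothetical logback-included.xml; names and paths are illustrative. -->
<included>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${APP_LOG_PATH}/my-service.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${APP_LOG_PATH}/my-service.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</included>
```

Keeping only the environment-specific include in logback-spring.xml lets operations adjust log levels and destinations per environment without rebuilding the application.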

Parallel Operation: Environment Variables

As seen in logback-spring.xml above, we use environment variables in both environments to parameterize behavior. In WebLogic, environment variables are introduced in the WebLogic instances and start scripts. In OpenShift, on the other hand, we introduce environment variables in the application's Deployment object. Here is a sample Deployment excerpt where some environment variables are declared:

...
         containers:
            env:
                - name: LOG_CONFIG_PATH
                  value: /opt/app/conf/logconfig
                - name: REPORT_CONFIG_PATH
                  value: /opt/app/conf/reportconfig
                - name: APP_LOG_PATH
                  value: /opt/app/log
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: database-secrets
                      key: db_url
                - name: DATABASE_USERNAME
                  valueFrom:
                    secretKeyRef:
                      name: database-secrets
                      key: db_username
                - name: DATABASE_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: database-secrets
                      key: db_password
                - name: OS_ENVIRONMENT_INFO
                  valueFrom:
                    secretKeyRef:
                      name: environment-variables
                      key: OS_ENVIRONMENT_INFO

...

Here, we use not only constant values but also values from OpenShift Secrets.
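The Secret referenced by the secretKeyRef entries above can be created from a manifest like the following sketch; the key names match the Deployment excerpt, while the values are placeholders, not real connection details:

```yaml
# Hypothetical Secret manifest; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: database-secrets
  namespace: my-namespace
type: Opaque
stringData:   # stringData accepts plain text; it is stored base64-encoded
  db_url: jdbc:oracle:thin:@//db-host:1521/MYSERVICE
  db_username: my_user
  db_password: changeit
```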

DataSource Configurations

On WebLogic, we as developers only deal with JNDI names in terms of DataSource configuration. Almost everything else about DataSources, such as the WebLogic connection pools, is managed by our operations team in the Administration Console. In OpenShift, however, DataSource configuration costs a bit more effort on the developer side than it does in WebLogic. After a small example of the difference between the two environments, we will explain how we reproduced our WebLogic DataSource behavior in OpenShift.

Below is a WebLogic configuration for testing (i.e., pinging) unused connections to check whether they are still alive.

[Screenshot: WebLogic connection pool configuration with Test Frequency set to 120 seconds]

In OpenShift, this has to be specified explicitly; otherwise, it is disabled by default (we are using the Hikari connection pool). Here is the configuration we added for this:

spring.datasource.hikari.keepaliveTime=120000 

As explained in the previous sections, we used Spring profiles to distinguish between the OpenShift and WebLogic DataSource configurations. Here is the relevant section of our application.yaml for configuring the DataSource:

spring:
  config:
    activate:
      on-profile: weblogic
  application:
    name: my-service
  datasource:
    jndi-name: db/my_app

---

spring:
  config:
    activate:
      on-profile: openshift
  application:
    name: my-service
  datasource:
    driver-class-name: oracle.jdbc.OracleDriver
    url: ${DATABASE_URL}
    username: ${DATABASE_USERNAME}
    password: ${DATABASE_PASSWORD}
    hikari:
      keepalive-time: 120000

OpenShift Route Timeouts

The last configuration we will cover in the scope of this article is the OpenShift route timeout. Since WebLogic has no concept analogous to an OpenShift route, we never needed such a configuration before. So, after migrating our applications to the OpenShift platform and declaring Services and Routes, we started experiencing unexpected timeouts when calling our APIs. After some research, we got around this issue by setting the annotation below on each Route in our project:

kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    haproxy.router.openshift.io/timeout: 120s
...

Summary

In this article, we shared our experience with the seamless migration of Spring Boot microservices from WebLogic to OpenShift, along with our practices for running them in parallel in both environments. The practices shared above may not be the ultimate or best-suited solutions, but hopefully they give you some ideas for dealing with this painful migration process.
