Java/Spring Boot/MongoDb – Performance Analysis and Improvements

The AngularAndSpring project runs on startup (@Async + @EventListener) or once a day (@Scheduled) the average calculation of the quotes. It is implemented in the PrepareDataTask class. It gets started on startup by the TaskStarter class. It calculates the averages for newly available quotes. The average quotes for several years of data had to be recalculated so performance became interesting.

Everything was done on Linux x64 with a Terumium Jdk 17 and MongoDb 4.4.

Prepare Data on Startup

To run the calculation on startup of the application, the TaskStarter has the initAvgs() method:

public void initAvgs() {"ApplicationReady");

The ‘@Async‘ annotation runs the method on a different thread so that the startup can finish before the method is done.

The ‘@EventListener(ApplicationReadyEvent.class)‘ runs the method on the ApplicationReadEvent just before the application starts the accept requests.

Then the methods are called in sequence and the annotations are processed because the methods are called from a different class.

Prepare Data by Cron

The PrepareDataTask class has for example this method to start an average calculation task:

@Scheduled(cron = "0 10 2 ? * ?")
@SchedulerLock(name = "coinbase_avg_scheduledTask", 
        lockAtLeastFor = "PT1M", lockAtMostFor = "PT23H")
@Timed(value = "create.cb.avg", percentiles = { 0.5, 0.95, 0.99 })
public void createCbHAvg() {

The ‘@Scheduled‘ annotation runs the method every day at 2.10 o’clock.

The ‘@SchedulerLock‘ annotation makes a database entry that stops the method to be run twice. The name has to be unique for each lock. The db lock makes sure the a job is only started on one instance if it is horizontally scaled.

The ‘@Timed‘ annotation tells mirometer to record the percentiles of the method run times.

Run the Create Average Methods

The classes BitfinexService, BitstampService, ItbitService, CoinbaseService have a createAvg method that start the average calculation and the Coinbase one is shown here:

public void createCbAvg() {
	LocalDateTime start =;"CpuConstraint property: " + this.cpuConstraint);
	if (this.cpuConstraint) {
                        "Prepared Coinbase Data Time:"));
	} else {
		// This can only be used on machines without 
                // cpu constraints.
		CompletableFuture<String> future7 = CompletableFuture
                        .supplyAsync(() -> {
   			       return "createCbHourlyAvg() Done.";
		        }, CompletableFuture.
                               delayedExecutor(10, TimeUnit.SECONDS));
		CompletableFuture<String> future8 = CompletableFuture.
                        supplyAsync(() -> {
				return "createCbDailyAvg() Done.";
			}, CompletableFuture.
                                delayedExecutor(10, TimeUnit.SECONDS));
		String combined = Stream.of(future7, future8)
                        .collect(Collectors.joining(" "));;

First the cpuConstraint property is logged and checked. It is set in the file by the environment variable ‘CPU_CONSTRAINT‘ with the default ‘false’. It it should be set to true in a Kubernetes deployment with less than 2 cpus available for the application.

If the cpuConstraint property is set to true the ‘createCbHourlyAvg()‘ and ‘createCbDailyAvg()‘ methods are run in sequence to reduce cpu load.

If the cpuConstraint property is set to false the ‘createCbHourlyAvg()‘ and ‘createCbDailyAvg()‘ methods are run in CompletableFutures in concurrently. The DelayedExecutor is used to give MongoDb a few seconds to settle down between the jobs.

The ‘Stream‘ is used to wait for both results of the CompletableFutures and to concatenate them.

Then the result is logged.

Calculating the Averages

The classes BitfinexService, BitstampService, ItbitService, CoinbaseService have create??Avg methods. The ‘createCbHourlyAvg()‘ of the CoinbaseService is used as an example:

private void createCbHourlyAvg() {
	LocalDateTime startAll =;
	MyTimeFrame timeFrame = this.serviceUtils.
                createTimeFrame(CB_HOUR_COL, QuoteCb.class, true);
	SimpleDateFormat sdf = new SimpleDateFormat("dd.MM.yyyy");
	Calendar now = Calendar.getInstance();
	while (timeFrame.end().before(now)) {
		Date start = new Date();
		Query query = new Query();
		// Coinbase
		Mono<Collection<QuoteCb>> collectCb = 
                        find(query, QuoteCb.class).collectList()
                             .map(quotes -> 
		timeFrame.begin().add(Calendar.DAY_OF_YEAR, 1);
		timeFrame.end().add(Calendar.DAY_OF_YEAR, 1);"Prepared Coinbase Hour Data for: " + 
                sdf.format(timeFrame.begin().getTime()) + " Time: "
                        + (new Date().getTime() - start.getTime()) 
                        + "ms");
                "Prepared Coinbase Hourly Data Time:"));

The ‘createTimeFrame(...)‘ method finds the last average hour document in the collection or the first entry in the quotes collection and returns the first day to calculate the averages for.

In the while loop the hourly averages for the day are calculated. First the search criteria is set for the ‘createdAt’ timeframe of the day. The ‘createdAt’ property has an index to improve the search performance. The project uses the reactive MongoDb driver. Because of that the 'find(...).collectList()‘ methods return a Mono<Collection<QuoteCb>>(Spring Reactor) of the quotes that is mapped into the averages.

That Mono is then stored with ‘insertAll(...).blockLast()‘. The ‘blockLast()‘ starts the reactive flow and makes sure that averages are stored.

Then the ‘timeFrame’ is set to the next day and a log entry is written.

Working With Large Pojo

The QuoteCb class looks like this:

public class QuoteCb implements Quote {
	private ObjectId _id;
	private Date createdAt = new Date();

	private final BigDecimal aed;
	private final BigDecimal afn;
	private final BigDecimal all;
	private final BigDecimal amd;
        // 150 properties more

The Pojo is used as MongoDb ‘@Document’ with ‘@Id’ and ‘@Indexed’ ‘createdAt’. It has more than 150 of the ‘BigDecimal’ properties and a Constructor to set them all. To avoid having to code a mapper to get and set the values, the CoinbaseService class has the ‘avgCbQuotePeriod(...)‘ and ‘createGetMethodHandle(...)‘ methods:

private QuoteCb avgCbQuotePeriod(QuoteCb q1, QuoteCb q2, long count) {
	Class[] types = new Class[170];
	for (int i = 0; i < 170; i++) {
		types[i] = BigDecimal.class;
	QuoteCb result = null;
	try {
             BigDecimal[] bds = new BigDecimal[170];
	     IntStream.range(0, QuoteCb.class.getConstructor(types)
		.forEach(x -> {
 		    try {
                       MethodHandle mh = createGetMethodHandle(types, x);
		       BigDecimal num1 = (BigDecimal) mh.invokeExact(q1);
		       BigDecimal num2 = (BigDecimal) mh.invokeExact(q2);
		       bds[x] = this.serviceUtils
                           .avgHourValue(num1, num2, count);
		    } catch (Throwable e) {
			throw new RuntimeException(e);
	    result = QuoteCb.class.getConstructor(types)
               .newInstance((Object[]) bds);
	} catch (NoSuchMethodException | SecurityException | 
           InstantiationException | IllegalAccessException
	   | IllegalArgumentException | InvocationTargetException e) {
 	   throw new RuntimeException(e);
	return result;

private MethodHandle createGetMethodHandle(Class[] types, int x)
   throws NoSuchMethodException, IllegalAccessException {
   MethodHandle mh = cbMethodCache.get(Integer.valueOf(x));
   if (mh == null) {
      synchronized (this) {
	 mh = cbMethodCache.get(Integer.valueOf(x));
	 if (mh == null) {
	   JsonProperty annotation = (JsonProperty) QuoteCb.class.
	   String fieldName = annotation.value();
	   String methodName = String.format("get%s%s",
		fieldName.substring(0, 1).toUpperCase(),
	   if ("getTry".equals(methodName)) {
	        methodName = methodName + "1";
	   MethodType desc = MethodType
	   mh = MethodHandles.lookup().findVirtual(QuoteCb.class, 
                  methodName, desc);
	   cbMethodCache.put(Integer.valueOf(x), mh);
   return mh;

First the type array for the constructor of the ‘QuoteCb’ class is created. Then the ‘BigDecimal’ array for the constructor parameter is created. Then the ‘foreach(...)‘ iterates over the ‘QuoteCb’ class getters.

In the ‘createGetMethodHandle(...)‘ method the method handles of the getters for the constructor parameters are returned or created. The method handles are cached in a static ConcurrentHashMap because of that they are only created once in synchronized block (hourly and daily averages are executed concurrently).

The method handle is then used to get the values ​​of both Pojos and the average is calculated with the ‘serviceUtils.avgHourValue(...)‘ method. The value is then stored in the constructor parameter array.

The value access with the method handles is very fast. The objectcreation with so many params in the constructor has surprisingly little impact on the cpu load.

The other Pojos have only a hand full of params and are calculated with normal constructor calls and getter calls like it is done in the BitstampService in the ‘avgQuote(...)‘ method with QuoteBs:

private QuoteBs avgBsQuote(QuoteBs q1, QuoteBs q2, long count) {
   QuoteBs myQuote = new QuoteBs(
      this.serviceUtils.avgHourValue(q1.getHigh(), q2.getHigh(), count),
      this.serviceUtils.avgHourValue(q1.getLast(), q2.getLast(), count),
      this.serviceUtils.avgHourValue(q1.getBid(), q2.getBid(), count),
      this.serviceUtils.avgHourValue(q1.getVwap(), q2.getVwap(), count),
      this.serviceUtils.avgHourValue(q1.getVolume(), q2.getVolume(), 
      this.serviceUtils.avgHourValue(q1.getLow(), q2.getLow(), count),
      this.serviceUtils.avgHourValue(q1.getAsk(), q2.getAsk(), count),
      this.serviceUtils.avgHourValue(q1.getOpen(), q2.getOpen(), count));
   return myQuote;

Implementation Conclusion

Spring Boot supports starting an average calculation run at application start easy with the annotations ‘@Async‘ and ‘@EventListener‘. The ‘@Scheduled‘ annotation makes creating cron jobs easy and the ShedLock library with its ‘@SchedulerLock‘ annotation enables the horizontal scaling of applications that run cron/startup jobs. The reactive features of Spring Boot and the MongoDb driver make it possible to flow the db data from finder to mapper to insertAll.

Kubernetes Setup Extension in Helm Chart

The Minikube setup for the Kubernetes cluster can be found in the The environment variable ‘CPU_CONSTRAINT‘ is set in the values.yaml. The cpu and memory limits have been updated in the kubTemplate.yaml:

  memory: "3G"
  cpu: "0.6"
  memory: "1G"
  cpu: "0.3"

For the MongoDb deployment.

  memory: "768M"
  cpu: "1.4"
  memory: "256M"
  cpu: "0.5"

For the AngularAndSpring project deployment, with these cpu limits MongoDb never reaches its cpu limit.


The average calculation is run with the scheduler every night and has last days data data to process. It does so in seconds or less. The scheduler has its own thread pool to not interfere with the requests of the users. The performance became interesting after recalculating the averages for more than 3 years of data. The Bitstamp and Coinbase quotes have different structures that make it interesting to compare the performances of their average calculations. Both datasets have indices on the dates and all quotes for a day are queried once.

Coinbase Dataset

The Coinbase pojo has more that 150 BigDecimal values ​​for different currencies. There is one pojo per minute. 1440 a day.

Bitstamp Dataset

The Bitstamp pojo has 8 BigDecimal values ​​for one currency. There are 8 pojos per minute. 11520 a day.

Raw Performance

On a machine with 4 Cores enough memory available to the Jdk and MongoDb the daily and hourly average calculation can run concurrently at roughly the same time as one on its own.

  • Bitstamp concurrently Java CpuCore 100-140% MongoDb CpuCore 40-50% 780 sec.
  • Bitstamp only hourly Java CpuCore 60-70% MongoDb CpuCore ~20% 790 sec.
  • Coinbase concurrently Java CpuCore 160-190% MongoDb CpuCore ~10% 1680 sec.
  • Coinbase only hourly Java CpuCore 90-100% MongoDb CpuCore ~5% 1620 sec.

The Coinbase pojo with more values ​​seems to put more load on the Jdk Cores and the larger number of pojos of the Bitstamp dataset seems to put more load on the MongoDb cores.

Coinbase Pojo Performance Bottleneck

The Coinbase import is the slowest and a profiler showed that the virtual machine had used around 512 MB of memory available to it and had no memory pressure. The G1 garbage collector had pause times of less than 100 ms and the memory chart looked normal. The CPU time by method spend showed 60% of the time spend creating BigDecimal Objects and 25% of the time spend dividing them. All other values ​​were below 5%. A memory snapshot showed a near maximum amount of 3 million BigDecimal objects in memory(~120 MB). They were collected every few seconds without a noticeable gc pause.

Conclusion for Raw Performance

MongoDb was not at a limit neither with I/O or Cpu and 2 GB of Cache. The Jdk was at the cpu limit with the Coinbase calculation due to the large amount of creation/calculation with the BigDecimal class. The G1 gc did not show any issues. The constructor with 150+ parameters was not an issue.

Restricted Resources Performance in Kubernetes

To see how the performance was with a memory and cpu limit the project and MongoDb was run in a minikube cluster with 1.4 cpucores and 768 Mb memory for the jdk and 0.6 cpucores and 3 Gb for MongoDb. The average calculation performed slower as expected but the Coinbase calculation starved the cpu resources of the jdk to the point that the Shedlock library could not update the db locks in time(10 sec). Because of that the ‘CPU_CONSTRAINT‘ environment variable is checked in the to switch from concurrent to sequential calculation.


The average calculation uses too many BigDecimals to be fast. The querying for daily quotes for a large dataset is not efficient either. Neither bottleneck is an issue under normal operation and it works well enough if a complete recalculation of the averages is needed. The performance of the G1 garbage collector is good. Investigating the performance bottlenecks was very interesting and provides insights for performance critical code. The result shows that the performance bottleneck can be at a surprising place and a guess like the 150+ parameter constructor was not relevant.

Measure before optimization!


Leave a Comment