My team is implementing microservices using Spring Boot to bridge data from a legacy mainframe application to more modern consumers. Recently we discovered that some of the text descriptions for investment funds were being abbreviated due to the mainframe’s limited screen real estate. Our web users, however, have no such restrictions, so our product owners would like to get the full strings in all their unabbreviated, mixed-case glory.

Our team determined that we can augment the data with a simple query to a particular DB2 database maintained by another organization. We were tasked with adding a “quick-and-dirty” lookup to get it. This integration scenario is fairly simple, yet common in many organizations. Keep reading to see how we made this change easily using Spring Boot.

Simple JDBC from Spring

First, we will add some dependencies to the Maven pom.xml file for the database driver, Spring JDBC integration, and cache functionality.

<dependency>
    <groupId>com.ibm.db2</groupId>
    <artifactId>jcc</artifactId>
    <version>11.5.0.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>

Next, we can define a simple data transfer object to hold our fund data. We’re using Lombok to auto-generate accessor methods and a builder-pattern constructor, so this class is tiny and purely declarative.

@Data @Builder
@FieldDefaults(level = PRIVATE)
public class FundInfo {
    String fundNumber;
    String name;
    String assetClass;
}

Now we’ll create a repository class to encapsulate the database interaction. We could try to get fancy with an ORM framework like Hibernate, but since we know exactly how to query this table and map the results, we’ll do it the old-fashioned way by using Spring’s JdbcTemplate to execute SQL directly. Lombok again generates the constructor for Spring to autowire, and also initializes an SLF4J logger.

@Repository
@RequiredArgsConstructor
@Slf4j
class FundRepository {

    final JdbcTemplate jdbc;

    @Cacheable("FundInfo.all")
    public Map<String, FundInfo> getAll () {
        List<FundInfo> all = jdbc.query(
            "SELECT DISTINCT fund_id, fund_name, asset_class FROM funds",
            (rs, rowNum) -> FundInfo.builder()
                .fundNumber(trimToNull(rs.getString(1)))
                .name(trimToNull(rs.getString(2)))
                .assetClass(trimToNull(rs.getString(3)))
                .build());

        log.info("Fetched {} rows from database", all.size());
        return all.stream().collect(
            toMap(FundInfo::getFundNumber, identity(), (a, b) -> a));
    }
}

The inline lambda function implements a simple RowMapper, which uses the Lombok-generated builder to return a DTO for each row. A quick call to the Streams API transforms the List into a Map, taking care to handle any duplicates in a trivial yet safe way.
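For completeness, the static methods used in the mapper deserve a mention: toMap and identity come straight from the JDK, while trimToNull is assumed here to be Apache Commons Lang’s StringUtils helper (which would also need to be on the classpath):

// Static imports used by FundRepository
import static java.util.function.Function.identity;
import static java.util.stream.Collectors.toMap;
import static org.apache.commons.lang3.StringUtils.trimToNull; // assumes commons-lang3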

Now we’ll define a service that delegates to the repository and extracts either a single record or nothing.

@Service
@RequiredArgsConstructor
public class FundService {

    final FundRepository repository;

    public Optional<FundInfo> getInfo (String fundNumber) {
        return Optional.ofNullable(repository.getAll().get(fundNumber));
    }
}

A little later on I’ll explain why this had to be separated from the FundRepository class.

We’ll configure our database connection in our application.yml file. Our DBAs granted us a read-only service account and sent us the connection information. Since this is the only JDBC connection in our service, we can simply use Spring Boot’s default connection pool (HikariCP). Here I’ve used Jasypt to encrypt passwords in the configuration, but you could also use a product like Vault to act as a secrets repository.

spring:
  datasource:
    url: jdbc:db2://<hostname>:<port>/<dbname>
    username: <username>
    password: <encrypted password>
    hikari:
      schema: <database schema>
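For reference, the usual Jasypt convention (assuming the jasypt-spring-boot-starter with its default settings) is to wrap the encrypted value in ENC(...) so the property resolver knows to decrypt it at startup:

# Sketch only: requires the jasypt-spring-boot-starter and an encryptor key supplied at runtime
password: ENC(<encrypted password>)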

Now we can invoke our service class from our application controller. We create a private final member and let Lombok automatically generate a constructor to be autowired.

private final FundService fundService;

Finally, our controller can iterate over the data to augment it with values from the service. Using Optional in the return type lets us handle missing data gracefully. In this case, we just leave it to default values that are set elsewhere.

// Set names and asset classes from DB2 table.
for (Fund fund : account.getFunds()) {
    fundService.getInfo(fund.getFundNumber()).ifPresent(info -> {
        fund.setName(info.getName());
        fund.setAssetClassName(info.getAssetClass());
    });
}
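Putting those pieces together, a minimal sketch of the controller might look like the following. The AccountController name, the request mapping, and the Account/AccountService types are placeholders standing in for our real classes:

@RestController
@RequiredArgsConstructor
public class AccountController {

    private final AccountService accountService; // hypothetical existing service
    private final FundService fundService;

    @GetMapping("/accounts/{accountId}")
    public Account getAccount (@PathVariable String accountId) {
        Account account = accountService.getAccount(accountId);

        // Set names and asset classes from DB2 table.
        for (Fund fund : account.getFunds()) {
            fundService.getInfo(fund.getFundNumber()).ifPresent(info -> {
                fund.setName(info.getName());
                fund.setAssetClassName(info.getAssetClass());
            });
        }
        return account;
    }
}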

Adding a Cache

If you fired this up, you’d get correct results, but the service would be slow, and the log would show the getAll() method being called over and over:

2019-10-10 22:45:49,793 INFO FundRepository - Fetched 251 rows from database
2019-10-10 22:45:49,861 INFO FundRepository - Fetched 251 rows from database
2019-10-10 22:45:49,919 INFO FundRepository - Fetched 251 rows from database

We can speed this up by enabling Spring’s caching support to hold the results of the getAll() method in memory. With only around 250 rows, we’re not particularly worried about memory bloat.

@Configuration
@EnableCaching
@Slf4j
public class CachingConfiguration extends CachingConfigurerSupport {

    @Value("${cache.expire.seconds:14400}")
    private long cacheExpireSeconds;

    /**
     * @return A cache manager with a configured time-to-live.
     */
    @Override
    public CacheManager cacheManager () {
        return new ConcurrentMapCacheManager() {
            @Override
            protected Cache createConcurrentMapCache (String name) {
                log.info("Initializing cache with expiration time of {} seconds", cacheExpireSeconds);
                return new ConcurrentMapCache(name,
                    CacheBuilder.newBuilder()
                        .expireAfterWrite(cacheExpireSeconds, SECONDS)
                        .build().asMap(), false);
            }
        };
    }
}

Here we’re making use of Guava’s CacheBuilder to automatically expire entries after a timeout. Our service can stay up for days, but if the administrators update the table, we’d like the service to pick up the changes within a reasonable amount of time. The expiration defaults to four hours, and the @Value annotation lets us override it in our application.yml file.

cache.expire.seconds: 14400
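One prerequisite: CacheBuilder lives in Guava, which none of the starters above pull in, so it needs its own entry in the pom.xml (the version shown is only an example; use whatever your project already standardizes on):

<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <!-- example version; pick the one your project uses -->
    <version>28.1-jre</version>
</dependency>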

This implementation is simple enough for our purposes, but Spring Cache can be configured to handle more complex scenarios if needed. A common setup might be to hold the data in a shared memcached instance and evict entries whenever the underlying data is updated.
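For instance, if this service ever grew a write path, a hypothetical update method on the repository could clear the cached map so the next read reloads it (the saveFund method and its SQL are made up for illustration):

// Hypothetical write path: evict the cached map so the next getAll() reloads it
@CacheEvict(value = "FundInfo.all", allEntries = true)
public void saveFund (FundInfo fund) {
    jdbc.update("UPDATE funds SET fund_name = ?, asset_class = ? WHERE fund_id = ?",
        fund.getName(), fund.getAssetClass(), fund.getFundNumber());
}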

Why did we need to separate the FundRepository from the FundService? Spring implements the @Cacheable functionality by building a dynamic proxy class. Unfortunately, that means it doesn’t work if an object calls a method on itself directly through the “this” reference. There are some workarounds you could attempt using @Autowired or @Inject to have Spring inject a reference to the object’s own proxy, but that smells like a hack. Instead, I’d prefer to just separate the concerns and split the classes into @Cacheable versus non-cacheable methods.
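To make that pitfall concrete, here’s a sketch of what not to do. If getInfo() lived next to the @Cacheable getAll() in one class, the internal call would go straight through “this” rather than the Spring proxy, and the cache would never be consulted:

// Anti-pattern sketch: self-invocation bypasses the caching proxy
@Repository
@RequiredArgsConstructor
class CombinedFundRepository {

    final JdbcTemplate jdbc;

    @Cacheable("FundInfo.all")
    public Map<String, FundInfo> getAll () {
        // ... same JdbcTemplate query and mapping as before ...
        return Collections.emptyMap(); // placeholder for the real query
    }

    public Optional<FundInfo> getInfo (String fundNumber) {
        // Direct call to getAll() never passes through the @Cacheable interceptor,
        // so every request would hit the database.
        return Optional.ofNullable(getAll().get(fundNumber));
    }
}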

Now when you try it out, you’ll see the fetch happen only once:

2019-10-10 22:48:32,227 INFO FundRepository - Fetched 251 rows from database

If you leave the service up for four hours, or reconfigure it with a shorter expiration, you’ll see it go back to the database for fresh data.
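Since Spring Boot lets command-line arguments override values from application.yml, you can test this quickly without touching the config file (the jar name below is just a placeholder):

java -jar fund-service.jar --cache.expire.seconds=60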

Using this code, we were able to update our microservice to get the more-presentable descriptions our business users needed, with less than a day of work. Our code is clear and small, and the performance cost is minimal... so maybe this change wasn’t so "dirty" after all.