Friday, October 31, 2014

Spring Caching abstraction and Google Guava Cache

Spring provides great out-of-the-box support for caching expensive method calls. The caching abstraction is covered in great detail here.

My objective here is to cover one of the newer cache implementations that Spring supports as of version 4.0 of the framework - Google Guava Cache.

In brief, consider a service which has a few slow methods:

public class DummyBookService implements BookService {

 @Override
 public Book loadBook(String isbn) {
  // Slow method 1.

 }

 @Override
 public List<Book> loadBookByAuthor(String author) {
  // Slow method 2
 }

}
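To see what the abstraction saves us from, here is what hand-rolled memoization of such a slow call might look like in plain Java. This is only a sketch - the String key/value types and the slowLookup function are illustrative stand-ins, not part of the service above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hand-rolled memoization of an expensive lookup - the kind of boilerplate
// that the Spring caching annotations remove from the service itself.
public class MemoizingLookup {

    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> slowLookup;

    public MemoizingLookup(Function<String, String> slowLookup) {
        this.slowLookup = slowLookup;
    }

    public String load(String key) {
        // compute at most once per key; repeated calls are served from the map
        return cache.computeIfAbsent(key, slowLookup);
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        MemoizingLookup lookup = new MemoizingLookup(isbn -> {
            calls[0]++;
            return "Book-" + isbn;
        });
        System.out.println(lookup.load("978-1"));
        System.out.println(lookup.load("978-1"));
        System.out.println("underlying calls: " + calls[0]);
    }
}
```

The caching annotations shown next do exactly this kind of bookkeeping declaratively, without cluttering the service code.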

With the Spring Caching abstraction, repeated calls with the same parameter can be sped up by annotating the method along these lines - here the result of loadBook is cached in a "book" cache and the listing of books in a separate "books" cache:

public class DummyBookService implements BookService {

 @Override
 @Cacheable("book")
 public Book loadBook(String isbn) {
  // slow response time..

 }

 @Override
 @Cacheable("books")
 public List<Book> loadBookByAuthor(String author) {
  // Slow listing
 }
}

Now, the caching abstraction requires a CacheManager to be available, which is responsible for managing the underlying caches that store the cached results. With the new Guava Cache support, the CacheManager is along these lines:

@Bean
public CacheManager cacheManager() {
 return new GuavaCacheManager("books", "book");
}
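One thing to keep in mind is that annotation-driven caching also has to be switched on, typically with @EnableCaching on a configuration class. A minimal sketch (the class name CacheConfig is just an example):

```java
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.guava.GuavaCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // switches on processing of @Cacheable and friends
public class CacheConfig {

 @Bean
 public CacheManager cacheManager() {
  return new GuavaCacheManager("books", "book");
 }
}
```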

Google Guava Cache provides a rich API to pre-load the cache, set an eviction duration based on last access or creation time, cap the size of the cache and so on. If the caches need to be customized, a Guava CacheBuilder can be passed to the CacheManager for this customization:

@Bean
public CacheManager cacheManager() {
 GuavaCacheManager guavaCacheManager =  new GuavaCacheManager();
 guavaCacheManager.setCacheBuilder(CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES));
 return guavaCacheManager;
}
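As an aside, GuavaCacheManager should also accept the same settings as a Guava cache specification string, which keeps the configuration tweakable via properties. A sketch, to be verified against the Spring 4.0 javadocs:

```java
@Bean
public CacheManager cacheManager() {
 GuavaCacheManager guavaCacheManager = new GuavaCacheManager();
 // intended to be equivalent to
 // CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES)
 guavaCacheManager.setCacheSpecification("expireAfterAccess=30m");
 return guavaCacheManager;
}
```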

This works well if all the caches share a similar configuration. What if the caches need to be configured differently? For example, in the sample above I may want the "book" cache to never expire but the "books" cache to expire entries 30 minutes after access. In that case the GuavaCacheManager abstraction does not work well; a better solution is to use a SimpleCacheManager, which provides a more direct way to get at each cache and can be configured this way:

@Bean
public CacheManager cacheManager() {
 SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
 GuavaCache cache1 = new GuavaCache("book", CacheBuilder.newBuilder().build());
 GuavaCache cache2 = new GuavaCache("books", CacheBuilder.newBuilder()
             .expireAfterAccess(30, TimeUnit.MINUTES)
             .build());
 simpleCacheManager.setCaches(Arrays.asList(cache1, cache2));
 return simpleCacheManager;
}

This approach works very nicely; if required, individual caches can even be backed by different caching engines, say some by a simple hash map, some by Guava or EhCache, and some by distributed caches like GemFire.
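For instance, a cache backed by a plain ConcurrentHashMap can sit alongside the Guava-backed one - a sketch, assuming the stock ConcurrentMapCache that ships with spring-context:

```java
@Bean
public CacheManager cacheManager() {
 SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
 // "book" backed by a plain ConcurrentHashMap, no eviction
 ConcurrentMapCache bookCache = new ConcurrentMapCache("book");
 // "books" backed by Guava with a 30 minute expiry after last access
 GuavaCache booksCache = new GuavaCache("books", CacheBuilder.newBuilder()
             .expireAfterAccess(30, TimeUnit.MINUTES)
             .build());
 simpleCacheManager.setCaches(Arrays.asList(bookCache, booksCache));
 return simpleCacheManager;
}
```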


Wednesday, October 22, 2014

Docker RabbitMQ cluster

I have been trying to create a Docker based RabbitMQ cluster on and off for some time and got it working today - it is fairly basic and flaky, but could be a good starting point for others to improve on.

This is how the sample cluster looks on my machine; it is the typical cluster described in the RabbitMQ clustering guide available here - https://www.rabbitmq.com/clustering.html. As recommended there, it has 2 disk based nodes and 1 RAM based node.



To quickly replicate this, you only need to have fig installed on your machine; just create a fig.yml file with the following content:

rabbit1:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit1
  ports:
    - "5672:5672"
    - "15672:15672"

rabbit2:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit2
  links:
    - rabbit1
  environment: 
   - CLUSTERED=true
   - CLUSTER_WITH=rabbit1
   - RAM_NODE=true

rabbit3:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit3
  links:
    - rabbit1
    - rabbit2
  environment: 
   - CLUSTERED=true
   - CLUSTER_WITH=rabbit1   

and in the folder holding this file, run:

  fig up

That is it! The entire cluster should come up. If you need more nodes, just modify the fig.yml file.

The Docker files for creating the dockerized rabbitmq-server are available at my github repo here: https://github.com/bijukunjummen/docker-rabbitmq-cluster and the "rabbitmq-server" image itself is here at the docker hub.

Monday, October 13, 2014

Spring @Configuration - RabbitMQ connectivity

I have been playing around with converting an application of mine to use the Spring @Configuration mechanism to configure connectivity to RabbitMQ - originally the configuration was described using an xml bean definition file.

So this was my original configuration:

<beans ...>

 <context:property-placeholder/>
 <rabbit:connection-factory id="rabbitConnectionFactory" username="${rabbit.user}" host="localhost" password="${rabbit.pass}" port="5672"/>
 <rabbit:template id="amqpTemplate"
      connection-factory="rabbitConnectionFactory"
      exchange="rmq.rube.exchange"
      routing-key="rube.key"
      channel-transacted="true"/>

 <rabbit:queue name="rmq.rube.queue" durable="true"/>

 <rabbit:direct-exchange name="rmq.rube.exchange" durable="true">
  <rabbit:bindings>
   <rabbit:binding queue="rmq.rube.queue" key="rube.key"></rabbit:binding>
  </rabbit:bindings>
 </rabbit:direct-exchange>


</beans>

This is a fairly simple configuration that:

  • sets up a connection to a RabbitMQ server,
  • creates a durable queue (if not already present)
  • creates a durable exchange
  • and configures a binding so that messages sent to the exchange are routed to the queue based on a routing key called "rube.key"

This can be translated to the following @Configuration based java configuration:

@Configuration
public class RabbitConfig {

 @Autowired
 private ConnectionFactory rabbitConnectionFactory;

 @Bean
 DirectExchange rubeExchange() {
  return new DirectExchange("rmq.rube.exchange", true, false);
 }

 @Bean
 public Queue rubeQueue() {
  return new Queue("rmq.rube.queue", true);
 }

 @Bean
 Binding rubeExchangeBinding(DirectExchange rubeExchange, Queue rubeQueue) {
  return BindingBuilder.bind(rubeQueue).to(rubeExchange).with("rube.key");
 }

 @Bean
 public RabbitTemplate rubeExchangeTemplate() {
  // the connection factory is passed in via the constructor,
  // no separate setter call is needed
  RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
  r.setExchange("rmq.rube.exchange");
  r.setRoutingKey("rube.key");
  return r;
 }
}

This configuration should look much simpler than the xml version. I am cheating a little here though - you may have noticed the connectionFactory that is simply injected into this configuration. Where does that come from? This is part of a Spring Boot based application, and Spring Boot auto-configures a RabbitMQ connectionFactory when the RabbitMQ related libraries are present on the classpath.

Here is the complete configuration if you are interested in exploring further - https://github.com/bijukunjummen/rg-si-rabbit/blob/master/src/main/java/rube/config/RabbitConfig.java
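With this configuration in place, sending a message becomes a one-liner, since the template already carries the exchange and routing key. A sketch - the surrounding class and the String payload are illustrative, not part of the linked configuration:

```java
@Autowired
private RabbitTemplate rubeExchangeTemplate;

public void send(String payload) {
 // uses the exchange "rmq.rube.exchange" and routing key "rube.key"
 // that were configured on the template
 rubeExchangeTemplate.convertAndSend(payload);
}
```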

References:

  • Spring-AMQP project here
  • Spring-Boot starter project using RabbitMQ here

Saturday, October 11, 2014

Spring @Configuration and injecting bean dependencies as method parameters

One of the ways Spring recommends injecting inter-dependencies between beans is shown in the following sample, copied from Spring's reference guide here:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo() {
  return new Foo(bar());
 }

 @Bean
 public Bar bar() {
  return new Bar("bar1");
 }

}

So here, bean `foo` is being injected with a `bar` dependency.

However, there is an alternate way to inject a dependency that is not documented well: just take the dependency as a `@Bean` method parameter this way:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(Bar bar) {
  return new Foo(bar);
 }

 @Bean
 public Bar bar() {
  return new Bar("bar1");
 }

}

There is a catch here though: the injection is now by type - the `bar` dependency would be resolved by type first, and if duplicates are found, then by name:

@Configuration
public static class AppConfig {

 @Bean
 public Foo foo(Bar bar1) {
  return new Foo(bar1);
 }

 @Bean
 public Bar bar1() {
  return new Bar("bar1");
 }

 @Bean
 public Bar bar2() {
  return new Bar("bar2");
 }
}

In the above sample the dependency `bar1` will be correctly injected. If you want to be more explicit about it, a @Qualifier annotation can be added in:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(@Qualifier("bar1") Bar bar1) {
  return new Foo(bar1);
 }

 @Bean
 public Bar bar1() {
  return new Bar("bar1");
 }

 @Bean
 public Bar bar2() {
  return new Bar("bar2");
 }
}


So now, the question is whether this is recommended at all - I would say yes, for certain cases. For example, had the bar bean been defined in a different @Configuration class, one way to inject the dependency is along these lines:

@Configuration
public class AppConfig {

 @Autowired
 @Qualifier("bar1")
 private Bar bar1;

 @Bean
 public Foo foo() {
  return new Foo(bar1);
 }

}

I find the method parameter approach simpler here:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(@Qualifier("bar1") Bar bar1) {
  return new Foo(bar1);
 }

}


Thoughts?