Saturday, August 25, 2012

Spring Scoped Proxy

Consider two Spring beans defined this way:

@Component
class SingletonScopedBean{
 @Autowired private PrototypeScopedBean prototypeScopedBean;
 
 public String getState(){
  return this.prototypeScopedBean.getState();
 }
}

@Component
@Scope(value="prototype")
class PrototypeScopedBean{
 private final String state;
 
 public PrototypeScopedBean(){
  this.state = UUID.randomUUID().toString();
 }

 public String getState() {
  return state;
 }
}

Here a prototype-scoped bean is injected into a singleton-scoped bean.

Now, consider this test using these beans:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class ScopedProxyTest {
 
 @Autowired private SingletonScopedBean singletonScopedBean;
 
 @Test
 public void testScopedProxy() {
  assertThat(singletonScopedBean.getState(), not(equalTo(singletonScopedBean.getState())));
 }
 
 @Configuration
 @ComponentScan("org.bk.samples.scopedproxy")
 public static class SpringContext{}

}

The point to note is that only one instance of PrototypeScopedBean is created here, and that single instance is injected into the SingletonScopedBean. The test above, which expects a new instance of PrototypeScopedBean with each invocation of the getState() method, will therefore fail.

If a new instance is desired with every request to PrototypeScopedBean (and, in general, whenever a bean with a longer scope depends on a bean with a shorter scope and the shorter scope needs to be respected), there are a few solutions:

1. Lookup method injection - which can be read about here
2. A better solution is to use scoped proxies:

A scoped proxy can be specified using the proxyMode attribute of the @Scope annotation:
@Component
@Scope(value="prototype", proxyMode=ScopedProxyMode.TARGET_CLASS)
class PrototypeScopedBean{
 private final String state;
 
 public PrototypeScopedBean(){
  this.state = UUID.randomUUID().toString();
 }

 public String getState() {
  return state;
 }

}

With this change, the bean injected into SingletonScopedBean is not the PrototypeScopedBean itself but a proxy to it (created using CGLIB or JDK dynamic proxies). The proxy understands the scope and returns instances based on the requirements of the scope, so the test above now works as expected.
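For reference, the same behavior can be expressed in XML using the &lt;aop:scoped-proxy/&gt; element - a sketch, assuming the bean class and package from above and that the aop namespace is declared:

```xml
<bean id="prototypeScopedBean" class="org.bk.samples.scopedproxy.PrototypeScopedBean"
      scope="prototype">
 <!-- Injects a CGLIB-based proxy instead of the raw prototype instance -->
 <aop:scoped-proxy proxy-target-class="true"/>
</bean>
```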

Saturday, August 18, 2012

@ContextConfiguration defaults

Spring @ContextConfiguration is a way to specify the Application Context for a test.

The location of an XML-based test application context can be specified using the locations attribute:
@ContextConfiguration(locations={"test-context.xml"})

and if a @Configuration class is used as the context, then a classes attribute can be specified:

@ContextConfiguration(classes={TestConfiguration.class})

There are intelligent defaults for these attributes, though, and that is what I wanted to highlight in this post.

If neither the locations nor the classes attribute is specified, the default behavior is to first look for an XML configuration file named after the test class - a "<test class name>-context.xml" file

For example, if I have a test class this way:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class TestSpringCache {


The location of the configuration that will be tried first is "TestSpringCache-context.xml"

If a context is not found at this location, then a @Configuration default is looked for by scanning the test class for static inner classes annotated with @Configuration. So if I had the following:


@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class TestSpringCache {
   ...
 
 @Configuration
 @EnableCaching
 @ComponentScan("org.bk.samples.cache")
 public static class TestConfiguration{
 .. 


the inner class TestConfiguration would be used as the source of the Application context.

The default behavior can be changed by supplying a loader attribute to @ContextConfiguration - for example, to load the context from annotated @Configuration classes by default, it can be done this way:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader=AnnotationConfigContextLoader.class)
public class TestSpringCache {
..

So to conclude, the defaults provided by @ContextConfiguration are a great way to make tests a little more concise!

Friday, August 10, 2012

Spring @Configuration and FactoryBean

Consider a FactoryBean for defining a cache, using a Spring XML configuration file:

 <cache:annotation-driven />
 <context:component-scan base-package="org.bk.samples.cachexml"></context:component-scan>
 
 <bean id="cacheManager" class="org.springframework.cache.support.SimpleCacheManager">
  <property name="caches">
   <set>
    <ref bean="defaultCache"/>
   </set>
  </property>
 </bean>
 
 <bean name="defaultCache" class="org.springframework.cache.concurrent.ConcurrentMapCacheFactoryBean">
  <property name="name" value="default"/>
 </bean>

The factory bean ConcurrentMapCacheFactoryBean is in turn responsible for creating the Cache bean.

My first attempt at translating this setup to a @Configuration style was the following:


@Bean
public SimpleCacheManager cacheManager(){
 SimpleCacheManager cacheManager = new SimpleCacheManager();
 List<Cache> caches = new ArrayList<Cache>();
 ConcurrentMapCacheFactoryBean cacheFactoryBean = new ConcurrentMapCacheFactoryBean();
 cacheFactoryBean.setName("default");
 caches.add(cacheFactoryBean.getObject());
 cacheManager.setCaches(caches);
 return cacheManager;
}

This did not work, however. The reason is that I bypassed some of Spring's bean lifecycle mechanisms altogether. It turns out that ConcurrentMapCacheFactoryBean also implements the InitializingBean interface and eagerly initializes the cache in InitializingBean's afterPropertiesSet method. By calling factoryBean.getObject() directly, I was completely bypassing the afterPropertiesSet method.

There are two possible solutions:
1. Define the FactoryBean the same way it is defined in the XML:
@Bean
public SimpleCacheManager cacheManager(){
 SimpleCacheManager cacheManager = new SimpleCacheManager();
 List<Cache> caches = new ArrayList<Cache>();
 caches.add(cacheBean().getObject());
 cacheManager.setCaches(caches);
 return cacheManager;
}

@Bean
public ConcurrentMapCacheFactoryBean cacheBean(){
 ConcurrentMapCacheFactoryBean cacheFactoryBean = new ConcurrentMapCacheFactoryBean();
 cacheFactoryBean.setName("default");
 return cacheFactoryBean;
}
In this case, an explicit FactoryBean is returned from a @Bean method, and Spring takes care of calling the lifecycle methods on this bean.

2. Replicate the behavior of the relevant lifecycle methods. In this specific instance, I know that the FactoryBean instantiates a ConcurrentMapCache in its afterPropertiesSet method, so I can replicate that behavior directly this way:

@Bean
public SimpleCacheManager cacheManager(){
 SimpleCacheManager cacheManager = new SimpleCacheManager();
 List<Cache> caches = new ArrayList<Cache>();
 caches.add(cacheBean());
 cacheManager.setCaches(caches);
 return cacheManager;
}

@Bean
public Cache cacheBean(){
 Cache cache = new ConcurrentMapCache("default");
 return cache;
}

Something to keep in mind when translating a FactoryBean from XML to the @Configuration style.

Note:
A working one-page test is available as a gist here:

Tuesday, August 7, 2012

Mergesort using Fork/Join Framework

The objective of this entry is to show a simple example of a Fork/Join RecursiveAction, not to delve too much into the possible optimizations to merge sort or the relative advantages of using a Fork/Join pool over existing Java 6 based implementations like ExecutorService.

The following is a typical implementation of a top-down merge sort algorithm in Java:

import java.lang.reflect.Array;

public class MergeSort {
 public static <T extends Comparable<? super T>> void sort(T[] a) {
  @SuppressWarnings("unchecked")
  T[] helper = (T[])Array.newInstance(a[0].getClass() , a.length);
  mergesort(a, helper, 0, a.length-1);
 }
 
 private static <T extends Comparable<? super T>> void mergesort(T[] a, T[] helper, int lo, int hi){
  if (lo>=hi) return;
  int mid = lo + (hi-lo)/2;
  mergesort(a, helper, lo, mid);
  mergesort(a, helper, mid+1, hi);
  merge(a, helper, lo, mid, hi);  
 }

 private static <T extends Comparable<? super T>> void merge(T[] a, T[] helper, int lo, int mid, int hi){
  for (int i=lo;i<=hi;i++){
   helper[i]=a[i];
  }
  int i=lo,j=mid+1;
  for(int k=lo;k<=hi;k++){
   if (i>mid){
    a[k]=helper[j++];
   }else if (j>hi){
    a[k]=helper[i++];
   }else if(isLess(helper[i], helper[j])){
    a[k]=helper[i++];
   }else{
    a[k]=helper[j++];
   }
  }
 }

 private static <T extends Comparable<? super T>> boolean isLess(T a, T b) {
  return a.compareTo(b) < 0;
 }
}

To quickly describe the algorithm, the following steps are performed recursively:

1. The input data is divided into two halves
2. Each half is sorted
3. The sorted halves are then merged

Merge sort is a canonical example of an implementation using the Java Fork/Join framework, and the following is a straightforward (unoptimized) implementation of merge sort using it:

The recursive task in merge sort can be succinctly expressed as an implementation of RecursiveAction:


 private static class MergeSortTask<T extends Comparable<? super T>> extends RecursiveAction{
  private static final long serialVersionUID = -749935388568367268L;
  private final T[] a;
  private final T[] helper;
  private final int lo;
  private final int hi;
  
  public MergeSortTask(T[] a, T[] helper, int lo, int hi){
   this.a = a;
   this.helper = helper;
   this.lo = lo;
   this.hi = hi;
  }
  @Override
  protected void compute() {
   if (lo>=hi) return;
   int mid = lo + (hi-lo)/2;
   MergeSortTask<T> left = new MergeSortTask<>(a, helper, lo, mid);
   MergeSortTask<T> right = new MergeSortTask<>(a, helper, mid+1, hi);
   invokeAll(left, right);
   merge(this.a, this.helper, this.lo, mid, this.hi);
  }
  private void merge(T[] a, T[] helper, int lo, int mid, int hi){
   for (int i=lo;i<=hi;i++){
    helper[i]=a[i];
   }
   int i=lo,j=mid+1;
   for(int k=lo;k<=hi;k++){
    if (i>mid){
     a[k]=helper[j++];
    }else if (j>hi){
     a[k]=helper[i++];
    }else if(isLess(helper[i], helper[j])){
     a[k]=helper[i++];
    }else{
     a[k]=helper[j++];
    }
   }
  }
  private boolean isLess(T a, T b) {
   return a.compareTo(b) < 0;
  }
 }

MergeSortTask above implements a compute method which takes an array of values, splits it into two parts, creates a MergeSortTask for each part, and forks off two more tasks (hence the name RecursiveAction!). The specific API used here to spawn the subtasks is invokeAll, which returns only when the submitted subtasks are marked as completed. So once the left and right subtasks return, the results are merged in the merge routine.

Given this, the only work left is to submit this task to a ForkJoinPool. ForkJoinPool is analogous to the ExecutorService used for distributing tasks in a thread pool; the difference, to quote ForkJoinPool's API docs:

A ForkJoinPool differs from other kinds of ExecutorService mainly by virtue of employing work-stealing: all threads in the pool attempt to find and execute subtasks created by other active tasks (eventually blocking waiting for work if none exist)
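As an aside (not from the post), the fork/join idiom is easy to see in a minimal RecursiveTask that sums an array - a sketch, with the class name and threshold chosen arbitrarily for illustration:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative only: sums a range of an array by recursively splitting it
class SumTask extends RecursiveTask<Long> {
 private final long[] a;
 private final int lo, hi;

 SumTask(long[] a, int lo, int hi) {
  this.a = a;
  this.lo = lo;
  this.hi = hi;
 }

 @Override
 protected Long compute() {
  if (hi - lo <= 1000) { // small enough: compute sequentially
   long sum = 0;
   for (int i = lo; i < hi; i++) sum += a[i];
   return sum;
  }
  int mid = lo + (hi - lo) / 2;
  SumTask left = new SumTask(a, lo, mid);
  SumTask right = new SumTask(a, mid, hi);
  left.fork();                     // run the left half asynchronously
  long rightSum = right.compute(); // compute the right half in this thread
  return rightSum + left.join();   // wait for the left half and combine
 }
}

public class SumDemo {
 public static void main(String[] args) {
  long[] a = new long[10000];
  for (int i = 0; i < a.length; i++) a[i] = i + 1;
  long sum = new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
  System.out.println(sum); // 1 + 2 + ... + 10000 = 50005000
 }
}
```

Note the fork()/compute()/join() pattern: forking one half and computing the other in the current thread avoids idling while subtasks run.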

This is how submitting the task to the Fork/Join pool looks:

 public static <T extends Comparable<? super T>> void sort(T[] a) {
  @SuppressWarnings("unchecked")
  T[] helper = (T[])Array.newInstance(a[0].getClass() , a.length);
  ForkJoinPool forkJoinPool = new ForkJoinPool(10);
  forkJoinPool.invoke(new MergeSortTask<T>(a, helper, 0, a.length-1));
 }

A complete sample is also available here: https://github.com/bijukunjummen/algos/blob/master/src/main/java/org/bk/algo/sort/algo04/merge/MergeSortForkJoin.java

Sunday, August 5, 2012

Accept header vs Content-Type Header

I occasionally get confused between the Accept and Content-Type headers, and this post is a way of clarifying the difference for myself. Let me summarize the difference to start with and then go into a little more detail.
Accept and Content-Type are both headers sent from a client (a browser, say) to a service.
The Accept header is a way for a client to specify the media type of the response content it expects, and Content-Type is a way to specify the media type of the request being sent from the client to the server.

To expand on this:

The Accept header, to quote the HTTP/1.1 RFC:

The Accept request-header field can be used to specify certain media types which are acceptable for the response. 
An example of an Accept header for a JSON request to a REST-based service would be the following:
Accept: application/json

This says that the expected response content is JSON.

Content-Type, to quote the HTTP/1.1 RFC:

The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
As a sample, if JSON is being sent from a browser to a server, then the Content-Type header would look like this:
Content-Type: application/json
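To make this concrete, here is a minimal sketch of a client setting both headers with java.net.HttpURLConnection; the endpoint URL is hypothetical, and no request is actually sent (openConnection does not connect):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeaderExample {
 public static void main(String[] args) throws Exception {
  // Hypothetical endpoint - illustrative only, nothing is sent over the network here
  URL url = new URL("http://localhost:8080/items");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  conn.setRequestMethod("POST");
  // Content-Type: the media type of the JSON body the client is sending
  conn.setRequestProperty("Content-Type", "application/json");
  // Accept: the media type the client expects back in the response
  conn.setRequestProperty("Accept", "application/json");
  System.out.println(conn.getRequestProperty("Accept"));
  System.out.println(conn.getRequestProperty("Content-Type"));
 }
}
```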