Saturday, December 27, 2014

Spring retry - ways to integrate with your project

If you have a need to implement robust retry logic in your code, a proven way is to use the Spring Retry library. My objective here is not to show how to use the Spring Retry project itself, but to demonstrate the different ways that it can be integrated into your codebase.

Consider a service to invoke an external system:

package retry.service;

public interface RemoteCallService {
    String call() throws Exception;
}


Assume that this call can fail, and you want it retried up to three times with a 2 second delay between attempts. To simulate this behavior I have defined a mock service using Mockito this way; note that the mock is being returned as a Spring bean:

@Bean
public RemoteCallService remoteCallService() throws Exception {
    RemoteCallService remoteService = mock(RemoteCallService.class);
    when(remoteService.call())
            .thenThrow(new RuntimeException("Remote Exception 1"))
            .thenThrow(new RuntimeException("Remote Exception 2"))
            .thenReturn("Completed");
    return remoteService;
}
So essentially this mocked service fails 2 times and succeeds with the third call.

And this is the test for the retry logic:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SpringRetryTests {

    @Autowired
    private RemoteCallService remoteCallService;

    @Test
    public void testRetry() throws Exception {
        String message = this.remoteCallService.call();
        verify(remoteCallService, times(3)).call();
        assertThat(message, is("Completed"));
    }
}

We are ensuring that the service is called 3 times to account for the first two failed calls and the third call which succeeds.

If we were to directly incorporate spring-retry at the point of calling this service, then the code would have looked like this:
@Test
public void testRetry() throws Exception {
    String message = this.retryTemplate.execute(context -> this.remoteCallService.call());
    verify(remoteCallService, times(3)).call();
    assertThat(message, is("Completed"));
}
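
The retryTemplate bean used in this test is not shown above; a minimal configuration matching the three-attempts-with-a-2-second-delay requirement could look along these lines (a sketch using Spring-retry's SimpleRetryPolicy and FixedBackOffPolicy):

@Bean
public RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();

    // retry a maximum of 3 times
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(3);
    retryTemplate.setRetryPolicy(retryPolicy);

    // wait 2 seconds between attempts
    FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
    backOffPolicy.setBackOffPeriod(2000);
    retryTemplate.setBackOffPolicy(backOffPolicy);
    return retryTemplate;
}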

Invoking the RetryTemplate explicitly at each call site is not ideal however; a better way would be one where the callers don't have to be explicitly aware of the fact that there is retry logic in place.

Given this, the following are the approaches to incorporate Spring-retry logic.

Approach 1: Custom Aspect to incorporate Spring-retry

This approach should be fairly intuitive as the retry logic can be considered a cross cutting concern and a good way to implement a cross cutting concern is using Aspects. An aspect which incorporates the Spring-retry would look something along these lines:

package retry.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.retry.support.RetryTemplate;

@Aspect
public class RetryAspect {

    private static Logger logger = LoggerFactory.getLogger(RetryAspect.class);

    @Autowired
    private RetryTemplate retryTemplate;

    @Pointcut("execution(* retry.service..*(..))")
    public void serviceMethods() {
        //
    }

    @Around("serviceMethods()")
    public Object aroundServiceMethods(ProceedingJoinPoint joinPoint) {
        try {
            return retryTemplate.execute(retryContext -> joinPoint.proceed());
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }
}

This aspect intercepts the remote service call and delegates the call to the retryTemplate. A full working test is here.
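
Note that for the aspect to kick in, it has to be registered as a bean with AspectJ auto-proxying enabled; a minimal sketch of such a Java configuration, assuming the retryTemplate bean shown earlier is also registered:

@Configuration
@EnableAspectJAutoProxy
public class RetryAspectConfig {

    @Bean
    public RetryAspect retryAspect() {
        return new RetryAspect();
    }
}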

Approach 2: Using Spring-retry provided advice

Out of the box Spring-retry project provides an advice which takes care of ensuring that targeted services can be retried. The AOP configuration to weave the advice around the service requires dealing with raw xml as opposed to the previous approach where the aspect can be woven using Spring Java configuration. The xml configuration looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="
    http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
    http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

 <aop:config>
  <aop:pointcut id="transactional"
       expression="execution(* retry.service..*(..))" />
  <aop:advisor pointcut-ref="transactional"
      advice-ref="retryAdvice" order="-1"/>
 </aop:config>

</beans>
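
The retryAdvice referenced by the advisor is the interceptor that Spring-retry provides out of the box; a sketch of its bean definition, assuming a retryTemplate bean is available in the context:

<bean id="retryAdvice"
      class="org.springframework.retry.interceptor.RetryOperationsInterceptor">
    <property name="retryOperations" ref="retryTemplate"/>
</bean>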

The full working test is here.

Approach 3: Declarative retry logic

This is the recommended approach; you will see that the code is far more concise than with the previous two approaches. With this approach, the only thing that needs to be done is to declaratively indicate which methods need to be retried:

package retry.service;

import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;

public interface RemoteCallService {
    @Retryable(maxAttempts = 3, backoff = @Backoff(delay = 2000))
    String call() throws Exception;
}

and a full test which makes use of this declarative retry logic, also available here:

package retry;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.annotation.EnableRetry;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import retry.service.RemoteCallService;

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;
import static org.mockito.Mockito.*;


@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SpringRetryDeclarativeTests {

    @Autowired
    private RemoteCallService remoteCallService;

    @Test
    public void testRetry() throws Exception {
        String message = this.remoteCallService.call();
        verify(remoteCallService, times(3)).call();
        assertThat(message, is("Completed"));
    }

    @Configuration
    @EnableRetry
    public static class SpringConfig {

        @Bean
        public RemoteCallService remoteCallService() throws Exception {
            RemoteCallService remoteService = mock(RemoteCallService.class);
            when(remoteService.call())
                    .thenThrow(new RuntimeException("Remote Exception 1"))
                    .thenThrow(new RuntimeException("Remote Exception 2"))
                    .thenReturn("Completed");
            return remoteService;
        }
    }
}

The @EnableRetry annotation activates the processing of @Retryable annotated methods and internally uses logic along the lines of approach 2, without the end user needing to be explicit about it.

I hope this gives you a slightly better taste of how to incorporate Spring-retry in your project. All the code that I have demonstrated here is also available in my github project here: https://github.com/bijukunjummen/test-spring-retry

Tuesday, December 23, 2014

Solving "Water buckets" problem using Scala

I recently came across a puzzle called the "Water Buckets" problem in this book, which totally stumped me.


You have a 12-gallon bucket, an 8-gallon bucket and a 5-gallon bucket. The 12-gallon bucket is full of water and the other two are empty. Without using any additional water how can you divide the twelve gallons of water equally so that two of the three buckets have exactly 6 gallons of water in them?


My nephew and I spent a good deal of time trying to solve it and ultimately gave up.

I remembered then that I have seen a programmatic solution to a similar puzzle being worked out in the "Functional Programming Principles in Scala" Coursera course by Martin Odersky.

This is the gist to the solution completely copied from the course:



and running this program spits out the following 7 step solution! (index 0 is the 12-gallon bucket, 1 is the 8-gallon bucket and 2 is the 5-gallon bucket)

Pour(0,1) 
Pour(1,2) 
Pour(2,0) 
Pour(1,2) 
Pour(0,1) 
Pour(1,2) 
Pour(2,0)

If you are interested in learning more about the code behind this solution, the best way is to follow week 7 of the Coursera course that I have linked above; Martin Odersky does a fantastic job of seemingly coming up with a solution on the fly!

Saturday, December 13, 2014

RabbitMQ - Processing messages serially using Spring integration Java DSL

If you ever have a need to process messages serially with RabbitMQ, with a cluster of listeners processing the messages, the best way that I have seen is to use an "exclusive consumer" flag on a listener, with 1 thread on each listener processing the messages.

The exclusive consumer flag ensures that only 1 consumer can read messages from the specific queue, and 1 thread on that consumer ensures that the messages are processed serially. There is a catch however, which I will go over later.

Let me demonstrate this behavior with a Spring Boot and Spring Integration based RabbitMQ message consumer.

First, this is the configuration for setting up a queue using Spring java configuration; note that since this is a Spring Boot application, it automatically creates a RabbitMQ connection factory when the Spring-amqp library is added to the list of dependencies:

@Configuration
public class RabbitConfig {

    @Autowired
    private ConnectionFactory rabbitConnectionFactory;

    @Bean
    public Queue sampleQueue() {
        return new Queue("sample.queue", true, false, false);
    }

}

Given this sample queue, a listener which gets the messages from this queue and processes them looks like this; the flow is written using the excellent Spring integration Java DSL library:

@Configuration
public class RabbitInboundFlow {
    private static final Logger logger = LoggerFactory.getLogger(RabbitInboundFlow.class);

    @Autowired
    private RabbitConfig rabbitConfig;

    @Autowired
    private ConnectionFactory connectionFactory;

    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
        listenerContainer.setConnectionFactory(this.connectionFactory);
        listenerContainer.setQueues(this.rabbitConfig.sampleQueue());
        listenerContainer.setConcurrentConsumers(1);
        listenerContainer.setExclusive(true);
        return listenerContainer;
    }

    @Bean
    public IntegrationFlow inboundFlow() {
        return IntegrationFlows.from(Amqp.inboundAdapter(simpleMessageListenerContainer()))
                .transform(Transformers.objectToString())
                .handle((m) -> {
                    logger.info("Processed  {}", m.getPayload());
                })
                .get();
    }

}


The flow is very concisely expressed in the inboundFlow method: a message payload from RabbitMQ is transformed from a byte array to a String, and finally processed by simply logging it.

The important part of the flow is the listener configuration: note the flag which sets the consumer to be an exclusive consumer, and that within this consumer the number of processing threads is set to 1. Given this, even if multiple instances of the application are started up, only 1 of the listeners will be able to connect and process messages.
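
To observe the serial behavior, a quick way is to publish a batch of sequenced messages to the queue, say using a RabbitTemplate - a hypothetical snippet, not part of the listener sample:

@Autowired
private RabbitTemplate rabbitTemplate;

public void sendSequencedMessages() {
    // the default exchange routes to the queue named by the routing key
    for (int i = 1; i <= 10; i++) {
        this.rabbitTemplate.convertAndSend("", "sample.queue", "Message " + i);
    }
}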


Now for the catch: consider a case where the processing of a message takes a while to complete and the message rolls back during processing. If the instance of the application handling the message were to be stopped in the middle of processing, a different instance would start handling the messages in the queue; when the stopped instance rolls back the message, the rolled back message is then redelivered to the new exclusive consumer, and that message is thus received out of order.

If you are interested in exploring this further, here is a github project to play with this feature: https://github.com/bijukunjummen/test-rabbit-exclusive

Saturday, November 29, 2014

Spring RestTemplate with a linked resource

Spring Data REST is an awesome project that provides mechanisms to expose the resources underlying a Spring Data based repository as REST resources.

Exposing a service with a linked resource


Consider two simple JPA based entities, Course and Teacher:

@Entity
@Table(name = "teachers")
public class Teacher {
 @Id
 @GeneratedValue(strategy = GenerationType.AUTO)
 @Column(name = "id")
 private Long id;

 @Size(min = 2, max = 50)
 @Column(name = "name")
 private String name;

 @Column(name = "department")
 @Size(min = 2, max = 50)
 private String department;    
    ...
}

@Entity
@Table(name = "courses")
public class Course {
 @Id
 @GeneratedValue(strategy = GenerationType.AUTO)
 @Column(name = "id")
 private Long id;

 @Size(min = 1, max = 10)
 @Column(name = "coursecode")
 private String courseCode;

 @Size(min = 1, max = 50)
 @Column(name = "coursename")
 private String courseName;

 @ManyToOne
 @JoinColumn(name = "teacher_id")
 private Teacher teacher;
 
       ....
}

essentially, a Course has a many-to-one relation to a Teacher.

Now, all it takes to expose these entities as REST resources is to add a @RepositoryRestResource annotation on their JPA based Spring Data repositories, first for the "Teacher" resource:
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import univ.domain.Teacher;

@RepositoryRestResource
public interface TeacherRepo extends JpaRepository<Teacher, Long> {
}

and for exposing the Course resource:

@RepositoryRestResource
public interface CourseRepo extends JpaRepository<Course, Long> {
}

With this done and assuming a few teachers and a few courses are already in the datastore, a GET on courses would yield a response of the following type:

{
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/courses{?page,size,sort}",
      "templated" : true
    }
  },
  "_embedded" : {
    "courses" : [ {
      "courseCode" : "Course1",
      "courseName" : "Course Name 1",
      "version" : 0,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/api/courses/1"
        },
        "teacher" : {
          "href" : "http://localhost:8080/api/courses/1/teacher"
        }
      }
    }, {
      "courseCode" : "Course2",
      "courseName" : "Course Name 2",
      "version" : 0,
      "_links" : {
        "self" : {
          "href" : "http://localhost:8080/api/courses/2"
        },
        "teacher" : {
          "href" : "http://localhost:8080/api/courses/2/teacher"
        }
      }
    } ]
  },
  "page" : {
    "size" : 20,
    "totalElements" : 2,
    "totalPages" : 1,
    "number" : 0
  }
}

and a specific course looks like this:
{
  "courseCode" : "Course1",
  "courseName" : "Course Name 1",
  "version" : 0,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/courses/1"
    },
    "teacher" : {
      "href" : "http://localhost:8080/api/courses/1/teacher"
    }
  }
}

If you are wondering what "_links" and "_embedded" are: Spring Data REST uses Hypertext Application Language (or HAL for short) to represent links, say the one between a course and a teacher.

HAL Based REST service - Using RestTemplate


Given this HAL based REST service, the question that I had in my mind was how to write a client to this service. I am sure there are better ways of doing this, but what follows worked for me and I welcome any cleaner ways of writing the client.

First, I modified the RestTemplate to register a custom JSON converter that understands HAL based links:

public RestTemplate getRestTemplateWithHalMessageConverter() {
 RestTemplate restTemplate = new RestTemplate();
 List<HttpMessageConverter<?>> existingConverters = restTemplate.getMessageConverters();
 List<HttpMessageConverter<?>> newConverters = new ArrayList<>();
 newConverters.add(getHalMessageConverter());
 newConverters.addAll(existingConverters);
 restTemplate.setMessageConverters(newConverters);
 return restTemplate;
}

private HttpMessageConverter getHalMessageConverter() {
 ObjectMapper objectMapper = new ObjectMapper();
 objectMapper.registerModule(new Jackson2HalModule());
 MappingJackson2HttpMessageConverter halConverter = new TypeConstrainedMappingJackson2HttpMessageConverter(ResourceSupport.class);
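 // HAL_JSON is assumed here to be a statically imported MediaType for "application/hal+json"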
 halConverter.setSupportedMediaTypes(Arrays.asList(HAL_JSON));
 halConverter.setObjectMapper(objectMapper);
 return halConverter;
}

The Jackson2HalModule is provided by the Spring HATEOAS project and understands the HAL representation.


Given this shiny new RestTemplate, first let us create a Teacher entity:

Teacher teacher1 = new Teacher();
teacher1.setName("Teacher 1");
teacher1.setDepartment("Department 1");
URI teacher1Uri =
  testRestTemplate.postForLocation("http://localhost:8080/api/teachers", teacher1);

Note that when the entity is created, the response is an HTTP status code of 201 with the Location header pointing to the uri of the newly created resource. Spring RestTemplate provides a neat way of POSTing and getting hold of this Location header through its API. So now we have a teacher1Uri representing the newly created teacher.

Given this teacher URI, let us now retrieve the teacher; the raw json for the teacher resource looks like the following:
{
  "name" : "Teacher 1",
  "department" : "Department 1",
  "version" : 0,
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/api/teachers/1"
    }
  }
}

and to retrieve this using RestTemplate:
ResponseEntity<Resource<Teacher>> teacherResponseEntity
  = testRestTemplate.exchange("http://localhost:8080/api/teachers/1", HttpMethod.GET, null, new ParameterizedTypeReference<Resource<Teacher>>() {
});

Resource<Teacher> teacherResource = teacherResponseEntity.getBody();

Link teacherLink = teacherResource.getLink("self");
String teacherUri = teacherLink.getHref();

Teacher teacher = teacherResource.getContent();

Jackson2HalModule is the one which helps unpack the links so cleanly and get hold of the Teacher entity itself. I have previously explained ParameterizedTypeReference here.


Now, to a more tricky part, creating a Course.

Creating a course is tricky, as it has a relation to the Teacher and representing this relation using HAL is not that straightforward. A raw POST to create the course would look like this:

 {
      "courseCode" : "Course1",
      "courseName" : "Course Name 1",
      "version" : 0,
      "teacher" : "http://localhost:8080/api/teachers/1"
}

Note how the reference to the teacher is a URI - this is how HAL represents an embedded reference, specifically for POST'ed content. So now to get this form through RestTemplate:

First to create a Course:

Course course1 = new Course();
course1.setCourseCode("Course1");
course1.setCourseName("Course Name 1");

At this point, it is easier to provide the teacher link by dealing with a json tree representation of the course and putting in the teacher link as the teacher uri:

ObjectMapper objectMapper = getObjectMapperWithHalModule();
ObjectNode jsonNodeCourse1 = (ObjectNode) objectMapper.valueToTree(course1);
jsonNodeCourse1.put("teacher", teacher1Uri.getPath());
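
The getObjectMapperWithHalModule() helper used above is essentially the same ObjectMapper set up for the message converter earlier; a sketch:

private ObjectMapper getObjectMapperWithHalModule() {
 ObjectMapper objectMapper = new ObjectMapper();
 objectMapper.registerModule(new Jackson2HalModule());
 return objectMapper;
}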

and posting this should create the course with the linked teacher:

URI course1Uri = testRestTemplate.postForLocation(coursesUri, jsonNodeCourse1);

and to retrieve this newly created Course:

ResponseEntity<Resource<Course>> courseResponseEntity
  = testRestTemplate.exchange(course1Uri, HttpMethod.GET, null, new ParameterizedTypeReference<Resource<Course>>() {
});

Resource<Course> courseResource = courseResponseEntity.getBody();
Link teacherLinkThroughCourse = courseResource.getLink("teacher");

This concludes how to use the RestTemplate to create and retrieve a linked resource, alternate ideas are welcome.

If you are interested in exploring this further, the entire sample is available at this github repo, and the test is here.


References:

Hypertext Application Language (or HAL for short)
HAL Specification
Spring RestTemplate

Sunday, November 23, 2014

Externalizing session state for a Spring-boot application using spring-session

Spring-session is a very cool new project that aims to provide a simpler way of managing sessions in Java based web applications. One of the features that I explored with spring-session recently was the way it supports externalizing session state without needing to fiddle with the internals of specific web containers like Tomcat or Jetty.

To test spring-session I have used a shopping cart type application (available here) which makes heavy use of the session by keeping the items added to the cart as a session attribute.





Consider first a scenario without Spring session.


I am using nginx to load balance across two instances of this application. This set-up is very easy to run using Spring Boot; I brought up two instances of the app using two different server ports, this way:

mvn spring-boot:run -Dserver.port=8080
mvn spring-boot:run -Dserver.port=8082

and this is my nginx.conf to load balance across these two instances:

events {
    worker_connections  1024;
}
http {
    upstream sessionApp {
        server localhost:8080;
        server localhost:8082;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://sessionApp;
        }       
    }
}

I display the port number of the application in the footer just to show which instance is handling the request.

If I were to do nothing to move the state of the session out of the application, then the behavior of the application would be erratic, as a session established on one instance of the application would not be recognized by the other instance - specifically, if Tomcat receives a session id it does not recognize, the behavior is to create a new session.

Introducing Spring session into the application


There are container specific ways to introduce an external session store - one example is here, where Redis is configured as a store for Tomcat. Pivotal Gemfire provides a module to externalize Tomcat's session state.

The advantage of using Spring-session is that there is no dependence on the container at all - maintaining session state becomes an application concern. The instructions on configuring an application to use Spring session are detailed very well at the Spring-session site. Just to quickly summarize how I have configured my Spring Boot application, these are first the dependencies that I have pulled in:

<dependency>
 <groupId>org.springframework.session</groupId>
 <artifactId>spring-session</artifactId>
 <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.springframework.session</groupId>
 <artifactId>spring-session-data-redis</artifactId>
 <version>1.0.0.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.springframework.data</groupId>
 <artifactId>spring-data-redis</artifactId>
 <version>1.4.1.RELEASE</version>
</dependency>
<dependency>
 <groupId>redis.clients</groupId>
 <artifactId>jedis</artifactId>
 <version>2.4.1</version>
</dependency>



and my configuration to use Spring-session for session support; note the Spring Boot specific FilterRegistrationBean which is used to register the session repository filter:

import org.springframework.boot.context.embedded.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;
import org.springframework.session.web.http.SessionRepositoryFilter;
import org.springframework.web.filter.DelegatingFilterProxy;

import java.util.Arrays;

@Configuration
@EnableRedisHttpSession
public class SessionRepositoryConfig {

 @Bean
 @Order(value = 0)
 public FilterRegistrationBean sessionRepositoryFilterRegistration(SessionRepositoryFilter springSessionRepositoryFilter) {
  FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
  filterRegistrationBean.setFilter(new DelegatingFilterProxy(springSessionRepositoryFilter));
  filterRegistrationBean.setUrlPatterns(Arrays.asList("/*"));
  return filterRegistrationBean;
 }

 @Bean
 public JedisConnectionFactory connectionFactory() {
  return new JedisConnectionFactory();
 }
}
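
Once the application is up and a session has been established, a quick way to verify that the session state has indeed moved to Redis is to list the session keys using redis-cli, assuming a default local Redis installation:

redis-cli keys 'spring:session:*'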


And that is it! Magically, all session state is now handled by Spring-session and neatly externalized to Redis.

If I were to retry my previous configuration of using nginx to load balance two different Spring-Boot applications, now backed by the common Redis store, the application just works irrespective of the instance handling the request. I look forward to further enhancements to this excellent new project.

The sample application which makes use of Spring-session is available here: https://github.com/bijukunjummen/shopping-cart-cf-app.git

Tuesday, November 11, 2014

Spring boot war packaging

Spring Boot recommends creating an executable jar with an embedded container (Tomcat or Jetty) at build time and using this executable jar as a standalone process at runtime. It is common however to deploy applications to an external container instead, and Spring Boot provides packaging of the application as a war specifically for this kind of need.

My focus here is not to repeat the already detailed Spring Boot instructions on creating the war artifact, but on testing the created file to see if it would reliably work on a standalone container. I recently had an issue when creating a war from a Spring Boot project and deploying it on Jetty and this is essentially a learning from that experience.
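
Just to recap the gist of it: packaging a Spring Boot application as a war typically involves extending SpringBootServletInitializer so that an external container can bootstrap the application - a minimal sketch, where SampleApplication is a placeholder for the application's main @Configuration class (note that the initializer's package has moved between Boot versions):

public class WarInitializer extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        // point the servlet container at the Spring Boot application class
        return application.sources(SampleApplication.class);
    }
}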

The best way to test whether the war will work reliably is to simply use the jetty-maven and/or tomcat-maven plugins, with the following entries in the pom.xml file:

<plugin>
 <groupId>org.apache.tomcat.maven</groupId>
 <artifactId>tomcat7-maven-plugin</artifactId>
 <version>2.2</version>
</plugin>
<plugin>
 <groupId>org.eclipse.jetty</groupId>
 <artifactId>jetty-maven-plugin</artifactId>
 <version>9.2.3.v20140905</version>
</plugin>

With the plugins in place, starting up the war with the tomcat plugin:
mvn tomcat7:run

and with the jetty plugin:
mvn jetty:run

If there are any issues with the way the war has been created, they should come out at start-up time with these containers. For example, if I were to leave in the embedded tomcat dependencies:

<dependency>
 <groupId>org.springframework.boot</groupId>
 <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency> 

then when starting up the maven tomcat plugin, an error along these lines will show up:
java.lang.ClassCastException: org.springframework.web.SpringServletContainerInitializer cannot be cast to javax.servlet.ServletContainerInitializer

an indication of a servlet jar being packaged with the war file, fixed by specifying the scope as provided in the maven dependencies:
<dependency>
 <groupId>org.springframework.boot</groupId>
 <artifactId>spring-boot-starter-tomcat</artifactId>
 <scope>provided</scope>
</dependency> 

Why both the jetty and tomcat plugins? The reason is that I saw a difference in behavior, specifically with websocket support, with jetty as the runtime but not with tomcat. So consider the websocket dependencies, which are pulled in the following way:
<dependency>
 <groupId>org.springframework.boot</groupId>
 <artifactId>spring-boot-starter-websocket</artifactId>
</dependency> 

This gave me an error when started up using the jetty runtime, and the fix again is to mark the underlying tomcat dependencies as provided, replacing the above with the following:

<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-websocket</artifactId>
</dependency>
<dependency>
 <groupId>org.apache.tomcat.embed</groupId>
 <artifactId>tomcat-embed-websocket</artifactId>
 <scope>provided</scope>
</dependency>
<dependency>
 <groupId>org.springframework</groupId>
 <artifactId>spring-messaging</artifactId>
</dependency>

So to conclude, a quick way to verify whether the war file produced for a Spring-boot application will cleanly deploy to a container (at least Tomcat and Jetty) is to add the tomcat and jetty maven plugins and use these plugins to start the application up. Here is a sample project demonstrating this - https://github.com/bijukunjummen/spring-websocket-chat-sample.git

Wednesday, November 5, 2014

Spring boot based websocket application and capturing http session id

I was involved in a project recently where we needed to capture the http session id for a websocket request - the reason was to determine the number of websocket sessions utilizing the same underlying http session.

The way to do this is based on a sample utilizing the new spring-session module and is described here.

The trick to capturing the http session id is in understanding that before a websocket connection is established between the browser and the server, there is a handshake phase negotiated over http, and the session id is passed to the server during this handshake phase.

Spring Websocket support provides a nice way to register a HandshakeInterceptor, which can be used to capture the http session id and set it into attributes that are later accessible through the sub-protocol (typically STOMP) headers. First, this is the way to capture the session id and set it as an attribute:

public class HttpSessionIdHandshakeInterceptor implements HandshakeInterceptor {

 @Override
 public boolean beforeHandshake(ServerHttpRequest request, ServerHttpResponse response, WebSocketHandler wsHandler, Map<String, Object> attributes) throws Exception {
  if (request instanceof ServletServerHttpRequest) {
   ServletServerHttpRequest servletRequest = (ServletServerHttpRequest) request;
   HttpSession session = servletRequest.getServletRequest().getSession(false);
   if (session != null) {
    attributes.put("HTTPSESSIONID", session.getId());
   }
  }
  return true;
 }

 public void afterHandshake(ServerHttpRequest request, ServerHttpResponse response, WebSocketHandler wsHandler, Exception ex) {
 }
}

And to register this HandshakeInterceptor with Spring Websocket support:

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketDefaultConfig extends AbstractWebSocketMessageBrokerConfigurer {

 @Override
 public void configureMessageBroker(MessageBrokerRegistry config) {
  config.enableSimpleBroker("/topic/", "/queue/");
  config.setApplicationDestinationPrefixes("/app");
 }

 @Override
 public void registerStompEndpoints(StompEndpointRegistry registry) {
  registry.addEndpoint("/chat").withSockJS().setInterceptors(httpSessionIdHandshakeInterceptor());
 }

 @Bean
 public HttpSessionIdHandshakeInterceptor httpSessionIdHandshakeInterceptor() {
  return new HttpSessionIdHandshakeInterceptor();
 }

}

Now that the session id is part of the session attributes carried with the STOMP messages, it can be grabbed via a header accessor; the following is a sample where it is retrieved when subscriptions are registered with the server:

@Component
public class StompSubscribeEventListener implements ApplicationListener<SessionSubscribeEvent> {

 private static final Logger logger = LoggerFactory.getLogger(StompSubscribeEventListener.class);

 @Override
 public void onApplicationEvent(SessionSubscribeEvent sessionSubscribeEvent) {
  StompHeaderAccessor headerAccessor = StompHeaderAccessor.wrap(sessionSubscribeEvent.getMessage());
  logger.info(headerAccessor.getSessionAttributes().get("HTTPSESSIONID").toString());
 }
}

or it can be grabbed from a controller method handling websocket messages as a MessageHeaders parameter:

@MessageMapping("/chats/{chatRoomId}")
 public void handleChat(@Payload ChatMessage message, @DestinationVariable("chatRoomId") String chatRoomId, MessageHeaders messageHeaders, Principal user) {
  logger.info(messageHeaders.toString());
  this.simpMessagingTemplate.convertAndSend("/topic/chats." + chatRoomId, "[" + getTimestamp() + "]:" + user.getName() + ":" + message.getMessage());
 }

Here is a complete working sample which implements this pattern.

Friday, October 31, 2014

Spring Caching abstraction and Google Guava Cache

Spring provides great out-of-the-box support for caching expensive method calls. The caching abstraction is covered in great detail here.

My objective here is to cover one of the newer cache implementations that Spring provides with the 4.0+ version of the framework - using Google Guava Cache.

In brief, consider a service which has a few slow methods:

public class DummyBookService implements BookService {

 @Override
 public Book loadBook(String isbn) {
  // Slow method 1.

 }

 @Override
 public List<Book> loadBookByAuthor(String author) {
  // Slow method 2
 }

}

With the Spring Caching abstraction, repeated calls with the same parameter can be sped up by an annotation on the method, along these lines - here the result of loadBook is being cached into a "book" cache and the listing of books into another "books" cache:

public class DummyBookService implements BookService {

 @Override
 @Cacheable("book")
 public Book loadBook(String isbn) {
  // slow response time..

 }

 @Override
 @Cacheable("books")
 public List<Book> loadBookByAuthor(String author) {
  // Slow listing
 }
}

Now, Caching abstraction support requires a CacheManager to be available, which is responsible for managing the underlying caches used to store the cached results. With the new Guava Cache support, the CacheManager is along these lines:

@Bean
public CacheManager cacheManager() {
 return new GuavaCacheManager("books", "book");
}
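
Note that for the @Cacheable annotations to be honored in the first place, caching support has to be switched on, typically by adding @EnableCaching to the configuration class holding the CacheManager - a minimal sketch:

@Configuration
@EnableCaching
public class CacheConfig {

 @Bean
 public CacheManager cacheManager() {
  return new GuavaCacheManager("books", "book");
 }
}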

Google Guava Cache provides a rich API to pre-load the cache, set the eviction duration based on last access or creation time, set the size of the cache, etc. If the cache is to be customized, then a guava CacheBuilder can be passed to the CacheManager for this customization:

@Bean
public CacheManager cacheManager() {
 GuavaCacheManager guavaCacheManager =  new GuavaCacheManager();
 guavaCacheManager.setCacheBuilder(CacheBuilder.newBuilder().expireAfterAccess(30, TimeUnit.MINUTES));
 return guavaCacheManager;
}

This works well if all the caches have a similar configuration. What if the caches need to be configured differently? For example, in the sample above, I may want the "book" cache to never expire but the "books" cache to expire entries 30 minutes after access; in that case the GuavaCacheManager abstraction does not work well. A better solution is to use a SimpleCacheManager, which provides a more direct way to get to the cache and can be configured this way:

@Bean
public CacheManager cacheManager() {
 SimpleCacheManager simpleCacheManager = new SimpleCacheManager();
 GuavaCache cache1 = new GuavaCache("book", CacheBuilder.newBuilder().build());
 GuavaCache cache2 = new GuavaCache("books", CacheBuilder.newBuilder()
             .expireAfterAccess(30, TimeUnit.MINUTES)
             .build());
 simpleCacheManager.setCaches(Arrays.asList(cache1, cache2));
 return simpleCacheManager;
}

This approach works very nicely; if required, certain caches can be configured to be backed by different caching engines altogether, say a simple hashmap, some by Guava or EhCache, and some by distributed caches like Gemfire.


Wednesday, October 22, 2014

Docker RabbitMQ cluster

I have been trying to create a Docker based RabbitMQ cluster on and off for some time and got it working today - fairly basic and flaky, but it could be a good starting point for others to improve on.

This is how the sample cluster looks on my machine; it is a typical cluster described in the RabbitMQ clustering guide available here - https://www.rabbitmq.com/clustering.html. As recommended at the site, there are 2 disk based nodes and 1 RAM based node here.



To quickly replicate this, you only need to have fig on your machine; just create a fig.yml file with the following entry:

rabbit1:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit1
  ports:
    - "5672:5672"
    - "15672:15672"

rabbit2:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit2
  links:
    - rabbit1
  environment: 
   - CLUSTERED=true
   - CLUSTER_WITH=rabbit1
   - RAM_NODE=true

rabbit3:
  image: bijukunjummen/rabbitmq-server
  hostname: rabbit3
  links:
    - rabbit1
    - rabbit2
  environment: 
   - CLUSTERED=true
   - CLUSTER_WITH=rabbit1   

and in the folder holding this file, run:

  fig up

That is it! The entire cluster should come up. If you need more nodes, just modify the fig.yml file.

The docker files for creating the dockerized rabbitmq-server is available at my github repo here: https://github.com/bijukunjummen/docker-rabbitmq-cluster and the "rabbitmq-server" image itself is here at the docker hub.

Monday, October 13, 2014

Spring @Configuration - RabbitMQ connectivity

I have been playing around with converting an application that I have to use the Spring @Configuration mechanism for configuring connectivity to RabbitMQ - originally I had the configuration described using an xml bean definition file.

So this was my original configuration:

<beans ...>

 <context:property-placeholder/>
 <rabbit:connection-factory id="rabbitConnectionFactory" username="${rabbit.user}" host="localhost" password="${rabbit.pass}" port="5672"/>
 <rabbit:template id="amqpTemplate"
      connection-factory="rabbitConnectionFactory"
      exchange="rmq.rube.exchange"
      routing-key="rube.key"
      channel-transacted="true"/>

 <rabbit:queue name="rmq.rube.queue" durable="true"/>

 <rabbit:direct-exchange name="rmq.rube.exchange" durable="true">
  <rabbit:bindings>
   <rabbit:binding queue="rmq.rube.queue" key="rube.key"></rabbit:binding>
  </rabbit:bindings>
 </rabbit:direct-exchange>


</beans>

This is a fairly simple configuration that:

  • sets up a connection to a RabbitMQ server,
  • creates a durable queue (if not available)
  • creates a durable exchange
  • and configures a binding to send messages to the exchange to be routed to the queue based on a routing key called "rube.key"

This can be translated to the following @Configuration based java configuration:

@Configuration
public class RabbitConfig {

 @Autowired
 private ConnectionFactory rabbitConnectionFactory;

 @Bean
 DirectExchange rubeExchange() {
  return new DirectExchange("rmq.rube.exchange", true, false);
 }

 @Bean
 public Queue rubeQueue() {
  return new Queue("rmq.rube.queue", true);
 }

 @Bean
 Binding rubeExchangeBinding(DirectExchange rubeExchange, Queue rubeQueue) {
  return BindingBuilder.bind(rubeQueue).to(rubeExchange).with("rube.key");
 }

 @Bean
 public RabbitTemplate rubeExchangeTemplate() {
  RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
  r.setExchange("rmq.rube.exchange");
  r.setRoutingKey("rube.key");
  r.setChannelTransacted(true); // mirrors channel-transacted="true" in the xml config
  return r;
 }
}
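
With this configuration in place, sending a message becomes a one-liner, since the exchange and the routing key are already baked into the template - a hypothetical usage:

@Autowired
private RabbitTemplate rubeExchangeTemplate;

public void send(String payload) {
 // uses the exchange and routing key configured on the template
 this.rubeExchangeTemplate.convertAndSend(payload);
}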

This configuration should look much simpler than the xml version. I am cheating a little here though - you may have noticed that the connectionFactory is simply being injected into this configuration without being defined anywhere. So where is that coming from? This is actually part of a Spring Boot based application, and Spring Boot auto-configures a RabbitMQ connectionFactory when the RabbitMQ related libraries are present in the classpath.

Here is the complete configuration if you are interested in exploring further - https://github.com/bijukunjummen/rg-si-rabbit/blob/master/src/main/java/rube/config/RabbitConfig.java

References:

  • Spring-AMQP project here
  • Spring-Boot starter project using RabbitMQ here

Saturday, October 11, 2014

Spring @Configuration and injecting bean dependencies as method parameters

One of the ways Spring recommends injecting inter-dependencies between beans is shown in the following sample, copied from Spring's reference guide here:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo() {
  return new Foo(bar());
 }

 @Bean
 public Bar bar() {
  return new Bar("bar1");
 }

}
So here, bean `foo` is being injected with a `bar` dependency.

However, there is an alternate way to inject a dependency that is not documented well - it is to simply take the dependency as a `@Bean` method parameter, this way:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(Bar bar) {
  return new Foo(bar);
 }

 @Bean
 public Bar bar() {
  return new Bar("bar1");
 }

}

There is a catch here though: the injection is now by type - the `bar` dependency would be resolved by type first and, if duplicates are found, then by name:

@Configuration
public static class AppConfig {

 @Bean
 public Foo foo(Bar bar1) {
  return new Foo(bar1);
 }

 @Bean
 public Bar bar1() {
  return new Bar("bar1");
 }

 @Bean
 public Bar bar2() {
  return new Bar("bar2");
 }
}

In the above sample, the dependency `bar1` will be correctly injected. If you want to be more explicit about it, an @Qualifier annotation can be added in:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(@Qualifier("bar1") Bar bar1) {
  return new Foo(bar1);
 }

 @Bean
 public Bar bar1() {
  return new Bar("bar1");
 }

 @Bean
 public Bar bar2() {
  return new Bar("bar2");
 }
}


So now to the question of whether this is recommended at all - I would say yes, for certain cases. For example, had the bar bean been defined in a different @Configuration class, the way to inject the dependency would be along these lines:

@Configuration
public class AppConfig {

 @Autowired
 @Qualifier("bar1")
 private Bar bar1;

 @Bean
 public Foo foo() {
  return new Foo(bar1);
 }

}

I find the method parameter approach simpler here:

@Configuration
public class AppConfig {

 @Bean
 public Foo foo(@Qualifier("bar1") Bar bar1) {
  return new Foo(bar1);
 }

}


Thoughts?

Sunday, September 28, 2014

Spring WebApplicationInitializer and ApplicationContextInitializer confusion

These are two concepts that I mix up occasionally - a WebApplicationInitializer and an ApplicationContextInitializer - and I wanted to describe each of them to clarify them for myself.

I have previously blogged about WebApplicationInitializer here and here. It is relevant purely in a Servlet 3.0+ spec compliant servlet container and provides a hook to programmatically configure the servlet context. How does this help? You can have a web application potentially without any web.xml file; it is typically used in a Spring based web application to describe the root application context and the Spring web front controller called the DispatcherServlet. An example of using WebApplicationInitializer is the following:

public class CustomWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {
 @Override
 protected Class<?>[] getRootConfigClasses() {
  return new Class<?>[]{RootConfiguration.class};
 }

 @Override
 protected Class<?>[] getServletConfigClasses() {
  return new Class<?>[]{MvcConfiguration.class};
 }

 @Override
 protected String[] getServletMappings() {
  return new String[]{"/"};
 }
}

Now, what is an ApplicationContextInitializer? It is essentially code that gets executed before the Spring application context is completely created. A good use case for an ApplicationContextInitializer would be to set a Spring environment profile programmatically, along these lines:

public class DemoApplicationContextInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

 @Override
 public void initialize(ConfigurableApplicationContext ac) {
  ConfigurableEnvironment appEnvironment = ac.getEnvironment();
  appEnvironment.addActiveProfile("demo");

 }
}

If you have a Spring-Boot based application then registering an ApplicationContextInitializer is fairly straightforward:

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class SampleWebApplication {
 
 public static void main(String[] args) {
  new SpringApplicationBuilder(SampleWebApplication.class)
    .initializers(new DemoApplicationContextInitializer())
    .run(args);
 }
}

For a non Spring-Boot Spring application though, it is a little more tricky. If web.xml is being configured programmatically, then the configuration is along these lines:

public class CustomWebAppInitializer implements WebApplicationInitializer {

 @Override
 public void onStartup(ServletContext container) {
  AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
  rootContext.register(RootConfiguration.class);
  ContextLoaderListener contextLoaderListener = new ContextLoaderListener(rootContext);
  container.addListener(contextLoaderListener);
  container.setInitParameter("contextInitializerClasses", "mvctest.web.DemoApplicationContextInitializer");
  AnnotationConfigWebApplicationContext webContext = new AnnotationConfigWebApplicationContext();
  webContext.register(MvcConfiguration.class);
  DispatcherServlet dispatcherServlet = new DispatcherServlet(webContext);
  ServletRegistration.Dynamic dispatcher = container.addServlet("dispatcher", dispatcherServlet);
  dispatcher.addMapping("/");
 }
}

If it is a normal web.xml based configuration, then the initializer can be specified this way:
<context-param>
    <param-name>contextInitializerClasses</param-name>
    <param-value>com.myapp.spring.SpringContextProfileInit</param-value>
</context-param>

<listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>

So to conclude, except for the Initializer suffix, WebApplicationInitializer and ApplicationContextInitializer serve fairly different purposes. Whereas the WebApplicationInitializer is used by a Servlet 3.0+ container at startup of the web application and provides a way of programmatically creating the web application (a replacement for the web.xml file), an ApplicationContextInitializer provides a hook to configure the Spring application context before it is fully created.

Tuesday, September 16, 2014

Scala and Java 8 type inference in higher order functions sample

One of the concepts mentioned in the Functional Programming in Scala book is type inference in higher order functions in Scala - how it fails in certain situations, and a workaround for the same. Consider a sample higher order function, purely for demonstration:

def filter[A](list: List[A], p: A => Boolean):List[A] = {
 list.filter(p)
} 

Ideally, passing in a list of say integers, you would expect the predicate function to not require an explicit type:

val l = List(1, 5, 9, 20, 30) 

filter(l, i => i < 10)

Type inference does not work in this specific instance, however; the fix is to specify the type explicitly:

filter(l, (i:Int) => i < 10)

Or a better fix is to use currying, then the type inference works!

def filter[A](list: List[A])(p: A=>Boolean):List[A] = {
 list.filter(p)
}                                         

filter(l)(i => i < 10) 
//OR
filter(l)(_ < 10) 

I was curious whether Java 8 type inference has this issue, so I tried a similar sample with a Java 8 lambda expression. The following is an equivalent filter function:

import static java.util.stream.Collectors.toList;

public <A> List<A> filter(List<A> list, Predicate<A> condition) {
 return list.stream().filter(condition).collect(toList());
}

and type inference for the predicate works cleanly:

List<Integer> ints = Arrays.asList(1, 5, 9, 20, 30);
List<Integer> lessThan10 = filter(ints, i -> i < 10);

Another blog entry on a related topic, by the author of the "Functional Programming in Scala" book, is available here - http://pchiusano.blogspot.com/2011/05/making-most-of-scalas-extremely-limited.html

Sunday, September 14, 2014

Customizing HttpMessageConverters with Spring Boot and Spring MVC

Exposing a REST based endpoint for a Spring Boot application, or for that matter a straight Spring MVC application, is straightforward. The following is a controller exposing an endpoint to create an entity based on the content POST'ed to it:

@RestController
@RequestMapping("/rest/hotels")
public class RestHotelController {
        ....
 @RequestMapping(method=RequestMethod.POST)
 public Hotel create(@RequestBody @Valid Hotel hotel) {
  return this.hotelRepository.save(hotel);
 }
}

Internally, Spring MVC uses a component called an HttpMessageConverter to convert the Http request to an object representation and back.

A set of default converters is automatically registered, supporting a whole range of different resource representation formats - json and xml, for instance.

Now, if there is a need to customize the message converters in some way, Spring Boot makes it simple. As an example, consider if the POST method in the sample above needs to be a little more flexible and should ignore properties which are not present in the Hotel entity. Typically this can be done by configuring the Jackson ObjectMapper; all that needs to be done with Spring Boot is to create a new HttpMessageConverter bean, which would end up overriding all the default message converters, this way:

@Bean
 public MappingJackson2HttpMessageConverter mappingJackson2HttpMessageConverter() {
  MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
  ObjectMapper objectMapper = new ObjectMapper();
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
  jsonConverter.setObjectMapper(objectMapper);
  return jsonConverter;
 }
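
With this in place, a POST payload carrying a field that does not exist on the Hotel entity is quietly ignored instead of failing deserialization - a hypothetical payload, with purely illustrative field names:

{
  "name" : "Hotel 1",
  "address" : "Address 1",
  "someUnknownField" : "ignored by the customized ObjectMapper"
}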

This works well for a Spring-Boot application; however, for straight Spring MVC applications which do not make use of Spring-Boot, configuring a custom converter is a little more complicated - the default converters are not registered by default, and an end user has to be explicit about registering the defaults. The following is the relevant code for Spring 4 based applications:

@Configuration
public class WebConfig extends WebMvcConfigurationSupport {

 @Bean
 public MappingJackson2HttpMessageConverter customJackson2HttpMessageConverter() {
  MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
  ObjectMapper objectMapper = new ObjectMapper();
  objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
  jsonConverter.setObjectMapper(objectMapper);
  return jsonConverter;
 }
 
 @Override
 public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
  converters.add(customJackson2HttpMessageConverter());
  super.addDefaultHttpMessageConverters();
 }
}

Here WebMvcConfigurationSupport provides a way to more finely tune the MVC tier configuration of a Spring based application. In the configureMessageConverters method, the custom converter is registered and then an explicit call is made to ensure that the defaults are registered as well. A little more work than for a Spring-Boot based application.

Thursday, August 28, 2014

Spring MVC endpoint documentation with Spring Boot

A long time ago I posted about a way to document all the uri mappings exposed by a typical Spring MVC based application. The steps to do this however are very verbose and require a fairly deep knowledge of some of the underlying Spring MVC components.

Spring Boot makes this kind of documentation way simpler. All you need to do for a Spring-boot based application is to activate the Spring-boot actuator. Adding in the actuator brings a lot more production ready features to a Spring-boot application; my focus however is specifically on the endpoint mappings.

So, first to add in the actuator as a dependency to a Spring-boot application:

<dependency>
 <groupId>org.springframework.boot</groupId>
 <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

and if the Spring-Boot app is started up now, a REST endpoint at the http://machinename:8080/mappings url should be available which lists out all the uri's exposed by the application. A snippet of this information looks like the following in a sample application I have:

{
  "/**/favicon.ico" : {
    "bean" : "faviconHandlerMapping"
  },
  "/hotels/partialsEdit" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/hotels/partialsCreate" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/hotels/partialsList" : {
    "bean" : "viewControllerHandlerMapping"
  },
  "/**" : {
    "bean" : "resourceHandlerMapping"
  },
  "/webjars/**" : {
    "bean" : "resourceHandlerMapping"
  },
  "{[/hotels],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.lang.String mvctest.web.HotelController.list(org.springframework.ui.Model)"
  },
  "{[/rest/hotels/{id}],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.get(long)"
  },
  "{[/rest/hotels],methods=[GET],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.util.List<mvctest.domain.Hotel> mvctest.web.RestHotelController.list()"
  },
  "{[/rest/hotels/{id}],methods=[DELETE],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public org.springframework.http.ResponseEntity<java.lang.Boolean> mvctest.web.RestHotelController.delete(long)"
  },
  "{[/rest/hotels],methods=[POST],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.create(mvctest.domain.Hotel)"
  },
  "{[/rest/hotels/{id}],methods=[PUT],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public mvctest.domain.Hotel mvctest.web.RestHotelController.update(long,mvctest.domain.Hotel)"
  },
  "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public java.lang.String mvctest.web.RootController.onRootAccess()"
  },
  "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" : {
    "bean" : "requestMappingHandlerMapping",
    "method" : "public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)"
  },
  
  ....

Note that by default the json is not formatted; to get formatted json, just ensure that you have the following entry in your application.properties file:

http.mappers.json-pretty-print=true

This listing is much more comprehensive than the listing that I originally had.

The same information can of course be presented in a better way by rendering it to html. I have opted to use angularjs to present this information; the following is the angularjs service factory to retrieve the mappings, and the controller which makes use of this factory to populate a mappings model:

app.factory("mappingsFactory", function($http) {
    var factory = {};
    factory.getMappings = function() {
        return $http.get(URLS.mappingsUrl);
    }
    return factory;
});

app.controller("MappingsCtrl", function($scope, $state, mappingsFactory) {
    function init() {
        mappingsFactory.getMappings().success(function(data) {
           $scope.mappings = data;
        });
    }

    init();
});

The returned mappings model is essentially a map of maps, the key being the uri path exposed by the Spring-Boot application and the value holding the name of the bean handling the endpoint and, if available, the details of the controller method handling the call. This can be rendered using a template of the following form:

<table class="table table-bordered table-striped">
    <thead>
    <tr>
        <th width="50%">Path</th>
        <th width="10%">Bean</th>
        <th width="40%">Method</th>
    </tr>
    </thead>
    <tbody>
    <tr ng-repeat="(k, v) in mappings">
        <td>{{k}}</td>
        <td>{{v.bean}}</td>
        <td>{{v.method}}</td>
    </tr>
    </tbody>
</table>

the final rendered view of the endpoint mappings displays each path along with its handling bean and method.

Here is a sample github project with the rendering implemented: https://github.com/bijukunjummen/spring-boot-mvc-test

Thursday, August 21, 2014

GemFire XD cluster using Docker

I started learning how to build and use Docker containers a few days back, and one of my learning samples has been to build a series of docker containers holding a Pivotal GemFire XD cluster.

First the result and then I will go into some details on how the containers were built -

This is the GemFire XD topology that I wanted to build:


The topology consists of 2 GemFire XD servers, each running in its own process, and a GemFire XD locator to provide connectivity to clients using this cluster and to load-balance between the 2 (or potentially more) server processes.

The following fig definition shows how my cluster is configured:

fig.yml:
locator:
  image: bijukunjummen/gfxd-locator
  ports:
    - "10334"
    - "1527:1527"
    - "7075:7075"

server1:
  image: bijukunjummen/gfxd-server
  ports:
    - "1528:1528"
  links:
    - locator
  environment: 
   - CLIENT_PORT=1528

server2:
  image: bijukunjummen/gfxd-server
  ports:
    - "1529:1529"
  links:
    - locator
  environment: 
   - CLIENT_PORT=1529   

This simple fig definition boots up and starts the 3-container cluster, linking the locator to the GemFire XD servers. Information about this cluster can be viewed through a tool called Pulse that GemFire XD comes packaged with:

This cluster definition is eminently repeatable - I was able to publish the 2 images "gfxd-locator" and "gfxd-server" to Docker Hub and using the fig.yml the entire cluster can be brought up by anybody with a local installation of Docker and Fig.
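
With the cluster up, a client would connect through the locator using the GemFire XD JDBC thin-client driver; the following is a minimal sketch, assuming the gemfirexd client jar is on the classpath and port 1527 is exposed on localhost as in the fig.yml above (the SYS.MEMBERS query is just an illustrative way to peek at the cluster):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GfxdClientDemo {
	public static void main(String[] args) throws Exception {
		// the locator load-balances this connection across the server processes
		try (Connection conn = DriverManager.getConnection("jdbc:gemfirexd://localhost:1527/");
				Statement stmt = conn.createStatement();
				ResultSet rs = stmt.executeQuery("SELECT ID FROM SYS.MEMBERS")) {
			while (rs.next()) {
				System.out.println("cluster member: " + rs.getString(1));
			}
		}
	}
}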

So how were the Docker images created?

I required two different Docker image types: a GemFire XD locator and a GemFire XD server. There is a lot in common between these images; they both use the same GemFire XD installation and differ only in how each of the processes is started up. So I have a base image, defined in the Dockerfile at this github location, which builds on top of the CentOS image and deploys GemFire XD into the image. The GemFire XD server and locator images then derive from this base image, with ENTRYPOINTs specifying how each of the processes should be started up.

The entire project is available at this github location - https://github.com/bijukunjummen/docker-gfxd-cluster and the README instruction there should provide enough information on how to build up and run the cluster.

To conclude, this has been an excellent learning exercise in how Docker works and how easy Fig makes it to orchestrate multiple containers into a cohesive cluster and to share a repeatable configuration.

I would like to thank my friends Alvin Henrick and Jeff Cherng for their help with a good part of the Docker and GemFire XD configurations!

Friday, August 1, 2014

Deploying a Spring boot application to Cloud Foundry with Spring-Cloud

I have a small Spring boot based application that uses a Postgres database as a datastore. I wanted to document the steps involved in deploying this sample application to Cloud Foundry.

Some of the steps are described in the Spring Boot reference guide; however, the guides do not sufficiently explain how to integrate with a datastore provided in a cloud-based environment.

Spring-cloud provides the glue for Spring-based applications deployed on a cloud to discover and connect to bound services, so the first step is to pull the Spring-cloud libraries into the project with the following pom entries:

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-spring-service-connector</artifactId>
	<version>1.0.0.RELEASE</version>
</dependency>

<dependency>
	<groupId>org.springframework.cloud</groupId>
	<artifactId>spring-cloud-cloudfoundry-connector</artifactId>
	<version>1.0.0.RELEASE</version>
</dependency>

Once these dependencies are pulled in, connecting to a bound service is easy; just define a configuration along these lines:
import javax.sql.DataSource;

import org.springframework.cloud.config.java.AbstractCloudConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PostgresCloudConfig extends AbstractCloudConfig {

	@Bean
	public DataSource dataSource() {
		return connectionFactory().dataSource();
	}

}

Spring-Cloud detects that the application is deployed on a specific cloud (currently Cloud Foundry and Heroku, recognized by looking for certain characteristics of the deployed cloud platform), discovers the bound services, recognizes that there is a bound service from which a Postgres-based datasource can be created, and returns the datasource as a Spring bean.
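
The same discovery can also be observed programmatically through the connectors API; a small sketch, purely for illustration:

import org.springframework.cloud.Cloud;
import org.springframework.cloud.CloudFactory;
import org.springframework.cloud.service.ServiceInfo;

public class BoundServicesPeek {
	public static void main(String[] args) {
		// getCloud() throws a CloudException if no known cloud environment is detected
		Cloud cloud = new CloudFactory().getCloud();
		for (ServiceInfo serviceInfo : cloud.getServiceInfos()) {
			System.out.println("bound service: " + serviceInfo.getId());
		}
	}
}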

This application can now deploy cleanly to a Cloud Foundry based cloud. The sample application can be tried out on a version of Cloud Foundry deployed with bosh-lite; these are the steps on my machine once Cloud Foundry is up and running with bosh-lite:

The following command creates a user provided service in Cloud Foundry:
cf create-user-provided-service psgservice -p '{"uri":"postgres://postgres:p0stgr3s@bkunjummen-mbp.local:5432/hotelsdb"}'

Now push the app, but don't start it yet; we can do that once the service above is bound to the app:
cf push spring-boot-mvc-test -p target/spring-boot-mvc-test-1.0.0-SNAPSHOT.war --no-start

Bind the service to the app and restart the app:
cf bind-service spring-boot-mvc-test psgservice
cf restart spring-boot-mvc-test

That is essentially it; Spring-Cloud should take over at this point, cleanly parse the credentials from the bound service (which within Cloud Foundry translates to an environment variable called VCAP_SERVICES), and create the datasource from it.
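
For the curious, this is roughly the kind of parsing that Spring-Cloud saves us from doing by hand; a sketch using Jackson, shaped to match the VCAP_SERVICES sample shown later in this post:

import java.util.List;
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class VcapServicesPeek {
	@SuppressWarnings("unchecked")
	public static void main(String[] args) throws Exception {
		// VCAP_SERVICES maps a service label to a list of service instances
		Map<String, List<Map<String, Object>>> services = new ObjectMapper().readValue(
				System.getenv("VCAP_SERVICES"),
				new TypeReference<Map<String, List<Map<String, Object>>>>() { });
		Map<String, Object> credentials =
				(Map<String, Object>) services.get("postgres").get(0).get("credentials");
		System.out.println(credentials.get("uri"));
	}
}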


There is however an issue with this approach: once the datasource bean is created using the Spring-cloud approach, the application does not work in a local environment anymore.

The potential fix for this is to use Spring profiles; assume that there is a separate "cloud" Spring profile, active only in the cloud environment, under which the Spring-cloud based datasource gets returned:

@Profile("cloud")
@Configuration
public class PostgresCloudConfig extends AbstractCloudConfig {

	@Bean
	public DataSource dataSource() {
		return connectionFactory().dataSource();
	}
}

and let Spring-boot auto-configuration create the datasource in the default local environment; this way the configuration works both locally as well as in the cloud. Where does this "cloud" profile come from? It can be created using an ApplicationContextInitializer, which looks this way:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext;
import org.springframework.cloud.Cloud;
import org.springframework.cloud.CloudException;
import org.springframework.cloud.CloudFactory;
import org.springframework.context.ApplicationContextInitializer;
import org.springframework.core.env.ConfigurableEnvironment;

public class SampleWebApplicationInitializer implements ApplicationContextInitializer<AnnotationConfigEmbeddedWebApplicationContext> {

	private static final Log logger = LogFactory.getLog(SampleWebApplicationInitializer.class);

	@Override
	public void initialize(AnnotationConfigEmbeddedWebApplicationContext applicationContext) {
		Cloud cloud = getCloud();
		ConfigurableEnvironment appEnvironment = applicationContext.getEnvironment();

		if (cloud != null) {
			appEnvironment.addActiveProfile("cloud");
			logger.info("Cloud profile active");
		}
	}

	private Cloud getCloud() {
		try {
			CloudFactory cloudFactory = new CloudFactory();
			return cloudFactory.getCloud();
		} catch (CloudException ce) {
			// not running on a known cloud platform
			return null;
		}
	}
}

This initializer makes use of Spring-cloud's cloud-detection capabilities to activate the "cloud" profile.
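
One detail glossed over above: the initializer has to be registered with the application for it to run. A minimal sketch of one way to do this, with the Application class name being hypothetical; alternatively the initializer can be listed in a META-INF/spring.factories file under the org.springframework.context.ApplicationContextInitializer key:

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
public class Application {
	public static void main(String[] args) {
		// register the initializer so the "cloud" profile can be activated on startup
		new SpringApplicationBuilder(Application.class)
				.initializers(new SampleWebApplicationInitializer())
				.run(args);
	}
}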


One last thing I wanted to try was to make my local environment behave like a cloud, at least in the eyes of Spring-Cloud. This can be done by adding the environment variables using which Spring-Cloud makes the determination of the type of cloud where the application is deployed; the following is my local startup script for the app to pretend it is deployed in Cloud Foundry:

read -r -d '' VCAP_APPLICATION <<'ENDOFVAR'
{"application_version":"1","application_name":"spring-boot-mvc-test","application_uris":[""],"version":"1.0","name":"spring-boot-mvc-test","instance_id":"abcd","instance_index":0,"host":"0.0.0.0","port":61008}
ENDOFVAR

export VCAP_APPLICATION=$VCAP_APPLICATION

read -r -d '' VCAP_SERVICES <<'ENDOFVAR'
{"postgres":[{"name":"psgservice","label":"postgresql","tags":["postgresql"],"plan":"Standard","credentials":{"uri":"postgres://postgres:p0stgr3s@bkunjummen-mbp.local:5432/hotelsdb"}}]}
ENDOFVAR

export VCAP_SERVICES=$VCAP_SERVICES

mvn spring-boot:run

This entire sample is available at this github location: https://github.com/bijukunjummen/spring-boot-mvc-test

Conclusion


Spring Boot along with the Spring-Cloud project now provide an excellent toolset to create Spring-powered, cloud-ready applications; hopefully these notes are useful in integrating Spring Boot with Spring-Cloud and in using them for seamless local and cloud deployments.

Saturday, July 19, 2014

Tailing a file - Spring Websocket sample

This is a sample that I have wanted to try for some time: a WebSocket application to tail the contents of a file.


The following is the final view of the web-application:



There are a few parts to this application:

Generating a File to tail:


I chose to use a set of 100 random quotes as the source of the file content; every few seconds the application generates a quote and writes it to a temporary file. Spring Integration is used for wiring the flow that writes the contents to the file:

<int:channel id="toFileChannel"/>

<int:inbound-channel-adapter ref="randomQuoteGenerator" method="generateQuote" channel="toFileChannel">
	<int:poller fixed-delay="2000"/>
</int:inbound-channel-adapter>

<int:chain input-channel="toFileChannel">
	<int:header-enricher>
		<int:header name="file_name" value="quotes.txt"/>
	</int:header-enricher>
	<int-file:outbound-channel-adapter directory="#{systemProperties['java.io.tmpdir']}" mode="APPEND" />
</int:chain>
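
For completeness, the "randomQuoteGenerator" bean referenced by the inbound channel adapter above could look something along these lines; a minimal sketch, with the class name and quote contents being placeholders of mine:

import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class RandomQuoteGenerator {
	private final List<String> quotes = Arrays.asList(
			"The best way to predict the future is to invent it.",
			"Simplicity is the soul of efficiency.");
	private final Random random = new Random();

	// invoked by the poller every 2 seconds, per the configuration above
	public String generateQuote() {
		return this.quotes.get(this.random.nextInt(this.quotes.size()));
	}
}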

Just a quick note: Spring Integration flows can now also be written using a Java-based DSL, and the Java version of this flow is available here

Tailing the file and sending the content to a broker


The actual tailing of the file can be accomplished by an OS-specific tail command or by using a library like Apache Commons IO. Again, in my case I decided to use Spring Integration, which provides an inbound channel adapter to tail a file purely using configuration; this flow looks like this:
<int:channel id="toTopicChannel"/>

<int-file:tail-inbound-channel-adapter id="fileInboundChannelAdapter"
				channel="toTopicChannel"
				file="#{systemProperties['java.io.tmpdir']}/quotes.txt"
				delay="2000"
				file-delay="10000"/>

<int:outbound-channel-adapter ref="fileContentRecordingService" method="sendLinesToTopic" channel="toTopicChannel"/>
and its working Java equivalent
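
As an aside, for anyone not using Spring Integration, the Apache Commons IO option mentioned above would look roughly like this minimal sketch:

import java.io.File;

import org.apache.commons.io.input.Tailer;
import org.apache.commons.io.input.TailerListenerAdapter;

public class CommonsIoTail {
	public static void main(String[] args) throws Exception {
		File file = new File(System.getProperty("java.io.tmpdir"), "quotes.txt");
		TailerListenerAdapter listener = new TailerListenerAdapter() {
			@Override
			public void handle(String line) {
				// each new line appended to the file lands here
				System.out.println(line);
			}
		};
		// check the file for new content every 2 seconds
		Tailer.create(file, listener, 2000);
		// keep the demo alive; the tailer runs on a daemon thread
		Thread.sleep(60000);
	}
}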

There is a reference to a "fileContentRecordingService" above; this is the component which directs the lines of the file to a destination that the WebSocket clients subscribe to.

Websocket server configuration

Spring WebSocket support makes it super simple to write a WebSocket-based application; in this instance the entire working configuration is the following:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketDefaultConfig extends AbstractWebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry config) {
		//config.enableStompBrokerRelay("/topic/", "/queue/");
		config.enableSimpleBroker("/topic/", "/queue/");
		config.setApplicationDestinationPrefixes("/app");
	}

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/tailfilesep").withSockJS();
	}
}

This may seem a little over the top, but what these few lines of configuration do is very powerful, and the configuration can be better understood by going through the reference here. In brief, it sets up a WebSocket endpoint at the '/tailfilesep' URI, this endpoint is enhanced with SockJS support, STOMP is used as a sub-protocol, and the `/topic` and `/queue` destination prefixes are configured on a message broker, which could be a real broker like RabbitMQ or ActiveMQ but in this specific case is an in-memory one.
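
If the in-memory broker were later swapped for a real broker, the change stays confined to this configuration class; a sketch, assuming a local broker listening on the default STOMP port (this is my assumption, not part of the sample):

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketRelayConfig extends AbstractWebSocketMessageBrokerConfigurer {

	@Override
	public void configureMessageBroker(MessageBrokerRegistry config) {
		// relay /topic and /queue destinations to an external STOMP broker,
		// e.g. RabbitMQ with its STOMP plugin on the default port 61613
		config.enableStompBrokerRelay("/topic/", "/queue/")
				.setRelayHost("localhost")
				.setRelayPort(61613);
		config.setApplicationDestinationPrefixes("/app");
	}

	@Override
	public void registerStompEndpoints(StompEndpointRegistry registry) {
		registry.addEndpoint("/tailfilesep").withSockJS();
	}
}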

Going back to the "fileContentRecordingService" once more, this component essentially takes each line of the file and sends it to this in-memory broker; SimpMessagingTemplate facilitates this wiring:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.simp.SimpMessagingTemplate;

public class FileContentRecordingService {
	@Autowired
	private SimpMessagingTemplate simpMessagingTemplate;

	public void sendLinesToTopic(String line) {
		this.simpMessagingTemplate.convertAndSend("/topic/tailfiles", line);
	}
}


Websocket UI configuration

The UI is AngularJS based; the client controller is set up this way and internally uses the JavaScript libraries for SockJS and STOMP support:

var tailFilesApp = angular.module("tailFilesApp",[]);

tailFilesApp.controller("TailFilesCtrl", function ($scope) {
    function init() {
        $scope.buffer = new CircularBuffer(20);
    }

    $scope.initSockets = function() {
        $scope.socket={};
        $scope.socket.client = new SockJS("/tailfilesep");
        $scope.socket.stomp = Stomp.over($scope.socket.client);
        $scope.socket.stomp.connect({}, function() {
            $scope.socket.stomp.subscribe("/topic/tailfiles", $scope.notify);
        });
        $scope.socket.client.onclose = $scope.reconnect;
    };

    $scope.notify = function(message) {
        $scope.$apply(function() {
            $scope.buffer.add(angular.fromJson(message.body));
        });
    };

    $scope.reconnect = function() {
        setTimeout($scope.initSockets, 10000);
    };

    init();
    $scope.initSockets();
});

The meat of this code is the "notify" function, which is the callback acting on the messages from the server, in this instance the new lines coming into the file, which are then shown in a textarea.


This wraps up the entire application to tail a file. A complete working sample without any external dependencies is available at this github location; instructions to start it up are also available at that location.

Conclusion

Spring WebSockets provides a concise way to create WebSocket-based applications; this sample provides a good demonstration of this support. I presented on this topic recently at my local JUG (IndyJUG), and a deck with the presentation is available here

Friday, July 4, 2014

Scala Tail Recursion confusion

I was looking at a video of Martin Odersky's keynote during Scala Days 2014, and there was a sample of tail-recursive code that confused me:

@tailrec
private def sameLength[T, U](xs: List[T], ys: List[U]): Boolean = {
  if (xs.isEmpty) ys.isEmpty
  else ys.nonEmpty && sameLength(xs.tail, ys.tail)
}

At a quick glance, this did not appear to be tail-recursive to me, as there is the && operation that needs to be evaluated after the recursive call.

However, thinking a little more about it, && is a short-circuit operator: the recursive call is made only if ys.nonEmpty evaluates to true, and in that case the result of the call is returned as-is, thus maintaining the definition of a tail recursion.

The decompiled class clarifies this a little more; surprisingly, the && operator does not appear anywhere in the decompiled code:

public <T, U> boolean org$bk$sample$SameLengthTest$$sameLength(List<T> xs, List<U> ys)
  {
    for (; ys.nonEmpty(); xs = (List)xs.tail()) ys = (List)ys.tail();
    return 
      xs.isEmpty() ? ys.isEmpty() : 
      false;
  }

If the operator were changed to something that does not have short-circuit behavior, the method would of course no longer be tail-recursive; consider a hypothetical version using the XOR operator (annotating this one with @tailrec would fail compilation):

private def notWorking[T, U](xs: List[T], ys: List[U]): Boolean = {
  if (xs.isEmpty) ys.isEmpty
  else ys.nonEmpty ^ notWorking(xs.tail, ys.tail)
}

Something fairly basic that tripped me up today!

Sunday, June 29, 2014

Spring Integration Java DSL sample - further simplification with Jms namespace factories

In an earlier blog entry I touched on a fictitious Rube Goldberg flow for capitalizing a string through a complicated series of steps; the premise of the article was to introduce the Spring Integration Java DSL as an alternative to defining integration flows through XML configuration files.

I learned a few new things after writing that blog entry, thanks to Artem Bilan, and wanted to document those learnings here:


So, first my original sample; here I have the following flow (the ones in bold):

  1. Take in a message of this type - "hello from spring integ"
  2. Split it up into individual words (hello, from, spring, integ)
  3. Send each word to an ActiveMQ queue
  4. Pick up the word fragments from the queue and capitalize each word
  5. Place the response back into a response queue
  6. Pick up the message, re-sequence based on the original sequence of the words
  7. Aggregate back into a sentence ("HELLO FROM SPRING INTEG") and
  8. Return the sentence back to the calling application.

EchoFlowOutbound.java:
@Bean
 public DirectChannel sequenceChannel() {
  return new DirectChannel();
 }

 @Bean
 public DirectChannel requestChannel() {
  return new DirectChannel();
 }

 @Bean
 public IntegrationFlow toOutboundQueueFlow() {
  return IntegrationFlows.from(requestChannel())
    .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
    .handle(jmsOutboundGateway())
    .get();
 }

 @Bean
 public IntegrationFlow flowOnReturnOfMessage() {
  return IntegrationFlows.from(sequenceChannel())
    .resequence()
    .aggregate(aggregate ->
      aggregate.outputProcessor(g ->
        Joiner.on(" ").join(g.getMessages()
          .stream()
          .map(m -> (String) m.getPayload()).collect(toList())))
      , null)
    .get();
 }

@Bean
public JmsOutboundGateway jmsOutboundGateway() {
 JmsOutboundGateway jmsOutboundGateway = new JmsOutboundGateway();
 jmsOutboundGateway.setConnectionFactory(this.connectionFactory);
 jmsOutboundGateway.setRequestDestinationName("amq.outbound");
 jmsOutboundGateway.setReplyChannel(sequenceChannel());
 return jmsOutboundGateway;
}

It turns out, based on Artem Bilan's feedback, that a few things can be optimized here.

First, notice how I have explicitly defined two direct channels: "requestChannel" for starting the flow that takes in the string message, and "sequenceChannel" to handle the message once it returns from the JMS message queue. These can actually be totally removed and the flow made a little more concise this way:

@Bean
public IntegrationFlow toOutboundQueueFlow() {
 return IntegrationFlows.from("requestChannel")
   .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
   .handle(jmsOutboundGateway())
   .resequence()
   .aggregate(aggregate ->
     aggregate.outputProcessor(g ->
       Joiner.on(" ").join(g.getMessages()
         .stream()
         .map(m -> (String) m.getPayload()).collect(toList())))
     , null)
   .get();
}

@Bean
public JmsOutboundGateway jmsOutboundGateway() {
 JmsOutboundGateway jmsOutboundGateway = new JmsOutboundGateway();
 jmsOutboundGateway.setConnectionFactory(this.connectionFactory);
 jmsOutboundGateway.setRequestDestinationName("amq.outbound");
 return jmsOutboundGateway;
}


"requestChannel" is now being implicitly created just by declaring a name for it. The sequence channel is more interesting, quoting Artem Bilan -
do not specify outputChannel for AbstractReplyProducingMessageHandler and rely on DSL
, what it means is that here jmsOutboundGateway is a AbstractReplyProducingMessageHandler and its reply channel is implicitly derived by the DSL. Further, two methods which were earlier handling the flows for sending out the message to the queue and then continuing once the message is back, is collapsed into one. And IMHO it does read a little better because of this change.


The second good change, and the topic of this article, is the introduction of the Jms namespace factories. When I wrote the previous blog article, the DSL had support for defining the AMQP inbound/outbound adapters and gateways; now there is support for Jms-based inbound/outbound adapters and gateways also. This simplifies the flow even further; the flow now looks like this:

@Bean
public IntegrationFlow toOutboundQueueFlow() {
 return IntegrationFlows.from("requestChannel")
   .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
   .handle(Jms.outboundGateway(connectionFactory)
     .requestDestination("amq.outbound"))
   .resequence()
   .aggregate(aggregate ->
     aggregate.outputProcessor(g ->
       Joiner.on(" ").join(g.getMessages()
         .stream()
         .map(m -> (String) m.getPayload()).collect(toList())))
     , null)
   .get();
}

The inbound Jms part of the flow also simplifies to the following:

@Bean
public IntegrationFlow inboundFlow() {
 return IntegrationFlows.from(Jms.inboundGateway(connectionFactory)
   .destination("amq.outbound"))
   .transform((String s) -> s.toUpperCase())
   .get();
}


Thus, to conclude, the Spring Integration Java DSL is an exciting new way to concisely configure Spring Integration flows. It is already very impressive in how it simplifies the readability of flows, and the introduction of the Jms namespace factories takes it even further for JMS-based flows.


I have updated my sample application with the changes that I have listed in this article - https://github.com/bijukunjummen/rg-si