On September 17, 2011 christopher wrote: Handling Multiple File Uploads With Uploadify

More JavaScript fun this week, during which I have been playing around with Uploadify, one of a slew of ready-made JavaScript + Flash solutions for handling multiple file uploads in the browser. I have chosen Uploadify over the alternatives because I personally find it the simplest to use. Also, it is provided as an extension to jQuery, which is probably my favourite JavaScript library. Last, but not least, it works … a quality never to be underestimated in software!

As usual, I am going to be demonstrating its use in conjunction with a Spring-Velocity web app. Rather than go through the whole process of setting up such an app again, I would refer you to my post on ActiveMQ, JMS & Ajax. The structure of the project I talk about here is identical to that, but with fewer dependencies, and, of course, you can download the code if you want to take a closer look.

Let’s begin by imagining what our use case here might look like. For the sake of argument, I am going to say that what we want is this:

  • The user needs to be able to upload one or more files.
  • The names of these files must match a naming convention from which the application infers some kind of semantic value (which might be used to process them in different ways, for example).
  • When the user uploads valid file(s), they should see a list of the files uploaded with the semantic information we have parsed from them via their name(s).
  • When the user uploads invalid file(s), they should see information about this error.

In my domain model, therefore, I have 2 classes:

public enum UploadFileType {

    BAR(Pattern.compile("(?i)^[a-z0-9]+-bar\\.[a-z]+$")),
    BAZ(Pattern.compile("(?i)^[a-z0-9]+-baz\\.[a-z]+$")),
    FOO(Pattern.compile("(?i)^[a-z0-9]+-foo\\.[a-z]+$"));

    public static UploadFileType forName(String name) {
        for (UploadFileType type : values()) {
            if (type.p.matcher(name).matches()) return type;
        }
        throw new IllegalArgumentException(name + " does not match any of the patterns in the naming convention");
    }

    private Pattern p;

    private UploadFileType(Pattern p) {
        this.p = p;
    }

}
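
Used on its own, the naming convention works like this (the file names below are purely hypothetical examples that happen to fit the patterns above):

UploadFileType foo = UploadFileType.forName("summary2011-foo.txt"); // returns FOO
UploadFileType bar = UploadFileType.forName("img01-bar.png");       // returns BAR
UploadFileType.forName("readme.txt"); // throws IllegalArgumentException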

Because the logic required to parse my naming convention is simple, I do not really need any kind of complex factory … all I need is this simple enum. You give it the file name via the forName(String) method and it returns the type. If no type matches, it throws an IllegalArgumentException, which we will use later to manage error flow. After this, I have a simple Java bean to represent the model for an UploadFile:

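// NB. reflectionEquals, reflectionHashCode and reflectionToString below are (I assume) static imports from commons-lang's EqualsBuilder, HashCodeBuilder and ToStringBuilder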
public class UploadFile {

    private final File file;

    private final String name;

    private final UploadFileType type;

    public UploadFile(File file, String name, UploadFileType type) {
        this.file = file;
        this.name = name;
        this.type = type;
    }

    @Override public boolean equals(Object obj) {
        return reflectionEquals(this, obj);
    }

    public File getFile() {
        return file;
    }

    public String getName() {
        return name;
    }

    public UploadFileType getType() {
        return type;
    }

    @Override public int hashCode() {
        return reflectionHashCode(this);
    }

    @Override public String toString() {
        return reflectionToString(this);
    }

}

Now, let’s assume I have a web page with a completely standard HTML form with a file input:

                <form id="upload_file" action="/upload" method="post" enctype="multipart/form-data">
                    <fieldset>
                        <legend>Choose the files to upload</legend>
                        <label for="file">File</label> <input id="file" name="file" type="file" /> <input type="submit" value="Upload" />
                    </fieldset>
                </form>

And, on the server-side, using Spring’s multipart form support, I have a FileController that looks like this:

@Controller public class FileController {

    @RequestMapping(value = "/", method = RequestMethod.GET) public String getFileUploadViewName() {
        return "/fileupload";
    }

    @RequestMapping(value = "/upload", method = RequestMethod.POST) public UploadFile upload(@RequestParam("file") MultipartFile mf) throws IOException {
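        // getName and getExtension below appear to be static imports from commons-io's FilenameUtils (an assumption on my part; any filename helpers would do)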
        String name = getName(mf.getOriginalFilename());
        File f = File.createTempFile(name, "." + getExtension(name));
        mf.transferTo(f);
        return new UploadFile(f, name, UploadFileType.forName(name));
    }

    @ExceptionHandler(IllegalArgumentException.class) @ResponseBody public String handleIllegalArgumentException(IllegalArgumentException e) {
        return "{\"error\":{\"message\":\"" + e.getMessage() + "\"}}";
    }

}

For the sake of clarity, here is what each of the three controller methods does:

  1. getFileUploadViewName() simply returns the Spring view name required to render the HTML file upload form. I have hard-coded this name for now but, in real life, you should probably inject it as configuration.
  2. upload(MultipartFile) is the meat of the controller class: Spring will bind the multipart file from the file input of the submitted form which has name="file". The only processing we are doing with this file for now is creating a temp file from it and determining its “type” using the UploadFileType.forName(String) method. This will either return the type (in which case we have successfully created an UploadFile, which we return as the model object) or it will throw the IllegalArgumentException, in which case our Spring error handler method is called … A sample of the resulting JSON success response is shown just after this list.
  3. The handleIllegalArgumentException method is a Spring error handler method (indicated by the @ExceptionHandler annotation). In such an event, all we do is return a hand-crafted JSON string as the response body with the exception message in it … which is a bit lazy, I admit — but it will do for now.
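
For reference, a successful upload produces a JSON body along these lines — this is the shape the onComplete handler later in the post expects. The exact field set depends on how your JSON view serialises the UploadFile model, so treat this as a plausible example rather than the definitive output:

{"uploadFile":{"name":"summary2011-foo.txt","type":"FOO"}}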

With this in place, we have an end-to-end, conventional HTML single-file upload mechanism. So now it is time to decorate it with a little JavaScript magic so that the whole thing becomes a bit more usable and so that it becomes possible for the user to upload multiple files.

First, to provide the user with the necessary feedback in the page from the file uploads, I am going to add a few placeholder sections to my HTML page within which I will display data from the parsed JSON responses provided by the controller …

        <section id="file_queue">
            <hgroup>
                <h1>File Upload Queue</h1>
                <div id="queue">
                </div>
            </hgroup>
        </section>
        <section id="files">
            <hgroup>
                <h1>Uploaded Files</h1>
                <ol>
                </ol>
            </hgroup>
        </section>
        <section id="errors">
            <hgroup>
                <h1>Errors</h1>
                <ol>
                </ol>
            </hgroup>
        </section>

First, we add a section containing a div with the ID “queue”, which Uploadify will use to display the queue of files as they are being uploaded. This is a built-in feature of Uploadify, so we don’t need to do anything special to get this queue displayed, other than let Uploadify know the ID of our “queue div” by setting the queueID property. After that, I have placed the 2 sections into which we will write the info from the parsed JSON responses from the controller — 1 for successful file uploads and 1 for errors.

Now I am just about ready to sprinkle a little JavaScript on the page … But, stop! Before I do this there is ONE VERY IMPORTANT GOTCHA to deal with!

Now, thanks to the genius minds that work at Adobe, it is apparently impossible to set HTTP headers in requests made by Flash (I know, I know — what kind of nonsense is that?!) I am not a Flash expert, so I don’t know the ins-and-outs of this. However, since Uploadify (and pretty much any JavaScript-based multi-file uploader) uses swfobject.js and Flash, this means that an Accept header cannot be set to request JSON responses. By default, as far as I can work out, Flash will always send an Accept header requesting text/*. If you are using Spring’s ContentNegotiatingViewResolver (which I am in this example), you will need to work around this problem. There are 3 options: the first is to use a .json file extension on the request URI to indicate the required media type for the response. The second is to use a request parameter to achieve the same thing. The third is to filter the request and override the Accept header for Uploadify requests.
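
For reference, the extension and request-parameter options are just a matter of how the ContentNegotiatingViewResolver is configured; a minimal sketch might look like this (the Jackson default view is an assumption about your setup; any JSON-capable view would do):

    <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
        <property name="favorPathExtension" value="true" /><!-- option 1: /upload.json -->
        <property name="favorParameter" value="true" /><!-- option 2: /upload?format=json -->
        <property name="parameterName" value="format" />
        <property name="mediaTypes">
            <map>
                <entry key="json" value="application/json" />
            </map>
        </property>
        <property name="defaultViews">
            <list>
                <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" />
            </list>
        </property>
    </bean>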

In real life, I would probably just send Uploadify requests to /upload.json and the ContentNegotiatingViewResolver would then know to send a JSON response. However, for whatever reason, you might not want or be able to change the URI in this way, so I will demonstrate the Old Skool servlet filter approach (you might think you could also use a Spring HandlerInterceptor, but that leads to call-order problems with the ContentNegotiatingViewResolver — using a classic filter ensures that the request is intercepted and wrapped before it even reaches Spring):

public class UploadifyFilter implements Filter {

    @SuppressWarnings({ "deprecation", "rawtypes" }) private static class UploadifyRequest implements HttpServletRequest {

        private final HttpServletRequest del;

        UploadifyRequest(HttpServletRequest delegate) {
            del = delegate;
        }

        @Override public String getHeader(String name) {
            return "Accept".equals(name) ? "application/json" : del.getHeader(name);
        }

        /* ... all other delegate methods not shown for brevity ... */

    }

    @Override public void init(FilterConfig filterConfig) throws ServletException {}

    @Override public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain) throws IOException, ServletException {
        if (req instanceof HttpServletRequest) chain.doFilter(new UploadifyRequest((HttpServletRequest) req), resp);
        else chain.doFilter(req, resp);
    }

    @Override public void destroy() {}

}

I then simply wire this up in my web.xml to filter requests sent to the /upload URI:

    <filter>
        <filter-name>uploadify</filter-name>
        <filter-class>com.christophertownson.foo.UploadifyFilter</filter-class>
    </filter>

    <filter-mapping>
        <filter-name>uploadify</filter-name>
        <url-pattern>/upload</url-pattern>
    </filter-mapping>

And now I am ready to receive Accept: text/* requests from the “Shockwave Flash” user-agent and ensure that my app understands what content-type it should return. So let’s put in some JavaScript to make it all happen …

function init() {
    $('#file').uploadify({
        'script'         : '/upload', // URI to send the files to (could specify '/upload.json' here rather than using filter)
        'method'         : 'post', // request method to use
        'uploader'       : '/resources/uploadify.swf', // path to key uploadify files
        'expressInstall' : '/resources/expressInstall.swf', // path to key uploadify files
        'cancelImg'      : '/resources/cancel.png', // path to key uploadify files
        'auto'           : true, // automatically start uploading files (don't wait for user to push another button)
        'multi'          : true, // allow upload of multiple files
        'queueSizeLimit' : 10, // the number of files that can be placed into the queue at any 1 time
        'queueID'        : 'queue', // div ID to display the file upload queue in
        'fileDataName'   : 'file', // the request param name for the multipart file
        'fileExt'        : '*.txt;*.jpg;*.gif;*.png', // filter to use in the select files dialog
        'fileDesc'       : 'Upload Files (.txt, .jpg, .gif, .png)', // we MUST provide a description when we specify fileExt otherwise filter will NOT be applied
        // process JSON response to single file upload
        'onComplete'     : function(event, id, file, resp, data) {
            var obj = jQuery.parseJSON(resp);
            if (obj.error) {
                $('#errors ol').append('<li>' + obj.error.message + '</li>');
            } else if (obj.uploadFile) {
                $('#files ol').append('<li>' + obj.uploadFile.name + ' (Type: ' + obj.uploadFile.type +')</li>');
            }
        },
        // process HTTP 5xx errors etc
        'onError'        : function (event, id, file, err) {
            $('#errors ol').append('<li>' + err.type + ' Error: ' + err.info + '</li>')
        }
    });

    $('#upload_file').submit(function() {
        $('#file').uploadifyUpload();
        return false;
    });
}

$(document).ready(init);

It should be fairly evident what this simple JavaScript achieves but I will run through it briefly:

  1. It begins by configuring some basic Uploadify properties including, perhaps most important for my purposes here, 'multi' : true, as this is what permits the upload of multiple files.
  2. After that, we provide an event callback handler function for the onComplete event — this event signals completion of the upload of an individual file within the queue and provides access to the response from the server. In our case, this consists of a JSON string which we deserialize and append to the list of uploaded files or errors, as necessary. I also hook into the onError event to provide similar error handling for Uploadify errors (rather than the logical exceptions thrown by our controller that come back as JSON) … these are commonly things like HTTP 5xx errors and so forth.
  3. Finally, I attach a listener to the submit event of the upload form to prevent normal submission and, instead, get Uploadify to handle the upload.

There are many configuration options and possibilities for event handling within the API, so I would recommend reading the wonderfully concise Uploadify documentation. It really is an exceptionally easy piece of kit to use, especially compared with the pain I recall trying to get some of the early versions of the YUI uploader to work nicely a few years back … although the YUI3 Uploader looks a great improvement on previous versions, is widely used, and would also be well worth considering.

On September 11, 2011 christopher wrote: ActiveMQ, JMS & Ajax: Involving the Browser in Event-Driven Systems

Event-driven architecture is a big topic these days, and with good reason. Aside from providing significant opportunities to increase scale and performance, given the right approach it can also make it easier to divide systems up into logical components. The basic principle behind “event-driven” architectures, as one might expect from the name, is asynchronicity: rather than viewing your application as a rigid sequence of processes, each being called in order as the result of a user action or scheduled execution, components publish semantic events to which one or more other components may be subscribing and in response to which they may execute logic (and perhaps publish subsequent events). So, as you can probably tell, like most buzzwords, “event-driven” really just puts a catchy name to an age-old concept; in this case, “publish-subscribe”. It does so, I would suggest, with the intent of helping developers get their heads into the right space to think about how to actually build systems that are constructed around such a mechanism.

In Java-land, we are fortunate to have a stable API with a number of proven implementations for publish-subscribe messaging, namely JMS, my favourite provider for which is Apache’s ActiveMQ. This is all well and good for handling our needs in this area on the server-side, but what about the user (sitting there with their browser) as an actor in this system? This question becomes very important when you consider that, described in abstract terms, the primary use case in my experience for event-based messaging systems comes when you have users submitting batches of work that demand intensive processing and require notification of its completion.

In the past, one might have handled such situations synchronously: the user would make the request through their browser, then sit and watch the “wheel of death” go around and around until, eventually, hopefully, the request completed. Of course, as soon as you got a few users doing this at the same time, your server became overloaded and the whole thing ground to a halt. So you might have got as far as making this processing asynchronous, but then the user would instead have to resort to hitting the refresh button and going “Is it done yet? Is it done yet? Is it done yet?” … or, worse, you would resort to that bête noire of office “notification methods”: email. Yuck!

Nowadays, this kind of behaviour is totally unnecessary. Making the browser part of your event-driven system is a doddle and requires zero alterations to your existing JMS-based code (assuming that is what you are using). In this post, I would like to provide a really basic example of how to send and receive JMS messages using ActiveMQ’s Ajax support. (Whilst there are better solutions on the horizon — most notably, in my view, WebSockets and STOMP — I have found them more than a little bit painful to integrate with existing code to date, so I will focus on the more stable solution for the time being.)

Let’s begin by assuming that we have an application that contains 1 “processing queue” and 1 “event topic”. With ActiveMQ, I can set up the message broker, the queue, and the topic in a Spring application context file:

    <amq:topic id="eventTopic" name="topic.events" />

    <amq:queue id="processingQueue" name="queue.process" />

    <amq:connectionFactory id="connectionFactory" brokerURL="vm://localhost" />

    <bean id="eventTopicTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="connectionFactory" />
        <property name="defaultDestinationName" value="topic.events" />
    </bean>

Now, let’s say we have some kind of “service” that is responsible for doing this “intensive” asynchronous processing. The service will pick up messages from the processing queue and, when the processing is complete, it will publish a message to any interested subscribers on the event topic saying that processing is complete. The API might look something like this:

package com.christophertownson.foo;

public interface SomeService {

    void doProcessing(String msg);

}

For now, let’s make the most basic implementation possible:

@Service("someService") public class SomeServiceImpl implements SomeService {

    private final JmsTemplate jms;

    @Autowired public SomeServiceImpl(@Qualifier("eventTopicTemplate") JmsTemplate eventTopicTemplate) {
        jms = eventTopicTemplate;
    }

    public void doProcessing(String msg) {
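        // the injected template's defaultDestinationName is topic.events (see the context file above), so this publishes to the event topic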
        jms.convertAndSend("We did something with your message: " + msg);
    }

}

All our implementation does for now is send a message to the event topic to indicate that processing is “complete”. Any number of components may subscribe to this topic. In our case, the browser will initiate processing requests by sending messages to the processing queue and will subscribe to the event topic so that it can be notified when the processing is complete and display this to the user. But first we will wire up our service implementation so that it will receive messages sent to the processing queue:

    <jms:listener-container connection-factory="connectionFactory">
        <jms:listener destination="queue.process" ref="someService" method="doProcessing" />
    </jms:listener-container>

And that is it for the JMS stuff on the server-side. Now let’s turn our attention to the browser. I am going to assume that our user is interacting with our application via some kind of “dashboard” web page. I am going to make my extremely beautiful (not!) dashboard look like this:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="utf-8" />
        <title>Dashboard : Browser JMS Demo</title>
        <link rel="stylesheet" href="/resources/style.css" />
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.3/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript" src="/resources/amq_jquery_adapter.js"></script>
        <script type="text/javascript" src="/resources/amq.js"></script>
        <script type="text/javascript" src="/resources/behaviour.js"></script>
    </head>
    <body>
        <header>
            <hgroup>
                <h1>Browser JMS Demo Dashboard</h1>
            </hgroup>
        </header>
        <section id="send">
            <hgroup>
                <h1>Make something happen</h1>
                <form id="sendMessage" action="/process" method="post">
                    <fieldset>
                        <legend>Send a message to the processing queue.</legend>
                        <label for="msg">Message</label> <input type="text" id="msg" name="msg" maxlength="255" /> <input type="submit" value="Send" />
                    </fieldset>
                </form>
            </hgroup>
        </section>
        <section id="receive">
            <hgroup>
                <h1>Messages sent to the event topic</h1>
                <ol id="messages">
                </ol>
            </hgroup>
        </section>
        <footer>
            <p>Copyright &copy; Christopher Townson 2011</p><!-- yeah, that's right maaan: hands off my design! :p -->
        </footer>
    </body>
</html>

Note the use of the ActiveMQ JavaScript files on lines 8–9: these are provided as part of any ActiveMQ distribution package, if you were wondering where they come from. “Adapters” are also provided for Prototype and Dojo, but I am using jQuery in my example, so I opted for the jQuery adapter.

I will assume for now that you know how to set up some kind of controller to deliver this HTML file and will not go into the details of that here. As usual, I am using a Spring-Velocity setup to achieve this. However, in this instance, that is not important because “controllers” are not really being used at all in this example.

So, let’s take a look at that behaviour.js file referenced on line 10. This is where I am putting the important stuff to actually send and receive JMS messages …

function init() {
    var amq = org.activemq.Amq;
    amq.init({
        uri: 'amq',
        logging: true,
        timeout: 20
    });

    amq.addListener('theBrowser', 'topic.events', function(msg) {
        $('#messages').append('<li>' + msg.textContent + '</li>')
    });

    $('#sendMessage').submit(function() {
        var msg = $('#msg').val();
        amq.sendMessage('queue.process', msg);
        $('#sendMessage').after('<p id="ack">Sent message: "' + msg + '"</p>');
        $("#ack").fadeOut(2000, function () {
            $("#ack").remove();
        });
        return false;
    });
}

$(document).ready(init);

Let’s go through this step-by-step …

  1. On lines 2–7, we grab a reference to the ActiveMQ Ajax client (org.activemq.Amq) and initialise it.
  2. On lines 9–11, we set up listening for messages on the event topic, passing a client ID, the name of the destination we want to listen to (i.e. the event topic in this case) and a callback to execute when a message is received … I am just going to add the message text to a list on the page for now.
  3. On lines 13–21, I intercept message form submissions, sending a JMS message rather than continuing with a normal HTTP form submission. As a little piece of sugar, I have a fade animation alert to say that the message has been sent after you submit the form. (In real life, you should make this degrade gracefully by having a handler on the server-side to send the JMS message and making your JavaScript code perform an asynchronous request to that handler, rather than sending the message directly as I am doing here.)

The final piece of the jigsaw puzzle is the ActiveMQ AjaxServlet. This is part of the activemq-web module, which you will need to declare as a dependency in addition to the usual activemq-core (a POM snippet is shown after the web.xml config below). Then we just need to declare this servlet in our web.xml as follows:

    <servlet>
        <servlet-name>amq</servlet-name>
        <servlet-class>org.apache.activemq.web.AjaxServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>amq</servlet-name>
        <url-pattern>/amq/*</url-pattern>
    </servlet-mapping>
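
For completeness, the activemq-web dependency mentioned above sits alongside activemq-core in the POM; something like this (the version is only an example from around that time, so use whatever matches your broker):

    <dependency>
        <groupId>org.apache.activemq</groupId>
        <artifactId>activemq-web</artifactId>
        <version>5.5.0</version>
    </dependency>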

And, lo and behold! Our dashboard page receives event notifications in the page without a reload. Isn’t JavaScript magic?!

Now, of course, there are all kinds of subtleties and sophistications that one can introduce, including message filtering, browsing, conversion and so on. However, first you need to consider carefully what type of messages you are sending to the browser. Ideally, in my view, these should normally only constitute events that lead a callback handler to update the data displayed about some underlying persistent entity (because you don’t get these messages again when the page is reloaded, obviously). Also, there is much to consider about the format of such messages: using JSON as a JMS text message format, rather than map or object messages, makes a lot of sense when you look at things from this angle because it is then very easy to serialize/deserialize messages both in the browser and in a MessageConverter in your Java code. Finally, as with all event-driven systems, pay very close attention to the point in time at which messages are sent. For example, if, as I suggest, you are using these events to trigger an update to displayed persistent state, you need to be damned sure that this state has actually been updated before you publish the “update your display” event.

You can download the Maven project with the complete code for this demo. You can, of course, distribute and change it in any way you like … but do share the results if you do. Enjoy!

On September 4, 2011 christopher wrote: Customising Data Access Exception Handling In Spring

Out of the box, Spring provides a well-conceived and fine-grained runtime data access exception hierarchy. As I am sure most of you are already aware, for most databases this is achieved by mapping SQL error codes to exceptions via a file called sql-error-codes.xml (this lives in the org.springframework.jdbc.support package inside the org.springframework.jdbc jar). However, what is less commonly known — because it is less commonly required — is how easy it is to customise this exception handling process. I came across a case where I needed to do this recently, so I thought I would share the process with you here.

I was working on an application that integrates with an Oracle database, and I was operating on a table that had triggers associated with it. These triggers were performing complex data integrity checks. Now, the problem here of course comes when the trigger raises an error. In Oracle PL/SQL, this is done through the raise_application_error function. This takes 2 required arguments: a number in the range -20000 to -20999 (which will surface as a positive SQL exception code) and a varchar (which will be included in the exception message). So, for example, raise_application_error(-20999, 'something went wrong!') will result in a SQL exception with code 20999 that includes the message “something went wrong!”. So far, so obvious.

Now, because Oracle will not let you raise an application error using this technique that results in a standard, predefined SQL error code, Spring clearly will have no idea what the real nature of the problem is, and your application will throw an UncategorizedSQLException, making it difficult to handle the error with any intelligence. So, for the sake of argument, let’s say we have a trigger that looks like this:

CREATE OR REPLACE TRIGGER check_data_integrity_trigger
BEFORE INSERT OR UPDATE ON some_table
    REFERENCING OLD AS pre NEW AS post
    FOR EACH ROW
DECLARE
    lv_table_col some_table.table_col%type;
BEGIN
    SELECT DISTINCT table_col INTO lv_table_col FROM some_table WHERE another_col = :post.another_col;
    IF lv_table_col != :post.table_col THEN
        raise_application_error(-20999, 'constraint violation: table_col must be unique for each another_col'); -- hey, Bob, why don't you just normalise your schema, rather than using nasty triggers? ;-)
    END IF;
EXCEPTION
    WHEN no_data_found THEN
        NULL;
END check_data_integrity_trigger;
/

ALTER TRIGGER check_data_integrity_trigger ENABLE;

And we have an integration test that looks like this:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:com/foo/bar/applicationContext.xml")
@Transactional
@TransactionConfiguration(defaultRollback = true)
public class SomeDAOIntegrationTest {

    @Autowired private SomeDAO dao;

    @Test(expected = DataIntegrityViolationException.class) public void shouldNotBeAbleToInsertTwoDifferentValuesForTableColWhenAnotherColHasTheSameValue() throws Exception {
        // given
        SomeTableBean b1 = instanceWhereTableColValueIs("foo").andAnotherColValueIs("baz");
        dao.insert(b1);
        SomeTableBean b2 = instanceWhereTableColValueIs("bar").andAnotherColValueIs("baz");
        // when
        dao.insert(b2);
    }

}

Just so we get the full picture, let’s say the DAO implementation looks like this:

@Repository @SuppressWarnings("synthetic-access") public class SomeJdbcDAO implements SomeDAO {

    private final JdbcTemplate db;

    @Autowired public SomeJdbcDAO(DataSource dataSource) {
        db = new JdbcTemplate(dataSource);
    }

    @Override @Transactional public int insert(SomeTableBean bean) {
        Long id = db.queryForLong("select some_table_seq.nextval from dual");
        bean.setId(id);
        return db.update("insert into some_table(table_id, table_col, another_col) values(?, ?, ?)", id, bean.getTableCol(), bean.getAnotherCol());
    }

}

Now, when we run our test, it fails because, rather than getting the expected DataIntegrityViolationException, we get an UncategorizedSQLException. To make the test pass, all we need to do is add a simple Spring context file called sql-error-codes.xml to the root of the classpath, like so:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:util="http://www.springframework.org/schema/util"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

    <bean id="dataIntegrityViolationTriggerCodes" class="org.springframework.jdbc.support.CustomSQLErrorCodesTranslation">
        <property name="errorCodes" value="20999" /><!-- comma-separated list of error codes to translate into the given exception class -->
        <property name="exceptionClass" value="org.springframework.dao.DataIntegrityViolationException" /><!-- we could also make this a custom exception if desired but why create our own when a perfectly suitable class already exists? -->
    </bean>

    <util:list id="customTranslations">
        <ref bean="dataIntegrityViolationTriggerCodes" />
    </util:list>

    <bean id="Oracle" class="org.springframework.jdbc.support.SQLErrorCodes">
        <property name="customTranslations" ref="customTranslations" />
    </bean>

</beans>

Now, whenever the trigger raises its custom error, the expected DataIntegrityViolationException will be thrown, and our application will be able to interpret the meaning of the error and act accordingly if necessary.
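
As a final illustration, calling code can now branch on the translated exception rather than on an opaque UncategorizedSQLException. A hypothetical caller (the errors object here is just a stand-in for whatever validation or messaging mechanism you use) might look like this:

try {
    dao.insert(bean);
} catch (DataIntegrityViolationException e) {
    // the trigger vetoed the insert, so report it as a data problem rather than a system failure
    errors.reject("some_table.integrity", e.getMostSpecificCause().getMessage());
}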

On July 25, 2011 christopher wrote: Delineating Architectures By Extending Spring Stereotypes

In version 2.5, released in November 2007, Spring introduced annotation-based application context configuration, oriented primarily around the org.springframework.stereotype package. Clearly, annotations have been around in the Java and Spring world for quite some time now. Leaving aside the question of the pros and cons of annotations themselves for another time, it occurs to me that most developers still tend to treat them as if they were a slightly mysterious form of code: they are reluctant to implement their own and tend to use those that exist in only the most normative and mundane ways. Therefore, I would like to go through a very simple example of how to take advantage of Spring’s stereotype annotations to support and enhance your own application architecture.

Out of the box, Spring provides 4 basic architectural stereotypes used commonly by n-tier applications:

@Component
Used to indicate that the annotated class encapsulates a generic application component of an essentially undefined type.
@Repository
Used to indicate that the annotated class encapsulates access to a system of record (e.g. a data accessor).
@Service
Used to indicate that the annotated class encapsulates a “service”; commonly defined in terms of an API, the underlying implementation of which organises sequences of lower-level technical logic into a call sequence within a transaction boundary, with the aim of delivering a functional requirement.
@Controller
Used to indicate that the annotated class is a controller within an MVC application.

With the exception of the @Component annotation (which is the base type used to determine that a class is a Spring bean during context initialisation) and the @Controller annotation (which is key when using Spring’s MVC support), these specialisations serve little functional use on the whole. In most applications, it would be absolutely equivalent to mark a class as @Component in preference to @Service, for example. However, it need not be that way: not only can this metadata be utilised for functional ends, but extending these stereotypes also provides an excellent way to clarify the architectural role of a given unit of code when none of the pre-existing specialisations is appropriate.

For example, in my little hobby-project “Apposite” (which I have mentioned in previous posts), I have a number of requirements around ensuring state on application startup (i.e. bootstrapping).

Now, bootstrapping is a classic “non-functional” problem (in so far as it usually does not provide functionality to an end user, merely ensuring that the application is in an adequate state to be capable of subsequently delivering it) … and, by no coincidence, it is also the kind of thing that therefore often gets left to the end and piled into some unreadable file that suffers from virulent bit-rot.

Alternatively (and a slight improvement), the more organised-of-mind will retain the basic scripting approach but structure it via the use of the Template Pattern:

// abstract class with public *final* method defining algorithm, the individual methods for which may or may not also be abstract
public abstract class AbstractTemplatePattern {

    public final void doSomething() {
        doStepOne();
        Foo resultOfStepTwo = doStepTwo();
        doStepThree(resultOfStepTwo);
    }

    protected void doStepOne() {
        // maybe a default implementation here ...
    }

    protected abstract Foo doStepTwo();

    protected abstract void doStepThree(Foo foo);

}

However, whilst this is slightly preferable, it is still not really ideal for the bootstrapping use case. This is because we expect the “algorithm” (i.e. “what needs to be done when bootstrapping”) to change with some frequency … this is usually the cause of the maintenance nightmares that can sometimes occur around such activities. Whilst scripts can become illegible under such circumstances, the template pattern becomes difficult to maintain because it is precisely intended to represent a static algorithm with changing implementations, whereas our bootstrap is the inverse: a changing algorithm with static implementations. In both cases, the code unit(s) involved become subject to change and, consequently, bugs are liable to be introduced in the process of change. Associated tests may start to fail, and collaborators or subclasses may be affected in all kinds of ways that are, to put it bluntly, a painful waste of time to deal with.

To talk in terms of patterns again, what we really want here are commands and a command executor. What we want to be able to do is simply “drop in” a new chunk of “stuff to do” and be comfortable in the knowledge that it will get done.

NB. It is not necessary to create your own interface for the command pattern: java.lang.Runnable should always suffice, in my view. Use the standard libraries whenever possible!

We could, of course, use the JSR-250 @PostConstruct annotation, supported by Spring, wherever we wanted to do some kind of “bootstrapping”. However, if we do so in an ad hoc fashion, (a) this logic becomes scattered throughout the codebase under generic metadata that makes it difficult to isolate, (b) the responsibility becomes fragmented and (c) re-use is hindered. Therefore, my approach was as follows …

First, create an org.apposite.command package as the namespace for all code units that constitute a command (i.e. they are an implementation of java.lang.Runnable and achieve some given technical requirement, guaranteeing state on completion or throwing an IllegalStateException on post-condition failure).

Second, extend the @Component stereotype to introduce a new specialisation, which I decided to call @StartupCommand:

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface StartupCommand {

    String value() default "";
}
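
Note that, because @Component is present as a meta-annotation, classes annotated with @StartupCommand are picked up by ordinary component scanning; nothing extra is needed beyond something like the usual (the base package here is just an assumption):

    <context:component-scan base-package="org.apposite" />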

Third, implement a java.lang.Runnable for each “thing” that needs to be done on startup. For example, in Apposite, for various reasons I will not bore you with here, I want to “register” the runtime instance that is in the process of starting up with a repository:

@StartupCommand public class RegisterRuntimeInstanceCommand implements Runnable {

    private final AppositeRuntimeInstanceDAO ridao;

    private final String riname;

    @Autowired public RegisterRuntimeInstanceCommand(@Value("${apposite.runtime.instance.name}") String instanceName, AppositeRuntimeInstanceDAO appositeRuntimeInstanceDAO) {
        riname = instanceName;
        ridao = appositeRuntimeInstanceDAO;
    }

    @Override public void run() {
        try {
            InetAddress localhost = InetAddress.getLocalHost();
            ridao.insertOrUpdate(riname, localhost.getHostName(), localhost.getHostAddress());
        } catch (Exception e) {
            throw new IllegalStateException("Runtime instance registration failure", e);
        }
    }

}

Now, for the command executor, we needn’t do anything nearly so fancy: we can encapsulate our bootstrap logic (@PostConstruct) into a normal Spring bean which, for the sake of argument, I am going to call a bootstrap “service” …

public interface BootstrapService {

    void startup();

}

My implementation of which is as follows:

@Service public class BootstrapSpringService implements BootstrapService {

    private static final Logger LOG = getLogger(BootstrapSpringService.class);

    private boolean bootstrap = true;

    private List<Runnable> startupCommands = new ArrayList<Runnable>();

    private final PlatformTransactionManager txmanager;

    @Autowired public BootstrapSpringService(PlatformTransactionManager transactionManager) {
        txmanager = transactionManager;
    }

    @Autowired public void setBootstrap(@Value("${apposite.bootstrap}") boolean bootstrap) {
        this.bootstrap = bootstrap;
    }

    @Autowired(required = false) public void setCommands(List<Runnable> commands) {
        for (Runnable command : commands) {
            if (AnnotationUtils.findAnnotation(command.getClass(), StartupCommand.class) != null) startupCommands.add(command);
        }
    }

    // NB. we cannot use @Transactional here because this is called *before* the creation of the transaction proxies
    @PostConstruct @Override public void startup() {
        if (bootstrap) {
            LOG.info("Bootstrapping Apposite ...");
            TransactionStatus tx = txmanager.getTransaction(new DefaultTransactionDefinition());
            for (Runnable command : startupCommands) {
                LOG.info("Running startup action: " + command.getClass().getName());
                long start = System.currentTimeMillis();
                command.run();
                LOG.info(command.getClass().getName() + " completed in " + (System.currentTimeMillis() - start) + "ms");
            }
            txmanager.commit(tx);
            LOG.info("Bootstrap complete.");
        }
    }

}

As a result, I can very easily determine what is supposed to happen at startup and where. Moreover, neither the bootstrap “service” nor any of the individual commands will be subject to change when, as I expect, additional bootstrap logic is added in future. As a fringe benefit, the intent of the code is described with greater semantic force through the utilisation of a specific specialisation annotation.

On July 3, 2011 christopher wrote: A Small Exercise in Refactoring Data Access Objects

Dealing with “legacy” code is one of the problems I face most often on a day-to-day basis. Whilst it would be lovely to always be in a position to write clean, fresh code, it is far more common that one will be in the position of having to adapt and modify pre-existing code that is far from ideal (which is often why it needs to be modified). Such code will commonly have no tests, or any tests which do exist fail to test the code in any meaningful sense. Modifying such code can be fraught with peril: in the absence of good tests, the changes one makes can have more or less subtle repercussions that can ripple across a system, frequently with adverse, unanticipated functional side-effects. Moreover, in order to implement new or modified functionality within such code utilising test-first methodologies, it is commonly first necessary to change the code in order to make it “testable”. Hence, at unit-test level, we have a kind of “Catch 22” situation: I want to test the code in order to change it but first need to change it in order to test it. How do I ensure that my changes for the purposes of testability do not have a functional impact when those changes cannot themselves be tested?

My answer is usually to “go up a level” and isolate the legacy code within an integration test harness. For the example of data access objects with which I am concerned here, this means actually going to the database. The rationale for this is simple: an integration test should prove the contracts of a class or method under runtime conditions. Integration tests may sometimes be slow, but the confidence they provide to change the underlying code is essential in this case. Moreover, they serve to expose and document contracts which heretofore had probably never even been given consideration, and this is a necessary step.

To begin with, one should be clear about the definitions of “legacy” and “refactoring” here:

Legacy
In line with many authors on TDD, such as Steve Freeman, I would initially define “legacy” as being simply “code without tests” (or code with meaningless tests, which is the same thing). Additionally, I would add the architectural definition that “legacy” code is that which utilises patterns/supporting technologies etc. that are explicitly “deprecated” within the overall architectural plan for a system (you do have such a plan, don’t you?) — it has been identified and there is a strategy in place for its removal.
Refactoring
Consequently, refactoring can be defined as changing the underlying implementation of code that already passes good tests such that it is brought more fully into line with your overall architectural plan. If you change code without a test (I’m not so dogmatic as to say you should never do this, it is just that you need a very good reason), or before it is passing the test (a mistake when dealing with pre-existing code / simply the process of implementation for new code), then it is not refactoring.

Refactoring can be a contentious topic from a business perspective because, by definition, it should have no functional impact. The economic justifications are:

  1. Performance: the code works better under given runtime conditions.
  2. Maintainability: the code can be more quickly and easily modified under conditions of change to existing functionality.
  3. Extensibility: the code can be more quickly and easily modified given requirements for additional functionality.

Note that “bringing into line with overall architectural vision” is not a justification in itself, as this should just be a catch-all way of expressing the above points. (If not, what is your architecture achieving exactly?)

Anyway, moving on, I thought I would share with you a fairly straightforward case I had to deal with recently (note: names have been changed to protect the not-so-innocent!). I was given a story card describing some desired functional change from a use case perspective. Superficially, the change was simple, requiring only a change to some data “lookups” which were used to generate HTML select boxes for a user interface. Therefore, beginning with the relevant controller classes for the user interfaces that required the change, I was eventually confronted with the following class which is, to all intents and purposes, a data access object. It was being instantiated directly within a “view helper” class. Natch.

public class LegacyDAO extends AbstractLegacyDAO {

    private static final String QUERY_TBL3 = "Select tbl3_id, tbl3_name from tbl3 where tbl3_boolean = 1 order by tbl3_name";

    private static final String QUERY_WITH_JOIN =
        "select distinct  tbl1_id, " +
        "   tbl1_name,tbl1_boolean,tbl2_string " +
        "from " + "   tbl1, " + "   tbl2 " +
        "where " +
        "   tbl1_join_key = tbl2_join_key and " +
        "   (tbl2_some_id = 9005 or " +
        "       (tbl2_some_id = ? and tbl2_string in ('FOO','BAR'))) and " +
        "   tbl2_Start_date < sysdate and " +
        "   tbl2_end_date > sysdate " +
        "order by" + "   tbl1_name";

    private List rowsFromTbl3;

    private List hardCodedData;

    private List rowsFromTbls1And2;

    private List subsetOfRowsFromTbls1And2;

    public List listSubsetOfRowsFromTbls1And2(int someId) {
        try {
            if (subsetOfRowsFromTbls1And2 == null) {
                List allRows = listRowsFromTbls1And2(someId);
                Iterator it = allRows.iterator();
                while (it.hasNext()) {
                    KeyNameObject row = (KeyNameObject) it.next();
                    if (!row.getId().toUpperCase().endsWith("_SOMETHING")) {
                        if (subsetOfRowsFromTbls1And2 == null) {
                            subsetOfRowsFromTbls1And2 = new ArrayList();
                        }
                        subsetOfRowsFromTbls1And2.add(row);
                    }
                }
            }

        } catch (Throwable e) {
            throw new FullTraceException(e);
        }
        return subsetOfRowsFromTbls1And2;
    }

    public List listRowsFromTbls1And2(int someId) {

        rowsFromTbls1And2 = new ArrayList();
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs = null;
        try {
            con = getDataSource().getConnection();
            ps = con.prepareStatement(QUERY_WITH_JOIN);
            ps.setInt(1,someId);
            rs = ps.executeQuery();
            while (rs.next()) {
                int theId = rs.getInt("tbl1_id");

                String name = rs.getString("tbl1_name");
                String string = rs.getString("tbl2_string");
                boolean bool = rs.getInt("tbl1_boolean") == 1;

                if (string.toUpperCase().startsWith("FOO1")) {
                    string = "FOO1 FULL NAME";
                } else if (string.equalsIgnoreCase("FOO2")) {
                    string = "FOO2 FULL NAME";
                } else if (string.equalsIgnoreCase("FOO3")) {
                    string = "FOO3 FULL NAME";
                } else if (string.equalsIgnoreCase("FOO4")) {
                    string = "FOO4 FULL NAME";
                } else if (string.equalsIgnoreCase("FOO5")) {
                    string = "FOO5 FULL NAME";
                } else {
                    string = null;
                }

                KeyNameObject row = new KeyNameObject();
                String id = (new Integer(theId)).toString();
                if (string != null) {
                    id = id + "_" + string;
                } else {
                    id = id + "_DEFAULT FOO";
                }
                row.setId(id);
                if (name.toLowerCase().startsWith("the")) {
                    String str = name.substring(0, "the".length());
                    name = name.substring("the".length() + 1, name.length()) + ", " + str;
                }

                if (string != null) {
                    name = name + " (" + string + ")";
                } else {
                    name = name + " (DEFAULT FOO)";
                }
                row.setName(name);
                boolean added = false;
                for (int i = 0; i < rowsFromTbls1And2.size(); i++) {
                    KeyNameObject obj = (KeyNameObject) rowsFromTbls1And2.get(i);
                    if (row.getName().toUpperCase().compareTo(obj.getName().toUpperCase()) < 0) {
                          rowsFromTbls1And2.add(i, row);
                          added = true;
                          break;
                    }
                }

                if (!added) {
                    rowsFromTbls1And2.add(row);
                }
            }
        } catch (Throwable e) {
            throw new FullTraceException(e);
        } finally {
            try {
                rs.close();
            } catch (Throwable ignore) {

            }
            rs = null;
            try {
                ps.close();
            } catch (Throwable ignore) {

            }
            ps = null;
            try {
                con.close();
            } catch (Throwable ignore) {

            }
            con = null;

        }
        return rowsFromTbls1And2;
    }

    public List listRowsFromTbl3() {
        if (rowsFromTbl3 != null) {
            return rowsFromTbl3;
        }
        rowsFromTbl3 = new ArrayList();
        Connection con = null;
        PreparedStatement ps = null;
        ResultSet rs =   null;
        try {
            con = getDataSource().getConnection();
            ps = con.prepareStatement(QUERY_TBL3);
            rs = ps.executeQuery();
            while (rs.next()) {
                int id = rs.getInt("tbl3_id");
                String name = rs.getString("tbl3_name");
                KeyNameObject obj = new KeyNameObject();
                obj.setId(String.valueOf(id));
                obj.setName(name);
                rowsFromTbl3.add(obj);
            }
        } catch (Throwable e) {
            throw new FullTraceException(e);
        } finally {
            try {
                rs.close();
            } catch (Throwable ignore) {

            }
            rs = null;
            try {
                ps.close();
            } catch (Throwable ignore) {

            }
            ps = null;
            try {
                con.close();
            } catch (Throwable ignore) {

            }
            con = null;

        }
        return rowsFromTbl3;
    }

    public List listHardCodedData() {
        if (hardCodedData != null) {
            return hardCodedData;
        }
        hardCodedData = new ArrayList();
        KeyNameObject role1 = new KeyNameObject();
        role1.setName("NAME1");
        hardCodedData.add(role1);

        KeyNameObject role2 = new KeyNameObject();
        role1.setName("NAME2");
        hardCodedData.add(role2);
        return hardCodedData;
    }

    @Override
    protected String getPrimaryKeyQuery() {
        return null;
    }

    @Override
    protected Class getObjectClass() {
        return LegacyDAO.class;
    }

}

On seeing this, naturally, I groaned: the class was lit up like a Christmas tree by all the compiler warnings. I looked for the tests: of course, there were none. So I knuckled down and decided what to do about it. In line with my usual approach to such things, the broad strategy went like this:

  1. Read the code closely! We know what it is doing, because we know the user interface output, but how is it doing it? It has been in production for a long time, so we can safely assume that it must basically work, even if there are some known or currently-unknown “glitches”.
  2. Find all the things that are wrong with this class, both in simple terms (i.e. outright bugs) and given the current architectural plan. Try to determine which parts of the code had simply become redundant since they were written (i.e. identify straightforward deletes).
  3. Determine a basic plan for correcting the deficiencies and come up with a back-of-the-envelope estimate for how long each would take to complete.
  4. Given the budget and time constraints of the business change request driving the work, determine which of the changes would be appropriate within the scope of the current story card. Those that are too large can be marked for later completion, either as independent technical-debt story cards or as subsequent functional change requests.

My notes ran as follows:

  • Gen­er­ally speak­ing, all each of the meth­ods does is return query results mapped to a sim­ple, generic KeyNameObject class. It also hard-codes some manip­u­la­tions of “name” val­ues in order to pro­duce the desired view out­put. The fact that it requires 200+ lines to achieve this is out­ra­geous enough. Ide­ally, the desired view out­put names would be per­sis­tent val­ues them­selves and not be gen­er­ated on read (this kind of “on read” manip­u­la­tion is a fail­ing I see very reg­u­larly: nor­malise your data on write, peo­ple!!)
  • It extends a dep­re­cated sup­port class (which I am here call­ing AbstractLegacyDAO); a large class of which this class only uses the getDataSource method. Con­se­quently, the class is com­pelled to “imple­ment” 2 abstract meth­ods from this class (getPrimaryKeyQuery and getObjectClass), which it does incor­rectly. We could, there­fore, fac­tor out this inher­i­tance with ease, erad­i­cat­ing these point­less pseudo-implementations in the process. (Note: when you end up with point­less imple­men­ta­tions like this, it is a clear sign that your inher­i­tance is wrong, wrong, wrong.)
  • The data acces­sor meth­ods lazily-initialise the results and “cache” them as instance mem­bers. The class is con­se­quently not thread-safe. I then checked the num­ber of times this class is used and these meth­ods are called. Unsur­pris­ingly, it turned out to be pre­cisely 1. Hence the lazy-initialisation is a totally unnec­es­sary “opti­mi­sa­tion” that only increases the mem­ory foot­print of the class and increases complexity.
  • No gener­ics are used, since author­ship of the class pre­dated adop­tion of Java 1.5. At least strength­en­ing of type safety in the API would be an easy win.
  • Most of the class con­sists of boil­er­plate JDBC. More­over, the data source is obtained (in the abstract super­class) via a sta­tic lookup. This could make test­ing dif­fi­cult, except that I knew that test­ing of these sta­tic lookups had already been addressed as part of a move to Spring done some time back … at least, there­fore, test har­nesses were in-place already to deal with this. The adop­tion of Spring since this class was writ­ten meant that we could now also eas­ily erad­i­cate the boil­er­plate JDBC. We could, poten­tially, also make this DAO a Spring bean but, since nei­ther the view helper (part of a Struts 1 app) nor the action had yet been Spring-ified, this would require some fairly exten­sive changes which would increase the time required for the work and its and change-impact.
  • The on-read “name manip­u­la­tions” were effec­tively describ­ing a small enu­mer­a­tion. Com­plex­ity of the code could be reduced and read­abil­ity improved by actu­ally using one (i.e. an enum)!
  • The “hard-coded data” method was patently absurd and did not even suc­ceed in its own lim­ited terms, con­tain­ing an obvi­ous copy & paste bug. For­tu­nately, on fur­ther inves­ti­ga­tion, it turned out to be com­pletely unused. I silently hoped that had always been the case.
  • The join query was sub-optimal and a quick explain plan showed that using a left join rather than a subselect would improve the performance of the query in this case.
  • Both the queries and the code are full of magic numbers and strings. There is a general plan in place to deal with this “facet” of the application; however, resolving these issues in this case would be beyond the scope of the work, as it would require changes in user interaction patterns amongst a number of other things, requiring the buy-in of the stakeholder.
  • Why do people insist on writing their code in such an illegible manner (e.g. those static strings containing the SQL queries, whose excessive concatenation makes the queries almost unreadable [as well as being pointless, as they are only referenced once]) when, I would argue, it is as easy, if not easier, to achieve a consistent style? Also, at what point do you think the original author of this class might have stopped and begun to think that maybe, just maybe, 90 lines was a tad excessive for a simple data accessor method?! Clearly, s/he had never heard of “refactor into method”. A regular pattern of short, simple methods not only reduces complexity but also improves readability by virtue of the fact that humans are extremely adept at pattern recognition. What is often referred to as “beauty” (and too often dismissed by technicians as “mere aesthetics”) can, in fact, be described in such terms. The aesthetics of your code is not merely aesthetic: it is functional!

So, armed with this knowl­edge of the code, it was time to set about writ­ing a good test that would ensure that, before I even con­tem­plated intro­duc­ing the requested func­tional changes (let alone start­ing refac­tor­ing), I could be sure that only (a) desired effects and not (b) unde­sired side-effects were the out­come of the work.

The process of writ­ing such tests for “legacy” code can seem a lit­tle back-to-front because, rather than the usual “write test and change code till it passes” we instead “write test and keep chang­ing it till it passes”. In other words: the code itself is the accep­tance cri­te­ria of the test. We can­not change the code until our test passes. This often involves a process of “copy­ing” or inter­pret­ing the exist­ing imple­men­ta­tion in test form. My test looked some­thing like this:

public class LegacyDAOTest {

    private LegacyDAO dao;

    private JdbcTemplate db;

    @Mock private BeanContainer container;

    @Before public void setup() throws Exception {
        initMocks(this);
        Properties p = new Properties();
        p.load(getClass().getResourceAsStream("/jdbc-test.properties"));
        SimpleDriverDataSource ds = new SimpleDriverDataSource((Driver) Class.forName(p.getProperty("jdbc.driver")).newInstance(), p.getProperty("jdbc.url"), p.getProperty("jdbc.username"), p.getProperty("jdbc.password"));
        when(container.get(DataSource.class)).thenReturn(ds);
        StaticBeanFactory.setContainer(container);
        db = new JdbcTemplate(ds);
        dao = new LegacyDAO();
    }

    @After public void teardown() throws Exception {
        StaticBeanFactory.setContainer(new DefaultContainer());
    }

    @Test public void shouldReturnSubsetOfRowsFromTbls1And2() throws Exception {
        List<Integer> journals = db.queryForList("select some_id from some_table", Integer.class);
        for (Integer id : journals) {
            List<KeyNameObject> expected = getExpectedSubsetOfRowsFromTbls1And2(id);
            List<KeyNameObject> actual = dao.listSubsetOfRowsFromTbls1And2(id);
            assertThat(actual, equalTo(expected));
        }
    }

    @Test public void shouldReturnRowsFromTbls1And2() throws Exception {
        List<Integer> journals = db.queryForList("select some_id from some_table", Integer.class);
        for (Integer id : journals) {
            List<KeyNameObject> expected = getExpectedRowsFromTbls1And2(id);
            List<KeyNameObject> actual = dao.listRowsFromTbls1And2(id);
            assertThat(actual, equalTo(expected));
        }
    }

    @Test public void shouldReturnRowsFromTbl3() throws Exception {
        List<KeyNameObject> expected = db.query("select tbl3_id, tbl3_name from tbl3 where tbl3_boolean = 1 order by tbl3_name", new RowMapper<KeyNameObject>() {

            @Override
            public KeyNameObject mapRow(ResultSet rs, int rowNum) throws SQLException {
                KeyNameObject kno = new KeyNameObject();
                kno.setId(rs.getString("tbl3_id"));
                kno.setName(rs.getString("tbl3_name"));
                return kno;
            }

        });
        List<KeyNameObject> actual = dao.listRowsFromTbl3();
        assertThat(actual, equalTo(expected));
    }

    private List<KeyNameObject> getExpectedRowsFromTbls1And2(Integer journalId) {
        List<KeyNameObject> expected = db
                .query("select distinct tbl1_id, tbl1_name, tbl2_string from tbl1 join tbl2 on tbl2_join_key = tbl1_join_key where (tbl2_some_id = 9005 or (tbl2_some_id = ? and tbl2_string in ('FOO','BAR'))) and tbl2_start_date < sysdate and tbl2_end_date > sysdate order by tbl1_name",
                        new RowMapper<KeyNameObject>() {

                            @Override
                            public KeyNameObject mapRow(ResultSet rs, int rowNum) throws SQLException {
                                KeyNameObject o = new KeyNameObject();
                                String string = getMembershipType(rs.getString("tbl2_string"));
                                String id = rs.getString("tbl1_id");
                                String name = rs.getString("tbl1_name");
                                if (name.toLowerCase().startsWith("the")) {
                                    String str = name.substring(0, "the".length());
                                    name = name.substring("the".length() + 1, name.length()) + ", " + str;
                                }
                                o.setId(id + "_" + string);
                                o.setName(name + " (" + string + ")");
                                return o;
                            }

                            private String getMembershipType(String membershiptype) {
                                if (membershiptype.toUpperCase().startsWith("FOO1")) {
                                    return "FOO1 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO2")) {
                                    return "FOO2 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO3")) {
                                    return "FOO3 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO4")) {
                                    return "FOO4 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO5")) {
                                    return "FOO5 FULL NAME";
                                } else {
                                    return "DEFAULT FOO";
                                }
                            }

                        }, journalId);
        Collections.sort(expected, new Comparator<KeyNameObject>() {

            @Override
            public int compare(KeyNameObject o1, KeyNameObject o2) {
                return o1.getName().toUpperCase().compareTo(o2.getName().toUpperCase());
            }

        });
        return expected;
    }

    private List<KeyNameObject> getExpectedSubsetOfRowsFromTbls1And2(Integer journalId) {
        List<KeyNameObject> expected = db
                .query("select distinct tbl1_id, tbl1_name, tbl2_string from tbl1 join tbl2 on tbl2_subscriber_id = tbl1_subscriber_id where (tbl2_some_id = 9005 or (tbl2_some_id = ? and tbl2_string in ('FOO','BAR'))) and tbl2_start_date < sysdate and tbl2_end_date > sysdate and not regexp_like(tbl2_string, '^.*_SOMETHING$') order by tbl1_name",
                        new RowMapper<KeyNameObject>() {

                            @Override
                            public KeyNameObject mapRow(ResultSet rs, int rowNum) throws SQLException {
                                KeyNameObject o = new KeyNameObject();
                                String string = getMembershipType(rs.getString("tbl2_string"));
                                String id = rs.getString("tbl1_id");
                                String name = rs.getString("tbl1_name");
                                if (name.toLowerCase().startsWith("the")) {
                                    String str = name.substring(0, "the".length());
                                    name = name.substring("the".length() + 1, name.length()) + ", " + str;
                                }
                                o.setId(id + "_" + string);
                                o.setName(name + " (" + string + ")");
                                return o;
                            }

                            private String getMembershipType(String membershiptype) {
                                if (membershiptype.equalsIgnoreCase("FOO2")) {
                                    return "FOO2 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO3")) {
                                    return "FOO3 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO4")) {
                                    return "FOO4 FULL NAME";
                                } else if (membershiptype.equalsIgnoreCase("FOO5")) {
                                    return "FOO5 FULL NAME";
                                } else {
                                    return "DEFAULT FOO";
                                }
                            }

                        }, journalId);
        Collections.sort(expected, new Comparator<KeyNameObject>() {

            @Override
            public int compare(KeyNameObject o1, KeyNameObject o2) {
                return o1.getName().toUpperCase().compareTo(o2.getName().toUpperCase());
            }

        });
        return expected;
    }

}

As you can see, my test here repli­cates the exist­ing imple­men­ta­tion in most respects. I don’t try to do any­thing espe­cially clever: the pri­or­ity is to get some test code that accu­rately repro­duces the out­put of the exist­ing class as quickly as pos­si­ble. The pri­mary dif­fer­ences are:

  • I use the improved query using a join in my test code and, for the method which obtains a sub­set of rows, I have a sep­a­rate query that includes the required fil­ter­ing clause in the query, so that I do not have to iter­ate over the result set twice in order to achieve this.
  • I use exist­ing test har­ness tech­niques (a mock Container, using the won­der­ful Mock­ito) for the StaticBeanFactory … because it is sta­tic, I need to restore the default con­tainer in a tear­down to ensure that I do not inter­fere with any sub­se­quent tests.
  • I use JdbcTemplate and extract the row map­ping logic into RowMappers (which were not avail­able when the code was writ­ten, prior to the adop­tion of Spring — although I’m sure it would have been pos­si­ble to con­ceive of one’s own alternative!)
  • To sort the results (which is done on the basis of the names as transformed in the code, so cannot be done in the query), I use a Comparator implementation which I pass to Collections.sort(List, Comparator). This option was available when the code was written and is infinitely preferable to the frankly painful sight of watching some poor fool iterating over the results again and again trying to do this manually!

It is worth noting also that, in this test, there are effectively multiple assertions per test because, for the sake of iron-clad confidence in the changes, I go through the entire table, testing against each available row identified by some_id (as I am calling it in my obfuscation). This, of course, makes the test take quite a long time to run. Before committing the test to the build, I spent a little time selecting a small number of critical cases and tested only those IDs because otherwise the build would have ground to a halt.

Given that this test passes (N.B. please excuse any errors I may have intro­duced whilst try­ing to obfus­cate to pro­tect the inno­cent — the real ver­sion works fine, I promise!), I then set about incre­men­tally chang­ing the imple­men­ta­tions of the meth­ods that I was going to retain through a series of IDE refac­tor­ings, sup­ported by the intro­duc­tion of a num­ber of embed­ded classes aimed at remov­ing dupli­ca­tion and reduc­ing complexity:

@SuppressWarnings("synthetic-access") public class LegacyDAORefactored {

    private static class CaseInsensitiveNameComparator implements Comparator<KeyNameObject> {

        @Override
        public int compare(KeyNameObject o1, KeyNameObject o2) {
            return o1.getName().toUpperCase().compareTo(o2.getName().toUpperCase());
        }
    }

    private static enum MatchType {
        TO_UPPER_CASE_STARTS_WITH {

            @Override public boolean matches(String a, String b) {
                return b != null && b.toUpperCase().startsWith(a);
            }

        },
        EQUALS_IGNORE_CASE {

            @Override public boolean matches(String a, String b) {
                return a.equalsIgnoreCase(b);
            }

        };

        public abstract boolean matches(String a, String b);
    }

    private static enum Foo {
        FOO1_FULL_NAME("FOO1_DB_VALUE", MatchType.TO_UPPER_CASE_STARTS_WITH),
        FOO2_FULL_NAME("FOO2", MatchType.EQUALS_IGNORE_CASE),
        FOO3_FULL_NAME("FOO3", MatchType.EQUALS_IGNORE_CASE),
        FOO4_FULL_NAME("FOO4", MatchType.EQUALS_IGNORE_CASE),
        FOO5_FULL_NAME("FOO5", MatchType.EQUALS_IGNORE_CASE),
        DEFAULT_FOO(null, null);

        static Foo forString(String str) {
            for (Foo foo : values()) {
                if (!DEFAULT_FOO.equals(foo) && foo.matcher.matches(foo.string, str)) return foo;
            }
            return DEFAULT_FOO;
        }

        private final String string;

        private final MatchType matcher;

        private final String fullName;

        private Foo(String string, MatchType matcher) {
            this.string = string;
            this.matcher = matcher;
            fullName = name().replaceAll("_", " ");
        }

        public String getIdSuffix() {
            return "_" + fullName;
        }

        public String getNameSuffix() {
            return " (" + fullName + ")";
        }
    }

    private static class KeyNameObjectRowMapper implements RowMapper<KeyNameObject> {

        private static String putTheAtTheEnd(String name) {
            if (name.toLowerCase().startsWith("the")) {
                String str = name.substring(0, "the".length());
                return name.substring("the".length() + 1, name.length()) + ", " + str;
            }
            return name;
        }

        @Override
        public KeyNameObject mapRow(ResultSet rs, int rowNum) throws SQLException {
            KeyNameObject kno = new KeyNameObject();
            String string = rs.getString("tbl2_string");
            Foo mt = Foo.forString(string);
            kno.setId(rs.getString("tbl1_id") + mt.getIdSuffix());
            kno.setName(putTheAtTheEnd(rs.getString("tbl1_name")) + mt.getNameSuffix());
            return kno;
        }

    }

    private static final RowMapper<KeyNameObject> RM = new KeyNameObjectRowMapper();

    private static final Comparator<KeyNameObject> CMP = new CaseInsensitiveNameComparator();

    private final JdbcTemplate db;

    public LegacyDAORefactored() {
        db = new JdbcTemplate((DataSource) StaticBeanFactory.getBean(DataSource.class));
    }

    public List<KeyNameObject> listSubsetOfRowsFromTbls1And2(int someId) {
        List<KeyNameObject> res = db.query("select distinct tbl1_id, tbl1_name, tbl2_string from tbl1 join tbl2 on tbl2_subscriber_id = tbl1_subscriber_id where (tbl2_some_id = 9005 or (tbl2_some_id = ? and tbl2_string in ('FOO','BAR'))) and tbl2_start_date < sysdate and tbl2_end_date > sysdate and not regexp_like(tbl2_string, '^.*_SOMETHING$') order by tbl1_name", RM, someId);
        Collections.sort(res, CMP);
        return res;
    }

    public List<KeyNameObject> listRowsFromTbls1And2(int someId) {
        List<KeyNameObject> res = db.query("select distinct tbl1_id, tbl1_name, tbl2_string from tbl1 join tbl2 on tbl2_join_key = tbl1_join_key where (tbl2_some_id = 9005 or (tbl2_some_id = ? and tbl2_string in ('FOO','BAR'))) and tbl2_start_date < sysdate and tbl2_end_date > sysdate order by tbl1_name", RM, someId);
        Collections.sort(res, CMP);
        return res;
    }

    public List<KeyNameObject> listRowsFromTbl3() {
        return db.query("select tbl3_id, tbl3_name from tbl3 where tbl3_boolean = 1 order by tbl3_name", new RowMapper<KeyNameObject>() {

            @Override
            public KeyNameObject mapRow(ResultSet rs, int rowNum) throws SQLException {
                KeyNameObject kno = new KeyNameObject();
                kno.setId(rs.getString("tbl3_id"));
                kno.setName(rs.getString("tbl3_name"));
                return kno;
            }

        });
    }

}

Let’s go through some of the changes that were sup­ported by the inte­gra­tion test, then …

  1. The Comparator imple­men­ta­tion required to do the final sort­ing of the result on the basis of mutated name val­ues obtained from the data­base is used by more than one method. I there­fore cre­ate a nested class for this and, because it is thread­safe, ref­er­ence it via a sta­tic, final mem­ber on the DAO class.
  2. The many if ... else if ... else statements used to evaluate and transform the name values obtained from the database are encapsulated into a couple of enums using a technique I am very fond of, whereby we have an abstract method in the enum which allows each value to encapsulate small differences in logic. Here, for example, I have a MatchType enum which expresses the 2 different methods used to compare strings. I subsequently create a Foo enum that contains the various name values I expect from the database. Each Foo is instantiated with the required MatchType. This, in turn, means that we can now …
  3. Intro­duce a sin­gle RowMapper imple­men­ta­tion shared by the 2 main data access meth­ods (again, ref­er­enced via a sta­tic, final mem­ber on the DAO class) that can cal­cu­late the desired muta­tions of the name and ID val­ues obtained from the data­base in a sin­gle state­ment, sim­ply by call­ing Foo.forString(string_from_database). Nice. All the details of the enu­mer­ated val­ues and their respec­tive strate­gies for match­ing are encap­su­lated within the enums. I strongly rec­om­mend this way of using Java enums as a form of imple­men­ta­tion of the strat­egy pat­tern wher­ever pos­si­ble. It really is very neat.
  4. We strongly-type the API by intro­duc­ing gener­ics into the col­lec­tion return types. Quick and easy. (It also then becomes pos­si­ble to sub­se­quently remove some casts in call­ing code, which gives you the real gain here.)
  5. The very simple “table 3” method is changed into a very standard JdbcTemplate method with an inline row mapper. Again, a very easy change but one that really must be supported by the kind of test shown above to ensure that you don’t subtly alter the output in some unanticipated way.
  6. The absurd and unused “hard coded data” method is simply removed with an IDE refactoring: don’t just delete it!
  7. The DAO no longer extends AbstractLegacyDAO, getting rid of one more dependency on a heavily deprecated class. The just-plain-wrong implementations of the inherited abstract methods from that class could consequently be removed, too.
  8. The class no longer “caches” the results of the query as instance members. Because we know that the class is only called once (and under no circumstances should we be adding new usages of this class), it is unnecessary to do this and better that it is simply removed. So I removed it. Besides which, “embedding” the concern of “caching” directly in the DAO in this manner, aside from being done in an extremely poor way, is wrong: if it turns out later that we need this, it can be introduced as a cross-cutting concern using reliable and robust techniques.

Clearly, the changes that were made did not address many of the bigger architectural issues, such as the magic numbers, the data structure issues that require manipulation of persistent values on-read, the fact that this DAO is instantiated and called only within a “view helper”, nor its obtaining a DataSource via a deprecated static lookup rather than being dependency-injected. Nonetheless, I hope you agree that the refactored version of the code is, given the speed and ease of the changes (once the test was in place), considerably improved. Consequently, dealing with the bigger issues at a later date, under the auspices of work which would grant that scope, becomes feasible. Certainly, I found that changing the test in a simple way and subsequently introducing the requested functional changes was far easier and more reliable as a result. This is the everyday bread & butter of refactoring … improve, improve, improve, each time a class is modified, so that you get your codebase into a position where it becomes possible to safely handle bigger and bigger issues, architecturally speaking. Sometimes it can feel like you are getting nowhere and, if you have people in your team writing new crap faster than you can clean it up, then you can encounter problems. However, even a small team of good developers, united and acting consistently in this way, can effect remarkable improvements over even large codebases in a surprisingly short amount of time.

On June 7, 2011 christopher wrote: How To Use GMail Via Secure SMTP & JavaMailSender

This post is really just a “code snip­pet” note to myself so I have it for ref­er­ence, but it may be use­ful for others …

Very occasionally, usually when sanity checking myself, I want to send an email from an app using my GMail account. GMail uses SMTPS, so you just need to set up your JavaMailSender as follows:

Properties p = new Properties();
p.put("mail.smtps.auth", "true");
p.put("mail.smtp.starttls.enable", "true");
JavaMailSenderImpl mail = new JavaMailSenderImpl();
mail.setProtocol("smtps");
mail.setHost("smtp.gmail.com");
mail.setPort(465);
mail.setUsername(gmailUsername);
mail.setPassword(gmailPassword);
mail.setJavaMailProperties(p);
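
Since JavaMailSenderImpl is also a plain MailSender, sending the actual sanity-check message is then just standard Spring usage (the recipient and content below are placeholders; note also that, as far as I can tell, the starttls property above is superfluous for the smtps protocol on port 465, although it does no harm):

SimpleMailMessage msg = new SimpleMailMessage();
msg.setFrom(gmailUsername);
msg.setTo("someone@example.com"); // placeholder recipient
msg.setSubject("Sanity check");
msg.setText("Hello from JavaMailSenderImpl over SMTPS");
mail.send(msg);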

On June 6, 2011 christopher wrote: On Test Harnesses & Testing Batch Emailers With Dumbster

One of my many rather sad obses­sions is the sub­ject of test har­nesses: I am a strong advo­cate of the argu­ment that, if you are unable to cleanly artic­u­late a har­ness that iso­lates the run­time inte­gra­tion points of a set of code units such that one can write a good end-to-end test, then your under­stand­ing of your appli­ca­tion archi­tec­ture is defi­cient in a crit­i­cal way.

Batch email­ing pro­vides a good exam­ple of how use­ful this under­stand­ing can be. Most com­pa­nies have some kind of “newslet­ter” that they send out to who­ever signs up for it (and they would usu­ally like as many peo­ple as pos­si­ble to sign-up, for obvi­ous com­mer­cial rea­sons). I have encoun­tered real-world sit­u­a­tions where the imple­men­ta­tion of such sys­tems starts out work­ing fine for a com­par­a­tively small num­ber of ini­tial sub­scribers but, as the “newslet­ter” becomes more and more heav­ily subscribed-to, things start to go hor­ri­bly wrong: it con­sumes more and more sys­tem resources; poor data access grid­locks the data­base; the task slows to a halt and can­not even com­plete the batch before it is due to run again (if it even gets that far). In other words, you can encounter some clas­sic scal­a­bil­ity issues. There­fore, whilst the tech­ni­cal prob­lem itself is fairly triv­ial, we can get some good test­ing sce­nar­ios out of such an app:

  1. The set of poten­tial sub­scribers is unbounded. There­fore, scal­a­bil­ity con­cerns should indi­cate the neces­sity to prove the appli­ca­tion via “soak test­ing” which can assert that a thresh­old num­ber of emails n can be deliv­ered within a given time period t. This can have the addi­tional ben­e­fit of pro­vid­ing scal­a­bil­ity met­rics that can be used to project when addi­tional mod­i­fi­ca­tions, such as clus­ter­ing and/or strip­ing might become necessary.
  2. You often need to be able to gen­er­ate a real­is­tic, but ran­dom, infi­nite set of data to accu­rately repli­cate the dynamic nature of the sys­tem at run­time (e.g. do all emails have the same con­tent? If not, you are prob­a­bly hav­ing to go to the data­base every so often, if not for each email, to get the required data — this needs to be repli­cated to prop­erly soak test the system).
  3. You prob­a­bly also want to be able to make a rea­son­able stab at repli­cat­ing the mail spool which, given 100,000s of mes­sages may well itself become a point of con­tention within the sys­tem (i.e. more than just mock­ing a JavaMailSender). How­ever, you need to do so with­out any risk of send­ing a real email to a real recip­i­ent and with the facil­ity to make asser­tions about the email that is enqueued.

In terms of gen­eral abstrac­tions (ignor­ing con­tex­tual opti­mi­sa­tions, such as caches), the logic of such sys­tems is usu­ally pretty standard:

  1. A batch task execu­tor (e.g. cron or a Java sub­sti­tute, such as Quartz).
  2. A batch task to execute.
  3. A sys­tem of record for subscribers.
  4. A sys­tem of record for email data.
  5. An email ren­derer (i.e. a tem­plate engine such as Veloc­ity or Freemarker).
  6. A mail spool (i.e. an SMTP server).

[Figure: Batch E-mailer Architecture. Architectural description of the generic batch e-mailer application.]

The integration points for such a simple architecture are straightforward: they are represented by the point at which the lines from grey components (extrinsic to the system itself) cross the system threshold, represented by the dotted-line box.

Now, for the sake of argu­ment (and to make the prob­lem a lit­tle more inter­est­ing), let’s say that the con­tent of the emails per-subscriber is not uni­form (ignor­ing obvi­ous essen­tial dif­fer­ences, such as salu­ta­tions etc). What this tends to mean in prac­tice is that one or more parts of the sub­scriber descrip­tion data is a para­me­ter to the query used to obtain e-mail data. As men­tioned above, what this means from a test­ing per­spec­tive is that we will need to ensure that this aspect of the appli­ca­tion is accounted for within our test har­ness: a test which sim­ply returns uni­form e-mail con­tent will prob­a­bly not be suf­fi­ciently accu­rate. This might give us a test har­ness some­thing like the following:

[Figure: Batch E-mailer architecture test harness. Integration points and testing approaches.]

  1. Our test cases them­selves will become the task executor.
  2. With all transactions set to rollback, we can use a real data source (although obviously not your real live application data unless you are very brave [for “brave” there, read “stupid”]). Using the real data source will help us to get properly representative data for the test cases, whilst using rollback will ensure that the data remains the same for the next time the tests are run. Moreover, having a full integration test using a real database will replicate that load — you may, for instance, want to enable some form of database activity logging so that assertions can be made about such things, too. However, we will almost certainly need to proxy the data sources so that, where insufficient “real data” is available for a particular test case (e.g. a test case which says “keep sending emails for a given period of time” … which means the number is unknown), we can generate random, but nonetheless representative, data on demand.
  3. Finally, we can use Dumb­ster as a fake mail server. How­ever, whilst Dumb­ster is a very use­ful piece of kit, it has some lim­i­ta­tions for use in this kind of sce­nario: the sent emails will accu­mu­late in mem­ory until “received” by the test case. Con­se­quently, for large batch “soak” tests such as we are dis­cussing here, it is nec­es­sary to flush the server occa­sion­ally dur­ing the test exe­cu­tion in order to pre­vent out-of-memory excep­tions. Addi­tion­ally, there­fore, we also need to be able to “inject” asser­tions into our mail server because there will be no way we can aggre­gate all mail after the test has com­pleted and make asser­tions about it then with­out run­ning into the same mem­ory issues.

Obvi­ously, you should be writ­ing your tests first but let’s begin by look­ing at a sketch of my sug­gested imple­men­ta­tion for the batch emailer itself, so that there is some con­text for the test har­ness exam­ples that fol­low. We begin with a basic java.lang.Runnable for the batch task:

@Component public class SpringBatchMailerTask implements BatchMailerTask {

    private final EmailContentDAO emailContentDAO;

    private final EmailRenderer emailRenderer;

    private final JavaMailSender emailSender;

    private final SubscriberDAO subscriberDAO;

    @Autowired public SpringBatchMailerTask(EmailContentDAO emailContentDAO, EmailRenderer emailRenderer, JavaMailSender emailSender, SubscriberDAO subscriberDAO) {
        this.emailContentDAO = emailContentDAO;
        this.emailRenderer = emailRenderer;
        this.emailSender = emailSender;
        this.subscriberDAO = subscriberDAO;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW) public void processSubscriber(Subscriber subscriber) {
        EmailContent content = emailContentDAO.getEmailContent(subscriber);
        String body = emailRenderer.render(subscriber, content);
        MimeMessagePreparator mail = new BatchMailerMimeMessagePreparator(subscriber.getEmail(), content.getFromEmail(), content.getSubject(), body, subscriber.isHtmlEmailSubscription());
        emailSender.send(mail);
    }

    public void run() {
        subscriberDAO.executeForSubscribers(this);
    }

}

Whilst I usu­ally like to extract my inter­faces through refac­tor­ing, I have pre-emptively added in a BatchMailerTask inter­face in the code here to avoid rep­e­ti­tion. The inter­face looks like this:

public interface BatchMailerTask extends Runnable, SubscriberCallbackHandler {}

As you can see, the inter­face merely aggre­gates java.lang.Runnable and another sep­a­rate cus­tom SubscriberCallbackHandler row call­back inter­face that looks like this:

public interface SubscriberCallbackHandler {

    void processSubscriber(Subscriber subscriber);

}

The ratio­nale for this is sim­ple: because we are deal­ing with a poten­tially large, unbounded data set, there is no ques­tion of load­ing all the data into mem­ory as a col­lec­tion. We will need to iter­ate over any result set, deal­ing with each row singly to avoid poten­tial exces­sive mem­ory usage. The cus­tom call­back inter­face there­fore serves as a strongly-typed facade to (in this case) Spring’s RowCallbackHandler. This allows us to have a clean SubscriberDAO imple­men­ta­tion that, if we were using some sim­ple JDBC, might look some­thing like the following:

@Repository public class JdbcSubscriberDAO implements SubscriberDAO {

    private static final class SubscriberRowCallbackHandler implements RowCallbackHandler {

        private static final RowMapper<Subscriber> RM = new SubscriberRowMapper();

        private final SubscriberCallbackHandler callback;

        SubscriberRowCallbackHandler(SubscriberCallbackHandler callback) {
            this.callback = callback;
        }

        public void processRow(ResultSet rs) throws SQLException {
            callback.processSubscriber(RM.mapRow(rs, rs.getRow()));
        }

    }

    private JdbcTemplate t;

    private final String sql;

    @Autowired public JdbcSubscriberDAO(DataSource dataSource) throws IOException {
        t = new JdbcTemplate(dataSource);
        sql = IOUtils.toString(getClass().getResourceAsStream("/com/christophertownson/mail/dao/find-subscribers.sql"), "UTF-8");
    }

    public void executeForSubscribers(SubscriberCallbackHandler callback) {
        t.query(sql, new SubscriberRowCallbackHandler(callback));
    }

}

And there you have it. Whilst we’re on the sub­ject of DAOs, let’s move on to look at the prox­y­ing and ran­dom gen­er­a­tion of data. Prox­y­ing is a clas­sic AOP use case, so I am going to use an aspect for this:

@Component @Aspect public class SubscriberDAOAdvisor {

    private JdbcTemplate db;

    private long numberOfEmailsToSend = 500000;

    private long realSubscriberCount;

    private boolean useRealSubscribersFirst = false;

    @Autowired public SubscriberDAOAdvisor(DataSource dataSource) {
        db = new JdbcTemplate(dataSource);
    }

    @Around("execution(* com.christophertownson.dao.SubscriberDAO.executeForSubscribers(..))") public Object feedDataToCallback(ProceedingJoinPoint pjp) throws Throwable {
        SubscriberCallbackHandler callback = (SubscriberCallbackHandler) pjp.getArgs()[0];
        long numberOfRandomSubscribersToGenerate = getNumberOfRandomSubscribersToGenerate();
        if (useRealSubscribersFirst) pjp.proceed();
        long sent = 0;
        while (sent < numberOfRandomSubscribersToGenerate) {
            callback.processSubscriber(Fixtures.randomSubscriber());
            sent++;
        }
        return null;
    }

    public void setNumberOfEmailsToSend(long numberOfEmailsToSend) {
        this.numberOfEmailsToSend = numberOfEmailsToSend;
    }

    public void setUseRealSubscribersFirst(boolean useRealSubscribersFirst) {
        this.useRealSubscribersFirst = useRealSubscribersFirst;
    }

    private long getNumberOfRandomSubscribersToGenerate() throws Exception {
        if (!useRealSubscribersFirst) return numberOfEmailsToSend;
        realSubscriberCount = db.queryForLong(IOUtils.toString(getClass().getResourceAsStream("/count-subscribers.sql")));
        // only top up with random subscribers when the real ones alone are not enough to hit the target
        return numberOfEmailsToSend > realSubscriberCount ? numberOfEmailsToSend - realSubscriberCount : 0;
    }

}

This class is state­ful but that is fine here because we get a new instance for each test case / task exe­cu­tion. Because we are using a call­back style, we inter­cept the first DAO call (to which the row call­back is passed): we can then feed the call­back either real data, fake ran­dom data, or a mix­ture of both (real data sup­ple­mented by fake data).

I shouldn’t really need to tell you what the Fixtures.randomSubscriber() does but, for your amuse­ment, I’ll show you what my basic, generic, ran­dom data gen­er­a­tor looks like so you get the gen­eral idea:

public final class Fixtures {

    private static final String[] TLD = { "biz", "com", "coop", "edu", "gov", "info", "net", "org", "pro", "co.uk", "gov.uk", "ac.uk" };

    public static boolean randomBoolean() {
        return randomLong() % 2 == 0;
    }

    public static Date randomDate() {
        return new Date(randomLong());
    }
    
    public static Date randomDate(Date start, Date end) {
        return new Date(randomLong(start.getTime(), end.getTime()));
    }

    public static String randomEmail() {
        return randomString(1) + "." + randomString(8) + "@" + randomString(8) + "." + randomTopLevelDomain();
    }

    public static Integer randomInt() {
        return randomInt(Integer.MIN_VALUE, Integer.MAX_VALUE);
    }

    public static Integer randomInt(int min, int max) {
        return randomLong(min, max).intValue();
    }

    private static final Random RND = new Random();

    public static Long randomLong() {
        // (max - min) overflows for the full long range, so use java.util.Random directly here
        return RND.nextLong();
    }

    public static Long randomLong(long min, long max) {
        return min + (long) (Math.random() * ((max - min) + 1));
    }

    public static String randomString(int length) {
        byte[] bytes = new byte[length];
        for (int i = 0; i < length; i++) {
            bytes[i] = randomLong(97, 122).byteValue(); // let's just stick to lower-alpha, shall we?
        }
        return new String(bytes);
    }

    public static String randomTopLevelDomain() {
        return TLD[randomInt(0, TLD.length - 1)];
    }

    public static String randomHttpUrl() {
        return "http://" + randomString(8) + "." + randomTopLevelDomain();
    }

    private Fixtures() {}

}

Need­less to say, the com­plex type returned by Fixtures.randomSubscriber() would just need to be com­posed from amongst the rel­e­vant sim­ple types above.
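
For illustration, composed in that way (and written as another static method on the Fixtures class), it might look something like the following; the Subscriber properties shown here are assumptions based on how the type is used elsewhere in this post, not the real domain class:

    public static Subscriber randomSubscriber() {
        // illustrative setters only: email and HTML-subscription flag are used by the task above,
        // the rest are assumed for the sake of the example
        Subscriber subscriber = new Subscriber();
        subscriber.setName(randomString(randomInt(4, 12)));
        subscriber.setEmail(randomEmail());
        subscriber.setHtmlEmailSubscription(randomBoolean());
        subscriber.setSubscribedSince(randomDate(new Date(0), new Date()));
        return subscriber;
    }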

Prox­y­ing and gen­er­a­tion of ran­dom data for the EmailContentDAO.getEmailContent(Subscriber) is even more straightforward:

@Component @Aspect public class EmailContentDAOAdvisor {

    private boolean useRealContent = false;

    @Around("execution(* com.christophertownson.dao.EmailContentDAO.getEmailContent(..))") public Object returnRandomEmailContent(ProceedingJoinPoint pjp) throws Throwable {
        Subscriber subscriber = (Subscriber) pjp.getArgs()[0];
        EmailContent content = null;
        if (useRealContent) content = (EmailContent) pjp.proceed();
        if (content == null) content = Fixtures.randomEmailContent(subscriber); // we can use subscriber values to "seed" random content, if necessary
        return content;
    }

    public void setUseRealContent(boolean useRealContent) {
        this.useRealContent = useRealContent;
    }
}

The final com­po­nent in our test har­ness is the SMTP server itself. As I said, for this I will be using a wrapped Dumb­ster server. My first cut looked a lit­tle like this:

@Component("smtpProxy") public class SmtpProxy {

    private MailAssertionListener[] assertionListeners = {};

    private final int port;

    private SimpleSmtpServer smtp;

    private int totalNumberOfEmailsSent = 0;

    @Autowired public SmtpProxy(@Value("${smtp.port}") int port) {
        this.port = port;
    }

    public void flush() {
        stop();
        @SuppressWarnings("unchecked") Iterator<SmtpMessage> messages = smtp.getReceivedEmail();
        while (messages.hasNext()) {
            SmtpMessage msg = messages.next();
            Email email = new Email(msg); // Email class here is a simple adapter for SmtpMessage
            totalNumberOfEmailsSent++;
            if (assertionListeners != null && assertionListeners.length > 0) {
                for (MailAssertionListener assertion : assertionListeners) {
                    assertion.doAssertion(email);
                }
            }
        }
        start();
    }

    public int getTotalNumberOfEmailsSent() {
        return totalNumberOfEmailsSent;
    }

    @Autowired(required = false) public void setAssertionListeners(MailAssertionListener[] assertionListeners) {
        this.assertionListeners = assertionListeners;
    }

    @PostConstruct public void start() {
        smtp = SimpleSmtpServer.start(port);
    }

    @PreDestroy public void stop() {
        smtp.stop();
    }

}

Even better than using Dumbster here would be to implement this as an extension of a real, embeddable SMTP server (if someone would like to volunteer an implementation based on something like James, please do!) because then it would not be necessary to flush like this at intervals. Nevertheless, this basic implementation is “good enough” for the time being. If we need to inject assertions, we can do so by implementing the MailAssertionListener interface and injecting the implementations either manually or via Spring (note: the assertions here must be true of all emails sent for a given test case, so divide up your test cases and injected assertions accordingly):

public interface MailAssertionListener {

    void doAssertion(Email sent);

}

@Component public class ValidEmailAddressAssertionListener implements MailAssertionListener {

    public void doAssertion(Email sent) {
        assertThat(EmailValidator.getInstance().isValid(sent.getRecipient()), is(true));
    }

}

Last, but by no means least, we just need to tie our little test harness together with some of those obligatory pointy brackets:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:task="http://www.springframework.org/schema/task"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd">

    <context:property-placeholder location="classpath:test.properties" system-properties-mode="OVERRIDE" />

    <context:component-scan base-package="com.christophertownson" />

    <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
        <property name="host" value="localhost" />
        <property name="port" value="${smtp.port}" />
    </bean>

    <task:scheduler id="taskScheduler" />

    <task:scheduled-tasks scheduler="taskScheduler">
        <task:scheduled ref="smtpProxy" method="flush" fixed-delay="${smtp.proxy.flushInterval}" />
    </task:scheduled-tasks>

</beans>

All of the above puts us in a posi­tion to write a “soak” test case such as …

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/applicationContext-test.xml"})
@Transactional
@TransactionConfiguration(defaultRollback = true)
public class BatchMailerTaskSoakTest {

    @Autowired private BatchMailerTask task;

    @Autowired private SmtpProxy smtp;

    @Autowired private SubscriberDAOAdvisor subscriberAdvisor;

    @Autowired private EmailContentDAOAdvisor emailAdvisor;

    @Test(timeout = 86400000) public void shouldBeAbleToSpamOneMillionPeoplePerDay() {
        subscriberAdvisor.setNumberOfEmailsToSend(1000000);
        subscriberAdvisor.setUseRealSubscribersFirst(true);
        emailAdvisor.setUseRealContent(true);
        task.run(); // the test case is the batch task executor
        smtp.flush(); // one final flush before assertions are made
        assertThat(smtp.getTotalNumberOfEmailsSent(), is(1000000));
    }

}

Given that the test passes, we now know that our imple­men­ta­tion would be capa­ble of spam­ming at least 1,000,000 peo­ple per day with­out break­ing a sweat, so long as our real mail server were also up to the task … All we need to do now is check that our “unsub­scribe” func­tion­al­ity also scales accordingly!

Naturally, a test such as this is not the kind of thing you run as part of your regular build, and running it potentially for a whole day is also somewhat extreme. Nonetheless, this kind of test can be extremely valuable for testing long-running batch tasks. Moreover, the kind of test harness you can get out of this can have more general applicability. For example, with some functional additions, I currently use a version of the SmtpProxy in development environments to ensure that mail can never get out to real users: everything is either dumped as a file to an inbox folder or forwarded to a pre-configured email address (if the recipient is not contained within a recipient whitelist). This puts an end to such foolishness as code explicitly checking whether it is in a “dev” environment and branching accordingly, because the environment-specific behaviour that is desired in such circumstances is obtained in a manner that is completely external to the application itself, which need know nothing about it (not even the configuration need change). Similar approaches can be adopted for pretty much any protocol (FTP, HTTP, etc.).
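
As a rough illustration of that whitelist idea (the class and field names here are hypothetical, not the code I actually use), the recipient-resolution logic can be as small as this:

public class RecipientWhitelist {

    private final Set<String> allowedRecipients; // e.g. loaded from a dev-only properties file

    private final String catchAllAddress; // the pre-configured developer address

    public RecipientWhitelist(Set<String> allowedRecipients, String catchAllAddress) {
        this.allowedRecipients = allowedRecipients;
        this.catchAllAddress = catchAllAddress;
    }

    // deliver to the real recipient only if it is whitelisted; otherwise redirect to the catch-all
    public String resolve(String recipient) {
        return allowedRecipients.contains(recipient.toLowerCase()) ? recipient : catchAllAddress;
    }

}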

On May 29, 2011 christopher wrote: Content Negotiation with Spring 3 MVC & Velocity

One of my pet sub­jects is con­tent nego­ti­a­tion. There are many ben­e­fits to a REST­ful approach to web appli­ca­tion archi­tec­ture but few which fill me with such a curi­ous delight as the spec­ta­cle of see­ing loads of dif­fer­ent rep­re­sen­ta­tions of the same under­ly­ing resource being returned at the flick of an Accept header. That prob­a­bly makes me a very weird and geeky per­son but what the hell: con­tent nego­ti­a­tion is fun! … espe­cially when it is achieved with prac­ti­cally zero effort.
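
To make that concrete, here is a minimal client-side sketch (the URL is purely illustrative) of what “flicking the Accept header” means: the same resource requested twice, once as JSON and once as XML, using Spring’s RestTemplate:

// illustrative only: the resource URL is an assumption for the sake of the example
RestTemplate rest = new RestTemplate();
HttpHeaders headers = new HttpHeaders();
headers.setAccept(Collections.singletonList(MediaType.APPLICATION_JSON));
ResponseEntity<String> json = rest.exchange("http://localhost:8080/apposite/admin", HttpMethod.GET, new HttpEntity<Void>(headers), String.class);
headers.setAccept(Collections.singletonList(MediaType.APPLICATION_XML));
ResponseEntity<String> xml = rest.exchange("http://localhost:8080/apposite/admin", HttpMethod.GET, new HttpEntity<Void>(headers), String.class);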

Any­how, some of the approaches I have seen to con­tent nego­ti­a­tion seem a lit­tle “over­weight”, fre­quently involv­ing sig­nif­i­cant amounts of cod­ing in order to add sup­port for new rep­re­sen­ta­tions. To be fair, they are intended to pro­vide über solu­tions to every con­ceiv­able sce­nario — which is great when you encounter one of those 1%-of-the-time sit­u­a­tions. Nor­mally, how­ever, you will sim­ply be return­ing a new text-based for­mat. Surely that should be a breeze, right? Well, yes and no.

One of the great improvements to Spring in version 3 is its support for REST and content negotiation in particular. It also provides almost-out-of-the-box solutions for producing JSON and XML representations of a view model that might otherwise be rendered as HTML (i.e. when we don’t ask for either JSON or XML specifically). But sometimes they don’t quite give you the results you want and customisation of the output is nigh-on impossible, so you have to go back to writing some code. Which is a pain. But hold on a second … am I not already using a mechanism for serializing a Java object graph as text in order to produce HTML pages? Indeed I am. In my case, in my hobby project which I call “Apposite” (see my post on end-to-end BDD testing), I am using Apache Velocity.

In the post on BDD testing, I set up a single test case which asserted that an “administrator” could log in to an “administration dashboard”. Let’s take up that case and try not just to make it pass but also to make the resulting page viewable in a number of different formats (e.g. Atom, RSS, XML, JSON, plain text, etc). Really, I should be writing BDD tests specifically to cover each of these “view as format” scenarios but, as they are not really part of the application requirements and I am only doing it for demonstration purposes, I am going to skip that for now.

Once again, let’s take this step-by-step and begin by set­ting up our basic Spring web appli­ca­tion. First off, we need to add the Spring depen­den­cies to the pom.xml, if not already present:

    <properties>
        <spring.version>3.0.5.RELEASE</spring.version>
        <spring.security.version>${spring.version}</spring.security.version>
    </properties>

First up, under the DRY prin­ci­ple, I declare the Spring ver­sion num­ber as a property.

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-web</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.security</groupId>
            <artifactId>spring-security-core</artifactId>
            <version>${spring.security.version}</version>
        </dependency>

The Spring frame­work is well mod­u­larised, so there are a num­ber of dif­fer­ent depen­den­cies required. It can become a lit­tle con­fus­ing when try­ing to deter­mine which Spring depen­den­cies one actu­ally requires but this is prefer­able, in my view (and, obvi­ously, the view of the peo­ple at Spring), to includ­ing a sin­gle mas­sive jar con­tain­ing tonnes of stuff you don’t need. The key mod­ules in the list above are:

spring-context
Contains the Spring application context classes and annotations that enable you to actually configure and create a Spring context.
spring-core
For the purposes of our web application, the most important aspect of the spring-core.jar is that it is the home for the new, unified conversion API introduced with Spring 3 (see the small sketch after this list). This represents a significant improvement over the previous PropertyEditor-based support for data binding.
spring-web
This is obvi­ously nec­es­sary for a web appli­ca­tion, con­tain­ing, as it does, most of the basic Servlet API inte­gra­tion code includ­ing the ContextLoaderListener and request map­ping annotations.
spring-webmvc
Contains a number of important support classes and servlet API integrations for creating MVC web apps; notably the DispatcherServlet and the MvcNamespaceHandler (more on which below). Conventionally, this is also used to provide view classes, such as Spring’s Velocity and Freemarker implementations. However, I will instead be utilising my own Velocity integration library because it is, of course, much better! (It is my spring-velocity library that makes the content negotiation we are going to be using possible so, whilst I won’t go into the implementation in detail, I will note some key differences from the usual integration with Velocity provided by Spring out-of-the-box.)
spring-security-core
Spring Secu­rity will be used to pro­vide user authen­ti­ca­tion ser­vices since the test case I want to pass requires this.
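
As a quick aside on that unified conversion API mentioned above (this is just a generic sketch and is not needed for the example that follows), a converter under the new API is simply a small, strongly-typed class implementing org.springframework.core.convert.converter.Converter:

// illustrative only: converts an ISO 4217 code such as "GBP" into a java.util.Currency
public class StringToCurrencyConverter implements Converter<String, Currency> {

    @Override public Currency convert(String source) {
        return Currency.getInstance(source.trim().toUpperCase());
    }

}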

Because I am using my own Spring-Velocity inte­gra­tion library, I also need to declare my maven repos­i­tory in the POM:

    <repositories>
        <repository>
            <id>christophertownson-com-public</id>
            <url>http://christophertownson.com/mvn/</url>
        </repository>
    </repositories>

And add the library as a dependency …

        <dependency>
            <groupId>com.christophertownson</groupId>
            <artifactId>spring-velocity</artifactId>
            <version>0.0.2</version>
            <exclusions>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

I am exclud­ing commons-logging (which is picked up as a tran­si­tive depen­dency via Veloc­ity and Spring) here because I am using slf4j so nei­ther need nor want it.

Finally, I also want to use the XStreamMarshaller and MappingJacksonJsonView to pro­duce default XML and JSON rep­re­sen­ta­tions, respec­tively, so I need to declare run­time depen­den­cies on XStream and Jackson-Mapper:

        <dependency>
            <groupId>com.thoughtworks.xstream</groupId>
            <artifactId>xstream</artifactId>
            <version>1.3.1</version>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.codehaus.jackson</groupId>
            <artifactId>jackson-mapper-asl</artifactId>
            <version>1.6.2</version>
            <scope>runtime</scope>
        </dependency>

Before we start, let’s remind our­selves of the test scenario:

Scenario: I cannot access the administration dashboard unless I am logged in
 
Given I am not logged in
When I go to the administration dashboard
Then I am asked to login
Then I enter the administrator credentials
Then I am redirected to the administration dashboard

Clearly, we are going to need some kind of “admin dash­board con­troller” to sat­isfy this test, so let’s begin with that:

public class DashboardAdminControllerTest {

    private DashboardAdminController controller;

    @Before public void setup() {
        controller = new DashboardAdminController();
    }

    @Test public void shouldReturnEmptyModelWhenIGetDashboard() {
        assertThat(controller.getDashboard(), equalTo((Map<String, Object>) new HashMap<String, Object>()));
    }

}

This sim­ple test should be suf­fi­cient for now: all it asserts is that there is a “get dash­board” method which, for now, just returns an empty model. That should be easy enough to pass …

@Controller public class DashboardAdminController {

    @RequestMapping(value = "/admin", method = RequestMethod.GET) public Map<String, Object> getDashboard() {
        return new HashMap<String, Object>();
    }

}

Some things to notice:

  • In the cre­ation of the first con­troller here, we are set­ting up a name­space and nam­ing con­ven­tion for “admin­is­tra­tion con­trollers” whereby they will live in the org.apposite.controller.admin pack­age and be named *AdminController.
  • The con­troller knows only about the model and noth­ing about view names (there are some unsat­is­fac­tory points to come where it will need to know about the lat­ter and I will have to com­pro­mise with the frame­work, but that is another story).
  • The unit test cov­ers only what the con­troller, as a class, knows about (i.e. the model). The con­fig­u­ra­tion (request map­ping, in this case) is pro­vided as meta­data on the class and implic­itly cov­ered by the end-to-end func­tional test.

We’re also going to need some kind of “login con­troller”. This is cross-cutting func­tion­al­ity but is per­formed by a “User”, so I’m going to cre­ate a UserController for this purpose:

public class UserControllerTest {

    private UserController controller;

    @Before public void setup() {
        controller = new UserController();
    }

    @Test public void shouldReturnEmptyModelWhenIGetLogin() {
        assertThat(controller.login(), equalTo((Map<String, Object>) new HashMap<String, Object>()));
    }

}
@Controller public class UserController extends AbstractController {

    @RequestMapping(value = "/users/login", method = RequestMethod.GET) public Map<String, Object> login() {
        return new HashMap<String, Object>();
    }

}

Because nei­ther con­troller is yet required to gen­er­ate any model data, they are both extremely simple.

Next, we need to pro­vide some way to map the request to a view tem­plate. For this pur­pose, Spring pro­vides the RequestToViewNameTranslator inter­face. I’m obvi­ously using TDD here (if you aren’t these days, what are you doing?!), so I begin with a test for my implementation:

public class ViewNameTranslatorTest {

    private ViewNameTranslator vnt;

    private MockHttpServletRequest req;

    @Before public void setup() {
        vnt = new ViewNameTranslator();
        vnt.setUriPattern("^/((?:admin/)?[a-zA-Z0-9-_]+)(?:\\.[a-zA-Z0-9]+)?(/|/([a-zA-Z0-9-_]+)(?:\\.[a-zA-Z0-9]+)?(/|/([a-zA-Z0-9-_]+)/?)?)?(?:\\.[a-zA-Z0-9]+)?");
        vnt.setEntityIdentifierPattern("^[0-9]+$");
        vnt.setEntityIndex(1);
        vnt.setActionIndex(3);
        vnt.setEntityActionIndex(5);
        vnt.setDefaultAction("list");
        vnt.setReadAction("read");
    }

    @Test public void shouldReturnAdministratorsDashboardView() throws Exception {
        givenRequest("GET", "/admin");
        thenViewNameIs("/admin/list");
    }

    @Test public void shouldReturnLoginFormView() throws Exception {
        givenRequest("GET", "/users/login");
        thenViewNameIs("/users/login");
    }

    private void givenRequest(String method, String uri) {
        givenRequest(method, "/apposite", uri);
    }

    private void givenRequest(String method, String contextPath, String uri) {
        req = new MockHttpServletRequest(method, contextPath + uri);
        req.setContextPath(contextPath);
    }

    private void thenViewNameIs(String name) throws Exception {
        assertThat(vnt.getViewName(req), is(name));
    }

}

Now, you are probably (and should be) thinking “What the hell is going on in that setup method?” I will admit that I am jumping the gun a little here, but it is worth doing, I think: one of my non-functional requirements for this application is that all URIs are predictable on the basis of a pattern description (i.e. a regular expression). In other words, the URI scheme is conventional. This can become restrictive but, on the whole, I feel that the consistency it lends to the application is desirable from both a developer’s and a user’s perspective: “usability” is a phenomenon that can be described in terms of “intuitiveness” which, in turn, can be described as a form of pre-reflective pattern recognition. Usability is a vital consideration because, teleologically, it describes a tendency to facilitate, rather than hinder, intent and action (whether that is of a developer extending a code base or a user attempting to complete some scenario). Therefore, all my URIs will take the following form, where each element is optional:

  1. A name­space (e.g. “admin”)
  2. An entity name
  3. An entity identifier
  4. An action name (ide­ally, we would be able to encap­su­late the con­cept of an “action” entirely within the use of HTTP verbs but there are occa­sions where it is nec­es­sary to pro­vide URIs that include an action in order to eas­ily sup­port con­ven­tional human inter­ac­tion via a web browser).
  5. A file extension (which can be used to override the content-type specified by the Accept header where necessary).

My ViewNameTranslator imple­men­ta­tion will, con­se­quently, have some URI pat­terns defined with cap­ture group indexes set cor­re­spond­ingly so that it can parse out the rel­e­vant con­stituent parts of my URIs. Nonethe­less, the main thing to note is that the test asserts that the view name for the URI GET /admin will be /admin/list and that, for GET /users/login, the view name will be /users/login.

@Component public class ViewNameTranslator implements RequestToViewNameTranslator {

    private Integer actionIndex;

    private String defaultAction;

    private Integer entityActionIndex;

    private RegularExpression entityIdentifierPattern;

    private Integer entityIndex;

    private String readAction;

    private RegularExpression uriPattern;

    @Override public String getViewName(HttpServletRequest request) throws Exception {
        String uri = request.getRequestURI().replaceFirst(request.getContextPath(), "");
        if (!uriPattern.matches(uri)) return null;
        List<String> groups = uriPattern.groups(uri);
        String entity = groups.get(entityIndex);
        String action = groups.size() > actionIndex ? groups.get(actionIndex) : defaultAction;
        String entityAction = groups.size() > entityActionIndex ? groups.get(entityActionIndex) : null;
        String ext = FilenameUtils.getExtension(uri);
        // When the "action" segment is really an entity identifier (e.g. /users/42),
        // use the entity action if one was captured, otherwise fall back to the read action.
        String view;
        if (entityIdentifierPattern.matches(action)) {
            view = entityAction != null ? entityAction : readAction;
        } else {
            view = action;
        }
        return "/" + entity + "/" + view + (isNotBlank(ext) ? "." + ext : "");
    }

    @Autowired public void setActionIndex(@Value("${org.apposite.view.ViewNameTranslator.actionIndex}") Integer actionIndex) {
        this.actionIndex = actionIndex;
    }

    @Autowired public void setDefaultAction(@Value("${org.apposite.view.ViewNameTranslator.defaultAction}") String defaultAction) {
        this.defaultAction = defaultAction;
    }

    @Autowired public void setEntityActionIndex(@Value("${org.apposite.view.ViewNameTranslator.entityActionIndex}") Integer entityActionIndex) {
        this.entityActionIndex = entityActionIndex;
    }

    @Autowired public void setEntityIdentifierPattern(@Value("${org.apposite.view.ViewNameTranslator.entityIdentifierPattern}") String entityIdentifierPattern) {
        this.entityIdentifierPattern = new RegularExpression(entityIdentifierPattern, Flag.CASE_INSENSITIVE);
    }

    @Autowired public void setEntityIndex(@Value("${org.apposite.view.ViewNameTranslator.entityIndex}") Integer entityIndex) {
        this.entityIndex = entityIndex;
    }

    @Autowired public void setReadAction(@Value("${org.apposite.view.ViewNameTranslator.readAction}") String readAction) {
        this.readAction = readAction;
    }

    @Autowired public void setUriPattern(@Value("${org.apposite.view.ViewNameTranslator.uriPattern}") String uriPattern) {
        this.uriPattern = new RegularExpression(uriPattern, Flag.CASE_INSENSITIVE);
    }

}

The things to note about this class are:

  1. It is anno­tated with @Component: it is a Spring bean. No bean ID is nec­es­sary. Sim­ply by hav­ing a Spring bean that imple­ments RequestToViewNameTranslator in your appli­ca­tion con­text, Spring will detect and use it as appropriate.
  2. The RegularExpression class I am utilising is a helper class provided by my little commons library which simply wraps java.util.regex.Pattern to make it easier to use (a rough sketch of what such a wrapper might look like follows the properties below).
  3. The various configurable property values are obtained from an application.properties file (this contains properties that are internal to the application itself and are required, but which might feasibly be overridden using another properties file taken from the target runtime environment). In accordance with the unit test, I therefore have the following application.properties defined:
org.apposite.view.ViewNameTranslator.uriPattern = ^/((?:admin/)?[a-zA-Z0-9-_]+)(?:\\.[a-zA-Z0-9]+)?(/|/([a-zA-Z0-9-_]+)(?:\\.[a-zA-Z0-9]+)?(/|/([a-zA-Z0-9-_]+)/?)?)?(?:\\.[a-zA-Z0-9]+)?
org.apposite.view.ViewNameTranslator.entityIdentifierPattern = ^[0-9]+$
org.apposite.view.ViewNameTranslator.entityIndex = 1
org.apposite.view.ViewNameTranslator.actionIndex = 3
org.apposite.view.ViewNameTranslator.entityActionIndex = 5
org.apposite.view.ViewNameTranslator.defaultAction = list
org.apposite.view.ViewNameTranslator.readAction = read
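
For anyone without that little commons library to hand, here is a minimal sketch of what such a RegularExpression wrapper might look like. It is an approximation inferred from the usage in ViewNameTranslator rather than the actual library code; in particular, the Flag enum and the trimming of trailing null groups are assumptions made so that the groups.size() checks above behave the way the unit test expects.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch only: an approximation of the commons RegularExpression helper,
// inferred from its usage in ViewNameTranslator rather than the real implementation.
public class RegularExpression {

    public enum Flag { CASE_INSENSITIVE }

    private final Pattern pattern;

    public RegularExpression(String regex, Flag... flags) {
        int f = 0;
        for (Flag flag : flags) {
            if (flag == Flag.CASE_INSENSITIVE) f |= Pattern.CASE_INSENSITIVE;
        }
        this.pattern = Pattern.compile(regex, f);
    }

    public boolean matches(String input) {
        return pattern.matcher(input).matches();
    }

    // Returns group 0 (the whole match) followed by each capture group, with trailing
    // null (non-matching) groups dropped -- an assumption that keeps the groups.size()
    // checks in ViewNameTranslator consistent with the expectations of the unit test.
    public List<String> groups(String input) {
        List<String> groups = new ArrayList<String>();
        Matcher m = pattern.matcher(input);
        if (m.matches()) {
            for (int i = 0; i <= m.groupCount(); i++) {
                groups.add(m.group(i));
            }
            while (!groups.isEmpty() && groups.get(groups.size() - 1) == null) {
                groups.remove(groups.size() - 1);
            }
        }
        return groups;
    }

}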

Dear Spring peo­ple: can you please add an optional=true attribute to the @Value anno­ta­tion so that value injec­tion from a prop­er­ties file does not throw if left uncon­fig­ured? Then I could have defaults in the class and then only have to con­fig­ure when I need to over­ride. Thanks.

Next, let’s con­fig­ure some Veloc­ity basics and make some tem­plates for the admin dashboard:

com.christophertownson.spring.velocity.RedirectViewResolver.order = 0
com.christophertownson.spring.velocity.VelocityViewResolver.order = 2
org.springframework.web.servlet.view.ContentNegotiatingViewResolver.order = 1

First, I set the chain­ing order of the var­i­ous “view resolvers” that I will be using. The order is:

  1. Redirect view resolver comes first. We can always detect easily if it is a redirect view (because the view name starts with redirect:). Moreover, these need to be handled differently. Therefore, we get them out of the way at the start of the chain so there is no need for further processing. (A minimal sketch of such a resolver follows this list.)
  2. Next we go to the ContentNegotiatingViewResolver so that it can marshal the request and construct a sorted set of candidate view names (in order of file extension override or Accept header preference) on the basis of a media type mapping (e.g. given an appropriate media types mapping, a request resulting in the view name /home with an Accept: application/json header might result in the view name set /home.json, /home, giving subsequent view resolvers or configured default views the opportunity to satisfy the request using the client’s preferred representation).
  3. Finally, we come to my cus­tom VelocityViewResolver. This will look for an exist­ing Veloc­ity tem­plate cor­re­spond­ing to the view name. This means, for exam­ple, that, when used in con­junc­tion with the ContentNegotiatingViewResolver, we could con­fig­ure a MappingJacksonJsonView on the lat­ter to serve a default JSON rep­re­sen­ta­tion (seri­al­ized object graph) but, should we wish to cus­tomise that rep­re­sen­ta­tion, we could sim­ply drop in a /home.json.vm tem­plate: the VelocityViewResolver would con­se­quently indi­cate to the ContentNegotiatingViewResolver that it could sat­isfy a request to rep­re­sent the resource “/home” as JSON and would be del­e­gated to in order to sat­isfy it accord­ingly in pref­er­ence to the MappingJacksonJsonView. Because it works off of the same media types map­ping as the ContentNegotiatingViewResolver, the result­ing VelocityView pro­duced by the VelocityViewResolver will dynam­i­cally be able to deter­mine the cor­rect content-type to use when stream­ing the con­tent back to the client.
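
To make the shape of that chain a little more concrete, here is a minimal sketch of what a redirect-handling resolver of this kind might look like. It is illustrative only and not the actual com.christophertownson.spring.velocity.RedirectViewResolver: it claims redirect: view names for itself and returns null for everything else, so that the rest of the chain gets its chance.

import java.util.Locale;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.Ordered;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.View;
import org.springframework.web.servlet.ViewResolver;
import org.springframework.web.servlet.view.RedirectView;

// Illustrative sketch only, not the real library class.
@Component public class RedirectViewResolver implements ViewResolver, Ordered {

    private static final String REDIRECT_PREFIX = "redirect:";

    private int order;

    // Returning null tells Spring to carry on down the view resolver chain.
    @Override public View resolveViewName(String viewName, Locale locale) throws Exception {
        if (viewName == null || !viewName.startsWith(REDIRECT_PREFIX)) return null;
        return new RedirectView(viewName.substring(REDIRECT_PREFIX.length()), true);
    }

    @Override public int getOrder() {
        return order;
    }

    @Autowired public void setOrder(@Value("${com.christophertownson.spring.velocity.RedirectViewResolver.order}") int order) {
        this.order = order;
    }

}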

Next a lit­tle con­fig­u­ra­tion that is unfor­tu­nately nec­es­sary only because Spring’s otherwise-extremely-handy @Value anno­ta­tion has no “optional” or “default” attributes:

com.christophertownson.spring.velocity.DefaultVelocityInitialisationStrategy.order = 0
com.christophertownson.spring.velocity.VelocityToolsInitialisationStrategy.order = 1
com.christophertownson.spring.velocity.VelocityToolsWebInitialisationStrategy.order = 2

com.christophertownson.spring.velocity.VelocityConfiguration.defaultMediaType = application/xhtml+xml
com.christophertownson.spring.velocity.VelocityConfiguration.defaultLayoutTemplate = /common/layouts/layout.vm
com.christophertownson.spring.velocity.VelocityConfiguration.layoutContextKey = layout
com.christophertownson.spring.velocity.VelocityConfiguration.prefix = /templates
com.christophertownson.spring.velocity.VelocityConfiguration.screenContentKey = screen_content
com.christophertownson.spring.velocity.VelocityConfiguration.suffix = .vm
com.christophertownson.spring.velocity.VelocityConfiguration.toolboxUrl = /WEB-INF/toolbox.xml

com.christophertownson.spring.velocity.SpringViewHelperRenderInterceptor.xhtml = true

This just sets the order of a bunch of “initialisation strategies” and properties for Velocity. As I say, whilst they need to be configurable, there should be little need to ever change many of the above values because most are merely sensible defaults. If someone out there knows of a way to optionally inject values from properties files using Spring annotations, I would dearly love to hear from you!

And then a little more Velocity configuration so that it is really working the way we want it to …

input.encoding = UTF-8
output.encoding = UTF-8

directive.foreach.maxloops = 1000
directive.set.null.allowed = true

resource.loader = webapp, classpath

classpath.resource.loader.description = Velocity Classpath Resource Loader
classpath.resource.loader.class = org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader

webapp.resource.loader.description = Velocity Web Application Resource Loader
webapp.resource.loader.class = org.apache.velocity.tools.view.WebappResourceLoader
webapp.resource.loader.path = /WEB-INF
webapp.resource.loader.cache = false
webapp.resource.loader.modificationCheckInterval = 10

runtime.log.logsystem.class = org.apache.velocity.runtime.log.Log4JLogChute
runtime.log.logsystem.log4j.logger = org.apache.velocity
runtime.log.invalid.references = false

velocimacro.library = /templates/common/macros/macros.vm,/org/springframework/web/servlet/view/velocity/spring.vm
velocimacro.library.autoreload = true

I won’t detail what each of these options achieves except to point out that, on line 22, I spec­ify the use of 2 macro libraries: one con­tain­ing my own macros and the one pro­vided by Spring (which is very use­ful for form binding).

Any­way, now that we’re mostly con­fig­ured, let’s cre­ate an /admin/list.vm tem­plate to serve the default rep­re­sen­ta­tion of the admin dash­board page:

<h2>Administration Dashboard</h2>

There we go. Nice and basic, I think you’ll agree (it just matches the sim­ple asser­tion from the func­tional test that the head­ing will be “Admin­is­tra­tion Dashboard”).

I’m going to create my login form as a macro, because I think I can safely assume that, at some point, I will want to be able to put the form in multiple pages:

#macro(loginForm)
<form class="login" method="post" action="$linkTool.relative('/j_spring_security_check')">
    <fieldset>
        #label('j_username' 'Username') <input type="text" id="j_username" name="j_username" maxlength="255" />
        #label('j_password' 'Password') <input type="password" id="j_password" name="j_password" maxlength="40" />
        #submitInput('Login')
    </fieldset>
</form>
#end

#macro(label $for $label)<label for="$for">$label</label>#end

#macro(submitInput $value)<input type="submit" value="$value" />#end

Con­se­quently, all I need to do to cre­ate my login page is to cre­ate the fol­low­ing template:

#loginForm()

Let’s quickly cre­ate a default lay­out template …

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" version="-//W3C//DTD XHTML 1.1//EN">
    <head>
        <meta http-equiv="content-type" content="application/xhtml+xml;charset=utf-8" />
        <title>Apposite</title>
    </head>
    <body>
        <div id="header"><h1>Apposite</h1></div>
        <div id="content">${screen_content}</div>
        <div id="footer"><p>&copy; Apposite 2011</p><!-- obligatory paranoid copyright notice --></div>
    </body>
</html>

… and we’re almost there. We just need a lit­tle applicationContext.xml and web.xml jiggery-pokery and we’re pretty much done:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc"
    xmlns:security="http://www.springframework.org/schema/security"
    xmlns:util="http://www.springframework.org/schema/util"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.3.xsd
        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd">

    <context:property-placeholder location="classpath:application.properties,classpath:environment.properties" system-properties-mode="OVERRIDE" />

    <context:component-scan base-package="org.apposite,com.christophertownson.spring.velocity" />
    
    <security:http auto-config="true" disable-url-rewriting="true" path-type="regex">
        <security:intercept-url pattern="/admin/?.*" access="${apposite.security.admin.role.name}" />
        <security:form-login login-page="/users/login" default-target-url="/" authentication-failure-url="/users/login"/>
        <security:logout logout-url="/users/logout"/>
    </security:http>
    
    <security:authentication-manager>
        <security:authentication-provider>
            <security:user-service>
                <security:user name="${apposite.security.root.user.name}" password="${apposite.security.root.user.password}" authorities="${apposite.security.admin.role.name}"/>
            </security:user-service>
        </security:authentication-provider>
    </security:authentication-manager>

    <mvc:annotation-driven />

    <mvc:default-servlet-handler />

    <util:properties id="velocityProperties" location="classpath:velocity.properties" />

    <bean id="velocityToolboxFactory" class="org.apache.velocity.tools.config.XmlFactoryConfiguration">
        <constructor-arg type="boolean" value="false" />
    </bean>

    <util:map id="mediaTypes">
        <entry key="json" value="application/json"/>
        <entry key="xml" value="application/xml"/>
        <entry key="rss" value="application/rss+xml" />
    </util:map>

    <bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
        <property name="order" value="${org.springframework.web.servlet.view.ContentNegotiatingViewResolver.order}"/>
        <property name="mediaTypes" ref="mediaTypes" />
        <property name="defaultViews">
            <list>
                <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView"/>
                <bean class="org.springframework.web.servlet.view.xml.MarshallingView">
                    <constructor-arg>
                        <bean class="org.springframework.oxm.xstream.XStreamMarshaller"/>
                    </constructor-arg>
                    <property name="contentType" value="application/xml;charset=UTF-8"/>
                </bean>
            </list>
        </property>
    </bean>

</beans>

It’s worth run­ning through this file step-by-step …

  1. On line 14, I load the prop­er­ties in order such that application.properties will be over­rid­den by environment.properties which will in turn be over­rid­den by sys­tem properties.
  2. On line 16, I load anno­tated com­po­nents from the name­spaces org.apposite (to get my Con­trollers) and com.christophertownson.spring.velocity (to get my Veloc­ity setup).
  3. On lines 18–30, I con­fig­ure a very basic Spring Secu­rity setup, using hard-coded users and pass­words in the clear. This can be improved in future. Note, how­ever, that I do exter­nalise the user and role name con­fig­u­ra­tion details into appli­ca­tion or envi­ron­ment prop­er­ties: this will make some things eas­ier as things progress.
  4. On line 32, I declare annotation-driven MVC because this is a great addi­tion to Spring that basi­cally sets up every­thing you need to use anno­tated con­trollers (as I am).
  5. On line 34, I declare the default servlet han­dler. Another good new XML short­hand, this sets up han­dling of sta­tic resources by the container’s default servlet (as it says on the tin).
  6. On line 36, I instan­ti­ate a java.util.Properties instance with the bean ID velocityProperties so that this is auto-wired into the default Veloc­ity setup achieved via the component-scan of the com.christophertownson.spring.velocity pack­age. Sim­i­larly, on lines 38–40, I instan­ti­ate a Veloc­ity tool­box fac­tory, so that I can use Veloc­ity tools using the new tool­box for­mats intro­duced with Veloc­ity Tools 2.
  7. On lines 42–46, I configure the media types mapping: this is a bi-directional map used by both the content-negotiating view resolver and the Velocity view to determine either (a) the file extension to use for a requested content-type or, inversely, (b) the content-type to use for a given file extension (falling back to a default content-type configured in application or environment properties when no content-type file extension is present). To add support for a new format, all we need to do is add it to the mapping and drop in corresponding templates for any resources we want to make available in that content-type.
  8. Last, but by no means least, on lines 48–62, I instan­ti­ate the ContentNegotiatingViewResolver, con­fig­ur­ing it with the media types map and giv­ing it the MappingJacksonJsonView and XStreamMarshaller as default views (so that we can get a JSON or XML rep­re­sen­ta­tion of any URI, if we so desire).

To get the whole thing work­ing, so that the Spring appli­ca­tion con­text is fired-up when the built WAR is deployed or started, we just need a stan­dard web.xml that declares the rel­e­vant servlets and fil­ters for Spring and Spring Security:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
    <display-name>apposite</display-name>
    <description>A publishing application</description>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:org/apposite/applicationContext.xml</param-value>
    </context-param>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

    <filter>
        <filter-name>springSecurityFilterChain</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
    </filter>

    <filter>
        <filter-name>httpMethod</filter-name>
        <filter-class>org.springframework.web.filter.HiddenHttpMethodFilter</filter-class>
    </filter>

    <filter>
        <filter-name>encoding</filter-name>
        <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
        <init-param>
            <param-name>encoding</param-name>
            <param-value>UTF-8</param-value>
        </init-param>
        <init-param>
            <param-name>forceEncoding</param-name>
            <param-value>true</param-value>
        </init-param>
    </filter>

    <filter-mapping>
        <filter-name>springSecurityFilterChain</filter-name>
        <servlet-name>apposite</servlet-name>
    </filter-mapping>

    <filter-mapping>
        <filter-name>httpMethod</filter-name>
        <servlet-name>apposite</servlet-name>
    </filter-mapping>

    <filter-mapping>
        <filter-name>encoding</filter-name>
        <servlet-name>apposite</servlet-name>
    </filter-mapping>

    <servlet>
        <servlet-name>apposite</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>

    <servlet-mapping>
        <servlet-name>apposite</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>

</web-app>

Note that the <servlet-name> is “apposite” and not “spring” (or something similarly generic) — it is a minor detail but this can result in some clearer log messages during startup, especially if you have a number of Spring webapps running in the same container. It also means that the DispatcherServlet will be looking for a WEB-INF/apposite-servlet.xml file: I could configure it to point to the main app context file but, instead, I usually opt to just stick an empty application context file there. Historically, Spring recommended putting your controller and mapping declarations in this file, but that becomes unnecessary when you are using annotated controllers. I am not a fan of having multiple application context XML files, whatever the supposed rationale for dividing them up: if you find you are having to do too much “pointy-bracket-configuration”, then simply hiding it across many files is not the answer!

In addition to the DispatcherServlet (which dispatches requests to the right controller), the DelegatingFilterProxy (which is here setting up the Spring Security filter), and the ContextLoaderListener (which starts the main application context for access by the Spring Security filter and dispatcher servlet), I am also using the very useful HiddenHttpMethodFilter (which tunnels browser form POSTs into more appropriate HTTP verbs such as PUT and DELETE via a hidden _method parameter) and the CharacterEncodingFilter (which does exactly what it says on the tin).

Now we are all ready to go and, given our end-to-end test­ing setup, we should be able to exe­cute mvn test -PFunctionalTests from a com­mand prompt at the project root and see the appli­ca­tion started up, fol­lowed by the web dri­ver test being run and passing.

As you may recall, I wanted to do a lit­tle more than just pass the test: I wanted to be able to get dif­fer­ent rep­re­sen­ta­tions of the same resource. Given the steps so far, it is pos­si­ble to view GET /admin in default JSON or XML views gen­er­ated by XStream or Jackson-Mapper sim­ply by adding a .xml/.json file exten­sion to the URI or (bet­ter) by issu­ing the request with an Accept: application/xml or Accept: application/json header. How­ever, as there is no real “model” (object graph) asso­ci­ated with this sim­ple page, you will begin to see some of the lim­i­ta­tions of these default views … but, sim­ply by drop­ping in a new tem­plate or two, we can selec­tively over­ride the use of these default XML/JSON views:

$screen_content

First we cre­ate a totally “neu­tral” lay­out (because oth­er­wise our default one would start try­ing to wrap our JSON or XML or what­ever inside an HTML page lay­out). Now we can cre­ate, for exam­ple, an XML tem­plate for the admin dash­board page (leav­ing aside the ques­tion of the use­ful­ness of such a rep­re­sen­ta­tion of this resource for the time being):

#set($layout="/common/layouts/nolayout.vm")<?xml version="1.0" encoding="UTF-8"?>
<admin><dashboard><title>Administration Dashboard</title></dashboard></admin>

Now, of course, that is not a stun­ningly use­ful exam­ple but I have no doubt that you can imag­ine a num­ber of bet­ter use cases your­self. Within the “Appo­site” appli­ca­tion, for instance, I have a con­cept of a “Cal­en­darEvent” within the domain model (which rep­re­sents an “adver­tised, pub­lic event sub­mit­ted by a user” as opposed to, say, an “appli­ca­tion event”). I use this form of content-negotiation to deliver HTML, Atom, RDF, RSS, and ICS rep­re­sen­ta­tions of events to the user from the same URI (with no addi­tional pro­gram­ming required), whilst also being able to pro­vide links for browsers within HTML pages by using the equiv­a­lent URI with the desired format’s file exten­sion. If you com­pare this approach to the frankly painful process of adding RSS/Atom sup­port using Spring’s own AbstractRssFeedView and Rome, I think you will be pleas­antly sur­prised. Object graphs are object graphs and text is text. Turn­ing the for­mer into the lat­ter should be a generic and abstract activ­ity that requires no pro­gram­ming, only a syn­tac­ti­cal descrip­tion of the trans­for­ma­tion (which is what a tem­plate is). Of course, there are more com­plex, binary for­mats that you may also wish to deliver for which this style of con­tent nego­ti­a­tion is not appro­pri­ate … but, then, you are not tied to it. Because the Veloc­ity View resolver lives at the end of the view resolver chain and will return null if it can­not locate an exist­ing tem­plate, you can add your View imple­men­ta­tions for more com­plex cases as default views on the ContentNegotiatingViewResolver and just use Veloc­ity selec­tively to deliver cus­tomised rep­re­sen­ta­tions of spe­cific resources. To be hon­est, I have not yet had to sat­isfy any use cases that can­not be met by this sim­ple method of con­tent nego­ti­a­tion for text-based formats.

On May 16, 2011 christopher wrote: End-to-End Testing With Maven, Jetty & JBehave

One of those seemingly trivial topics of debate amongst software developers which is liable to irk me is the subject of dependencies. There is nothing more frustrating than checking out a project only to discover that it has no end of “externals” and other assorted environmental dependencies that are outside of the control of the project itself. Ideally, the structure and build support for an executable project (e.g. a web application) should make it possible for it to be run “out-of-the-box”, because three significant costs are incurred where this is not the case:

  1. “Time-to-start” for any new developer is increased.
  2. The appli­ca­tion becomes more frag­ile through expo­sure to change in exter­nal dependencies.
  3. End-to-end func­tional, in-container test­ing of the appli­ca­tion becomes cor­re­spond­ingly more com­plex, the setup process for which now has to effec­tively doc­u­ment and keep pace with changes to exter­nal sys­tems (which, in prac­tice, are often not known until after your tests start fail­ing, lead­ing to wasted time debug­ging the cause of failure).

The argument I sometimes hear against my approach is that it attempts to create a monolithic system (the “one ring to rule them all” syndrome) and that separation into “modules” helps to create smaller, more manageable applications (smaller: maybe; more manageable: definitely not, in my view). “Modularisation” and “service-oriented” approaches need not incur the loss of compile-time safety (which is an advantage of a language like Java) nor the fragmentation of integration concerns. To demonstrate this, and for my own enjoyment and education, I have recently been working in my spare time on putting together a small web application that I call “Apposite”. In this and subsequent posts, I would like to share with you what I see as some of the key structural and architectural approaches that I am adopting, beginning with the assumption that the entire application must always remain executable as a standalone artefact using just the build script, so that it is possible both to do end-to-end functional testing of the application as part of the build and to do “live coding” (where changes in compiled code are quickly visible within a running instance).

The application itself is a very conventional Java web application built using the now-standard stack of Spring and Hibernate. For builds, I am using Maven. For testing, I am using JUnit (of course), WebDriver, and JBehave (whilst arguably not as good as Cucumber, it is easier to use with Java projects and fits more nicely with my “out-of-the-box” non-functional requirements). As you can see, hardly groundbreaking stuff … but something I still see done “wrong” in so many places and by so many people (which is somewhat inevitable by virtue of its popularity).

Get­ting the basic project struc­ture in place

Let’s begin from the very basics, assum­ing a stan­dard Maven web appli­ca­tion project structure:

  • appo­site
    • pom.xml
    • src
      • main
        • java
        • resources
        • webapp
          • WEB-INF
            • web.xml
      • test
        • java
        • resources

In the pom.xml, we declare the test­ing depen­den­cies we are going to be using, start­ing with JUnit:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.8.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jbehave</groupId>
            <artifactId>jbehave-core</artifactId>
            <version>3.3.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-common</artifactId>
            <version>${selenium.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-firefox-driver</artifactId>
            <version>${selenium.version}</version>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-support</artifactId>
            <version>${selenium.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>3.0.5.RELEASE</version>
            <scope>test</scope>
        </dependency>

Functional tests are inherently slower to run than unit tests and we do not necessarily want to run them all the time. Therefore, we want to execute them only during Maven’s integration test phase and, even then, only when we specify a functional test profile. To achieve this, we begin by adding the maven-failsafe-plugin to the build section of our POM:

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.8</version>
            </plugin>

By default, this will run JUnit tests that match the pattern **/*IT.java during the integration test phase of the build. You can stick with the default; however, I prefer the slightly more descriptive naming convention **/*FunctionalTest.java — that can yield slightly over-long test names but it is at least blindingly clear what sort of test your test class is! To ensure my preferred test naming convention does not conflict with the standard surefire plugin defaults, I configure excludes and includes in the surefire-plugin:

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.8.1</version>
                <configuration>
                    <includes>
                        <include>**/*Test.java</include>
                    </includes>
                    <excludes>
                        <exclude>**/*FunctionalTest.java</exclude>
                    </excludes>
                </configuration>
            </plugin>

By default in Maven, the webapp source folder is not on the test class­path. It makes this kind of in-container func­tional test­ing much eas­ier if it is. To place the webapp folder on the class­path, you can use the build-helper-maven-plugin:

            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>build-helper-maven-plugin</artifactId>
                <version>1.5</version>
                <executions>
                    <execution>
                        <id>add-test-resource</id>
                        <phase>generate-test-sources</phase>
                        <goals>
                            <goal>add-test-resource</goal>
                        </goals>
                        <configuration>
                            <resources>
                                <resource>
                                    <directory>src/main/webapp</directory>
                                </resource>
                            </resources>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

Next in the build plu­g­ins we need to declare and con­fig­ure the jetty-maven-plugin so that we can fire-up the entire web appli­ca­tion using mvn jetty:run:

            <!-- You need to specify -Djetty.port=${port} if 8080 is already bound on the build machine -->
            <plugin>
                <groupId>org.mortbay.jetty</groupId>
                <artifactId>jetty-maven-plugin</artifactId>
                <version>7.2.0.v20101020</version>
                <dependencies>
                    <!-- you can declare what would commonly be your container-provided dependencies here, such as log4j etc -->
                </dependencies>
                <configuration>
                    <webAppConfig>
                        <contextPath>/${project.artifactId}</contextPath>
                    </webAppConfig>
                    <jettyConfig>src/test/resources/jetty.xml</jettyConfig>
                    <useTestClasspath>true</useTestClasspath>
                    <scanIntervalSeconds>10</scanIntervalSeconds>
                    <stopKey>${project.artifactId}-stop</stopKey>
                    <stopPort>9999</stopPort>
                </configuration>
            </plugin>

Note the use of the jetty.xml jettyConfig there: I use this to declare a JNDI datasource, so that all my app has to know is the JNDI name and the actual details will always be encapsulated within the container (in this case, the test harness). Note that this needs to be a jetty server config — using a jetty web application config will result in horrific memory leaks around database connections when combined with the regular restarts that you may well want if you are doing “live coding” against a running jetty instance using this build config. In Apposite, I am using an in-memory HSQLDB database for testing, so I declare c3p0 and HSQLDB as container-provided dependencies of the jetty plugin and my jetty.xml looks like this:

<?xml version="1.0"  encoding="UTF-8"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
<Configure class="org.eclipse.jetty.server.Server">
    <New class="org.eclipse.jetty.plus.jndi.Resource">
        <Arg>jdbc/apposite</Arg>
        <Arg>
            <New class="com.mchange.v2.c3p0.ComboPooledDataSource">
                <Set name="driverClass">org.hsqldb.jdbcDriver</Set>
                <Set name="jdbcUrl">jdbc:hsqldb:mem:apposite</Set>
                <Set name="user">sa</Set>
                <Set name="password"></Set>
            </New>
        </Arg>
    </New>
</Configure>

The final touch in the POM is to setup a func­tional tests pro­file so that we can exe­cute the in-container tests using mvn test -PFunctionalTests (or -PWhateverYouCallYourProfile):

        <profile>
            <id>FunctionalTests</id>
            <activation>
                <activeByDefault>false</activeByDefault>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-failsafe-plugin</artifactId>
                        <configuration>
                            <includes>
                                <include>**/*FunctionalTest.java</include>
                            </includes>
                        </configuration>
                        <executions>
                            <execution>
                                <id>integration-test</id>
                                <goals>
                                    <goal>integration-test</goal>
                                </goals>
                            </execution>
                            <execution>
                                <id>verify</id>
                                <goals>
                                    <goal>verify</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    <plugin>
                        <groupId>org.mortbay.jetty</groupId>
                        <artifactId>jetty-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <id>start-jetty</id>
                                <phase>pre-integration-test</phase>
                                <goals>
                                    <goal>run</goal>
                                </goals>
                                <configuration>
                                    <scanIntervalSeconds>0</scanIntervalSeconds>
                                    <daemon>true</daemon>
                                </configuration>
                            </execution>
                            <execution>
                                <id>stop-jetty</id>
                                <phase>post-integration-test</phase>
                                <goals>
                                    <goal>stop</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                </plugins>
            </build>
        </profile>

In this pro­file, we tell the jetty plu­gin to fire up our webapp just before the inte­gra­tion test phase starts (and to stop it once it is com­plete) and inform the fail­safe plu­gin that it should exe­cute tests that match the nam­ing con­ven­tion **/*FunctionalTest.java (you could, of course, avoid hav­ing to have this bit of con­fig­u­ra­tion by sim­ply accept­ing the default **/*IT.java con­ven­tion … but I have some­thing of a dis­like of “pub­lic abbre­vi­a­tions” like this). Note also that the scanIntervalSeconds prop­erty is set to 0: we do not want jetty acci­den­tally detect­ing some change on the class­path due to code gen­er­at­ing some resource there and restart­ing mid-test as a con­se­quence. Set­ting this prop­erty to 0 ensures this by turn­ing off the jetty plu­gin change scan. This over­rides the set­ting in the build plu­gin con­fig­u­ra­tion (where we set it to 10 sec­onds) which was intended to have the oppo­site effect: when we do “live cod­ing” against a run­ning jetty instance, we want it to pick-up and deploy our code changes regularly.

As usual with Maven, there is a bit of an excess of pointy brack­ets here … but once you have a use­ful project struc­ture you can always turn it into an arche­type, thus avoid­ing the need to have to recre­ate it by hand every time.

Cre­at­ing a “frame­work” for the func­tional tests

We now have a web application (albeit with no actual code) that we can fire up and run from our build script and which will automatically run certain tests in-container during the build if and when we so choose. Before we plough ahead, we should stop and give a little thought to how we want to divide up our functional tests and what kind of support infrastructure they might benefit from.

The first thought that occurred to me at this point is “I’m using Spring already so surely I must be able to re-utilise it to make man­ag­ing my test code eas­ier?” This is indeed pos­si­ble but, if you are using annotation-based Spring con­text con­fig­u­ra­tion (as I am), then it is a very good idea to use a com­pletely sep­a­rate name­space for your func­tional test “con­text” to ensure there is no chance of it becom­ing mixed up with your real appli­ca­tion. In the case of Appo­site, my appli­ca­tion name­space is org.apposite. There­fore, rather than use org.apposite.bdd (or sim­i­lar), I opted for bdd.org.apposite: no chance of a con­flict, as it is not a sub­set of the appli­ca­tion name­space. I began by mak­ing a min­i­mal appli­ca­tion con­text con­fig that would pro­vide pretty much all the sup­port­ing infra­struc­ture I would need for my tests:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:property-placeholder location="classpath:application.properties,classpath:environment.properties" system-properties-mode="OVERRIDE" />

    <context:component-scan base-package="bdd.org.apposite" />

    <bean id="web" class="org.openqa.selenium.firefox.FirefoxDriver" scope="prototype" destroy-method="close" />

</beans>

This file achieves the following:

  1. Provides access to “application” and “environment” properties (of the real application) from src/main/resources/application.properties and src/test/resources/environment.properties, respectively, so that these values can be utilised to construct test cases and assertions. Note that the environment.properties defines properties specific to the container or the environment and would normally be provided by the target deployment container. The version in src/test/resources therefore replicates this requirement for the test container whilst also serving as a form of documentation, thus helping me to keep these two distinct areas of configuration entirely separate whilst maintaining runnability “out-of-the-box”.
  2. Instan­ti­ates anno­tated com­po­nents within the test name­space only via context:component-scan.
  3. Provides access to WebDriver (I’m using the firefox driver here). Notice that the scope is prototype so that each test will get its own new instance. Here I also specify a destroy-method — that is a bit of “belt & braces” paranoia to try and ensure that we do not leave Firefox windows open on completion of a test case (it doesn’t actually achieve that due to the nature of the Spring bean lifecycle in relation to the test execution, but out of pure superstition I felt it better defined than not, if you know what I mean).

Next, I started think­ing about how I wanted to divide up my tests. This is worth doing if you are using some­thing like JBe­have because one of the first things you will need to do is to tell it how to load “story files”. Con­se­quently, know­ing which story files to load is impor­tant. If you sim­ply load all your story files in one go (which is an option) you will have one mas­sive func­tional test. That may not nec­es­sar­ily be a prob­lem, but it does limit things some­what, espe­cially in terms of report­ing, and may quickly become unwieldy for any­thing but the small­est and most sim­ple of appli­ca­tions. In line with gen­eral BDD guide­lines, I wanted to split my tests up into “func­tional areas” within which one or more user sce­nar­ios could be encap­su­lated (for exam­ple, “reg­is­tra­tion”). I opted to go for a one-to-one rela­tion between a func­tional test class and story file because then, once a basic test exe­cu­tion mech­a­nism was in place, I would sim­ply be able to write the story file and cre­ate a sim­ple test class that spec­i­fied the story file to run. I would then see these as indi­vid­u­ally exe­cuted tests within both the Maven inte­gra­tion test phase and in the JBe­have report­ing. I felt the cost of hav­ing to pro­duce at least one “no code” class per story file was off­set by the flex­i­bil­ity this approach would pro­vide. After writ­ing a few tests, I extracted the fol­low­ing through a process of refactoring:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:bdd/org/apposite/functional-tests.xml")
public abstract class AbstractFunctionalTest extends Embedder {

    @Autowired private GenericApplicationContext ctx;

    private final String includes;

    protected AbstractFunctionalTest(String storyFile) {
        includes = "bdd/org/apposite/" + storyFile;
    }

    @Test public void runStories() {
        useCandidateSteps(new InstanceStepsFactory(
                configuration().useStoryReporterBuilder(new StoryReporterBuilder()
                        .withCodeLocation(codeLocationFromClass(getClass()))
                        .withFormats(CONSOLE, TXT, XML, HTML)),
                ctx.getBeansWithAnnotation(Steps.class).values().toArray())
                .createCandidateSteps());
        runStoriesAsPaths(new StoryFinder().findPaths(codeLocationFromClass(getClass()), includes, ""));
    }

}

This class extends org.jbehave.core.embedder.Embedder, which is the key thing in enabling it to become an executable JUnit test that runs JBehave stories. (We also have to override some of JBehave’s frankly terrible reporting defaults!) It utilises the Spring testing framework and the test context described above to allow autowiring of JBehave “steps” (which are just POJOs whose @Given, @When, and @Then annotated methods are matched against the corresponding lines from a story file). I created a custom Spring component stereotype called @Steps:

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Component
public @interface Steps {
    String value() default "";
}

All Spring beans anno­tated with the @Steps anno­ta­tion are pre­sented to JBe­have as can­di­date steps for the exe­cu­tion of a story file which is spec­i­fied by a con­crete sub­class. At present, there is no clever fil­ter­ing of can­di­date steps and all story files are assumed at the very least to live within the bdd.org.apposite name­space. You could prob­a­bly be much more sophis­ti­cated if nec­es­sary, but this has proved suf­fi­cient for my require­ments so far.

Writ­ing the tests

Now we are in a position to write a basic story file. Let’s start with a “security” feature, for example: “I should not be able to access the administration dashboard unless I am logged in and have sufficient privileges”. Whilst perhaps not the best example of a “functional area” (because “security” is in reality a constituent component of other functional areas and cuts across them), it will nonetheless suffice for now:

Scenario: I cannot access the administration dashboard unless I am logged in

Given I am not logged in
When I go to the administration dashboard
Then I am asked to login
Then I enter the administrator credentials
Then I am redirected to the administration dashboard

This is an extremely basic example of a story file. For a full description of all the possible features of story files in JBehave, check out their documentation (which, whilst it appears comprehensive, does not win any prizes for clarity). Nonetheless, I’m sure you get the gist: each scenario consists of some preconditions, an operation, and some postconditions. You can have as many scenarios per “story file” as you like, separated by scenario declarations (this is why I opted to use the *.stories extension rather than the more common *.story — the former made more grammatical sense to me).

Next, because I opted for a one-to-one rela­tion between story files and exe­cutable tests, you need to cre­ate a very sim­ple class that will indi­cate that this story file needs to be run:

public class SecurityFunctionalTest extends AbstractFunctionalTest {

    public SecurityFunctionalTest() {
        super("security.stories");
    }

}

Bingo! We have a behaviour-driven functional test. “But where the hell is all the logic?” I am sure you are asking (if you are, it is, of course, the right question … and I shall move on to that now).

Organ­is­ing respon­si­bil­i­ties into page objects

The good people behind WebDriver (and almost anyone else who has done this kind of testing) rightly recommend that you take the important step of describing your web application in terms of “page objects”. A page object should, loosely speaking, correspond to the response from a given URI and encapsulate the “services” (forms, links, key information) that it provides. In the test case above, I am interested in three pages:

  1. A “logout” page (this is the least obvi­ous but bear in mind that we need to encap­su­late access to a URI that will ensure that we are not logged in to com­plete the first step)
  2. The “admin­is­tra­tion dash­board” page
  3. The “login” page

Clearly, there are going to be many things that are com­mon to all pages (even if you have a very inco­her­ent user inter­face). For exam­ple, at the very least, you should be able to “visit” all pages. Also, for test­ing pur­poses, we should be able to assert what page we are cur­rently on. There­fore, I will cut to the chase and begin with an abstract super­class for them all that can be used to describe these com­mon features:

public abstract class AbstractPage {

    private static final int DEFAULT_PORT = 8080;

    private String url;

    private WebDriver web;

    protected AbstractPage(String uri, WebDriver web) {
        this.web = web;
        int port = System.getProperty("jetty.port") != null ? Integer.valueOf(System.getProperty("jetty.port")) : DEFAULT_PORT;
        url = "http://localhost:" + port + "/apposite" + uri;
    }

    public void assertIsCurrentPage() {
        assertThat(isCurrentPage(), is(true));
    }

    public abstract boolean isCurrentPage();

    public final void visit() {
        web.get(url);
    }

}

There is a bug in maven-surefire-plugin < 2.6 which will prevent the system property being available to the test here, so you will have to hard-code the port on which you run your functional tests or upgrade. See SUREFIRE-121.

Our abstract page is respon­si­ble for man­ag­ing the Web­Driver instance (which it requires for instan­ti­a­tion), and coor­di­nat­ing what port, host, and con­text path we are run­ning on (the lat­ter two hard­coded here for the sake of sim­plic­ity but eas­ily exter­nal­is­able through sys­tem prop­er­ties if nec­es­sary). This means that con­crete page instances only need to spec­ify what URI they have (with­out hav­ing to con­sider con­text paths etc) and the abstract super­class can per­form all the actual “get­ting” and so forth.

Next, we define our actual, con­crete pages:

public class AdministrationDashboardPage extends AbstractPage {

    @FindBy(css = "h2") private WebElement heading;

    public AdministrationDashboardPage(WebDriver web) {
        super("/admin", web);
    }

    @Override public boolean isCurrentPage() {
        return "Administration Dashboard".equals(heading.getText());
    }

}

This first page def­i­n­i­tion is extremely sim­ple. Using the “finder” anno­ta­tions pro­vided by the selenium-support arte­fact, we use the pres­ence of some head­ing text to deter­mine whether or not we are actu­ally on the admin dash­board page. I will come back to the @FindBy anno­ta­tion in a lit­tle while.

public class LogoutPage extends AbstractPage {

    public LogoutPage(WebDriver web) {
        super("/users/logout", web);
    }

    @Override public boolean isCurrentPage() {
        return false;
    }

}

The logout page is even simpler because we should only ever need to “visit” it and we should never actually be “on it”. It is just a URI.

public class LoginPage extends AbstractPage {

    @FindBy(css = "#j_username") private WebElement username;

    @FindBy(css = "#j_password") private WebElement password;

    public LoginPage(WebDriver web) {
        super("/users/login", web);
    }

    public void enterUsername(String username) {
        this.username.sendKeys(username);
    }

    public void enterPassword(String password) {
        this.password.sendKeys(password);
    }

    public void login() {
        password.submit();
    }

    @Override public boolean isCurrentPage() {
        return username != null && password != null;
    }

}

The login page shows a lit­tle more about how page objects are intended to be used in that it encap­su­lates access to crit­i­cal form ele­ments (again, injected using the Web­Driver anno­ta­tions) in what can be viewed as “ser­vice meth­ods”. This means that any change in the page should only ever require an update to one code loca­tion. How­ever, it does also place con­sid­er­able impor­tance on being able to accu­rately iden­tify what “a page” really is in your appli­ca­tion (this becomes more com­plex when you are deal­ing with asyn­chro­nous JavaScript mak­ing calls to HTTP “ser­vices” from within a supposed-page — these ser­vices are, in effect, pages them­selves even if the human end-user never sees them in-the-raw — so keep an open mind about the def­i­n­i­tion of a page there!)

Organising logic into steps objects

Finally, we need to cre­ate imple­men­ta­tions for the “steps” which are detailed in our “story file” (one of the nice fea­tures of JBe­have is that you can tell it not to fail tests where imple­men­ta­tions have not yet been done — mark­ing these as “pend­ing” in the reports — this way one bunch of peo­ple can get busy writ­ing sto­ries which do not have to be com­mit­ted at the same time as the imple­men­ta­tions, open­ing the pos­si­bil­ity of these two tasks being com­pleted by sep­a­rate groups; e.g. “busi­ness ana­lysts” on the one hand and soft­ware devel­op­ers on the other).

Again, there are certainly going to be a number of common aspects to these steps objects, so you can begin with an abstract class. Mine currently looks like this:

@Scope(BeanDefinition.SCOPE_PROTOTYPE) public abstract class AbstractSteps {

    protected final WebDriver web;

    protected AbstractSteps(WebDriver web) {
        this.web = web;
    }

    @AfterScenario public void closeWebDriver() {
        web.close();
    }

    protected final <T extends AbstractPage> T initPage(Class<T> pageClass) {
        return PageFactory.initElements(web, pageClass);
    }

}

Notice that, because my steps are going to be managed by the functional-tests.xml Spring context, I can define them as prototype using @Scope so that we get a fresh instance wherever it is required, with a correspondingly fresh instance of the prototype WebDriver bean, which is autowired in. I define only one common method at present, which provides a strongly-typed means to instantiate a WebDriver page object (to use the @FindBy WebDriver annotations, you have to instantiate the page using the PageFactory.initElements(WebDriver, Class) method). Additionally, I make absolutely sure that the firefox window gets closed by using a JBehave @AfterScenario method to close it (this is almost directly equivalent to JUnit’s @After annotation). Now I am in a good position to start writing steps classes that shouldn’t need to worry about too much extraneous fluff. Let’s take a look at the SecuritySteps that I wrote to implement my basic story file described above:

@Steps public class SecuritySteps extends AbstractSteps {

    private String username;

    private String password;

    @Autowired public SecuritySteps(WebDriver web) {
        super(web);
    }

    @Given("I am not logged in")
    public void iAmNotLoggedIn() {
        initPage(LogoutPage.class).visit();
    }

    @When("I go to the administration dashboard")
    public void iGoToTheAdministrationDashboard() {
        initPage(AdministrationDashboardPage.class).visit();
    }

    @Then("I am asked to login")
    public void iAmAskedToLogin() {
        initPage(LoginPage.class).assertIsCurrentPage();
    }

    @Then("I enter the administrator credentials")
    public void iEnterTheAdministratorCredentials() {
        LoginPage p = initPage(LoginPage.class);
        p.enterUsername(username);
        p.enterPassword(password);
        p.login();
    }

    @Then("I am redirected to the administration dashboard")
    public void iAmRedirectedToTheAdministrationDashboard() {
        initPage(AdministrationDashboardPage.class).assertIsCurrentPage();
    }

    @Autowired public void setUsername(@Value("${apposite.security.root.user.name}") String username) {
        this.username = username;
    }

    @Autowired public void setPassword(@Value("${apposite.security.root.user.password}") String password) {
        this.password = password;
    }

}

Some things to notice about this class:

  1. It is anno­tated with my cus­tom @Steps anno­ta­tion so that the AbstractFunctionalTest will pick it up from the appli­ca­tion con­text and present it as can­di­date steps for the story file.
  2. Each method (exclud­ing the set­ters) cor­re­sponds to one of the state­ments from the story file: they are matched on the anno­ta­tion text. There are far more funky things one can do in terms of para­me­ter injec­tion here — this is just the most basic exam­ple possible.
  3. Because I decided to use Spring to manage my steps classes, I can autowire not only whichever WebDriver instance I choose to use, but also configuration values from the application or environment properties files (using @Value annotations); a sketch of how those beans might be declared follows this list.
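
On the third point: the actual wiring lives in the functional-tests.xml context. As a rough sketch only (the base package and property-file location here are assumptions, not taken from the project), the declarations it needs might look something like this:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context
                           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- picks up the @Steps classes (and any other annotated test infrastructure) -->
    <context:component-scan base-package="com.example.functional"/>

    <!-- resolves ${apposite.security.root.user.name} and friends for the @Value setters -->
    <context:property-placeholder location="classpath:environment.properties"/>

    <!-- prototype scope: every steps object gets its own browser instance -->
    <bean id="webDriver" class="org.openqa.selenium.firefox.FirefoxDriver" scope="prototype"/>

</beans>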

Con­clu­sion

All of this might seem like quite a lot of code to achieve the simple goal of running in-container functional tests. Naturally, there are ways of doing it with less code. However, most of the above is simply infrastructure which can be extracted and shared between multiple projects, and is intended to provide a harness that minimises the amount of work required to add test cases, which, being the most numerous "code type", should ideally require the least amount of actual code individually. Both steps and page objects might feasibly be considered a form of "shared library": re-use should most definitely be a goal and, once you begin to build a more comprehensive collection, you should not always be required to write new page objects or step implementations in order to write new test cases (but don't move them out of the project until you absolutely have to!)

Consequently, with a little up-front work and thought, it is possible to reduce the cost of adding test cases over time. This is often cited as a goal. Sadly, it is too often the case in practice that a somewhat lackadaisical approach at the outset confounds that aim: test harnesses become brittle, test code becomes more complex than the code it tests and grows bugs of its own, tests start to "blink" (passing and failing intermittently), and the whole thing becomes disorganised and confused.

My aim through­out has been to bal­ance flex­i­bil­ity with reg­u­lar­ity and pre­dictabil­ity: make the con­ven­tions clear and repeat­able but not too rigid. From the per­spec­tive of the build cycle, one of the keys to achiev­ing that pre­dictabil­ity is mak­ing sure that devel­op­ers can always run these kinds of test when­ever they need to (reduc­ing the time to feed­back). Hav­ing a build infra­struc­ture such as the one described here helps devel­op­ers to avoid not only “break­ing the build” but also “break­ing the functionality”.

Clearly, there is code to write in order to actually make this test pass (which I shall hopefully come on to in subsequent posts), but this points to one of the really nice things about BDD and functional testing of this kind: if you are using an agile methodology (which you should be), then your JBehave stories, whilst not absolutely equivalent, should bear a close relation to your iteration story cards. From this perspective, they can serve as a "definition of done" for any given vertical slice of functionality: you can proceed using TDD at the unit-test level, where the tests will fail & fail & fail, but then, once all the various units are complete, your end-to-end test will pass and you know you are done ... or, I should say, you know you are "good enough", because you shouldn't forget to do a little refactoring and code-tidying before you consider it truly complete.

On May 14, 2011 christopher wrote: A Different Way to Integrate Velocity with Spring

I have often thought that the integration with Velocity currently provided by Spring, whilst adequate, could be significantly improved. So, recently, I spent a fun weekend creating a custom integration wrapper for using Velocity with Spring, which I am calling spring-velocity (surprisingly enough).

I had 4 main aims in this exercise:

  1. Support the changes to Velocity & Velocity Tools (especially Tools) since the 1.5 and 2.0 releases (new toolbox format, no deprecation messages, etc.)
  2. Provide a standard, Spring-style means of using Spring context support to augment the Velocity Tools infrastructure (a @ViewHelper annotation acting as a component stereotype, which can carry a Spring @Scope and is added to the Velocity context automatically)
  3. Most importantly: work nicely with ContentNegotiatingViewResolver, so that a single Velocity-based Spring View can be used to generate multiple text-based formats simply by dropping in additional templates (and specifying the formats in ContentNegotiatingViewResolver's mediaTypes map); see the sketch after this list
  4. Get all of the above to work trans­par­ently, out-of-the-box via anno­ta­tions, and cor­rectly deter­mine what sort of “con­text” the app is run­ning in (e.g. web app [access to servlet con­text] or plain token replacer with/without toolbox)
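
To make the third aim a little more concrete, here is a rough sketch of how ContentNegotiatingViewResolver is typically declared, keyed on file extensions via its mediaTypes map (the extensions and media types below are illustrative, and the Velocity view resolver supplied by the wrapper is simply detected from the application context, so it is not shown):

<bean class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver">
    <property name="mediaTypes">
        <map>
            <entry key="html" value="text/html"/>
            <entry key="xml"  value="application/xml"/>
            <entry key="json" value="application/json"/>
        </map>
    </property>
</bean>

The intent behind aim 3 is that supporting a new text-based format should then amount to dropping in one more template and adding one more entry to that map.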

The source code and bina­ries can be down­loaded from my maven repos­i­tory.

It is cur­rently released under the GPL3 licence. How­ever, if any­one in the “real world” finds it use­ful and this licence becomes too restric­tive, I may switch to an Apache licence (in line with both Spring and Veloc­ity) in a future release, if there are any. Sim­i­larly, if it seems to other peo­ple that this is some­thing that may be gen­er­ally use­ful, I would prob­a­bly also move the source to github.

Any thoughts or feedback would be most welcome ... and, of course, feel free to use it.