Embed a Groovy Web Console in a Java Spring App


Having a web-based Groovy console for your Java Spring webapp can be invaluable for developing and testing. It might also be appropriate for your production app as long as it’s properly secured. The Grails world has long had the Console Plugin and I’ve seen firsthand how useful a console can be on earlier Grails projects.

Not every project I get to work on is in Grails, though, and I wanted to have the same power in my Spring/Java applications.

I’ll demonstrate how to integrate this with a REST-based web service, but the same core Service code could be used for other website interaction types.
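As a rough preview of that core Service code, here is a minimal, hypothetical sketch (the names and structure are mine, not the final implementation): hand the submitted script to GroovyShell, expose a PrintWriter to the script as out, and return both the captured output and the script’s return value.

import groovy.lang.Binding
import groovy.lang.GroovyShell

// hypothetical sketch: evaluate a submitted script with GroovyShell,
// exposing a PrintWriter as "out" so scripts can print to the captured output
class GroovyConsoleService {
    Map evaluate(String script) {
        StringWriter writer = new StringWriter()
        Binding binding = new Binding(out: new PrintWriter(writer))
        Object result = new GroovyShell(binding).evaluate(script)
        [output: writer.toString(), result: result]
    }
}

A REST controller would then just accept the script text in a POST body, call this service, and render the resulting map as JSON.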

Logging to Splunk in Key/Value Pairs


Splunk is a log aggregation tool with a very powerful query language that lets you easily mine the data from your logs. Taking an hour or two to learn the query language (which has some similarity to SQL) will greatly increase the usefulness of Splunk.

One of the Splunk logging best practices is to write out your logs as comma-delimited key/value pairs, which lets Splunk interpret your data as queryable fields.

key1="value", key2="other value"...

Doing that will make key1 and key2 fields that can be queried and reported on.

If you don’t do this, you can still create fields out of unstructured data, but you have to use a relatively ugly regular expression syntax to create them. For example, if the log format is “Here is Key1 value and Key2 other value”:

... | rex field=_raw "Here is Key1 (?<key1>.*) and Key2 (?<key2>.*)"

Here is a simple utility class whose static methods write out data given to them in vararg format as an appropriately escaped and delimited String:

package com.naleid.utils;

import org.apache.commons.lang3.StringEscapeUtils;
import java.util.LinkedHashMap;
import java.util.Map;

public class LogUtil {

    public static String toSplunkString(Object... mapInfo) {
        return toSplunkString(varArgMap(mapInfo));
    }

    public static Map<String, Object> varArgMap(Object... mapInfo) {
        final Map<String, Object> result = new LinkedHashMap<>();
        if (mapInfo.length % 2 == 1) {
            throw new IllegalArgumentException("arguments must be even in number");
        }
        for (int i = 0; i < mapInfo.length; i += 2) {
            final Object o1 = mapInfo[i];
            if (!(o1 instanceof String)) {
                throw new IllegalArgumentException("odd arguments must be String values so they can be keys");
            }
            final Object o2 = mapInfo[i + 1];
            result.put((String) o1, o2);
        }
        return result;
    }

    public static String toSplunkString(Map<String, ?> map) {
        StringBuilder buffer = new StringBuilder();

        for (Map.Entry<String, ?> entry : map.entrySet()) {
            if (buffer.length() > 0) {
                buffer.append(", ");
            }
            buffer.append(entry.getKey()==null?null:entry.getKey().toString())
                    .append("=\"")
                    .append(StringEscapeUtils.escapeJava(entry.getValue()==null?null:entry.getValue().toString()))
                    .append("\"");
        }
        return buffer.toString();
    }

}

Using that, when you write a log statement you can now pass in varargs (which will be transformed into a Map) and have them logged out in Splunk’s standard key/value format.

import static com.naleid.utils.LogUtil.toSplunkString;

...

log.info(toSplunkString("tag", "userPurchase", "purchaseId", 23, "userId", 123, "productId", 456, "price", 45.34));

That would output something like:

06:48:04,081 INFO [com.naleid.service.MyPurchaseService] tag="userPurchase", purchaseId="23", userId="123", productId="456", price="45.34"

This would let us write a Splunk query to show us things like number of purchases and average purchase price by user:

userPurchase | stats count, avg(price) by userId

(screenshot: Splunk user purchases with totals)

or the number of purchases per product and that product’s total amount sold, sorted from highest to lowest:

userPurchase | stats count as "# purchases", sum(price) as "total $" by productId | sort -"total $" 

(screenshot: Splunk purchase count with totals)

You can also set up alerts, have reports automatically generated and e-mailed periodically, or create graphical dashboards.

The Splunk Search Reference and the Quick Reference Guide PDF (which is slightly outdated but still useful) are great references while you’re learning the query syntax. You can do much more powerful things than what I’m showing here.

The Splunk Search Examples page shows a number of queries (and screenshots of results) along these lines.

In my next post (coming soon), I’ll show how to create a StopWatchAspect that automatically logs timing information for all of your service methods using the Splunk formatting. Then you can report on timing statistics for your code, see which methods are taking the most time, and watch how a method’s performance changes over time.
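As a rough preview, here’s a hypothetical sketch of what such an aspect might look like (assuming Spring AOP with AspectJ annotations and the LogUtil class above; this is my outline, not the final code from that post):

import org.aspectj.lang.ProceedingJoinPoint
import org.aspectj.lang.annotation.Around
import org.aspectj.lang.annotation.Aspect
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import static com.naleid.utils.LogUtil.toSplunkString

@Aspect
class StopWatchAspect {
    private static final Logger log = LoggerFactory.getLogger(StopWatchAspect)

    // time every service method call and log it in Splunk key/value format
    @Around("execution(* com.naleid.service..*.*(..))")
    Object logTiming(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis()
        try {
            return joinPoint.proceed()
        } finally {
            log.info(toSplunkString(
                    "tag", "methodTiming",
                    "method", joinPoint.signature.toShortString(),
                    "timeInMs", System.currentTimeMillis() - start))
        }
    }
}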

Saving JSON Client-side to an S3 Bucket


A co-worker came up with an interesting problem today: what’s the cheapest and easiest way to save relatively low-traffic text content without having to create a server-side component for it?

After thinking about it for a bit, I landed on using a publicly writable S3 bucket and letting the client-side JavaScript PUT the JSON to the bucket. With a little bit of research and playing around, I was able to make it work.

This assumes you have an AWS account. If you do, you can log in to the console. Then, go to the S3 section of the console and create a new bucket that will hold the uploaded JSON files.

With that bucket selected, go to “Properties”, open up the “Permissions” tab and click on the “Add CORS Configuration” button. Put something like this in there:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://fiddle.jshell.net</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Be sure to change the AllowedOrigin to whatever host name you’ll be uploading from (I use jsfiddle for testing JavaScript, so it’s a good one to test with).

Then you’ll want to “Add a bucket policy” that allows the world to upload to the bucket. Here is the one I came up with (replace the bucket name with yours):

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPuts",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        },
        {
            "Sid": "DenyGetsToAllButMyIP",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "YOUR.IP.ADD.RESS/32"
                }
            }
        }
    ]
}

This policy removes the ability to download or view the file from anyone who isn’t at the IP you put in the NotIpAddress CIDR block. This is an alternative to setting up an IAM policy that lets a specific user or set of users view the file, which I’ve found to be quite fiddly to get working.

If you’re having trouble with this policy, you can use the AWS Policy Generator to generate your own.

Then, just write your JavaScript to upload the JSON (this requires jQuery; replace YOUR-BUCKET-NAME in the upload path and set the data to whatever JSON you want it to be; you can tweak this fiddle):

var value = "foobar";
// generate random guid for filename
var guid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
    return v.toString(16);
});

// CHANGE THIS TO YOUR BUCKET NAME
var uploadPath = 'http://YOUR-BUCKET-NAME.s3.amazonaws.com/' + guid + ".json";

console.log(uploadPath);

$.ajax({
    type: "PUT",
    url: uploadPath,
    dataType: 'json',
    async: false,
    data: JSON.stringify({ "value": value })
});

Because this relies on CORS, it only works in browsers that support CORS through XMLHttpRequest (jQuery’s $.ajax doesn’t fall back to IE 8/9’s XDomainRequest, so effectively IE 10+ and the other modern browsers), but for most applications that should be OK.

This solution isn’t appropriate for all uses, but for low-traffic, relatively insecure stuff (like website “feedback” forms) it should be fine. There is also the possibility that someone malicious could upload whatever they want to your bucket (and make you pay the charges for it). Hopefully setting the policy so the bucket is world-writable but not world-readable mitigates the potential for misuse.

Calling GruntJS Tasks From Gradle


Gradle is a great build tool with a large community for developing JVM-based applications, but one area where it seems to lack strong support is front-end tooling. The Node.js community’s strength is front-end tooling, with a number of very nice build tools including Grunt, Yeoman, and Brunch.

There are a couple of Gradle plugins that people have created around JavaScript and CSS processing, but even the authors of those plugins seem to have punted and moved to node.js-based tools for front-end work.

I’m using Grunt on my latest project to help out with packaging, minification, and concatenation of files (all through RequireJS’s r.js optimizer), linting (through JSHint), and CSS pre-processing (via LESS).

I wanted our build process to be able to assemble our .war files in a single step, so I needed to figure out how to weld these two tools together.

The quick and dirty way of doing it would be to either hard-code some “grunt”.execute().text lines or to make each grunt task an Exec task:

task requirejs(type: Exec) {
    commandLine 'grunt', 'requirejs'
}

One other limitation we had was that some of our developers (and our build machine slaves) are on Windows boxes, while other developers are using OS X. On Windows, the grunt command is spelled grunt.cmd, and once I started repeating OS checks everywhere, things felt less DRY and more hacky.

After a little research, I was able to figure out how to create a custom Gradle Exec subclass that keeps things clean:

import org.apache.tools.ant.taskdefs.condition.Os
import org.gradle.api.tasks.Exec

...

task requirejs(type: GruntTask) {
    gruntArgs = "requirejs"
}

task jslint(type: GruntTask) {
    gruntArgs = "lint"
}

...

class GruntTask extends Exec {
    private String gruntExecutable = Os.isFamily(Os.FAMILY_WINDOWS) ? "grunt.cmd" : "grunt"
    private String switches = "--no-color"

    String gruntArgs = "" 

    public GruntTask() {
        super()
        this.setExecutable(gruntExecutable)
    }

    public void setGruntArgs(String gruntArgs) {
        this.args = "$switches $gruntArgs".trim().split(" ") as List
    }
}

You can either put the GruntTask class directly in your build.gradle file (where it won’t have a package) or in a directory under buildSrc/src/main/groovy, where it will automatically be included in your build. It’s probably better for organization purposes to have it in buildSrc, but there seems to be a performance cost to Gradle checking whether that directory is up to date on every build.

With the GruntTask class in place, you can treat your Grunt tasks just like Gradle tasks, including making things like your war task depend on requirejs running first, or you can pass your tasks in explicitly:

gradle clean lint requirejs war
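For the dependency approach, a one-liner in build.gradle is enough (a sketch, assuming the war plugin’s war task):

// make sure the RequireJS optimization runs before the war is assembled
war.dependsOn requirejs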

Overriding Backbone.js Sync to Allow Cross Origin Resource Sharing (CORS) withCredentials


So I’m apparently starting a series of Backbone.js posts wherein I document all the Backbone.js/JavaScript stuff I’m figuring out that I couldn’t easily find in the googles.

Today’s installment is how to globally override Backbone’s sync method to allow Cross Origin Resource Sharing (CORS) requests to forward the current cookies with security credentials.

There are a number of results on Stack Overflow that explain how to set it for a specific request, how to set up the server side, or just that you generally need to override sync, but none of them show how to do it with withCredentials.

Per html5rocks:

The .withCredentials property will include any cookies from the remote domain in the request, and it will also set any cookies from the remote domain. Note that these cookies still honor same-origin policies, so your JavaScript code can’t access the cookies from document.cookie or the response headers. They can only be controlled by the remote domain.

This code uses the JavaScript proxy pattern to get a reference to the original sync function and creates a new wrapper method that provides advice around the original function:

(function() {

  var proxiedSync = Backbone.sync;

  Backbone.sync = function(method, model, options) {
    options || (options = {});

    if (!options.crossDomain) {
      options.crossDomain = true;
    }

    if (!options.xhrFields) {
      options.xhrFields = {withCredentials:true};
    }

    return proxiedSync(method, model, options);
  };
})();

Also, this is only the client-side half of CORS; you’ll still need to implement the appropriate changes on the server side to make this work. Those seem to be much better documented (see the “Adding CORS Support to the Server” section of the html5rocks documentation).
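For a Java shop, the server side might be as simple as a servlet filter like this hypothetical sketch (the origin and class name are placeholders). The key detail for withCredentials is that Access-Control-Allow-Origin must echo a specific origin (browsers reject “*” for credentialed requests) and Access-Control-Allow-Credentials must be true:

import javax.servlet.Filter
import javax.servlet.FilterChain
import javax.servlet.FilterConfig
import javax.servlet.ServletRequest
import javax.servlet.ServletResponse
import javax.servlet.http.HttpServletResponse

// hypothetical sketch: allow a single trusted origin to make credentialed requests
class CorsFilter implements Filter {
    void init(FilterConfig config) { }
    void destroy() { }

    void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) {
        HttpServletResponse response = (HttpServletResponse) res
        response.setHeader('Access-Control-Allow-Origin', 'http://your.trusted.origin')
        response.setHeader('Access-Control-Allow-Credentials', 'true')
        chain.doFilter(req, res)
    }
}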

Getting CoffeeScript Compilation Working in Gradle


Gradle 1.2 includes support for compiling CoffeeScript, but it’s not well documented: there’s nothing on the Gradle website, and all I was able to find after a bunch of googling was a gradle-dev thread where Luke Daley announces the functionality.

Based on that thread, I was able to come up with this sample gradle file that let me compile my .coffee source files into javascript as part of a build:

import org.gradle.plugins.javascript.coffeescript.CoffeeScriptCompile

apply plugin: 'coffeescript-base'

repositories {
  mavenCentral()
  maven {
    url "http://repo.gradle.org/gradle/repo"
  }
}

task compileCoffee(type: CoffeeScriptCompile) {
  source fileTree('src/main/coffee')
  destinationDir file('build/js')
}

To integrate this into a war file, you’d need to extend it a little further to make the war task depend on the compileCoffee task, and then tell it to include the output in build/js in the root of the war.
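Something along these lines (a sketch, untested) would wire that up:

apply plugin: 'war'

war {
    // compile the CoffeeScript first, then copy the compiled
    // javascript into the root of the war
    dependsOn compileCoffee
    from 'build/js'
}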

I wasn’t able to use this approach on my current project for a couple of reasons related to restricted Maven repos, and ended up using the require-cs plugin for the require.js AMD module framework instead, since I was already using require.js.

DRY Testing of Require.js Based Backbone Apps Using Jasmine


EDIT: 2/15/2013: Scratch this way of doing it. I didn’t fully understand how requirejs worked in tests when I wrote this. You should instead just be using a define around your tests, which should work very similarly to the rest of your requirejs code under test. Like this:

define(['models/Todo', 'views/CountView'], function(Todo, View){
  describe('View :: Count Remaining Items', function() {
    var todos, view, mockData;
 
    beforeEach(function() {
      todos = new Todo.Collection();
      view = new View({collection: todos});
      mockData = { title: 'Foo Bar', timestamp: new Date().getTime() };
      $('#sandbox').html(view.render().el);
    });
    ...

Old (not optimal) way:

I’ve recently started a new backbone.js application that uses require.js modules to keep it organized and clean.

Doing development with modules is a strong trend in the backbone.js world. @addyosmani’s Developing Backbone.js Applications ebook recommends using require.js, and it also comes baked into Backbone Boilerplate.

There are quite a few examples of how to use require.js modules with backbone, but very few that actually show how to use them in combination with tests, specifically with Jasmine, a BDD test framework that I’ve really come to like. I did find one post by Uzi Kilon that was helpful.

He gives a nice overview of the problem as well as extensive documentation on how he solves it by calling require in beforeEach:

describe('View :: Count Remaining Items', function() {
 
  beforeEach(function() {
    var flag = false,
        that = this;
 
    require(['models/Todo', 'views/CountView'], function(Todo, View) {
      that.todos = new Todo.Collection();
      that.view = new View({collection: that.todos});
      that.mockData = { title: 'Foo Bar', timestamp: new Date().getTime() };
      $('#sandbox').html(that.view.render().el);
      flag = true;
    });
 
    waitsFor(function() {
      return flag;
    });
 
  });
  ...

This strategy sets up a boolean flag variable that is only set to true once the require is satisfied and the data in beforeEach is set up; Jasmine’s waitsFor monitors the flag.

In my testing this works great, but as the number of beforeEach methods grew, I had to repeat this code in every one. It was bothering me and I wanted to DRY it up.

The solution I came up with was to create a new file called jasmine-require.js:

/* 
    utility global functions for jasmine, global to match existing jasmine global functions 

    It's expected that the last parameter is a function that you want to execute
    within the context of the require; all preceding parameters are passed to the require method.

    The most likely way to call this is:

    waitsForRequire(['require_dep1', 'require_dep2',...], function(dep1, dep2) { 
        ...code that gets dependencies...
    }) 
*/
var waitsForRequire = function () {
  var argv = Array.prototype.slice.call(arguments),
      done = false;

  var callback = typeof _.last(argv) === 'function' ? argv.pop() : function(){};

  return function () {
    require.apply(null, argv.concat(function () {
      callback.apply(null, arguments);
      done = true;
    }));

    waitsFor(function () { return done; });
  };
};

I then modify the SpecRunner.js shim dependencies for jasmine so that jasmine-html also depends on this new file:

require.config({
  baseUrl: "/js/",
  urlArgs: 'cb=' + Math.random(),
  paths: {
    jquery: 'lib/jquery-1.8.0',
    underscore: 'lib/underscore-1.3.3',
    backbone: 'lib/backbone-0.9.2',
    'backbone.localStorage': 'lib/backbone.localStorage',
    jasmine: '../test/lib/jasmine',
    'jasmine-html': '../test/lib/jasmine-html',

    'jasmine-require': '../test/lib/jasmine-require', // ADDED!

    spec: '../test/jasmine/spec/'
  },
  shim: {
    underscore: {
      exports: "_"
    },
    backbone: {
      deps: ['underscore', 'jquery'],
      exports: 'Backbone'
    },
    'backbone.localStorage': {
      deps: ['backbone'],
      exports: 'Backbone'
    },
    jasmine: {
      exports: 'jasmine'
    },
    'jasmine-html': {

      deps: ['jasmine', 'jasmine-require'],  // ADDED jasmine-require!

      exports: 'jasmine'
    }
  }
});

Then, instead of having to repeat myself, I can DRY my beforeEach up to this:

describe('View :: Count Remaining Items', function() {
  var todos, view, mockData;
 
  beforeEach(waitsForRequire(['models/Todo', 'views/CountView'], function(Todo, View) {
      todos = new Todo.Collection();
      view = new View({collection: todos});
      mockData = { title: 'Foo Bar', timestamp: new Date().getTime() };
      $('#sandbox').html(view.render().el);
  }));
  ...

I’m not normally a fan of global JavaScript functions, but this fits with how Jasmine works, with describe, it, beforeEach, etc. all being global functions in tests. If you don’t like this, you could instead put it in a namespace (or attach it to the jasmine object with _.extend(jasmine, { waitsForRequire: function(){…body above…}});).

ClojureWest 2012 Overview


Just noticed that I never put up a link to the presentation I gave at the ClojureMN group a few months ago on what I thought was interesting at ClojureWest 2012 in San Jose earlier this year. Overall, it was a solid conference with a good mix of hardcore topics and pragmatic, practical Clojure usage.

Alex Miller always puts on a great conference (he also runs the StrangeLoop conferences). StrangeLoop 2011 was the best conference I’ve been to so far and I’m looking forward to StrangeLoop 2012 next month.

Git Core Concepts Presentation at GR8Conf US 2012


I gave a presentation earlier today on Git at the Groovy and Grails GR8Conf US 2012 conference.

The GR8Conf was named for the 8 Groovy-based technologies starting with the letter “G” that were popular when the conference first started 2 years ago (I think they were Groovy, Grails, Gradle, Griffon, Gant, GPars, Gaelyk, and…? Maybe GContracts or Geb?). I don’t think Git was one of the 8 technologies the conference was named for, but it probably should have been. All of the ones listed have repositories out on GitHub, and you need to know Git to be able to contribute and check out the source.

The presentation is titled “Git Core Concepts…or: how I learned to stop worrying and love the reflog” and it can be found out on GitHub.

There’s also a repository that Shaun Jurgemeyer (one of the main conference organizers) is putting together, collecting all of the presentations for the entire conference.

Thanks to Shaun and everyone else for putting on a fun conference this year. It was great seeing a lot of familiar people and putting some faces to those I’ve only interacted with virtually in the past.