Debugging Grails Forked Mode

Recent versions of grails introduce forked execution as the default mode.

There are benefits to forked execution (reduced memory strain, classpath isolation, and no metaclass conflicts), but there are also downsides with the current implementation, the biggest of which is that it makes debugging more painful.

You can no longer simply run the “debug” task in IntelliJ and have it stop at your breakpoints. If you do that, you’ll be debugging the parent launcher process, not your grails application, so your breakpoints will never be hit.

The standard way to get debugging working with forked execution is to use the --debug-fork flag on your grails run-app or grails test-app command. Then, create a remote debugger task in your IDE and attach it once the grails app opens up port 5005.
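Concretely, the standard workflow looks something like this (a sketch of the commands involved; the remote-debugger side is configured in your IDE):

```
# start the app; the forked JVM suspends and waits for a debugger on port 5005
grails run-app --debug-fork

# then create a "Remote" debug configuration in your IDE pointed at
# localhost:5005 and run it once the port is open
```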

The Problem

This sucks (IMO) for 3 reasons:

1. It assumes you know you want to debug before you start the app up.

Forget to set the flag and you need to bounce your server.

2. Under the covers, --debug-fork uses the suspend=y debug flag, which causes grails to halt startup until you attach a debugger.

If you get distracted while grails is starting up, you’ll often come back to a halted process that still has 90% of the startup to do before it’s ready to serve the app.

3. You can’t attach a debugger till grails forks the process and actually opens up port 5005.

Compiling all of your code, launching the process, and finally opening the port often takes 10+ seconds.

All of this means that you need to babysit your grails app while it starts up, or forego debugging unless you know you need it.

My Solution

The easiest way to solve this is to turn forked execution off (if everyone on your team is willing to give up the benefits). This is easily done by modifying your BuildConfig.groovy so that the grails.project.fork section is empty:

grails.project.fork = [:]

What if you (or your team) are not willing to give up forked execution mode? You can eliminate most of the downsides with a few tweaks.

1. Always Be Debugging

There really isn’t any performance penalty in dev mode to always running with the debug flags enabled, and it lets you connect a debugger at will. You don’t need to remember to start a different run target (or run a different command). You can change your BuildConfig.groovy to always run in debug mode with the debug: true map entry, ex:

grails.project.fork = [
    test: [maxMemory: 768, minMemory: 64, debug: true, maxPerm: 256, daemon: true],
    // ... same debug: true flag on the run, war, and console entries
]

Unfortunately, while that will always run in debug mode, it also has the suspend=y flag hard-coded into it, so you’ll pause execution on every run, which runs right into issue #2.

2. Use the suspend=n debug flag so that grails doesn’t pause till you connect a debugger.

This is harder to configure than you’d expect. As mentioned above, both the --debug-fork command line switch and the debug: true flag in BuildConfig.groovy cause the suspend=y flag to be used. To get around this, you need to specify the actual JVM debug flags in your BuildConfig.groovy, ex:

// jvmArgs make it so that we can run in forked mode without having to use the `--debug-fork` flag
// and also has suspend=n so that it will start up without forcing you to connect a remote debugger first
def jvmArgs = ['-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005']

grails.project.fork = [
    test: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, daemon: true, jvmArgs: jvmArgs],
    run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve: false, jvmArgs: jvmArgs],
    war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve: false, jvmArgs: jvmArgs],
    console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, jvmArgs: jvmArgs]
]

With that, we’ve solved problems #1 and #2. Our grails app will always be running in debug mode, and it will no longer pause partway through startup to force us to connect a debugger. We can connect a debugger whenever we actually want to debug.

3. Monitor the port before starting the remote debugger

If you create a new remote debug target in IntelliJ and run it at the same time as your app or tests, it’ll fail because the debug port isn’t open yet. If you do it manually, you need to wait for the log message saying that the debug port is open. That’s more babysitting, which we can avoid with a little shell script trickery.

Here is a shell script that uses the nc (netcat) command to monitor a localhost port; it only continues once the port is available. Save it somewhere in your path:

#!/usr/bin/env bash

function usage {
    echo "usage: ${0##*/} <port_number>"
    echo "ex: ${0##*/} 5005"
}

PORT_NUMBER=$1

if [ -z "$PORT_NUMBER" ]; then
    usage
    exit 1
fi

echo "waiting for port $PORT_NUMBER to open up"

while ! nc -z localhost "$PORT_NUMBER"; do sleep 0.1; done
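If nc isn’t available on your machine, the same wait-and-poll idea is easy to sketch in Python (a hypothetical stand-in for the script above, not something from the original post):

```python
import socket
import time


def wait_for_port(port, host="localhost", timeout=30.0):
    """Poll until something is listening on host:port, like the nc loop above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # connection succeeded: the port is open
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.1)  # not open yet; same 0.1s cadence as the nc loop
    return False
```

You could call wait_for_port(5005) from a launch script before kicking off the IDE’s remote debug task.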

Now, create a new IntelliJ remote debug task:

IntelliJ Remote Debug

Have it use this script as an “external tool” to monitor port 5005 before it tries to connect.

External Tool Creation

Now, you can start up your grails run-app task and immediately start the debug task without having to wait for the debugger port to be open:

Run and Debug in IntelliJ at the same time

You might see an error in the grails log about the debugger disconnecting, but that’s actually just netcat connecting and then immediately quitting; it’s harmless.

Now you have all of the pieces necessary to make grails forked execution pretty painless while still getting all of its benefits.

Declaring Closures in Swift

The Swift Programming Language was just announced yesterday at Apple’s WWDC 2014 conference. After downloading the iBook of “The Swift Programming Language” and the beta of Xcode 6, I was playing around with the language, but had some difficulty finding a clear reference on the syntax for declaring closures. All I could find were text descriptions and formal grammar definitions. Both of those take too long for my brain to decode, so I whipped up a few concrete examples:

Closures with Named and Explicitly Typed Parameters and Return Value

The most verbose syntax for a closure specifies the parameters along with their types and the return type of the closure.

{(param: Type) -> ReturnType in expression_using_params}    


[3, 4, 5].map({(val: Int) -> Int in val * 2})                     //> [6, 8, 10]    

[1, 2, 3].reduce(0, {(acc: Int, val: Int) -> Int in acc + val})   //> 6

As everything is explicit, you can assign each of these to a constant with let:

let timesTwo = {(val: Int) -> Int in val * 2}
[3, 4, 5].map(timesTwo)       //> [6, 8, 10]    

let sumOf = {(acc: Int, val: Int) -> Int in acc + val}
[1, 2, 3].reduce(0, sumOf)    //> 6

Closures with Named Parameters and Implicit Types

If you’re using your closures as inline parameters to a function, the types can be inferred so you don’t need to explicitly set them.

{(params) in expression_using_params}


[3, 4, 5].map({(val) in val * 2})                 //> [6, 8, 10]

[1, 2, 3].reduce(0, {(acc, val) in acc + val})    //> 6

Closures with Positionally Named Parameters

If you don’t care about naming your variables for clarity, you can just use 0-based positional arguments (similar to shell script positional args).

[3, 4, 5].map({$0 * 2})           //> [6, 8, 10]

[1, 2, 3].reduce(0, {$0 + $1})    //> 6

Implicit Closure Parameter Limitations

The compiler needs enough information about how you’re using the closure to infer the types. This works because we’re multiplying $0 by 2 (an Int), so the compiler knows $0 must also be an Int:

let timesTwo = {$0 * 2}
[3, 4, 5].map(timesTwo)

But this fails to compile because the compiler can’t tell for sure what type the arguments $0 and $1 are:

let sumOf = {$0 + $1}     // FAILS with "could not find an overload for '+' that accepts the supplied arguments let sumOf = {$0 + $1}"
[1, 2, 3].reduce(0, sumOf)

At a minimum, you need to provide named parameters and their types:

let sumOf = {(acc: Int, val: Int) in acc + val}
[1, 2, 3].reduce(0, sumOf)        //> 6

Auto-Refreshing Grails Applications That Leverage the Grails Resources Plugin

If you’re using the Grails Resources Plugin (like 82% of all grails applications currently) and you’re leveraging its ability to bundle resources together, you’ve probably noticed the delay between when you save a file and when a browser refresh actually shows the change.

This is annoying and can hurt your flow while developing. You’re never sure if the manual refresh that you just did on your browser actually has the change in it or not.

Let LiveReload Refresh the Browser for You

LiveReload is a simple application, but it really speeds up my development when I’m iterating on changes to a website. Especially when I’m doing things like tweaking CSS and HTML.

Once you have the application installed, you tell it what directories you want it to monitor. Whenever it sees a change to a file in that directory that has a “known” extension, it tells your browser to refresh the page automatically.

That sounds simple, and not all that powerful, right? You could just cmd-tab over to your browser and cmd-R to refresh. But avoiding that context switch keeps you in a flow and keeps your cursor and attention on the right things.


The easiest way to install LiveReload is to download it from the Mac App Store (it’s $9.99, but worth it IMO).

You’ll also want to add the Chrome LiveReload Plugin. This gives you a button to activate LiveReload for that tab.

LiveReload Chrome Plugin Button

When the plugin is enabled, it injects a small piece of JavaScript into your page that listens to the LiveReload application. When LiveReload sees a file change, it tells the browser plugin to refresh.


Now that you have the app and browser plugin installed, you’ll want to launch LiveReload. Then go into the options and enter any additional extensions that you’d like to monitor (it comes with many web defaults, but I add gsp for Grails development):

LiveReload options

Next, drag any directories that contain files to monitor into LiveReload’s left pane.

For grails, if you’re NOT using the resources plugin’s “bundles” add:

  • grails-app/views – changes to gsp files & fragments
  • web-app – all JavaScript and CSS changes

If you ARE using grails resources “bundles”, add:

  • grails-app/views – changes to gsp files & fragments
  • ~/.grails/<grails_version>/projects/<project_name>/tomcat

Where grails_version is something like 2.2.4, and project_name is the name of your grails app. This is where the compiled bundles are placed. You don’t want to add the web-app directory, as LiveReload would double-refresh your browser: once when it sees the initial JS/CSS change, and a second time when the bundle is compiled.

Using LiveReload

Now that you’ve got LiveReload monitoring for changes, fire up your browser and browse to your application. Then hit the LiveReload chrome plugin button:

LiveReload options

That will connect it to the app:

LiveReload options

and now any changes in the monitored directories will cause the “enabled” browser tab to automatically refresh.

I develop with a couple of monitors and using LiveReload lets me have my browser open on one monitor, and my code editor in the other. As soon as I save the file (and grails resources finishes compiling), I see the change on my web browser monitor without any additional input.

Non-OSX alternatives

If you’re on Windows or Linux and can’t use LiveReload (or if you don’t want to spend the $10), I’ve heard good things about some alternatives. I haven’t used them personally, but understand they have a similar feature-set.

How to Use P4merge as a 4-pane, 3-way Merge Tool With Git and

Last year, I blogged about how I was using kdiff3 as my merge tool with git, mercurial and Git Tower.

Since then, I’ve had a number of troubles with kdiff3 around its usability and font handling that make it difficult to use (the text you click on isn’t the text you’re editing, very painful).

That made me look around for alternatives, and I found that the freely available p4merge tool from Perforce is probably the best option. It has a more native Mac feel and properly handles fonts.

Embed a Groovy Web Console in a Java Spring App

Having a web-based Groovy console for your Java Spring webapp can be invaluable for developing and testing. It might also be appropriate for your production app as long as it’s properly secured. The Grails world has long had the Console Plugin and I’ve seen firsthand how useful a console can be on earlier Grails projects.

Not every project I get to work on is in Grails, though, and I wanted to have the same power in my Spring/Java applications.

I’ll demonstrate how to integrate this with a REST-based web service, but the same core Service code could be used for other website interaction types.

Logging to Splunk in Key/Value Pairs

Splunk is a log aggregation tool with a very powerful query language that lets you easily mine the data from your logs. Taking an hour or two to learn the query language (which has some similarity to SQL) will greatly increase the usefulness of Splunk.

One of the Splunk logging best practices is to write out your logs into comma delimited key/value pairs that let Splunk interpret your data as queryable fields.

key1="value", key2="other value"...

Doing that will make key1 and key2 fields that can be queried and reported on.

If you don’t do this, you can still create fields out of unstructured data, but you have to use a relatively ugly regular expression syntax to extract them, ex: if the log format is “Here is Key1 value and Key2 other value”:

... | rex field=_raw "Here is Key1 (?<key1>.*) and Key2 (?<key2>.*)"

Here is a simple static method that will write out data given to it in vararg format as an appropriately escaped and delimited String:

package com.naleid.utils;

import org.apache.commons.lang3.StringEscapeUtils;

import java.util.LinkedHashMap;
import java.util.Map;

public class LogUtil {

    public static String toSplunkString(Object... mapInfo) {
        return toSplunkString(varArgMap(mapInfo));
    }

    public static Map<String, Object> varArgMap(Object... mapInfo) {
        final Map<String, Object> result = new LinkedHashMap<>();
        if (mapInfo.length % 2 == 1) {
            throw new IllegalArgumentException("arguments must be even in number");
        }
        for (int i = 0; i < mapInfo.length; i += 2) {
            final Object o1 = mapInfo[i];
            if (!(o1 instanceof String)) {
                throw new IllegalArgumentException("odd arguments must be String values so they can be keys");
            }
            final Object o2 = mapInfo[i + 1];
            result.put((String) o1, o2);
        }
        return result;
    }

    public static String toSplunkString(Map<String, Object> map) {
        final StringBuilder buffer = new StringBuilder();
        for (Map.Entry<String, Object> entry : map.entrySet()) {
            if (buffer.length() > 0) {
                buffer.append(", ");
            }
            buffer.append(entry.getKey())
                    .append("=\"")
                    .append(StringEscapeUtils.escapeJava(String.valueOf(entry.getValue())))
                    .append("\"");
        }
        return buffer.toString();
    }
}

Using that, when you write a log statement, you can now pass in varargs (which will be transformed into a Map) and have them logged out in Splunk-standard key/value format.

import static com.naleid.utils.LogUtil.toSplunkString;

...

log.info(toSplunkString("tag", "userPurchase", "purchaseId", 23, "userId", 123, "productId", 456, "price", 45.34));

Would output something like:

06:48:04,081 INFO [com.naleid.service.MyPurchaseService] tag="userPurchase", purchaseId="23", userId="123", productId="456", price="45.34"

This would let us write a Splunk query to show us things like the number of purchases and average purchase price by user:

userPurchase | stats count, avg(price) by userId


or the number of purchases per product and that product’s total amount sold, sorted from highest to lowest:

userPurchase | stats count as "# purchases", sum(price) as "total $" by productId | sort -"total $" 


You can also have alerts set up to automatically generate reports that are e-mailed periodically or create graphical dashboards.

The Splunk Search Reference and the Quick Reference Guide PDF (which is slightly outdated but still useful) are great references while you’re learning the query syntax. You can do much more powerful things than what I’m showing here.

The Splunk Search Examples shows a number of queries (and screenshots of results) along these lines.

In my next post (coming soon), I’ll show how to create a StopWatchAspect that automatically logs timing information for all of your service methods using the Splunk formatting. Then you can report and show timing statistics for your code and see what methods are taking the most time and how a method’s performance has changed over time.