Ratpack Executions: Async + Serial, Not Parallel


Developers familiar with Ratpack know that it is a non-blocking and asynchronous framework that’s built on top of Netty. It uses a small pool of “compute” threads (by default 2 * <# of CPUs>) to do all of the non-blocking processing of thousands of requests a second.

The documentation (and blog posts and Dan Woods’ excellent Learning Ratpack) all discuss another benefit of Ratpack: serialized execution of asynchronous code.

Even though I’d read about Ratpack’s serial execution model, I had not fully internalized the consequences of that feature of Ratpack until I dug in for myself. My previous async programming had been NodeJS and Scala-based and I was using that as my mental model for how Ratpack would behave.

In those other systems, “async” and “parallel” were mostly interchangeable. If you need to make 3 async GET requests, you map over the urls and fire off the async requests. All of the requests are likely sent before any of them have responded.

Ratpack doesn’t work this way. If you make 3 async GET requests, Ratpack will wait for the first one to be completed before sending off the second one.

To understand why this is, we need to discuss some of the details of Ratpack’s architecture.

In Ratpack, Work is Done on One of Two Thread Pools

1. “Compute” Thread Pool

This thread pool is where all requests are handled and where all async, non-blocking code in your app executes. Under the covers, it is a Netty epoll EventLoopGroup, so it is very fast as long as you don’t run any blocking operations on it (on non-linux boxes it uses NIO instead of epoll).

The compute thread pool size defaults to 2 * # of CPUs, though you can easily change it with a config value:

ratpack {
    serverConfig {
        threads 8
    }
}

2. “Blocking” Thread Pool

The Blocking thread pool is unbounded in size (until you run out of memory). It uses a cached thread pool so that it can re-use previously created threads when they become available again.

No work is done on a blocking thread unless you explicitly ask for it. Any blocking operations (like ones using traditional database drivers or file IO) should wrap their calls in a Blocking method.
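For example, here is a minimal sketch (not from the original post) of wrapping a synchronous call with Blocking.get, which returns a Promise fulfilled on the blocking pool and delivered back on a compute thread; legacyUserDao is a hypothetical blocking client:

Blocking.get { ->
  legacyUserDao.findUserName(1) // hypothetical blocking call (e.g. JDBC), runs on a blocking thread
}.then { String name ->
  context.render name // runs back on the original compute thread
}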

Blocking methods create a Java 8 CompletableFuture that is registered to notify a Ratpack Promise running on the current compute thread when the Future is completed.
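To get a feel for that pattern, here is a sketch of how you could bridge your own future-based client into a Promise manually; someAsyncClient.fetchGreeting() is a hypothetical method returning a CompletableFuture<String>:

Promise.async { downstream ->
  someAsyncClient.fetchGreeting().whenComplete { String value, Throwable throwable ->
    // notify the Promise's execution when the future completes
    if (throwable) {
      downstream.error(throwable)
    } else {
      downstream.success(value)
    }
  }
}.then { String greeting ->
  context.render greeting
}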

It can be useful when testing async code to print out the Thread.currentThread().name to understand which thread your code is running on.

This simple Ratpack app uses the compute and blocking thread pools (full app):

  handlers {
    all { Context context ->

      println "A. Original compute thread: ${Thread.currentThread().name}"

      Blocking.exec { ->
        context.render "hello from blocking" // pretend blocking work
        println "B. Blocking thread : ${Thread.currentThread().name}"
      }

      println "C. Original compute thread: ${Thread.currentThread().name}"
    }
  }

Which prints

A. Original compute thread: ratpack-compute-1-2
C. Original compute thread: ratpack-compute-1-2
B. Blocking thread : ratpack-blocking-3-1

This Blocking code is executed after the original segment finishes: Ratpack detects that no response has yet been rendered in the original thread and that work has been scheduled on a thread in the blocking pool, so it registers a callback for that work.

This code will always print A … C … B; serial behavior is guaranteed by Ratpack.

Requests are Processed in a Pipeline of Async Execution Segments

When a Ratpack app starts, it creates an ExecController which is in charge of running all the Execution segments during request processing.

If you do not have any asynchronous calls, each request will run in a single execution segment that runs on a compute thread.

If you do have asynchronous calls (including blocking calls which become asynchronous via Blocking), the request is processed in multiple execution segments, each of which is encapsulated in a Ratpack Promise. (full app)

  handlers {
    all { Context context ->

      println "A. Original compute thread: ${Thread.currentThread().name}"

      Promise.async { downstream ->
        println "B. Promise thread : ${Thread.currentThread().name}"
        downstream.success("hello from async promise")
      }.then { result ->
        context.render result
      }

      println "C. Original compute thread: ${Thread.currentThread().name}"
    }
  }

The output shows that the async Promise runs after the original handler code, but its execution stays on the same compute thread:

A. Original compute thread: ratpack-compute-1-2
C. Original compute thread: ratpack-compute-1-2
B. Promise thread : ratpack-compute-1-2

Registering an ExecInterceptor Lets You See the Segments of Execution

Ratpack allows you to register an ExecInterceptor to view the segments of execution (and create metrics).

If we create this ExecInterceptor that captures time at the execution and segment level:

public class LoggingExecInterceptor implements ExecInterceptor {
  void intercept(Execution execution, ExecInterceptor.ExecType execType, Block executionSegment) throws Exception {
    ExecutionTimer timer = ExecutionTimer.startExecutionSegment(execution)
    try {
      executionSegment.execute() // run the actual execution segment
    } finally {
      println "${Thread.currentThread().name} - $timer - ${execType}"
    }
  }
}

and register it in this app:

  bindings {
    bindInstance(new LoggingExecInterceptor())
  }
  handlers {
    all { Context context ->
      final String executionId = context.get(ExecutionTimer).id.toString()

      println "${Thread.currentThread().name} - $executionId - A. Original compute thread"
      context.render "hello from compute"
    }
  }

You’ll see interceptor output with one COMPUTE thread println because we did not have any async or blocking work in our app; all of the work is done in a single execution segment:

ratpack-compute-1-4 - 7a265c2b-82b7-4c23-9f0d-92130fff5c26 - A. Original compute thread
ratpack-compute-1-4 - 7a265c2b-82b7-4c23-9f0d-92130fff5c26 - segment time: 1 execution time: 1ms - COMPUTE

Adding a blocking call to our app (full app):

  handlers {
    all { Context context ->
      final String executionId = context.get(ExecutionTimer).id.toString()

      println "${Thread.currentThread().name} - $executionId - A. Original compute thread"

      Blocking.exec { ->
        context.render "hello from blocking" // pretend blocking work
        println "${Thread.currentThread().name} - $executionId - B. Blocking thread"
      }

      println "${Thread.currentThread().name} - $executionId - C. Original compute thread"
    }
  }

gives us this output:

ratpack-compute-1-6 - f04d95cd-2043-47ae-8fc7-0600085eb399 - A. Original compute thread
ratpack-compute-1-6 - f04d95cd-2043-47ae-8fc7-0600085eb399 - C. Original compute thread
ratpack-compute-1-6 - f04d95cd-2043-47ae-8fc7-0600085eb399 - segment time: 0 execution time: 0ms - COMPUTE
ratpack-blocking-3-1 - f04d95cd-2043-47ae-8fc7-0600085eb399 - B. Blocking thread
ratpack-blocking-3-1 - f04d95cd-2043-47ae-8fc7-0600085eb399 - segment time: 1 execution time: 1ms - BLOCKING
ratpack-compute-1-6 - f04d95cd-2043-47ae-8fc7-0600085eb399 - segment time: 0 execution time: 1ms - COMPUTE

Notice that it adds an extra trailing COMPUTE execution after the BLOCKING one? Ratpack registered our Blocking call to notify an execution segment Promise on our original thread (ratpack-compute-1-6) when it was complete.

That kind of monitoring is how Ratpack knows when an execution is finished. If you spawn your own threads outside of a Promise, Ratpack has no idea that your work exists and you won’t get the behavior that you’re probably looking for.
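As a hypothetical illustration (not from the original app), this is the kind of rogue thread that Ratpack can’t track:

  all { Context context ->
    // anti-pattern: this thread is invisible to Ratpack's execution model, so the serial
    // guarantees don't apply and the render may race the execution's completion
    new Thread({ ->
      context.render "hello from a rogue thread"
    }).start()
  }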

Parallelized Code Must Notify the Original Compute Thread

Normally, running your non-blocking async work in serial fashion on the same compute thread is fast enough.

If you really want something to run in parallel, you can ask for that work to be scheduled on a different compute thread, but you have to notify the original thread that the work is complete (full app):

  handlers {
    all { Context context ->
      final String executionId = context.get(ExecutionTimer).id.toString()

      println "${Thread.currentThread().name} - $executionId - A. Original compute thread"

      Promise.async({ Downstream downstream ->
        println "${Thread.currentThread().name} - $executionId - B1. Inside async promise, same thread still"

        // ask for an execution to be scheduled on another compute thread
        Execution.fork().start({ forkedExec ->
          println "${Thread.currentThread().name} - $executionId - C. Forked work on another thread"
          downstream.success("hello from fork")
        })

        println "${Thread.currentThread().name} - $executionId - B2. After fork().start()"

      }).then { result ->
        println "${Thread.currentThread().name} - $executionId - D. `then` notifies original compute thread"
        context.render result
      }
    }
  }

The output shows that the original compute thread has an execution segment that runs last. It is notified of the work that was done on that other thread by the call to downstream.success:

ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - A. Original compute thread
ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - B1. Inside async promise, same thread still
ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - B2. After fork().start()
ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - segment time: 1 execution time: 1ms - COMPUTE
ratpack-compute-1-7 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - C. Forked work on another thread
ratpack-compute-1-7 - 83505b78-455a-47c1-8012-486b163d587f - segment time: 0 execution time: 0ms - COMPUTE
ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - D. `then` notifies original compute thread
ratpack-compute-1-6 - edd2b6d3-54f2-43cf-87af-41fc6752cde5 - segment time: 1 execution time: 2ms - COMPUTE

Parallelizing Promise Streams

There isn’t much built-in syntactic sugar for working with parallelism using Promises, partially because many apps don’t need it. As of this blog post, there’s an open issue on GitHub to make this better in future versions of Ratpack.

If you need to parallelize your request handling right now, your best option is to use the RxJava integration. This makes RxJava Observables work on top of Ratpack’s Execution model.
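If you go this route, the integration needs a one-time setup so that RxJava scheduling participates in Ratpack’s executions; a minimal sketch, assuming the ratpack-rx module is on your classpath:

import ratpack.rx.RxRatpack

// call once at application startup, before any Observables are created
RxRatpack.initialize()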

RxJava/Promise Streams are Processed in Serial by Default

All work that ratpack does within an execution is on the same thread, and the work is fully serial. This has implications if you’re trying to do something like make a microservice that makes HTTP requests to multiple back-end services for each request it receives.

Your requests will all be made one after the other, even though you are using fully non-blocking http APIs.

Demonstrating this takes a bit of setup. Here is a stub embedded Ratpack app that our application under test will use as a back-end service.

Each GET request to http://localhost:<port>/<sleepFor> will sleep and then return to the caller. We sleep on a Blocking thread so we don’t hold up our compute threads, as sleep is blocking!

EmbeddedApp stubApp = GroovyEmbeddedApp.of {
  handlers {
    get(":sleepFor") {
      Integer sleepFor = context.pathTokens['sleepFor'].toInteger() ?: 1
      Blocking.exec { ->
        println "Stub Sleep App GET Request, sleep for: $sleepFor seconds"
        sleep(sleepFor * 1000)
        context.render sleepFor.toString()
      }
    }
  }
}

Our application under test will have an Observable stream of 3 URIs that will each do a non-blocking, async call to our stub sleep application above.

It will then collect the results from each request and render out a response to the original caller to the app (full app):

  handlers {
    all { Context context ->
      HttpClient httpClient = context.get(HttpClient)
      final String executionId = context.get(ExecutionTimer).id.toString()

      // create a List of URIs to the stub app above that will ask it to sleep
      // for N seconds before returning the number of seconds it was asked to sleep
      final List REQUEST_SLEEP_URIS = [3, 2, 1].collect { "${stubApp.address}$it".toURI() }

      println "${Thread.currentThread().name} - $executionId - A. Original compute thread"

      // Iterate over all uris, make async http request for each and collect the results to render out
      Observable.from(REQUEST_SLEEP_URIS) // stream of URIs
        .flatMap { uri ->
          println "${Thread.currentThread().name} - $executionId - B. GET: $uri"
          RxRatpack.observe(httpClient.get(uri))  // async http request to "sleep" service
        }
        .map { it.body.text } // get the body text for each http result
        .toList()             // collect into a single list and then subscribe to it
        .subscribe({ List responses ->
          println "${Thread.currentThread().name} - $executionId - C. Subscribe final result"
          context.render responses.join(", ")
        })
    }
  }

We’re asking for the requests in REQUEST_SLEEP_URIS to sleep for 3, 2, and 1 seconds before returning results. We can see from the output that it took slightly over 6 seconds (3+2+1) for our request to be fulfilled, and that the stub app did not get the 2nd request till the execution segment for the first request had completed.

ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - A. Original compute thread
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - B. GET: http://localhost:50735/3
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - B. GET: http://localhost:50735/2
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - B. GET: http://localhost:50735/1
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - segment time: 2 execution time: 2ms - COMPUTE
Stub Sleep App GET Request, sleep for: 3 seconds
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - segment time: 0 execution time: 3016ms - COMPUTE
Stub Sleep App GET Request, sleep for: 2 seconds
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - segment time: 1 execution time: 5024ms - COMPUTE
Stub Sleep App GET Request, sleep for: 1 seconds
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - C. Subscribe final result
ratpack-compute-1-4 - 64949f0f-6010-4eb0-abd7-b655769809e7 - segment time: 1 execution time: 6029ms - COMPUTE

Also notice that all work in the app under test was done on the same COMPUTE thread: ratpack-compute-1-4.

This kind of behavior is a good default for Ratpack to have as it makes things very predictable and easy to reason about. There are cases though where you might really need additional performance for a single request.

Parallelism Must be Explicitly Requested

If you want your reactive stream to be processed in parallel, but the work is still async non-blocking work, you can add the forkEach and bindExec methods into your stream.

forkEach will schedule each observable value to be run on the next available compute thread.

bindExec works like a thread “join” operation. It converts the stream into a Ratpack Promise and then back into an observable. This brings processing of that value back to the original thread. If you don’t include an explicit bindExec, Ratpack will take care of bringing the execution back to the main thread for the subscriber automatically.

If we add forkEach and bindExec into our stream from above (full app):

  .forkEach()           // <-- run in parallel on different compute thread
  .flatMap { uri ->
    println "${Thread.currentThread().name} - $executionId - B. GET: $uri"
    RxRatpack.observe(httpClient.get(uri))  // async http request to "sleep" service
  }
  .map { it.body.text }
  .bindExec()           // <-- bind forked thread results to original compute thread
  .toList()
  .subscribe({ List responses ->
    println "${Thread.currentThread().name} - $executionId - C. Subscribe final result"
    context.render responses.join(", ")
  })

You’ll see that our request time reduces to slightly over 3 seconds, the longest sleep time that we were using:

ratpack-compute-1-7 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - A. Original compute thread
ratpack-compute-1-7 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - segment time: 2 execution time: 2ms - COMPUTE
ratpack-compute-1-8 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - B. GET: http://localhost:51763/3
ratpack-compute-1-9 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - B. GET: http://localhost:51763/2
ratpack-compute-1-10 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - B. GET: http://localhost:51763/1
ratpack-compute-1-9 - 41bb05d8-82b8-45bc-8f2b-b3f9330ee61a - segment time: 2 execution time: 2ms - COMPUTE
ratpack-compute-1-8 - 9d0a9729-641a-416b-bd1b-cf04e1aa16b1 - segment time: 2 execution time: 2ms - COMPUTE
ratpack-compute-1-10 - cd2b08e4-625c-48a3-8141-227c6b496ae4 - segment time: 2 execution time: 2ms - COMPUTE
Stub Sleep App GET Request, sleep for: 3 seconds
Stub Sleep App GET Request, sleep for: 2 seconds
Stub Sleep App GET Request, sleep for: 1 seconds
ratpack-compute-1-10 - cd2b08e4-625c-48a3-8141-227c6b496ae4 - segment time: 1 execution time: 1061ms - COMPUTE
ratpack-compute-1-9 - 41bb05d8-82b8-45bc-8f2b-b3f9330ee61a - segment time: 0 execution time: 2060ms - COMPUTE
ratpack-compute-1-7 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - C. Subscribe final result
ratpack-compute-1-8 - 9d0a9729-641a-416b-bd1b-cf04e1aa16b1 - segment time: 1 execution time: 3037ms - COMPUTE
ratpack-compute-1-7 - 6537dfd0-732a-4599-b82c-7f48bf1c5a42 - segment time: 1 execution time: 3038ms - COMPUTE

We’ve gone from 4 execution segments in the original (serial) execution to 8 execution segments (3 more for forking each URI onto the new compute threads, and one for collecting the returned results).

Asking for parallel execution of your streams means that a single request could be handled more quickly, but you are likely reducing the number of transactions per second that your app can handle.

You shouldn’t parallelize your code without first running performance and load tests to determine that you get an actual boost.

You are also giving up some of the ordering guarantees that Ratpack gives you by default; this can make your code harder to reason about, but only within the forked part of the stream.

Other Notes About RxJava/Ratpack

If you’re familiar with RxJava, you might have seen information on using a Scheduler along with the subscribeOn and observeOn methods. We don’t have direct access to a Scheduler for Ratpack’s compute/blocking thread pools, so we can’t use these methods to get our work done in parallel. Currently, forkEach/bindExec is the best way to get your Observable code to run in parallel.

Understanding Ratpack Executions for Yourself

If you’re new to async/non-blocking programming, there will be a bit of a learning curve. Even if you’ve worked with other frameworks before, every framework has its own behavior that you need to learn, and Ratpack is no different. I’ve linked to full running Groovy scripts for each of the sample applications above. I think the best way to internalize how Ratpack works is to dive in and play around with some examples for yourself.

Hopefully this post has given you some tools and places to start exploring for yourself.

I’d also highly recommend joining the Ratpack Slack channel; I’ve gotten a huge amount of help from Ratpack team members as well as others in the community. Simply lurking there has been extremely valuable, and I’ve always gotten great responses to my questions.

Determining System Properties With Gradle Tasks


I had a need for a gradle task to determine, at runtime, a system property that would be passed to another task. Googling for an answer to what I thought was an easy problem came up empty, and it took me a few hours to figure out the appropriate incantation to get it to do what I wanted, so I thought I’d memorialize this for someone (possibly future me, googling for this after I’ve forgotten the solution).

This is a contrived example, but it demonstrates what I needed.

If I have the following Java file that is getting configuration injected via System.getProperty():

package com.naleid;

public class Example {
  public static void main(String[] args) throws Exception {
    System.out.println("Hello from " + System.getProperty("external.ip", "localhost"));
  }
}

If my external IP is <external-ip>, it should print out Hello from <external-ip> when I execute gradle run.

If that system property is known at the time that I execute my gradle task, I want to be able to call the run task with ./gradlew -Dexternal.ip=<external-ip> run.

Example run when passing in the system property:

% gradle -Dexternal.ip=<external-ip> run
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:setupRun SKIPPED
Hello from <external-ip>


Total time: 0.642 secs

If that system property isn’t known when I execute gradle run, I want gradle to figure it out for me and pass it in to the run task as a system property.

% gradle run
:compileJava UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
Hello from <external-ip>


Total time: 0.739 secs

Here is the build.gradle file that gives us the desired behavior.

apply plugin: 'java'
apply plugin: 'application'

repositories { }

dependencies { }

mainClassName = "com.naleid.Example"

task setupRun(type:Exec) {
  onlyIf { !System.getProperty('external.ip') }

  workingDir './'
  commandLine 'curl', '-s', 'icanhazip.com'
  standardOutput = new ByteArrayOutputStream()

  // must define it before using it
  ext {
    externalIp = null
  }

  // after the curl command has run and exited, we have the IP in the populated standard out
  doLast {
    externalIp = standardOutput.toString().trim()
  }
}

run {
  dependsOn setupRun

  // must be inside a doFirst, without it, this gets evaluated before the value is populated by the task
  doFirst {
    systemProperties['external.ip'] = setupRun.externalIp ?: System.getProperty('external.ip')
  }
}

This build.gradle file uses curl -s icanhazip.com to determine the current IP. It changes the run task (provided by the application plugin) so that it dependsOn setupRun. This causes setupRun to be executed before the run task.

The setupRun task is an Exec task, and we define an ext dynamic property on it (initialized to null). After the curl command completes, we set the dynamic externalIp property to the trimmed standard output. We also check (via onlyIf) whether the property has already been defined (via -Dexternal.ip given to gradle); if it has, we don’t run the task.

The run task has a doFirst block where we can access the systemProperties that will be passed to the Java “main” class. Here we set our external.ip system property either to the setupRun.externalIp dynamic variable attached to the task, or fall back to the system property given to gradle by the user.

A working example of this is out in a github repo. Just clone the repo locally and execute ./gradlew run to see your current IP injected by the gradle task.

Intro to Elixir Presentation


Last year I got into Elixir, a language built on the Erlang BEAM VM. It’s a functional programming language with the best combination of expressiveness, fault-tolerance, and power that I’ve ever seen. I’ve used it on a number of small projects and hope to use it on a paying gig one of these days.

I gave an “Intro to Elixir” presentation to a few different audiences and the slides are available on github.

Debugging Grails Forked Mode


Recent versions of grails introduce forked execution as the default mode.

There are benefits to forked execution (reducing memory strain, classpath isolation, and removing metaclass conflicts), but there are also downsides with the current implementation. The biggest of which is that it makes debugging more painful.

You can no longer simply run the “debug” task in IntelliJ and have it stop at your breakpoints. If you do that, you’ll be debugging the parent launcher process, not your grails application, so your breakpoints will never be exercised.

The standard way to get debugging working with forked execution is to use the --debug-fork flag on your grails run-app or grails test-app command. Then, create a remote debugger task in your IDE and attach it once the grails app opens up port 5005.

The Problem

This sucks (IMO) for 3 reasons:

1. It assumes you know you want to debug before you start the app up.

Forget to set the flag and you need to bounce your server.

2. Under the covers, --debug-fork uses the suspend=y debug flag, which causes grails to halt starting up till you attach a debugger.

If you get distracted while grails is starting up, you’ll often come back to a halted process that still has 90% of the startup to do before it’s ready to serve the app.

3. You can’t attach a debugger till grails forks the process and actually opens up port 5005.

This often takes 10+ seconds to compile all your code, launch the process and finally open the port.

All of this means that you need to babysit your grails app while it starts up, or forego debugging unless you know you need it.

My Solution

The easiest way to solve this is to turn forked execution off (if everyone on your team is willing to give up the benefits). This is easily done by modifying your BuildConfig.groovy so that the grails.project.fork section is empty:

grails.project.fork = []

What if you (or your team) are not willing to give up forked execution mode? You can eliminate most of the downsides with a few tweaks.

1. Always Be Debugging

There really isn’t any performance penalty in dev mode to just always run with the debug flags enabled, and this lets you connect a debugger at will. You don’t need to remember to start a different run target (or run a different command). You can change your BuildConfig.groovy to always run in debug mode with the debug: true map entry, ex:

grails.project.fork = [
    test: [maxMemory: 768, minMemory: 64, debug: true, maxPerm: 256, daemon:true],
    // ... run, war, and console entries follow the same pattern
]

Unfortunately, while that will always debug, it also has the suspend=y flag hard coded into it, so you’ll pause execution on every run which violates issue #2.

2. Use the suspend=n debug flag so that grails doesn’t pause till you connect a debugger.

This is harder to configure than you’d expect. As mentioned above, both the --debug-fork command line switch and debug: true flag in BuildConfig.groovy cause suspend=y flag to be used. To get around this, you need to specify the actual JVM debug flags in your BuildConfig.groovy, ex:

// jvmArgs make it so that we can run in forked mode without having to use the `--debug-fork` flag
// and also use suspend=n so that it will start up without forcing you to connect a remote debugger first
def jvmArgs = ['-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005']

grails.project.fork = [
    test: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, daemon:true, jvmArgs: jvmArgs],
    run: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false, jvmArgs: jvmArgs],
    war: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, forkReserve:false, jvmArgs: jvmArgs],
    console: [maxMemory: 768, minMemory: 64, debug: false, maxPerm: 256, jvmArgs: jvmArgs]
]

With that, we’ve solved problems #1 and #2. Our grails app will always be running in debug mode and will also not pause halfway through startup to force us to always connect a debugger. We can connect a debugger whenever we want to actually debug.

3. Monitor the port before starting the remote debugger

If you create a new remote debug target in intellij and run it at the same time as you run your app or tests, it’ll fail because the debug port isn’t open yet. If you do it manually, you need to wait for the log message saying that the debug port is open. That’s more babysitting that we can avoid through a little shell script trickery.

Here is a shell script that uses the nc (netcat) command to monitor a localhost port, and will only continue once the port is available. Save it as wait_for_port.sh somewhere in your path:

#!/usr/bin/env bash

function usage {
    echo "usage: ${0##*/} <port_number>"
    echo "ex: ${0##*/} 5005"
}

PORT_NUMBER=$1

if [ -z "$PORT_NUMBER" ]; then
    usage
    exit 1
fi

echo "waiting for port $PORT_NUMBER to open up"

while ! nc -z localhost $PORT_NUMBER; do sleep 0.1; done;

Now, create a new IntelliJ remote debug task:

IntelliJ Remote Debug

Have it use this script as an “external tool” to monitor port 5005 before it tries to connect.

External Tool Creation

Now, you can start up your grails run-app task and immediately start the debug task without having to wait for the debugger port to be open:

Run and Debug in IntelliJ at the same time

You might see an error in the grails log about the debugger disconnecting, but that was actually netcat connecting and then quitting right away; it’s harmless.

Now you have all of the pieces necessary to make grails forked execution pretty painless while still getting all of its benefits.

Declaring Closures in Swift


The Swift Programming Language was just announced yesterday at Apple’s WWDC 2014 conference. After downloading the iBook of “The Swift Programming Language” and the beta of Xcode 6, I was playing around with the language, but had some difficulty finding a clear explanation of the syntax for declaring closures. All I could find were text descriptions and formal grammar definitions. Both of those take too long for my brain to decode, so I whipped up a few concrete examples:

Closures with Named and Explicitly Typed Parameters and Return Value

The most verbose syntax for a closure specifies the parameters along with their types and the return type of the closure.

{(param: Type) -> ReturnType in expression_using_params}    


[3, 4, 5].map({(val: Int) -> Int in val * 2})                     //> [6, 8, 10]    

[1, 2, 3].reduce(0, {(acc: Int, val: Int) -> Int in acc + val})   //> 6

As everything is explicit, you can assign each of these to a constant with let:

let timesTwo = {(val: Int) -> Int in val * 2}
[3, 4, 5].map(timesTwo)       //> [6, 8, 10]    

let sumOf = {(acc: Int, val: Int) -> Int in acc + val}
[1, 2, 3].reduce(0, sumOf)    //> 6

Closures with Named Parameters and Implicit Types

If you’re using your closures as inline parameters to a function, the types can be inferred so you don’t need to explicitly set them.

{(params) in expression_using_params}


[3, 4, 5].map({(val) in val * 2})                 //> [6, 8, 10]

[1, 2, 3].reduce(0, {(acc, val) in acc + val})    //> 6

Closures with Positionally Named Parameters

If you don’t care about naming your variables for clarity, you can just use 0-based positional arguments (similar to shell script positional args).

[3, 4, 5].map({$0 * 2})           //> [6, 8, 10]

[1, 2, 3].reduce(0, {$0 + $1})    //> 6

Implicit Closure Parameter Limitations

The compiler needs enough information about how you’re using the closure to infer the types. This works because we’re multiplying by 2, so it knows that $0 must be an Int:

let timesTwo = {$0 * 2}
[3, 4, 5].map(timesTwo)

But this fails to compile because it can’t tell for sure what types $0 and $1 are:

let sumOf = {$0 + $1}     // FAILS with "could not find an overload for '+' that accepts the supplied arguments let sumOf = {$0 + $1}"
[1, 2, 3].reduce(0, sumOf)

At the least, you need to provide named parameters and their types:

let sumOf = {(acc: Int, val: Int) in acc + val}
[1, 2, 3].reduce(0, sumOf)        //> 6

Auto-Refreshing Grails Applications That Leverage the Grails Resources Plugin


If you’re using the Grails Resources Plugin (like 82% of all grails applications currently) and you’re leveraging its ability to bundle resources together, you’ve probably noticed the delay between when you save a file and when a browser refresh actually shows the change.

This is annoying and can hurt your flow while developing. You’re never sure if the manual refresh that you just did on your browser actually has the change in it or not.

Let LiveReload Refresh the Browser for You

LiveReload is a simple application, but it really speeds up my development when I’m iterating on changes to a website. Especially when I’m doing things like tweaking CSS and HTML.

Once you have the application installed, you tell it what directories you want it to monitor. Whenever it sees a change to a file in that directory that has a “known” extension, it tells your browser to refresh the page automatically.

That sounds simple, and not all that powerful right? You could just cmd-tab over to your browser and cmd-R to refresh. But avoiding that keeps you in a flow and keeps your cursor and attention on the right things.


The easiest way to install it is to download it from the App Store (it’s $9.99, but worth it IMO).

You’ll also want to add the Chrome LiveReload Plugin. This gives you a button to activate LiveReload for that tab.

LiveReload Chrome Plugin Button

When the plugin is enabled, it injects a small piece of JavaScript into your page that listens to the LiveReload application. When LiveReload sees a file change, it tells the browser plugin to refresh.


Now that you have the app and browser plugin installed, you’ll want to launch LiveReload. Then go into the options and enter any additional extensions that you’d like to monitor (it comes with many web defaults, but I add gsp for Grails development):

LiveReload options

Next, drag any directories that contain files to monitor into LiveReload’s left pane.

For grails, if you’re NOT using the resources plugin’s “bundles”, add:

  • grails-app/views – changes to gsp files & fragments
  • web-app – all JavaScript and CSS changes

If you ARE using grails resources “bundles”, add:

  • grails-app/views – changes to gsp files & fragments
  • ~/.grails/<grails_version>/projects/<project_name>/tomcat

Here grails_version is something like 2.2.4, and project_name is the name of your grails app. This is where the compiled bundles are placed. You don’t want to add the web-app directory, as LiveReload would double refresh your browser: once when it sees the initial JS/CSS change, and a 2nd time when the bundle is compiled.

Using LiveReload

Now that you’ve got LiveReload monitoring for changes, fire up your browser and browse to your application. Then hit the LiveReload chrome plugin button:

LiveReload options

That will connect it to the app:

LiveReload options

and now any changes in the monitored directories will cause the “enabled” browser tab to automatically refresh.

I develop with a couple of monitors and using LiveReload lets me have my browser open on one monitor, and my code editor in the other. As soon as I save the file (and grails resources finishes compiling), I see the change on my web browser monitor without any additional input.

Non-OSX alternatives

If you’re on Windows or Linux and can’t use LiveReload (or if you don’t want to spend the $10), I’ve heard good things about Fire.app. I haven’t used it personally, but understand that it has a similar feature-set.

How to Use P4merge as a 4-pane, 3-way Merge Tool With Git and Tower.app


Last year, I blogged about how I was using kdiff3 as my merge tool with git, mercurial and Git Tower.

Since then, I’ve had a number of troubles with kdiff3 around its usability and font handling that make it difficult to use (the text you click on isn’t the text you’re editing, very painful).

That made me look around for alternatives, and I found that the freely available p4merge tool from Perforce is probably the best option. It has a more native Mac feel and properly handles fonts.