Declaring Closures in Swift


The Swift Programming Language was just announced yesterday at Apple’s WWDC 2014 conference. After downloading the iBook of “The Swift Programming Language” and the beta of Xcode 6, I was playing around with the language, but had some difficulty finding a clear explanation of the syntax for declaring closures. All I could find were text descriptions and formal grammar definitions. Both of those take too long for my brain to decode, so I whipped up a few concrete examples:

Closures with Named and Explicitly Typed Parameters and Return Value

The most verbose syntax for a closure specifies the parameters along with their types and the return type of the closure.

{(param: Type) -> ReturnType in expression_using_params}    

examples:

[3, 4, 5].map({(val: Int) -> Int in val * 2})                     //> [6, 8, 10]    

[1, 2, 3].reduce(0, {(acc: Int, val: Int) -> Int in acc + val})   //> 6

As everything is explicit, you can assign each of these to a constant with let:

let timesTwo = {(val: Int) -> Int in val * 2}
[3, 4, 5].map(timesTwo)       //> [6, 8, 10]    

let sumOf = {(acc: Int, val: Int) -> Int in acc + val}
[1, 2, 3].reduce(0, sumOf)    //> 6

Closures with Named Parameters and Implicit Types

If you’re using your closures as inline parameters to a function, the types can be inferred so you don’t need to explicitly set them.

{(params) in expression_using_params}

examples:

[3, 4, 5].map({(val) in val * 2})                 //> [6, 8, 10]

[1, 2, 3].reduce(0, {(acc, val) in acc + val})    //> 6

Closures with Positionally Named Parameters

If you don’t care about naming your parameters for clarity, you can just use 0-based positional arguments (similar to shell script positional args).

[3, 4, 5].map({$0 * 2})           //> [6, 8, 10]

[1, 2, 3].reduce(0, {$0 + $1})    //> 6

Implicit Closure Parameter Limitations

The compiler needs enough information about how you’re using a closure to infer the types. This works because we’re multiplying $0 by the Int literal 2, so the compiler knows that $0 must also be an Int:

let timesTwo = {$0 * 2}
[3, 4, 5].map(timesTwo)

But this fails to compile because the compiler can’t tell for sure what types $0 and $1 are:

let sumOf = {$0 + $1}     // FAILS: "could not find an overload for '+' that accepts the supplied arguments"
[1, 2, 3].reduce(0, sumOf)

At the least, you need to provide named parameters and their types:

let sumOf = {(acc: Int, val: Int) in acc + val}
[1, 2, 3].reduce(0, sumOf)        //> 6

Auto-Refreshing Grails Applications That Leverage the Grails Resources Plugin


If you’re using the Grails Resources Plugin (like 82% of all Grails applications currently) and you’re leveraging its ability to bundle resources together, you’ve probably noticed the delay between when you save a file and when a browser refresh actually shows the change.

This is annoying and can hurt your flow while developing. You’re never sure if the manual refresh that you just did on your browser actually has the change in it or not.

Let LiveReload Refresh the Browser for You

LiveReload is a simple application, but it really speeds up my development when I’m iterating on changes to a website, especially when I’m doing things like tweaking CSS and HTML.

Once you have the application installed, you tell it what directories you want it to monitor. Whenever it sees a change to a file in that directory that has a “known” extension, it tells your browser to refresh the page automatically.

That sounds simple, and not all that powerful, right? You could just cmd-tab over to your browser and cmd-R to refresh. But avoiding that keeps you in a flow and keeps your cursor and attention on the right things.

Installation

The easiest way is to download it from the App Store (it’s $9.99, but worth it IMO).

You’ll also want to add the Chrome LiveReload Plugin. This gives you a button to activate LiveReload for that tab.

[Screenshot: LiveReload Chrome plugin button]

When the plugin is enabled, it injects a small piece of JavaScript into your page that listens to the LiveReload application. When LiveReload sees a file change, it tells the browser plugin to refresh.

Configuration

Now that you have the app and browser plugin installed, you’ll want to launch LiveReload. Then go into the options and enter any additional extensions that you’d like to monitor (it comes with many web defaults, but I add gsp for Grails development):

[Screenshot: LiveReload options]

Next, drag any directories that contain files to monitor into LiveReload’s left pane.

For Grails, if you’re NOT using the resources plugin’s “bundles”, add:

  • grails-app/views – changes to gsp files & fragments
  • web-app – all JavaScript and CSS changes

If you ARE using Grails resources “bundles”, add:

  • grails-app/views – changes to gsp files & fragments
  • ~/.grails/<grails_version>/projects/<project_name>/tomcat

Where grails_version is something like 2.2.4, and project_name is the name of your Grails app. This is where the compiled bundles are placed. You don’t want to add the web-app directory, as LiveReload will double-refresh your browser: once when it sees the initial JS/CSS change, and a second time when the bundle is compiled.

Using LiveReload

Now that you’ve got LiveReload monitoring for changes, fire up your browser and browse to your application. Then hit the LiveReload Chrome plugin button:

[Screenshot: LiveReload options]

That will connect it to the app:

[Screenshot: LiveReload options]

and now any changes in the monitored directories will cause the “enabled” browser tab to automatically refresh.

I develop with a couple of monitors and using LiveReload lets me have my browser open on one monitor, and my code editor in the other. As soon as I save the file (and grails resources finishes compiling), I see the change on my web browser monitor without any additional input.

Non-OSX alternatives

If you’re on Windows or Linux and can’t use LiveReload (or if you don’t want to spend the $10), I’ve heard good things about Fire.app. I haven’t used it personally, but understand that it has a similar feature-set.

How to Use P4merge as a 4-pane, 3-way Merge Tool With Git and Tower.app


Last year, I blogged about how I was using kdiff3 as my merge tool with git, mercurial and Git Tower.

Since then, I’ve had a number of troubles with kdiff3 around its usability and font handling that make it difficult to use (the text you click on isn’t the text you’re editing, very painful).

That made me look around for alternatives and I found that the freely available p4merge tool from Perforce is probably the best option. It has a more native mac feel and properly handles fonts.
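The core git setup is just a couple of config lines. Here’s a minimal sketch (git ships with built-in knowledge of p4merge as a merge tool; the install path below is an assumption for a typical OSX install, adjust it if p4merge lives elsewhere on your machine):

git config --global merge.tool p4merge

# only needed if p4merge isn't already on your PATH:
git config --global mergetool.p4merge.path /Applications/p4merge.app/Contents/MacOS/p4merge

# optional: don't leave .orig backup files behind after a merge
git config --global mergetool.keepBackup false

After that, running git mergetool during a conflicted merge opens p4merge with the base, local, remote, and merged panes.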

Embed a Groovy Web Console in a Java Spring App


Having a web-based Groovy console for your Java Spring webapp can be invaluable for developing and testing. It might also be appropriate for your production app as long as it’s properly secured. The Grails world has long had the Console Plugin and I’ve seen firsthand how useful a console can be on earlier Grails projects.

Not every project I get to work on is in Grails, though, and I wanted to have the same power in my Spring/Java applications.

I’ll demonstrate how to integrate this with a REST-based web service, but the same core Service code could be used for other website interaction types.
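The heart of it is a Service along these lines (a minimal sketch; the class name and the ctx binding are illustrative assumptions, not the final code): it evaluates submitted Groovy source with the Spring ApplicationContext bound in, so scripts can interact with your beans.

package com.example.console;

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Service;

// hypothetical sketch: evaluate Groovy source with the Spring context available
@Service
public class GroovyConsoleService {

    @Autowired
    private ApplicationContext applicationContext;

    public String evaluate(String script) {
        Binding binding = new Binding();
        // scripts can look up any bean via ctx.getBean("beanName")
        binding.setVariable("ctx", applicationContext);
        Object result = new GroovyShell(binding).evaluate(script);
        return String.valueOf(result);
    }
}

A REST controller can then expose evaluate(...) behind a POST endpoint (locked down appropriately, of course).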

Logging to Splunk in Key/Value Pairs


Splunk is a log aggregation tool with a very powerful query language that lets you easily mine the data from your logs. Taking an hour or two to learn the query language (which has some similarity to SQL) will greatly increase the usefulness of Splunk.

One of the Splunk logging best practices is to write out your logs into comma delimited key/value pairs that let Splunk interpret your data as queryable fields.

key1="value", key2="other value"...

Doing that will make key1 and key2 fields that can be queried and reported on.

If you don’t do this, you can still create fields out of unstructured data, but you have to use a relatively ugly regular expression syntax to create them. For example, if the log format is “Here is Key1 value and Key2 other value”:

... | rex field=_raw "Here is Key1 (?<key1>.*) and Key2 (?<key2>.*)"

Here is a simple static method that will write out data given to it in vararg format as an appropriately escaped and delimited String:

package com.naleid.utils;

import org.apache.commons.lang3.StringEscapeUtils;

import java.util.LinkedHashMap;
import java.util.Map;

public class LogUtil {

    public static String toSplunkString(Object... mapInfo) {
        return toSplunkString(varArgMap(mapInfo));
    }

    public static Map<String, Object> varArgMap(Object... mapInfo) {
        final Map<String, Object> result = new LinkedHashMap<>();
        if (mapInfo.length % 2 == 1) {
            throw new IllegalArgumentException("arguments must be even in number");
        }
        for (int i = 0; i < mapInfo.length; i += 2) {
            final Object o1 = mapInfo[i];
            if (!(o1 instanceof String)) {
                throw new IllegalArgumentException("odd arguments must be String values so they can be keys");
            }
            final Object o2 = mapInfo[i + 1];
            result.put((String) o1, o2);
        }
        return result;
    }

    public static String toSplunkString(Map<String, ?> map) {
        StringBuilder buffer = new StringBuilder();

        for (Map.Entry<String, ?> entry : map.entrySet()) {
            if (buffer.length() > 0) {
                buffer.append(", ");
            }
            buffer.append(entry.getKey())
                    .append("=\"")
                    .append(StringEscapeUtils.escapeJava(String.valueOf(entry.getValue())))
                    .append("\"");
        }
        return buffer.toString();
    }

}

Using that, when you write a log statement, you can now pass in varargs (which will be transformed into a Map) and have them logged out in Splunk’s standard key/value format.

import static com.naleid.utils.LogUtil.toSplunkString;

...

log.info(toSplunkString("tag", "userPurchase", "purchaseId", 23, "userId", 123, "productId", 456, "price", 45.34));

Would output something like:

06:48:04,081 INFO [com.naleid.service.MyPurchaseService] tag="userPurchase", purchaseId="23", userId="123", productId="456", price="45.34"

This would let us write a Splunk query to show us things like number of purchases and average purchase price by user:

userPurchase | stats count, avg(price) by userId

[Screenshot: Splunk user purchases with totals]

or the number of purchases per product and that products total amount sold, sorted from highest to lowest

userPurchase | stats count as "# purchases", sum(price) as "total $" by productId | sort -"total $" 

[Screenshot: Splunk purchase count with totals]

You can also have alerts set up to automatically generate reports that are e-mailed periodically or create graphical dashboards.

The Splunk Search Reference and the Quick Reference Guide PDF (which is slightly outdated but still useful) are great references while you’re learning the query syntax. You can do much more powerful things than what I’m showing here.

The Splunk Search Examples shows a number of queries (and screenshots of results) along these lines.

In my next post (coming soon), I’ll show how to create a StopWatchAspect that automatically logs timing information for all of your service methods using the Splunk formatting. Then you can report and show timing statistics for your code and see what methods are taking the most time and how a method’s performance has changed over time.

Saving JSON Client-side to an S3 Bucket


A co-worker came up with an interesting problem today: what’s the cheapest and easiest way to save relatively low-traffic text content without having to create a server-side component for it?

After thinking about it for a bit, I thought about using a publicly writable S3 bucket and letting the client-side JavaScript PUT the JSON to the bucket. With a little bit of research and playing around, I was able to make it work.

This assumes you have an AWS account. If you do, you can log in to the console. Then, go to the S3 section of the console and create a new bucket that will hold the uploaded JSON files.

With that bucket selected, go to “Properties”, open up the “Permissions” tab and click on the “Add CORS Configuration” button. Put something like this in there:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>http://fiddle.jshell.net</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

But be sure to change the AllowedOrigin to whatever host name you’ll be doing your uploading from (I use jsfiddle for testing JavaScript, so it’s a good one to test with).

Then, you’ll want to “Add a bucket policy” to allow the world to upload to the bucket. Here is the one I came up with (replace the bucket name with yours):

{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AllowPuts",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        },
        {
            "Sid": "DenyGetsToAllButMyIP",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": "YOUR.IP.ADD.RESS/32"
                }
            }
        }
    ]
}

This policy removes the ability to download or view the files for anyone who isn’t at the IP you put in the NotIpAddress CIDR block. This is an alternative to setting up an IAM policy that lets a specific user or set of users view the file, which I’ve found to be quite fiddly to get working.

If you’re having trouble with this policy, you can use the AWS Policy Generator to generate your own.
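Before wiring up the JavaScript, you can sanity-check the bucket policy from the command line with a quick anonymous PUT (the file name here is arbitrary, and note that curl isn’t subject to CORS, so this exercises only the bucket policy):

curl -X PUT -H "Content-Type: application/json" \
    -d '{"value": "foobar"}' \
    http://YOUR-BUCKET-NAME.s3.amazonaws.com/test.json

An empty 200 response means anonymous PUTs are getting through.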

Then, just write your JavaScript to upload the JSON (this requires jQuery; replace YOUR-BUCKET-NAME in the upload path and set the data to whatever JSON you want it to be; you can tweak this fiddle):

var value = "foobar";
// generate random guid for filename
var guid = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
    return v.toString(16);
});

// CHANGE THIS TO YOUR BUCKET NAME
var uploadPath = 'http://YOUR-BUCKET-NAME.s3.amazonaws.com/' + guid + ".json";

console.log(uploadPath);

$.ajax({
    type: "PUT",
    url: uploadPath,
    dataType: 'json',
    async: false,
    data: JSON.stringify({ "value": value })
});

Because this relies on CORS, it only works in IE8+, but for most applications, that should be OK.

This solution isn’t appropriate for all uses, but for low-traffic, relatively insecure stuff (like website “feedback” forms) it should be fine. There is also the possibility that someone malicious could upload whatever they want to your bucket (and make you pay the charges for it). Hopefully, setting the policy so that it’s only world-writable mitigates the potential for misuse.

Calling GruntJS Tasks From Gradle


Gradle is a great build tool with a large community for developing JVM-based applications, but one area where it lacks strong support is front-end tooling. The Node.js community’s strength is front-end tooling, with a number of very nice build tools including Grunt, Yeoman and Brunch.

There are a couple of Gradle plugins that people have created around JavaScript and CSS processing, but even the authors of those plugins seem to have punted and moved to node.js-based tools for front-end work.

I’m using Grunt on my latest project to help out with packaging, minifying, and concatenating files (all through RequireJS’s r.js optimization), linting (through JSHint) and CSS pre-processing (via LESS).

I wanted a way for our build process to be able to assemble our .war files in a single step, so I needed to figure out how to weld these two tools together.

The quick and dirty way of doing it would be to either hard-code some "grunt".execute().text calls, or to have each grunt task be an Exec task:

task requirejs(type: Exec) {
    commandLine 'grunt', 'requirejs'
}

One other limitation that we had was that some of our developers (and our build machine slaves) are on Windows boxes, and other developers are using OSX. On Windows, the grunt command is spelled grunt.cmd and once I started having to repeat OS checks everywhere things started to feel less DRY and more hacky.

After a little research, I was able to figure out how to create a custom Gradle Exec subclass that keeps things clean:

import org.apache.tools.ant.taskdefs.condition.Os
import org.gradle.api.tasks.Exec

...

task requirejs(type: GruntTask) {
    gruntArgs = "requirejs"
}

task jslint(type: GruntTask) {
    gruntArgs = "lint"
}

...

class GruntTask extends Exec {
    private String gruntExecutable = Os.isFamily(Os.FAMILY_WINDOWS) ? "grunt.cmd" : "grunt"
    private String switches = "--no-color"

    String gruntArgs = "" 

    public GruntTask() {
        super()
        this.setExecutable(gruntExecutable)
    }

    public void setGruntArgs(String gruntArgs) {
        this.args = "$switches $gruntArgs".trim().split(" ") as List
    }
}

You can either put the GruntTask class directly in your build.gradle file (where it won’t have a package) or in a directory under buildSrc/src/main/groovy, where it will automatically be included in your build. It’s probably better for organization purposes to have it in buildSrc, but there seems to be a performance impact from Gradle needing to check that the directory is up to date all the time.

With the GruntTask class in place, you can treat your Grunt tasks just like Gradle tasks. For example, you can make your war task depend on requirejs running first:
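// a one-line sketch, assuming the war plugin is applied
war.dependsOn requirejs

Or you can pass the tasks you want in explicitly: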

gradle clean lint requirejs war

Overriding Backbone.js Sync to Allow Cross Origin Resource Sharing (CORS) withCredentials


So I’m apparently starting a series of Backbone.js posts wherein I’m documenting all the BackboneJS/JavaScript stuff I’m figuring out that I couldn’t find easily in the googles.

Today’s installment is how to globally override Backbone’s sync method to allow Cross Origin Resource Sharing (CORS) requests so that you forward the current cookie with security credentials.

There are a number of results on Stack Overflow that cover how to set it for a specific request, how to set up the server side, or just that you generally need to override sync, but none of them show how to do it globally with withCredentials.
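For contrast, the per-request version (roughly what those answers cover) looks something like this; Backbone passes the options straight through to jQuery.ajax:

model.fetch({
  xhrFields: { withCredentials: true }
});

That works, but you’d have to remember it on every fetch and save, which is why a global override is nicer.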

Per html5rocks:

The .withCredentials property will include any cookies from the remote domain in the request, and it will also set any cookies from the remote domain. Note that these cookies still honor same-origin policies, so your JavaScript code can’t access the cookies from document.cookie or the response headers. They can only be controlled by the remote domain.

This code uses the JavaScript proxy pattern: it grabs a reference to the original sync function, then replaces sync with a wrapper that adds the CORS options before delegating to the original:

(function() {

  var proxiedSync = Backbone.sync;

  Backbone.sync = function(method, model, options) {
    options || (options = {});

    if (!options.crossDomain) {
      options.crossDomain = true;
    }

    if (!options.xhrFields) {
      options.xhrFields = {withCredentials:true};
    }

    return proxiedSync(method, model, options);
  };
})();

Also, this is just the client-side half of CORS; you’ll still need to implement the appropriate changes on the server side to make this work. Those seem to be much better documented (see the “Adding CORS Support to the Server” section of the html5rocks documentation).