

Music, software, life… and stuff.


Ratpack update: more than a micro framework

It’s been a long time since I posted anything about Ratpack (or much else for that matter), so an update is well overdue.

We are steadily iterating on the codebase and documentation, but the documentation's slow growth doesn't really capture what has happened over the past 12-and-a-bit months, nor what Ratpack is shaping up to be.

The first thing to say is that Ratpack is not just me. The project has been fortunate enough to attract many contributions and some new core committers, particularly Rus Hart and David M. Carr. Thanks to everyone who has taken the time to contribute.

When I picked up Ratpack, its goal was to be a Groovy port of the popular Ruby framework Sinatra. The situation is now very different, and none of the original code remains. My original intention was to do a small amount of work on the project for a friend, but I quickly became intrigued by the possibilities of creating a new tool for HTTP applications. 18 months later, here we are. The website and documentation intentionally make no reference to Sinatra, as Ratpack has really become something else.

First of all, Ratpack is not a Groovy framework. It's implemented purely in Java 7, soon to be Java 8. It is, however, designed with Groovy in mind, and has particular (optional) support for Groovy by way of the ratpack-groovy library, which makes some APIs more concise through the use of Groovy's closures. The ratpack-groovy library also brings in additional features such as Groovy-based templates, building responses with MarkupBuilder, and reading/writing JSON via Groovy's built-in JSON support.

One thing to note about the Groovy integration is that it's cutting edge. Ratpack has been pushing the boundaries of Groovy's support for static typing and type inference. What this means in practice is that Ratpack's use of Groovy is 100% strongly typed. You can use @CompileStatic everywhere and still write amazingly concise code thanks to new features such as @DelegatesTo and improved type inference for Closure-to-SAM-type coercion. A good example of this can be seen in the code examples for the RxRatpack.observe() method. There are Java and Groovy examples given there, both 100% statically compiled and strongly typed. There is no need to declare any explicit types in the Groovy example, thanks to Groovy's type inference. This also means that IDEs (particularly IntelliJ IDEA) fully understand all of your Groovy code and can provide editing support on par with working with Java code. As the API is completely static, refactoring and usage analysis just work, without any special Ratpack plugins for the IDE.

Which brings us to documentation. We've put a lot of work into our documentation infrastructure, and into the content. One of the goals of the project is to have high quality, always accurate documentation. The only way to keep the code samples in the documentation accurate over time is to test them, which is what we do. Every single code snippet in the manual and Javadoc is statically compiled and executed on every code change. Over time we are adding more and more examples, particularly to the Javadoc. I personally find that looking at some code that uses a feature is usually far more useful than a textual description. We've got a long way to go with the documentation, but it is improving every release. On a project that only evolves through people contributing their spare time, adding to the documentation is always a challenge. If you're interested in contributing to the docs (that would be great!), or in more detail about how this works, you can check out our docs on the docs.
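The snippet-testing idea can be sketched with the JDK's own compiler API: extract each documentation snippet and fail the build if it doesn't compile. This is purely an illustrative sketch, not Ratpack's actual doc-testing harness (which also executes the snippets); all class and method names below are invented.

```java
import java.net.URI;
import java.util.Arrays;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.ToolProvider;

// Hypothetical sketch: compile an extracted documentation snippet in memory
// to verify it is at least valid Java.
public class SnippetCheck {

    // Wraps a snippet string as a compilable source "file".
    static class StringSource extends SimpleJavaFileObject {
        private final String code;
        StringSource(String className, String code) {
            super(URI.create("string:///" + className + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override
        public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static boolean compiles(String className, String snippet) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        JavaFileObject source = new StringSource(className, snippet);
        // Send class files to the temp dir and swallow diagnostics;
        // call() returns true only if compilation succeeded.
        return compiler.getTask(
                null, null, diagnostic -> {},
                Arrays.asList("-d", System.getProperty("java.io.tmpdir")),
                null, Arrays.asList(source)).call();
    }

    public static void main(String[] args) {
        System.out.println(compiles("Good", "public class Good { int x = 1; }"));
        System.out.println(compiles("Bad", "public class Bad { int x = ; }"));
    }
}
```

A real harness would additionally run each compiled snippet and report the originating manual section on failure.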

Ratpack started off in the Sinatra-inspired genre of “smallest possible hello world app”. While it's still very convenient to write small apps in Ratpack (using either Groovy or Java), this is no longer the goal. Now, the goal is scalability in every sense of the word: scalable application complexity, and scalable performance.

A problem of any software development project is dealing with framework boundaries and constraints as your codebase inevitably gets more complex, i.e. scaling the complexity. All those productivity features that were so useful at the start of the project start to get in the way and force undesirable design choices. Ratpack tries to address this problem in the following ways:

  1. A small set of pervasive abstractions (e.g. parsers, handlers and renderers)
  2. Composition through functions (e.g. request processing is just traversal through a graph of handler functions)
  3. Adapt, don’t abstract (e.g. don’t abstract over JSON handling, adapt different libraries)
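A toy sketch of the second idea, request processing as traversal through composed handler functions, might look like the following. All names here are invented for illustration; this is not Ratpack's actual API.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Toy illustration: a request flows through a chain of handler functions,
// each of which either responds or delegates to the next handler.
public class HandlerChainSketch {

    interface Handler {
        void handle(Context ctx);
    }

    static class Context {
        final String path;
        String response;
        private final Iterator<Handler> chain;
        Context(String path, List<Handler> handlers) {
            this.path = path;
            this.chain = handlers.iterator();
        }
        void render(String body) { this.response = body; }
        void next() { if (chain.hasNext()) chain.next().handle(this); }
    }

    public static String process(String path, List<Handler> handlers) {
        Context ctx = new Context(path, handlers);
        ctx.next();
        return ctx.response;
    }

    // A two-handler chain: respond to "/" or fall through to a 404-ish handler.
    public static List<Handler> demoChain() {
        return Arrays.<Handler>asList(
            ctx -> { if (ctx.path.equals("/")) ctx.render("home"); else ctx.next(); },
            ctx -> ctx.render("not found")
        );
    }

    public static void main(String[] args) {
        System.out.println(process("/", demoChain()));      // home
        System.out.println(process("/other", demoChain())); // not found
    }
}
```

Because handlers are just functions over a context, composing, reordering and unit testing them requires no framework machinery beyond the chain itself.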

What this means in practice is that Ratpack somewhat bucks the opinionated convention approach that is popular in frameworks. You have much freedom to wire an application together out of different libraries and techniques. This often means that you will have to do a little bit more wiring with Ratpack than other more opinionated tools, but it also means that you are in full control. This is critically important as your application grows. However, this doesn’t mean you are completely on your own. Ratpack’s integration with Google Guice and its Registry abstraction provide a way to stitch things together without a lot of busy work.

Scalable performance mostly comes down to Ratpack being non blocking via Netty (though we also keep a close eye on, and measure, Ratpack's internal efficiency). There are many articles on the web about the performance and scale benefits of non blocking server applications, so I won't rehash that here. What I will say is that Ratpack is completely non blocking and provides mechanisms that take some of the complexity and pain out of asynchronous programming. In particular, Ratpack integrates with RxJava. RxJava supports building complex, asynchronous, non blocking processing out of composable functions, without callbacks. RxJava works very well with Ratpack.
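To give a feel for the shape of callback-free async composition, here is the same flavour of pipeline using the JDK's CompletableFuture as a stand-in. This is neither Ratpack's nor RxJava's API, and the "service calls" are invented for the example.

```java
import java.util.concurrent.CompletableFuture;

// Two pretend non-blocking service calls composed as a pipeline,
// with no nested callbacks.
public class AsyncCompositionSketch {

    // Stand-in for an async lookup against a backing service.
    static CompletableFuture<Integer> fetchUserId(String name) {
        return CompletableFuture.supplyAsync(() -> name.length());
    }

    static CompletableFuture<String> fetchGreeting(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static CompletableFuture<String> greet(String name) {
        return fetchUserId(name)
            .thenCompose(AsyncCompositionSketch::fetchGreeting)
            .thenApply(String::toUpperCase);
    }

    public static void main(String[] args) {
        System.out.println(greet("alice").join()); // USER-5
    }
}
```

RxJava's Observable adds much more on top of this (streams of values, rich operators, error propagation), but the structural win is the same: a flat, composable pipeline instead of a pyramid of callbacks.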

Another key area where Ratpack shines is testing. Ratpack makes it easy to unit test handlers and handler chains. It also makes it easy to integration/functional test your entire application (from within your IDE, with no special plugins), but importantly you can also do the same kind of testing on arranged subsets of your application by creating mini embedded applications at test time (here and here). The embedded app support is also very useful for creating test time doubles of HTTP services that your app depends on. You can very easily create a small Ratpack app that mimics the service your application calls out to. It's also extremely easy to use Geb to browser test your app, and there is also integration with Groovy Remote Control for injecting code into the application under test, for setting up test data and simulating failure conditions. Much more documentation is needed on the test time support, as it really is a very strong feature of Ratpack: simple and fast test execution, within the IDE, without the need for IDE plugins or command line tools.
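The test-double idea can be sketched with the JDK's built-in HttpServer: spin up a throwaway server on an ephemeral port that answers like the real service, point the app at it, and tear it down after the test. The code below is a generic illustration, not Ratpack's embedded app API (which makes this considerably terser).

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Start a stub HTTP service on an ephemeral port, hit it once, shut it down.
public class TestDoubleSketch {

    public static String demo() throws IOException {
        String body = "{\"status\":\"ok\"}";
        HttpServer stub = HttpServer.create(new InetSocketAddress(0), 0);
        stub.createContext("/", exchange -> {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            exchange.getResponseBody().write(bytes);
            exchange.close();
        });
        stub.start();
        try {
            // In a real test, the application under test would be configured
            // with this URL instead of the real service's.
            URL url = new URL("http://localhost:" + stub.getAddress().getPort() + "/");
            try (InputStream in = url.openStream()) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        } finally {
            stub.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```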

If you haven’t seen them, it’s also worth having a look at our docs on Gradle build integration and deploying to Heroku.

There is still much more to say about Ratpack. I haven’t mentioned the integrations with Pac4j, Handlebars templates, CodaHale Metrics, Jackson, Hystrix and more. Hopefully you get the idea that Ratpack is about more than very small “Hello World” applications. Documentation for all of these features will arrive eventually.

As for the roadmap, there is no ETA for 1.0. We will continue to release a new version on the first of each month for the foreseeable future. Once we are happy with the core API (and documentation) we will go 1.0 and freeze the API in terms of breaking changes. The more people use Ratpack and give us feedback, the quicker this will happen. Also, we are always looking for more contributors.

Hopefully this, admittedly long, rant about Ratpack has convinced you that it might be worth a look, or worth a second look if you haven't checked it out in a while.

Posted: Jun 5th, 2014 @ 10:23 am

Tags: #software  #ratpack  #java  #groovy  #web  


Marcin Erdmann is now the Lead Developer of the Geb project

I’m happy to announce that I’m handing over the “lead” hat for the Geb project to my good friend Marcin Erdmann. Marcin has been a serious contributor to Geb (and other projects that I work on) for the past few years and has been doing an excellent job.

I’ll still be heavily involved in Geb development for now and into the future, but I’ll be proposing changes to Marcin instead of the other way around.

I created Geb in November 2009. I have learned a lot about running projects, building communities and delivering tools to developers in the process. I've also been fortunate enough to speak about Geb at conferences around the world. However, it's now time to let someone else take the reins.

This means that yet again I will fail to take a personal project across the 1.0 milestone. Hopefully I can break this pattern with Ratpack.

I’ve got no doubt that Marcin will do a great job as Lead and that this is a good outcome for users of Geb.

Posted: May 15th, 2014 @ 6:01 pm


Jetty or Vert.x for Ratpack?

Update (2013-02-06): It turns out I had some bad assumptions about Vert.x. It’s not practical for embedding (for Ratpack’s purposes) and requires a full Vert.x runtime/platform environment. This is a deal breaker. So… this means it’s looking like Jetty… or maybe better still… Netty.

Last year, the effervescent Tim Berglund made me aware of Ratpack, which could have been accurately described as a Groovy version of Ruby's Sinatra micro web framework. I became interested in it primarily as a way to write example apps for Geb examples/demos/classes and ultimately the Geb website itself.

First things first: why not Grails? Grails is awesome, that's not in question. It's just more than is needed for these simple apps. What's more, I needed it to play happily as part of a Gradle toolkit (I'm still working on the grails-gradle-plugin) and I wanted something light. I'm also attracted to shiny new things.

Ratpack fit the bill of what I was looking for in concept. It was originally developed by a bunch of people about two years ago, then stalled when the lead (who I only know as bleedingwolf) formally announced he was no longer working on it. Sometime after this Tim picked it up and became the official maintainer (with bleedingwolf’s blessing). The official GitHub home of the project (after a few moves) is now

When I started looking at it, I wanted to make some changes (if you’ve ever worked with me this won’t surprise you). Given that the project had been dormant for a while, I started doing this (with Tim’s blessing) without any real regard for backwards compatibility. There were some fundamental issues that, given where the project was at, just made more sense to thoroughly sort out. Two major functional changes were made:

  1. You didn’t need to restart the app for changes to take effect (when changing the routing file, which is really the application)
  2. Back away from J2EE and just embed Jetty (i.e. don’t try to produce a WAR file)

Along with this I worked on improving the Gradle plugin to reflect that it’s now a standalone app instead of a WAR. This means basing the plugin on the Gradle Application plugin rather than the WAR plugin. This made it simple to build a standalone, self contained, portable app.

A little while ago I started to wonder about different models for Ratpack than J2EE, servlets and all that noise. Ratpack was already divorced from that stuff, in that you never saw it, but it was still based on Jetty and was fundamentally implemented as a servlet. I started looking at Vert.x. As an experiment, I took a branch and set about transplanting Ratpack on top of Vert.x. I’m happy with the outcome.

The question now is what to go forward with. I don't think maintaining two versions is the right thing to do, and I certainly don't want to do that. Underpinning all of this is a certain amount of trust in the async IO argument: that the async model will ultimately lead to more performant, more scalable applications. I'm taking that at face value because people smarter than me have made this argument. On the other hand, there is no doubt (at least in my mind) that async programming is more difficult. That's the price you pay. Ratpack on top of Vert.x can take away some of the danger of async programming and make it less error prone (and I'd say it already does).

What does Ratpack give you over raw Vert.x?

  1. Templating - fully async rendering, and statically compiled (optional) using Groovy 2.1 (and indy)
  2. Error handling (i.e. 500 pages) - this is no small thing in an async world
  3. Not found handling (i.e. 404 pages)
  4. Routing - Vert.x already has this, but Ratpack’s is integrated with 404 and 500 handling (and runtime reloadable)
  5. Static file serving - Goes beyond what Vert.x gives you and supports HTTP caching headers
  6. Session support
  7. Runtime reloading - for routes, and if you’re using the Gradle plugin for all of your application code (via SpringLoaded)
  8. Higher level abstractions - for example Request and Response
  9. Decent integration with a capable build tool (Gradle)

With the Jetty version, the goal is to hide Jetty and servlet stuff in general. For the Vert.x version, this would not be the case. The point is not to abstract over Vert.x and hide it; it's to add some convenience for writing small web apps while fully leveraging Vert.x's runtime features (e.g. messaging, SockJS). I'm unsure how Vert.x modules would play into this at this point.

Here’s how I see the pros/cons of Jetty or Vert.x as the basis for Ratpack.


Vert.x pros:

  1. Performance (though Jetty is still very fast by all accounts)
  2. Fewer dependencies
  3. Embedded message bus
  4. Embedded SockJS support, with message bus spanning to client
  5. Built in clustering (for eventing at least)
  6. No J2EE (abstractions not needed for this kind of thing)

Vert.x cons:

  1. Async programming has challenges (debugging being one)
  2. Less trusted than Jetty
  3. No WAR deployment (this is appealing to some)
  4. No built in session support (I've added my own for Ratpack) and some other stuff that Jetty gives you


Jetty pros:

  1. Well known, trusted
  2. All of the HttpServletRequest convenience (header parsing etc.)
  3. Familiar, threaded model
  4. Can potentially use servlet filters and all that junk
  5. WAR packaging

Jetty cons:

  1. No message bus
  2. More work to get SockJS or any server/client messaging (by no means impossible though)
  3. Clustering becomes more complicated
  4. More dependencies (more weight)
  5. Not async (assuming that impacts performance in the general sense)

There are probably more, but that’s how I see it right now.

Left to my own devices, I’d keep going with the Vert.x version because I find it the most interesting to work on. However, I’d also prefer to work on something that’s useful to other people. That’s why I’d like your opinion on the matter. Should Ratpack build on Jetty (and J2EE generally) or Vert.x?

There’s precious little documentation available right now on either the Jetty or Vert.x version of Ratpack. I’d like to sort out this question before investing in such a thing. The best that is available right now is a (partial) port of the Groovy Web Console to Vert.x Ratpack on GitHub. There’s a readme there for playing with it. Of course, there’s always the Ratpack source (the master branch is Jetty, the vertx branch Vert.x).

There’s also another (very unstable, may disappear) app that I’m working on (to explore Shiro and Vert.x’s event bus) on GitHub. This is also a shake out of predominantly using Java instead of Groovy with Ratpack. Don’t take it too seriously.

Posted: Feb 1st, 2013 @ 11:56 pm

Tags: #software  #groovy  


Gr8Conf EU, now with more me

I’m excited to be presenting at GR8Conf EU this year. This will be my first time at GR8Conf and I’m looking forward to meeting some of the Groovy community that I’ve not had the opportunity to meet before.

I’ll be presenting on Geb, as both an introduction for the uninitiated and to discuss the new features that have appeared in the last few months. It will be an interesting change to present this material to a crowd already comfortable with Groovy; quite different to my upcoming session at SeleniumConf.

Of course, I’ll also be there representing Gradleware and helping Peter give a 3 hour Gradle Bootcamp that will get anybody up and running with Gradle. I’ll also be doing another (new) Gradle session on releasing software. This will be a tutorial style presentation that will arm you with everything you need to build, test and release open source software to Maven Central with Gradle. There will also be a dive into the Spock and Geb builds to look at how they manage building and testing Grails plugins as part of their builds and automate their releases.

Hope to see you there!

Posted: Apr 3rd, 2012 @ 8:17 pm

Tags: #software  #groovy  #conference  


In response to Rob’s post on functional testing.

Rob, a Geb committer and all around cool dude, recently posted some thoughts on functional testing.

The following is what I intended to be a comment on Rob’s blog, but it became too long so I decided to post it here. You’ll notice that the language is a bit weird as in some places I am addressing Rob directly. It should make sense (no guarantees) if you pretend you are reading it as a comment on Rob’s blog.

This is not a flame war, though that would be fun. Rob’s a friend of mine and a respected colleague. This is just a discussion and exchange of ideas.

Without further ado, the comment…

I’ll try to be as objective as I can, but obviously as someone who is invested in Geb I come with a bit of a bias.

You seem to be mixing up several different issues, which is understandable as the post seems born out of frustration.

For my mind there are three discrete parts:

  1. The development environment (authoring the tests)
  2. The execution environment (running them)
  3. The debugging environment (diagnosing failures)

The lines blur, but I think it’s helpful to think about each part separately. Geb itself only lives in the first arena, strictly speaking. How it works in the execution and debugging environments is largely an aspect of the associated tooling, with the exception of static diagnostic information.

So a couple of points:

“I agree that as much testing as possible should be done at as low a level as possible”

What’s usually missing from this sentiment is the why. Too many people take this stance because testing at a low level is easier. This, in isolation, is not the right way to think about it. The end goal is always working software, not easy tests. Easy tests work towards working software by making it more likely that tests will be created in the first place, and maintained. They are low cost/low value. So for me, low level testing is fine until the accuracy compromise becomes too great. If it doesn’t create a lot of confidence that the system will work, either because it assumes too much or is not backed up by higher level tests, then the developer is fooling themselves. Passing tests is not the end goal; working software is.

It’s with that in mind that I usually say to people that we need to “suck it up” to a certain extent with functional tests. You should identify if there is value in having automated tests that verify the end to end behaviour of your system, get a feel for what that value is and act accordingly. Then you can make cost/benefit judgements on how much effort to put into functional testing. Avoiding it because it’s not as easy as unit testing while not taking the value into account is irresponsible, in my opinion. However, we definitely should harness the frustration and desire for simplicity to make it easier.

“Even assuming you can optimise so that the application is running and you can run the test from an IDE then Selenium has to start up a browser instance, queries execute, views need to render, etc”

Selenium doesn’t have to start up a browser instance; it’s possible to use an already running browser instance. Also, on modern development machines we should be talking about seconds at most to run queries, render views and so on.

As for the Grails specific issues, you’re going to face slow application startup when working with Grails no matter what your development environment is. In theory, there’s no difference between development environments here; the difference comes in with execution environments. However, it’s possible to create an execution environment for Geb tests (run in the IDE against a running application) that is equivalent to the Selenium IDE experience.

“Geb’s output when a waitFor condition fails is just Condition did not pass in x seconds.”

This is indeed painful. Even with implicit power asserts it doesn’t get much better. Providing static diagnostic information for functional tests (i.e. non dynamic like an interactive debugging session) is one of the unsolved problems in this area. I’m not aware of any tool that has a good approach to this. With Geb our approach is to just dump the browser state, but that information is generally not rich enough. Hopefully someone comes up with a good solution.

“Selenese is by no means great in this regard (a humble Condition timed out isn’t much help) but at least you can step back with the Selenium IDE and watch the damn thing not working much more easily.”

I think “much more easily” is a stretch. You can do the same thing with a Geb/WebDriver test. The only difference is that the test method is your smallest level of granularity for re-running a part of the test. That’s a clear advantage that Selenium IDE has: the ability to run a selected portion of the story at any granularity. It seems like this would break down for most stories though, as they require context that needs to be run as well.

“The most productive I’ve ever been when writing functional tests has been when using Selenium IDE.”

(and the rest of this paragraph and the next)

There’s only one point you make here that can’t be replicated with Geb/WebDriver in an IDE: selecting a portion of the test at any granularity and just running that. Everything else is also possible in an IDE, and is the exact same process you would use to debug other code. These are all points on the execution/debugging environment not the development environment.

“Despite these significant failings writing tests in Selenium IDE is very effective. Maintaining a suite of such tests is another matter. Working on a long-running project the failings of Selenese tests start to increase logarithmically.”

This is the point for me. Experience has shown that the majority of the cost of a functional test suite comes in maintenance, not initial development. Therefore the sensible thing to do is to optimise for this first, and development speed second. Maintenance should be foremost in the developer’s mind when developing functional tests. The PageObject pattern and inline test data are the two best techniques that I am aware of that help here.
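As a rough illustration of why the PageObject pattern helps maintenance: selectors live in one place instead of being scattered through every test, so a markup change means updating one class. The Browser below is a self-contained stand-in, not Geb's or WebDriver's API, and all names are invented.

```java
import java.util.Map;

// Bare-bones sketch of the PageObject idea with a fake "browser".
public class PageObjectSketch {

    // Stand-in for a driven browser: maps CSS-ish selectors to element text.
    static class Browser {
        private final Map<String, String> elements;
        Browser(Map<String, String> elements) { this.elements = elements; }
        String text(String selector) { return elements.get(selector); }
    }

    // The page object: selectors are encapsulated here, so tests talk in
    // terms of the page ("heading", "errorMessage"), not raw selectors.
    static class LoginPage {
        private final Browser browser;
        LoginPage(Browser browser) { this.browser = browser; }
        String heading() { return browser.text("h1"); }
        String errorMessage() { return browser.text(".error"); }
    }

    public static String headingOf(Map<String, String> dom) {
        return new LoginPage(new Browser(dom)).heading();
    }

    public static void main(String[] args) {
        Map<String, String> dom =
            Map.of("h1", "Sign in", ".error", "Bad credentials");
        LoginPage page = new LoginPage(new Browser(dom));
        System.out.println(page.heading());      // Sign in
        System.out.println(page.errorMessage()); // Bad credentials
    }
}
```

If the heading selector changes from `h1` to `h2.title`, only LoginPage changes; every test that asks for `heading()` keeps working unmodified.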

“I’m convinced that the goal of writing tests in the same language as the application is a pretty vapid one. Working inside one’s comfort zone is all very well but too many times I’ve seen things tested using Selenium or Geb that would be better tested with a unit-level JavaScript test.”

It’s not necessarily about using the same language; it’s about integration and sharing data structures. Equating functional testing with black box testing is incorrect, I think. I have no problem with stimulating the system as a user while poking inside to verify internal state. To me, that’s more productive and accurate.

JavaScript unit tests aren’t a replacement for functional tests. You shouldn’t be using functional tests to unit test JavaScript, but having JavaScript unit tests doesn’t relieve you from having functional tests. It can mean you need less though. If a test doesn’t emulate what a user does (including the user environment) to a reasonable degree of accuracy it is not a functional test.

“As a Grails developer I’ve looked enviously at how fast tests run in Rails apps but that’s nothing compared to watching a couple of hundred Jasmine tests run in under a second.”

I think you’re comparing apples and oranges here. I’d have little faith in a system that only has JavaScript unit tests.

“I was at one time convinced that the ability to use loops and conditional statements in Groovy made it a more suitable testing language than Selenese but honestly, how often are such constructs really required for tests?”

In my experience, these constructs are more useful when modelling the concept with appropriate abstractions. As interaction with web pages becomes richer (gestures, dragging etc.), I think we’ll see it become more useful there. That is, one logical interaction may have looping/branching logic internally but we want to model it as one abstraction.

“Building Geb content definitions with deeply nested layers of Module types is time consuming & difficult.”

I agree. However, when you put the effort in and do this right you get maintainable tests plus a wealth of predefined abstractions to draw on in future tests. That is, high cost/high value. With my Geb hat on, I think we have the best story in this area in terms of provided constructs. What we lack is the shared knowledge and experience to deploy these constructs effectively. I think every tool that tries to do this has the same problem though. In a word, inexperience.

“I can’t help thinking the page object model approach is coming at the problem from the wrong angle. Instead of abstracting the UI shouldn’t we be abstracting the behaviour?”

I think you need both. You can’t do away with abstracting the UI.

Geb’s in a tight spot here in that it has no execution model. This makes it awkward to introduce behaviour abstractions as we’d have to wrestle the execution control away from the execution framework we are integrating with. Ultimately, this abstraction belongs in JUnit/Spock/whatever. Spock is missing this capability, and that’s a known problem.

“The most impressive Selenium extension I’ve seen is Steve Cresswell’s Natural Language Extensions that layers something like JBehave’s feature definition language on top of Selenese.”

Not being a Selenese expert, this seems like it would give you the behaviour abstractions you desire, but you’d have to either largely abandon UI abstraction or maintain it separately.

I’d love to see someone start a project adapting something like Fitnesse or some other natural language based tool to Geb. You’d get the same behaviour abstraction benefits, but get to use Geb’s page modelling to maintain the UI abstractions. Hopefully someone starts this.

As for Cucumber, if you want natural language authoring I think it’s the choice. There are posts out there about people layering it on top of Geb.

I’m very skeptical of FuncUnit, but not well educated about it. It’s not clear to me how it fits into an automated build and it seems like it would have a very high mental context switching cost. You also give up on data structure sharing entirely.

That’s it.

So the take home for me is that we need to spend more effort on the execution and debugging environments for Geb. Perhaps this will be the focus of the 0.7.x series which will be starting soon.

Posted: Nov 25th, 2011 @ 12:49 am

Tags: #geb  #software  

