Freelancing Gods 2015

15 Mar 2015

So you want to run a Rails Camp?

Bikers with tents and beer

I’ve had a few people ask me lately about what’s involved in running a Rails Camp (I’ve had the honour/naïvety to run a few), so I figure it’s worth writing down all of my thoughts here as an easy reference.

First, for those not familiar with Rails Camps – they’re long weekends for Rubyists and the Ruby-curious to gather, socialise, hack on side/open-source projects, build cool things, listen to talks, and just generally have fun. They’re usually held at pretty low-key venues – sleeping arrangements are dorm rooms with bunk beds.

They began in Australia in 2007 – Ben Askins organised the first, many others have stepped up to run more since (two every year in Australia), and they’ve played such an important role in bringing the Australian Ruby community together and helping us grow bigger, stronger and smarter.

The general format we’ve followed in Australia and New Zealand is as follows:

  • Arrive Friday afternoon, depart Monday morning – so there’s two full days for people to enjoy, plus a decent afternoon/evening to settle in.
  • We usually organise buses from the local airport and city centre to take people to the camp on the Friday, and then take them back again on Monday morning.
  • Some camps have been as small as 30 people, and others as large as 150 people. There is no ‘correct’ number – starting at the smaller end of the scale is probably wise for the first event in an area.
  • A very relaxed schedule – sometimes talks are sourced beforehand, though more often it’s left until the camp happens. People can go to talks, or hack, or socialise, or sleep, or whatever they like. Talks usually happen on just the Saturday and Sunday – there’s definitely no time on Monday (that’s just breakfast, cleaning up, and goodbyes), and Friday night the focus is very much on socialising and hacking.
  • Generally food is catered – the first few in Australia we organised food ourselves, which went well enough, but it’s another level of stress, and since we switched to paying for caterers, that’s worked really well for us.
  • Venues are generally a big hall or two (ideally one for hacking, and one for talks – if there’s a third spare for werewolf and other games, or for dining, even better, though we often have food and hacking in the same space, which isn’t the end of the world), plus dorm rooms for sleeping. Often at the Australian camps there’s a handful of people who opt to sleep in tents, but the majority opt for the dorm bunk beds.
  • Often there’s a chance on the Sunday evening for people to show off what they’ve hacked on over the weekend – prizes are optional (sometimes we have them, sometimes we don’t – it’s certainly not a competition, just a chance to do cool things and share them).
  • We don’t provide Internet access, but we do set up a local wifi network to allow everyone’s computers to talk to each other. Back when we first started, it was pre-iPhone and the idea of tethering for the Internet was unheard of. These days, at most camps – if there is cell phone reception – people will tether when they need to get online, but sometimes camps are in locations where there’s not even cell phone reception, and it’s arguably even better :)

Rails Camp

This is most certainly the AU/NZ model – and it’s what the previous UK and US camps have followed too. From what I understand, the other European Rails Camps (particularly in Germany) are closer to a BarCamp model, which is a more structured unconference style. As far as I know, the AU & NZ camps are the only ones still regularly happening – indeed, we’re coming up to #17 in June here in Australia.

From a cost perspective – Rails Camps in Australia and New Zealand are generally somewhere between $200 and $350 (AUD/NZD) per person, which includes all meals and accommodation. Discounts are often offered for women (the upcoming Rails Camp in Australia has 20% off for women, because women in Australia sadly are generally paid about 20% less than men), and for students.

Philip Arndt offers the following expense breakdown from the recent New Zealand camps:

Typically, for about 80 people, the venue costs about $5000-6000 and the food costs about $8000-9000. Drinks (alcoholic and non-alcoholic) end up costing about $4000 but I’d recommend avoiding this for the first event and just getting people to bring their own. T-shirts end up costing about $20 for each person but this is often sponsored too.

Mountain DJ

Keep in mind that those values are in New Zealand dollars, and alcohol there and in Australia is more expensive than many other parts of the world.

And while getting sponsorship is super helpful, I’d recommend aiming for ticket costs to cover food and accommodation – thus, sponsorship isn’t so critical (and if you find support, then that just makes the event even better).
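As a sanity check, those New Zealand figures can be turned into a rough per-person cost. A back-of-envelope sketch (the midpoints are my assumption, and drinks are excluded, as recommended):

```ruby
# Rough per-person ticket estimate from the NZ figures above (all NZD).
attendees = 80
venue     = 5_500.0   # midpoint of $5,000-6,000
food      = 8_500.0   # midpoint of $8,000-9,000
shirts    = 20.0      # per person, and often sponsored anyway

per_person = (venue + food) / attendees + shirts
# => 195.0 — comfortably within the usual $200-350 ticket range,
#    leaving a buffer for buses, insurance, and surprises.
```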

When it comes to collecting money, it’s nice to have an organisation backing you (and taking on the insurance as well, ideally). This was not the case for the first several camps in Australia and New Zealand (they were run from people’s personal bank accounts, and any profits were passed onto the next organisers), but that provided part of the incentive to create Ruby Australia and Ruby New Zealand.

Alongside all of this, I recommend having a read of my thoughts on running events and creating welcoming spaces – things like Codes of Conduct are highly recommended for Rails Camps. Just because they’re more relaxed compared to proper conferences doesn’t mean you should skip such key elements to create safe events. Better yet, read what people like Ashe Dryden have to say.

I’ll try to keep this post up-to-date with any other thoughts on the matter (and certainly, I welcome input both from other organisers and those considering organising). If you’re interested in organising, it’s highly recommended that you attend a Rails Camp somewhere first – it’s much easier to get a feel for the event that way.

Finally: at the time of writing, there’s plans afoot for Rails Camps in California and perhaps Belgium – talk to Bobbilee and Christophe, respectively, to stay in the loop for those. One happening somewhere near New York is also a possibility. As mentioned, the next Rails Camp in Australia will be in June near Sydney, and some plans are taking shape for the November camp too. New Zealand Rails Camps are generally in the first quarter of each year, so keep an eye out for news of their sixth outing later this year.

10 Feb 2015

RubyConf AU 2015: Thank You

Last week RubyConf AU 2015 took place in Melbourne. A year prior to that, I’d put my hand up to run it… and over the course of twelve months, had assembled an excellent team, lined up speakers, venues, and a whole bunch of fun.

On Wednesday morning, it became real, as the workshops kicked off. By Saturday evening, it was finished with our after party at the Melbourne Lawn Bowls club in Flagstaff Gardens.

Going by the feedback we’ve received, I think it’s safe to say it was a success – at the very least, I’m thrilled with what we achieved.

But, of course, it would not have been possible without contributions from many, many people. I do want to list them here, even though it’s guaranteed I’ll forget someone and then feel terrible once I realise.

Firstly: to our sponsors, who not only gave us considerable amounts of money (no small thing in itself), but trusted and supported our efforts to grow the Australian Ruby community. Thank you Envato, Redbubble, reinteractive, Digital Ocean, JobReady, Torii Recruitment, GitHub, Pluralsight, BuildKite, Lookahead Search, EngineYard, Soundcloud and Travis CI.

To our venues: Jasper, Zinc, and Deakin Edge. You provided fantastic spaces for our community to listen, learn, eat and socialise within. A special thank you to the AV team at Deakin Edge: Blake, Wes and Brad, plus our own video recorder Anthony, returning yet again to make sure our talks are captured for future generations.

To our stenographer Rebekah, who provided live captioning of our conference proceedings. She was not only extremely good at her job, but also responded to Keith and Josh’s banter in style.

To the weather gods – Melbourne’s traditionally fickle weather gave us four days of warm sunshine, which was perfect for showing off our fine city.

To the team behind our ticketing system Tito, who helped us with beta features and late night support.

To the Ruby Australia committee, who were super supportive when I first asked about running this conference, and provide essential and appreciated financial and organisational support. You play a massive part in the health and success of our community.

To our event manager Deborah Langley, and her colleague Sam. Engaging Deb to work on our event made our lives a great deal easier, and helped us to achieve great things. Plus, Deb and Sam helped the running of the conference and events purr along smoothly.

To our volunteers, led by the inestimable Liam Esler and Mel Sherrin, and our stage manager Maxine Sherrin. You took excellent care of our attendees and speakers, kept things running to schedule, and deserve all of the credit for how calmly the conference ran.

To Amanda Neumann and Darcy Laycock, who worked with me to select presenters from our massive selection of proposals. We agonised over which talks made the cut (and there were many excellent choices that missed out), but I think our choices were great ones!

To our local Rubyists: Healesville guide Pete Yandell, and cycling leaders Gareth Townsend & Gus Gollings, who all ensured our attendees from near and far got to experience a different aspect of Melbourne beyond just the conference sessions.

To our fabulous illustrator Dougal MacPherson, who, with his 15 minute drawings hat on, drew a picture of every session (including workshops), which then became lovely gifts for our speakers.

To Tim Lucas, for his tireless work on our slick website, plus the corralling of our beautiful and popular t-shirts – which were designed by Magdalena Ksiezak (for the conference) and Carla Hackett (for the Rails Girls workshops).

To the organisers of the previous RubyConf AU events – Keith Pitty, Martin Stannard, Michael Koukoullis, Josh Price, Elle Meredith, Jason Crane, Georgina Robilliard, and Steve Gilles. We have only been able to create this event by standing on your shoulders and reaping the rewards of your hard work.

To Ben Askins, who kicked off the bonding of our fantastic Australian Ruby community by organising the very first Rails Camp. That event changed my life.

To the large number of conferences that provided inspiration, including (but certainly not limited to) JSConf US and EU, FutureRuby, NordicRuby, eurucamp, and Web Directions: Code.

To our speakers, workshop presenters, Rails Girls organisers, and our entertaining and excellent MCs Josh Kalderimis and Keith Pitt. We gave you the stage, and you made us so very proud.

To my fellow organisers: Melissa Kaulfuss, Matt Allen, and Sebastian von Conrad. Through our shared vision and skill-set we have crafted a special event, all contributing in different and most definitely valued ways. I really cannot thank you enough.

To our employers: Inspire9, Envato, Lookahead Search and Icelab, who supported us in our endeavour, with time and patience and suggestions.

To our families, who recognised the commitment we had to give to make this real, and looked after us, loved and supported us. You’re the very definition of amazing.

To everyone else who helped in any way – I was inundated with offers of support and assistance over the past year, and while I didn’t have the opportunity to take everyone up on that, the offers themselves are greatly appreciated.

And finally, to everyone who attended the conference, and the broader Ruby community. It feels far more like we’ve done this with you than for you.

Thank you all, so very, very much.

30 Dec 2013

Melbourne Ruby Retrospective for 2013

The Melbourne Ruby community has grown and evolved a fair bit in this past year, and I’m extremely proud of what it has become.

Mind you, I’ve always thought it was pretty special. I first started to attend the meets back when Rails was young and the community in Australia was pretty new, towards the end of 2005. The meets themselves started in January of that year – almost nine years ago! – and have continued regularly since, in many shapes, sizes and venues, under the guiding hands of many wise Rubyists.

Given I’ve been around so long, it’s a little surprising I’d not had a turn convening the meetings on a regular basis (though I’d certainly helped out when other organisers couldn’t be present). After the excellent, recent guidance of Dave Goodlad and Justin French, Mario Visic and Ivan Vanderbyl stepped up – and then Ivan made plans to move to the USA. I was recently inspired by discussions around growing and improving the community at the latest New Zealand Rails Camp, and so I offered to take Ivan’s place. (As it turns out, Ivan’s yet to switch sides of the Pacific Ocean. Soon, though!)

And so, since February, Mario and I have added our own touches to the regular events. Borrowing from both Sydney and Christchurch, we’ve added monthly hack nights – evenings where there’s no presentations, but people of all different experience levels bring along their laptops and get some coding done. If anyone gets stuck, there’s plenty of friendly and experienced developers around to help.

More recently, reInteractive have helped to bring InstallFests from Sydney to Melbourne. They are events to help beginners interested in Ruby and Rails get the tools they need installed on their machines and then go through the process of setting up a basic blog, with mentors on hand to help deal with any teething problems.

For the bulk of the Melbourne Ruby community’s life, the meets have been announced through Google groups – first the Melbourne Ruby User Group, then the broader Ruby or Rails Oceania group. It had become a little clearer over the past couple of years that this wasn’t obvious to outsiders who were curious about Ruby – which prompted the detailing of meeting schedules elsewhere – but there was still room for improvement. reInteractive’s assistance with the InstallFest events was linked to their support with setting up a group on Meetup – and almost overnight we’ve had a significant increase in newcomers.

Now, many of us Rubyists are quite opinionated, and I know some find Meetup inelegant and, well, noisy. I certainly don’t think it’s as good as it could be – but it’s the major player in the space, and it’s the site upon which many people go searching for communities like ours. The Google group does okay when it comes to discussions, but highlighting upcoming events (especially if you’re not a regular) is not its forte at all.

We’ve not abandoned the Google group, but now we announce events through both tools – and the change has been so dramatic that, as much as I’m wary of supporting big players in any space, I’d argue that you’d be stupid not to use Meetup. We’ve had so many new faces come along to our events – and while we still have a long way to go for equal gender representation (it’s still predominantly white males), it’s slowly improving.

With the new faces appearing, we held a Newbie Night as one of our presentation evenings (something that’s happened a couple of times before, but certainly not frequently enough). Mario and I were lucky enough to have Jeremy Tennant step up to run this and corral several speakers to provide short, introductory presentations on a variety of topics. (Perhaps this should become a yearly event!)

We’re also blessed to have an excellent array of sponsors – Envato, Inspire9, Zendesk, reInteractive and Lookahead Search have all provided a mixture of money, space and experienced minds. We wouldn’t be where we are now without you, your support is appreciated immensely.

Mario and I have also spent some time thinking a bit deeper about some of the longstanding issues with tech events, and tried to push things in a healthier direction:

At many of the last handful of meetings for this year, instead of pizza, we’ve had finger food from the ASRC Catering service, tacos from The Taco Guy, and a few pancakes as well. In each case we’ve ensured there’s vegetarian, gluten-free and lactose-free options. This trend shall certainly continue!

The drinks fridge at Inspire9 (our wonderful hosts for the past couple of years) now has plenty of soft drinks and sparkling mineral water alongside the alcoholic options – and we’ve been pretty good at making sure jugs of tap water are available too. There’s also tea and coffee, though we need to be better at highlighting this.

We’ve also adopted Ruby Australia’s Code of Conduct for all Melbourne Ruby events. This is to both recognise that our community provides value and opportunity to many, and to make it clear we want it to continue to be a safe and welcoming place, offline and online.

We’re by no means perfect, and I’m keen to help this community grow stronger and smarter over the coming year – but we’ve got some great foundations to build on. The Melbourne Ruby community – and indeed, the broader Australian Ruby community – is growing from strength to strength, and a lot of that is due to the vast array of leaders we have, whose shoulders we are standing on.

Alongside the regular city meets, there are Rails Camps twice a year, RailsGirls events becoming a regular appearance on the calendar, and the second RubyConf Australia is in Sydney this coming February. I’m looking forward to seeing what 2014 brings – thanks to all who’ve been part of the ride thus far!

22 Jul 2013

Rewriting Thinking Sphinx: Introducing Realtime Indices

The one other feature in the rewrite of Thinking Sphinx that I wanted to highlight should most certainly be considered in beta, but it’s gradually getting to the point where it can be used reliably: real-time indices.

Real-time indices are built into Sphinx, and are indices that can be dynamically updated on the fly – and are not backed by database sources. They do have a defined structure with fields and attributes (so they’re not a NoSQL key/value store), but they remove the need for delta indices, because each record in a real-time index can be updated directly. You also get the benefit, within Thinking Sphinx, of referring to Ruby methods instead of tables and columns.

The recent 3.0.4 release of Thinking Sphinx provides support for this, but the workflow’s a little different from the SQL-backed indices:

Define your indices

Presuming a Product model defined just so:

class Product < ActiveRecord::Base
  has_many :categorisations
  has_many :categories, :through => :categorisations
end

You can put together an index like this:

ThinkingSphinx::Index.define :product, :with => :real_time do
  indexes name

  has category_ids, :type => :integer, :multi => true
end

You can see here that it’s very similar to a SQL-backed index, but we’re referring to Ruby methods (such as category_ids, perhaps auto-generated by associations), and we’re specifying the attribute type explicitly – as we can’t be sure what a method returns.

Add Callbacks

Every time a record is updated in your database, you want those changes to be reflected in Sphinx as well. Sometimes you may want associated models to prompt a change – hence, these callbacks aren’t added automatically.

In our example above, we’d want after_save callbacks in our Product model (of course) and also our Categorisation model – as that will impact a product’s category_ids value.

# within product.rb
after_save ThinkingSphinx::RealTime.callback_for(:product)

# within categorisation.rb
after_save ThinkingSphinx::RealTime.callback_for(:product, [:product])

The first argument is the reference to the indices involved – matching the first argument when you define your index. The second argument in the Categorisation example is the method chain required to get to the objects involved in the index.

Generate the configuration

rake ts:configure

We’ve no need for the old ts:index task as that’s preloading index data via the database.

Start Sphinx

rake ts:start

All of our interactions with Sphinx are through the daemon – and so, Sphinx must be running before we can add records into the indices.

Populate the initial data

rake ts:generate

This will go through each index, load each record for the appropriate model and insert (or update, if it exists) the data for that into the real-time indices. If you’ve got a tonne of records or complex index definitions, then this could take a while.

Everything at once

rake ts:regenerate

The regenerate task will stop Sphinx (if it’s running), clear out all Sphinx index files, generate the configuration file again, start Sphinx, and then repopulate all the data.

Essentially, this is the rake task you want to call when you’ve changed the structure of your Sphinx indices.

Handle with care

Once you have everything in place, then searching will work, and as your models are updated, your indices will be too. In theory, it should be pretty smooth sailing indeed!

Of course, there could be glitches, and so if you spot inconsistencies between your database and Sphinx, consider where you may be making changes to your database without firing the after_save callback. You can run the ts:generate task at any point to update your Sphinx dataset.

I don’t yet have Flying Sphinx providing full support for real-time indices – it should work fine, but there’s not yet any automated backup (whereas SQL-backed indices are backed up every time you process the index files). This means if a server fails it’d be up to you to restore your index files. It’s on my list of things to do!

What’s next?

I’m keen to provide hooks to allow the callbacks to fire off background jobs instead of having that Sphinx update happen as part of the main process – though it’s certainly not as bad as the default delta approach (you’re not shelling out to another process, and you’re only updating a single record).

I’m starting to play with this in my own apps, and am keen to see it used in production. It is a different way of using Sphinx, but it’s certainly one worth considering. If you give it a spin, let me know how you go!

11 Jul 2013

Gutentag: Simple Rails Tagging

The last thing the Rails ecosystem needs is another tagging gem. But I went and built one anyway… it’s called Gutentag, and perhaps worth it for the name alone (something I get an inordinate amount of happiness from).

My reasons for building Gutentag are as follows:

A tutorial example

I ran a workshop at RailsConf earlier this year (as a pair to this talk), and wanted a simple example that people could work through to have the experience of building a gem. Almost every Rails app seems to need tags, so I felt this was a great starting point – and a great way to show off how simple it is to write and publish a gem.

You can work through the tutorial yourself if you like – though keep in mind the focus is more on the process of building a gem rather than the implementation of this gem in particular.

A cleaner code example

Many gems aren’t good object-oriented citizens – and this includes most of the ones I’ve written. They’re built with long, complex classes and modules, are structured in ways that Demeter does not like, and aren’t particularly easy to extend cleanly.

I have the beginnings of a talk on how to structure gems (especially those that work with Rails) sensibly – but I’ve not yet had the opportunity to present this at any conferences.

One point that will definitely feature if I ever do get that opportunity: more and more, I like to avoid including modules into ActiveRecord and other parts of Rails – and if you peruse the source you’ll see I’m only adding the absolute minimum to ActiveRecord::Base, plus I’ve pulled out the logic around the tag names collection and resulting persistence into a separate, simple class.

I got a nice little buzz when I had Code Climate scan the source and give it an A rating without me needing to change anything.

Test-driven design

I started with tests, and wrote them in a way that made it clear how I expected the gem to behave – and then wrote the implementation to match. If you’re particularly keen, you can scan through each commit to see how the gem has evolved – I tried to keep them small and focused.

Or, just have a read through of the acceptance test files – there’s only two, so it won’t take you long.


There are a large number of other tagging gems out there – and if you’re using one of those already, there’s no incentive at all to switch. I’ve used acts-as-taggable-on many times without complaints.

But Gutentag certainly works – the README outlines how you can use it – and at least people might smile every time they add it to a Gemfile. At the end of the day, if it’s just used as an example of a simple gem done well, I’ll consider this a job well done.

09 Jul 2013

Rewriting Thinking Sphinx: Middleware, Glazes and Panes

Time to discuss more changes to Thinking Sphinx with the v3 releases – this time, the much improved extensibility.

There have been a huge number of contributors to Thinking Sphinx over the years, and each of their commits is greatly appreciated. Sometimes, though, the pull requests that come in cover extreme edge cases, or features that are perhaps only useful to the committer. But running your own hacked version of Thinking Sphinx is not cool, and then you’ve got to keep an especially close eye on new commits, and merge them in manually, and… blergh.

So instead, we now have middleware, glazes and panes.


The middleware pattern is pretty well-established in the Ruby community, thanks to Rack – but it’s started to crop up in other libraries too (such as Mike Perham’s excellent Sidekiq).

In Thinking Sphinx, middleware classes are used to process search requests. The default set of middleware are as follows:

  • ThinkingSphinx::Middlewares::StaleIdFilter adds an attribute filter to hide search results that are known to not match any ActiveRecord objects.
  • ThinkingSphinx::Middlewares::SphinxQL generates the SphinxQL query to send to Sphinx.
  • ThinkingSphinx::Middlewares::Geographer modifies the SphinxQL query with geographic co-ordinates if they’re provided via the :geo option.
  • ThinkingSphinx::Middlewares::Inquirer sends the constructed SphinxQL query through to Sphinx itself.
  • ThinkingSphinx::Middlewares::UTF8 ensures all string values returned by Sphinx are encoded as UTF-8.
  • ThinkingSphinx::Middlewares::ActiveRecordTranslator translates Sphinx results into their corresponding ActiveRecord objects.
  • ThinkingSphinx::Middlewares::StaleIdChecker notes any Sphinx results that don’t have corresponding ActiveRecord objects, and retries the search if they exist.
  • ThinkingSphinx::Middlewares::Glazier wraps each search result in a glaze if there’s any panes set for the search (read below for an explanation on this).

Each middleware does its thing, and then passes control through to the next one in the chain. If you want to create your own middleware, your class must respond to two instance methods: initialize(app) and call(contexts).

If you subclass from ThinkingSphinx::Middlewares::Middleware you’ll get the first for free. contexts is an array of search context objects, which provide access to each search object along with the raw search results and other pieces of information to note between middleware objects. Middleware are written to handle multiple search requests, hence why contexts is an array.

If you’re looking for inspiration on how to write your own middleware, have a look through the source – and here’s an extra example I put together when considering approaches to multi-tenancy.

Glazes and Panes

Sometimes it’s useful to have pieces of metadata associated with each search result – and it could be argued the cleanest way to do this is to attach methods directly to each ActiveRecord instance that’s returned by the search.

But inserting methods on objects on the fly is, let’s face it, pretty damn ugly. Yet that’s precisely what older versions of Thinking Sphinx do. I’ve never liked it, but I’d never spent the time to restructure things to work around it… until now.

There are now a few panes available to provide these helper methods:

  • ThinkingSphinx::Panes::AttributesPane provides a method called sphinx_attributes which is a hash of the raw Sphinx attribute values. This is useful when your Sphinx attributes hold complex values that you don’t want to re-calculate.
  • ThinkingSphinx::Panes::DistancePane provides the identical distance and geodist methods returning the calculated distance between lat/lng geographical points (and is added automatically if the :geo option is present).
  • ThinkingSphinx::Panes::ExcerptsPane provides access to an excerpts method, to which you can chain a call to any method on the search result – and get an excerpted value returned.
  • ThinkingSphinx::Panes::WeightPane provides the weight method, returning Sphinx’s calculated relevance score.

None of these panes are loaded by default – and so the search results you’ll get are the actual ActiveRecord objects. You can add specific panes like so:

# For every search
ThinkingSphinx::Configuration::Defaults::PANES << ThinkingSphinx::Panes::WeightPane

# Or for specific searches:
search ='pancakes')
search.context[:panes] << ThinkingSphinx::Panes::WeightPane

When you do add at least one pane into the mix, though, the search result gets wrapped in a glaze object. These glaze objects direct any methods called upon themselves with the following logic:

  • If the search result responds to the given method, send it to that search result.
  • Else if any pane responds to the given method, send it to the pane.
  • Otherwise, send it to the search result anyway.

This means that your ActiveRecord instances take priority – so pane methods don’t overwrite your own code. It also allows for method_missing metaprogramming in your models (and ActiveRecord itself) – but otherwise, you can get access to the useful metadata Sphinx can provide, without monkeypatching objects on the fly.

If you’re writing your own panes, the only requirement is that the initializer must accept three arguments: the search context, the underlying search result object, and a hash of the raw values from Sphinx. Again, the source code for the panes is not overly complex – so have a read through that for inspiration.

I’m always keen to hear about any middleware or panes other people write – so please, if you do make use of either of these approaches, let me know!

24 Jun 2013

Rewriting Thinking Sphinx: Loading Only When Necessary

I’ve obviously been neglecting this blog – even a complete rewrite of Thinking Sphinx hasn’t garnered a mention yet! Time to remedy this…

There’s plenty to focus on with Thinking Sphinx v3 (released just under six months ago), because a ton has changed – but that’s pretty well covered in the documentation. I’m going to cover one thing per post instead.

First up: index definitions are now located in their own files, located in the app/indices directory. Given they can get quite complex, I think they warrant their own files – and besides, let’s keep our concerns separate, instead of stuffing everything into the models (yes, I’m a firm proponent of skinny everything, not just skinny controllers).

So, instead of this within your model:

class User < ActiveRecord::Base
  # ...

  define_index do
    indexes first_name, last_name, country

    # ...
  end
end

You now create a file called user_index.rb (or whatever, really, as long it ends with .rb) and place it in app/indices:

ThinkingSphinx::Index.define :user, :with => :active_record do
  indexes first_name, last_name, country
end

You’ll note the model is now specified with a symbol, and we’re providing an index type via the :with option. At the moment, the latter is always :active_record unless you’re using Sphinx’s real-time indices (which are definitely beta-status in Thinking Sphinx). The model name as a symbol, however, represents one of the biggest gains from this shift.

In previous versions of Thinking Sphinx, to discover all of the index definitions that existed within your app, the gem would load all of your models. Initial versions did this every time your app initialised, though that later changed so that the models and index definitions were loaded only when necessary.

Except, it was necessary if a search was being run, or even just if a model was modified (because updates to Sphinx’s index files could be required) – which is the majority of Rails requests, really. And yes, this information was cached between requests like the rest of Rails, except – like the rest of Rails – in your development environment.

Loading all your models is quite a speed hit – so this could be pretty painful for applications with a large number of models.

There were further workarounds added (such as the indexed_models option in config/sphinx.yml), but it became clear that this approach was far from ideal. And of course, there’s separation of concerns and skinny models and so on.

This gives some hint as to why we don’t provide the model class itself when defining indexes – because we don’t want to load our models until we absolutely have to, but we do get a reference to them. The index definition logic is provided in a block, which means it’ll only be evaluated when necessary as well.

This doesn’t get around the issue of knowing when changes to model instances occur though, so this got dealt with in two ways. Firstly: delta index settings are now an argument at the top of the index, not within the logic block:

ThinkingSphinx::Index.define(
  :user, :with => :active_record, :delta => true
) do
  # ...
end

And attribute updating is no longer part of the default set of features.

This means Thinking Sphinx can now know whether deltas are involved before evaluating the index definition logic block – and thus, the callbacks are lighter and smarter.

The end result is thus:

  • Thinking Sphinx only loads the index definitions when they’re needed;
  • They, in turn, only load the models and definition logic when required;
  • Each index now gets its own file;
  • Your models stay cleaner; and
  • Request times are faster.

Overall, a vast improvement.

22 Jan 2012

Backing up with Backup

I’ve found myself singing the praises of Michael van Rooijen’s backup gem twice in quick succession lately – and so, I just want to run through how I’m using it, and how useful I find it.

For those not familiar with it, Backup provides a neat DSL for creating backup scripts with archiving files and databases through to common data stores (S3, Rackspace, SFTP, etc), with notifications via email, Campfire and others. If you want a rundown of all the options, click the link above – there’s quite a few. I’m using the gem to make sure all critical data for Flying Sphinx is stored in multiple locations – and particularly, with different providers.

The documentation’s pretty solid, so I won’t keep you long, but here are two examples. First up, here’s my script for copying an archive of essential files (including a SQLite database) off to Ninefold – with the private details changed:, "Database Backup") do
  archive :oedipus do |archive|
    archive.add '/mnt/sphinx/oedipus'
  end

  compress_with Gzip do |compression|
    compression.best = true
  end

  store_with Ninefold do |nf|
    nf.storage_token  = 'STORAGE_TOKEN'
    nf.storage_secret = 'STORAGE_SECRET'
    nf.path           = "oedipus/#{`hostname`.strip}"
    nf.keep           = 20
  end

  notify_by Mail do |mail|
    mail.on_success = true
    mail.on_failure = true

    mail.from      = 'support-at-flying-sphinx'        = 'pat-at-freelancing-gods'
    mail.address   = ''
    mail.user_name = 'SMTP_USER_NAME'
    mail.password  = 'SMTP_PASSWORD'
  end
end
For the above, I added Ninefold support to Backup, and Michael was kind enough to merge my commits in.

For my next script, though, I’m syncing directories to both S3 (in Singapore) and Rackspace (in the UK). The current releases of Backup don’t support syncing to Rackspace – but I ended up taking inspiration from fellow Melburnian Ryan Allen’s Sir Sync-a-Lot and rewrote the S3 support with his bulk MD5 approach. The code was simple enough – thanks to Wesley Beary’s excellent Fog – so I adapted the code to handle Rackspace as well.

However, I’ve not written tests for this, and my code does not yet support mirroring – so, I’ve not yet provided a patch back to Michael. If you want to use my code, feel free – but I will get to submitting a proper patch soon.

All that said, here’s the script:, "Sphinx Backup") do
  sync_with S3 do |s3|
    s3.access_key_id      = 'ACCESS_KEY'
    s3.secret_access_key  = 'SECRET_KEY'
    s3.bucket             = "fs-#{`hostname`.strip}-sync"
    s3.region             = 'ap-southeast-1'
    s3.path               = ''
    s3.mirror             = false

    s3.directories do |directory|
      directory.add '/mnt/sphinx/oedipus'
      directory.add '/mnt/sphinx/flying-sphinx'
    end
  end

  sync_with Rackspace do |rs|
    rs.api_key  = 'API_KEY'
    rs.username = 'USER_NAME'
    rs.auth_url = ''
    rs.bucket   = "fs-#{`hostname`.strip}-sync"
    rs.path     = ''
    rs.mirror   = false

    rs.directories do |directory|
      directory.add '/mnt/sphinx/oedipus'
      directory.add '/mnt/sphinx/flying-sphinx'
    end
  end

  notify_by Mail do |mail|
    mail.on_success = true
    mail.on_failure = true

    mail.from      = 'support-at-flying-sphinx'        = 'pat-at-freelancing-gods'
    mail.address   = ''
    mail.user_name = 'SMTP_USER_NAME'
    mail.password  = 'SMTP_PASSWORD'
  end
end
I’ve been running the first script for several months, and the second for close to a month – both via cron – and had no problems at all. If you’ve not got a solid backup system in place because you’re finding it complex and frustrating, you’ve now got one less excuse.
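For the record, wiring these scripts up to cron is just a matter of invoking Backup’s CLI from a crontab; something like the following sketch (the paths, schedule, and trigger names here are illustrative – adjust them to match your own setup):

```shell
# m  h  dom mon dow  command
0 3 * * * cd /home/deploy && backup perform --trigger oedipus
0 4 * * * cd /home/deploy && backup perform --trigger sphinx_backup
```
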

21 Nov 2011

Cut and Polish: A Guide to Crafting Gems

As I mentioned here earlier in the year, a few weeks ago I had the pleasure of visiting Ukraine and speaking at the RubyC conference in Kyiv. My talk was a run-through of how to build gems, some of the tools that can help, and a few best practices.

The video of my session is now online, if you’re interested:

There’s also the slides with notes, if you prefer that.

One of the questions asked towards the end was about publishing private gems, which I’d not dealt with before. However, Darcy was quick to tweet that Gemfury looks like a promising solution for those scenarios.

Please let me know if you think I’ve missed any critical elements of building and publishing gems – or if you have any further questions.

And many thanks to the RubyC team for putting together the conference and inviting me to speak – I had a great time!

24 Sep 2011

Versioning your APIs

As I developed Flying Sphinx, I found myself both writing and consuming several APIs: from Heroku to Flying Sphinx, Flying Sphinx to Heroku, the flying-sphinx gem in apps to Flying Sphinx, Flying Sphinx to Sphinx servers, and Sphinx servers to Flying Sphinx.

None of that was particularly painful – but when Josh Kalderimis was improving the flying-sphinx gem, he noted that the API it interacts with wasn’t that great. Namely, it was inconsistent with what it returned (sometimes text status messages, sometimes JSON), it was sending authentication credentials as GET/POST parameters instead of in a header, and it wasn’t versioned.

I was thinking that given I control pretty much every aspect of the service, it didn’t matter if the APIs had versions or not. However, as Josh and I worked through improvements, it became clear that the apps using older versions of the flying-sphinx gem were going to have one expectation, and newer versions another. Versioning suddenly became a much more attractive idea.

The next point of discussion was how clients should specify which version they are after. Most APIs put this in the path – here’s Twitter’s as an example, specifying version 1:

However, I’d recently been working with Scalarium’s API, and theirs put the version information in a header (again, version 1):

Accept: application/vnd.scalarium-v1+json

Some research turned up a discussion on Hacker News about best practices for APIs – and it’s argued there that using headers keeps the paths focused on just the resource, which is a more RESTful approach. It also makes for cleaner URLs, which I like as well.

How to implement this in a Rails application though? My routing ended up looking something like this:

namespace :api do
  constraints do
    scope :module => :v1 do
      resource :app do
        resources :indices
      end
    end
  end

  constraints do
    scope :module => :v2 do
      resource :app do
        resources :indices
      end
    end
  end
end
The ApiVersion class (which I have saved to app/lib/api_version.rb) is where we check the version header and route accordingly:

class ApiVersion
  def initialize(version)
    @version = version
  end

  def matches?(request)
    versioned_accept_header?(request) || version_one?(request)
  end

  private

  def versioned_accept_header?(request)
    accept = request.headers['Accept']
    accept && accept[/application\/vnd\.flying-sphinx-v#{@version}\+json/]
  end

  def unversioned_accept_header?(request)
    accept = request.headers['Accept']
    accept.blank? || accept[/application\/vnd\.flying-sphinx/].nil?
  end

  def version_one?(request)
    @version == 1 && unversioned_accept_header?(request)
  end
end
You’ll see that I default to version 1 if no header is supplied. This is for the older versions of the flying-sphinx gem – but if I was starting afresh, I may default to the latest version instead.

All of this gives us URLs that look something like this:

My SSL certificate is locked to – if it was wildcarded, then I’d be using a subdomain ‘api’ instead, and clean those URLs up even further.

The controllers are namespaced according to both the path and the version – so we end up with names like Api::V2::AppsController. It does mean you get a new set of controllers for each version, but I’m okay with that (though would welcome suggestions for other approaches).

Authentication is managed by namespaced application controllers – here’s an example for version 2, where I’m using headers:

class Api::V2::ApplicationController < ApplicationController
  skip_before_filter :verify_authenticity_token
  before_filter :check_api_params

  expose(:app) { App.find_by_identifier identifier }

  private

  def check_api_params
    # ensure the response returns with the same header value
    headers['X-Flying-Sphinx-Token'] = request.headers['X-Flying-Sphinx-Token']
    render_json_with_code 403 unless app && app.api_key == api_key
  end

  def api_token
    request.headers['X-Flying-Sphinx-Token']
  end

  def identifier
    api_token && api_token.split(':').first
  end

  def api_key
    api_token && api_token.split(':').last
  end
end
Authentication, in case it’s not clear, is done by a header named X-Flying-Sphinx-Token with a value of the account’s identifier and api_key concatenated together, separated by a colon.
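So, with illustrative values (these aren’t real credentials), the round trip looks like this:

```ruby
identifier = 'app-identifier'
api_key    = 'secret-api-key'

# The client builds the X-Flying-Sphinx-Token header value...
token = [identifier, api_key].join(':')
# token => "app-identifier:secret-api-key"

# ...and the server (per the controller methods above) pulls it apart again.
raise 'mismatch' unless token.split(':').first == identifier
raise 'mismatch' unless token.split(':').last  == api_key
```

One thing worth noting about this scheme: `split(':').last` only behaves if the key itself never contains a colon – `split(':', 2).last` would be the safer choice otherwise.
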

(If you’re not familiar with the expose method, that’s from the excellent decent_exposure gem.)

So where does that leave us? Well, we have an elegantly namespaced API, and both versions and authentication are managed in headers instead of paths and parameters. I also made sure version 2 responses all return JSON. Josh is happy and all versions of the flying-sphinx gem are happy.

The one caveat with all of this? While it works for me, and it suits Flying Sphinx, it’s not the One True Way for API development. We had a great discussion at the most recent Rails Camp up at Lake Ainsworth about different approaches – at the end of the day, it really comes down to the complexity of your API and who it will be used by.

10 Sep 2011

Speaking at RubyC

Just a quick note for anyone in or near Eastern Europe – I’ll be heading over to Kiev for RubyC in November. I’m going to be speaking there about how to build gems and the best practices when doing so.


So, if that interests you (or you’d just like to catch up or hear some of the other speakers talk about interesting Ruby-related topics), then hopefully I’ll see you there!

02 Sep 2011

Combustion - Better Rails Engine Testing

I spent a good part of last month writing my first Rails engine – although it’s not yet released and for a client, so I won’t talk about that too much here.

Very quickly in the development process, I was looking around on how to test Rails engines. It seemed that, beyond some basic unit tests, having a full Rails application within your test or spec directory was the accepted approach for integration testing.

That felt kludgy and bloated to me, so I decided to try something a little different.

The end goal was full stack testing in a clear and manageable fashion – writing specs within my spec directory, not a bundled Rails app’s spec directory. Capybara’s DSL would be nice as well.

This, of course, meant having a Rails application to test through – but it turns out you can get away without the vast majority of files that Rails generates for you. Indeed, the one file a Rails app expects is config/database.yml – and that’s only if you have ActiveRecord in play.

Enter Combustion – my minimal Rails app-as-a-gem for testing engines, with smart defaults for your standard Rails settings.

Setting It Up

A basic setup is as follows:

  • Add the gem to your gemspec or Gemfile.
  • Run the generator in your engine’s directory to get a small Rails app stub created: combust (or bundle exec combust if you’re referencing the git repository instead).
  • Add Combustion.initialize! to your spec/spec_helper.rb (currently only RSpec is supported, but shouldn’t be hard to patch for TestUnit et al).

Here’s a sample spec_helper, mixing in Capybara as well:

require 'rubygems'
require 'bundler'

Bundler.require :default, :development

require 'capybara/rspec'

Combustion.initialize!

require 'rspec/rails'
require 'capybara/rails'

RSpec.configure do |config|
  config.use_transactional_fixtures = true
end

Putting It To Work

Firstly, you’ll want to make sure you’re using your engine within the test Rails application. The generator has likely added the hooks we need for this. If you’re adding routes, then edit spec/internal/config/routes.rb. If you’re dealing with models, make sure you add the tables to spec/internal/db/schema.rb. The README covers this in a bit more detail.

And then, get stuck into your specs. Here’s a really simple example:

# spec/controllers/users_controller_spec.rb
require 'spec_helper'

describe UsersController do
  describe '#new' do
    it "runs successfully" do
      get :new

      response.should be_success
    end
  end
end

Or, using Capybara for integration:

# spec/acceptance/visitors_can_sign_up_spec.rb
require 'spec_helper'

describe 'authentication process' do
  it 'allows a visitor to sign up' do
    visit '/'

    click_link 'Sign Up'
    fill_in 'Name',     :with => 'Pat Allan'
    fill_in 'Email',    :with => ''
    fill_in 'Password', :with => 'chunkybacon'
    click_button 'Sign Up'

    page.should have_content('Sign Out')
  end
end

And that’s really the core of it. Write the specs you need to test your engine within the context of a full Rails application. If you need models, controllers or views in the internal application to fully test out your engine, then add them to the appropriate location within spec/internal – but only add what’s necessary.

Rack It Up

Oh, and one of my favourite little helpers is this: Combustion’s generator adds a file to your engine, which means you can fire up your test application in the browser – just run rackup and visit http://localhost:9292.


As already mentioned, Combustion is built with RSpec in mind – but I will happily accept patches for TestUnit as well. Same for Cucumber – should work in theory, but I’m yet to try it.

It’s also written for Rails 3.1 – it may work with Rails 3.0 with some patches, but I very much doubt it’ll play nicely with anything before that. Still, feel free to investigate.

And it’s possible that this could be useful for integration testing for libraries that aren’t engines. If you want to try that, I’d love to hear how it goes.

Final Notes

So, where do we stand?

  • You can test your engine within a full Rails stack, without a full Rails app.
  • You only add what you need to your Rails app stub (that lives in spec/internal).
  • Your testing code is DRYer and easier to maintain.
  • You can use standard RSpec and Capybara helpers for integration testing.
  • You can view your test application via Rack.

I’m not the first to come up with this idea – after I had finished Combustion, it was pointed out to me that Kaminari’s test suite does a similar thing (just not extracted out into a separate library). It wouldn’t surprise me if others have done the same – but in my searching, I kept coming across well-known libraries with full Rails apps in their test or spec directories.

If you think Combustion could suit your engine, please give it a spin – I’d love to have others kick the tires and ensure it works in a wider set of situations. Patches and feedback are most definitely welcome.

30 May 2011

Searching with Sphinx on Heroku

Just over two weeks ago, I released Flying Sphinx – which provides Sphinx search capability for Heroku apps. I’ll talk more about how I built it and the challenges faced at some point, but right now I just want to introduce the service and how you may go about using it.

Why Sphinx?

Perhaps you’re not familiar with Sphinx and how it can be useful. For those who are new to Sphinx, it’s a full-text search tool – think of it as your own personal Google for your website. It comes with two main moving parts – the indexer tool for interpreting and storing your search data (indices), and the searchd tool, which runs as a daemon accepting search requests, and returns the most appropriate matches for a given search query.

In most situations, Sphinx is very fast at indexing your data, and connects directly to MySQL and PostgreSQL databases – so it’s quite a good fit for a lot of Rails applications.

Using Sphinx in Rails

I’ve written a gem, Thinking Sphinx, which integrates Sphinx neatly with ActiveRecord. It allows you to define indices in your models, and then use rake tasks to handle the processing of these indices, along with managing the searchd daemon.

If you want to install Sphinx, have a read through of this guide from the Thinking Sphinx documentation – in most cases it should be reasonably painless.

Installing Thinking Sphinx in a Rails 3 application is quite simple – just add the gem to your Gemfile:

gem 'thinking-sphinx', '2.0.5'

For older versions of Rails, the Thinking Sphinx docs have more details.

I’m not going to get too caught up in the details of how to structure indices – this is also covered within the Thinking Sphinx documentation – but here’s a quick example, for a user account:

class User < ActiveRecord::Base
  # ...
  define_index do
    indexes name, :sortable => true
    indexes location
    has admin, created_at
  end
  # ...
end

The indexes method defines fields – which are the textual data that people can search for. In this case, we’ve got the user names and locations covered. The has method is for attributes – which are used for filtering and sorting (fields can’t be used for sorting by default). The distinction of fields and attributes is quite important – make sure you understand the difference.

Now that we have our index defined, we can have Sphinx grab the required data from our database, which is done via a rake task:

rake ts:index

What Sphinx does here is grab all the required data from the database, interpret it, and store it in a custom format. This allows Sphinx to be smarter about ranking search results and matching words within your fields.

Once that’s done, we next start up the Sphinx daemon:

rake ts:start

And now we can search! Either in script/console or in an appropriate action, just use the search method on your model:

User.search 'pat'

This returns the first page of users that match your search query. Sphinx always paginates results – though you can set the page size to be quite large if you wish – and Thinking Sphinx search results can be used by both WillPaginate and Kaminari pagination view helpers.

Instead of sorting by the most relevant matches, here are examples where we sort by name and created_at:

User.search 'pat', :order => :name
User.search 'pat', :order => :created_at

And if we only want admin users returned in our search, we can filter on the admin attribute:

User.search 'pat', :with => {:admin => true}

There’s many more options for search calls – the documentation (yet again) covers most of them quite well.

One more thing to remember – if you change your index structures, or add/remove index definitions, then you should restart and reindex Sphinx. This can be done in a single rake task:

rake ts:rebuild

If you just want the latest data to be processed into your indices, there’s no need to restart Sphinx – a normal ts:index call is fine.

Using Thinking Sphinx with Heroku

Now that we’ve got a basic search setup working quite nicely, let’s get it sorted out on Heroku as well. Firstly, let’s add the flying-sphinx gem to our Gemfile (below our thinking-sphinx reference):

gem 'flying-sphinx', '0.5.0'

Get that change (along with your indexed model setup) deployed to Heroku, then inform Heroku you’d like to use the Flying Sphinx add-on (the entry level plan costs $12 USD per month):

heroku addons:add flying_sphinx:wooden

And finally, let’s get our data on the site indexed and the daemon running:

heroku rake fs:index
heroku rake fs:start

Note the fs prefix instead of the ts prefix in those rake calls – the normal Thinking Sphinx tasks are only useful on your local machine (or on servers that aren’t Heroku).

When you run those rake tasks, you will probably see the following output:

Sphinx cannot be found on your system. You may need to configure the
following settings in your config/sphinx.yml file:
  * bin_path
  * searchd_binary_name
  * indexer_binary_name

For more information, read the documentation:

This is because Thinking Sphinx doesn’t have access to Sphinx locally, and isn’t sure which version of Sphinx is available. To have these warnings silenced, you should add a config/sphinx.yml file to your project, with the version set for the production environment:

production:
  version: 1.10-beta

Push that change up to Heroku, and you won’t see the warnings again.

For the more curious of you: the Sphinx daemon is located on a Flying Sphinx server, also located within the Amazon cloud (just like Heroku) to keep things fast and cheap. This is all managed by the flying-sphinx gem, though – you don’t need to worry about IP addresses or port numbers.

Also: the same rules apply with Flying Sphinx for modifying index structures or adding/removing index definitions – make sure you restart Sphinx so it’s aware of the changes:

heroku rake fs:rebuild

The final thing to note is that you’ll want the data in your Sphinx indices updated regularly – perhaps every day or every hour. This is best done on Heroku via their Cron add-on – since that’s just a rake task as well.

If you don’t have a cron task already, the following (perhaps in lib/tasks/cron.rake) will do the job:

desc 'Have cron index the Sphinx search indices'
task :cron => 'fs:index'

Otherwise, maybe something more like the following suits:

desc 'Have cron index the Sphinx search indices'
task :cron => 'fs:index' do
  # Other things to do when Cron comes calling
end

If you’d like your search data to have your latest changes, then I recommend you read up on delta indexing – both for Thinking Sphinx and for Flying Sphinx.

Further Sources

Keep in mind this is just an introduction – the documentation for Thinking Sphinx is pretty good, and Flying Sphinx is improving regularly. There’s also the Thinking Sphinx google group and the Flying Sphinx support site if you have questions about either, along with numerous blog posts (though the older they are, the more likely they’ll be out of date). And finally – I’m always happy to answer questions about this, so don’t hesitate to get in touch.

12 Mar 2010

Using Thinking Sphinx with Cucumber

While I highly recommend you stub out your search requests in controller unit tests/specs, I also recommend you give your full stack a work-out when running search scenarios in Cucumber.

This has gotten a whole lot easier with the ThinkingSphinx::Test class and the integrated Cucumber support, but it’s still not perfect, mainly because generally everyone (correctly) keeps their database changes within a transaction. Sphinx talks to your database outside Rails’ context, and so can’t see anything, unless you turn these transactions off.

It’s not hard to turn transactions off in your features/support/env.rb file:

Cucumber::Rails::World.use_transactional_fixtures = false

But this makes Cucumber tests far more fragile: either no scenario can conflict with any other, or the database needs to be cleaned before and after each scenario is run.

Pretty soon after I added the initial documentation for this, a few expert Cucumber users pointed out that you can flag certain feature files to be run without transactional fixtures, and the rest use the default:

@no-txn
Feature: Searching
  In order to find things as easily as possible
  As a user
  I want to search across all data on the site

This is a good step in the right direction, but it’s not perfect – you’ll still need to clean up the database. Writing steps to do that is easy enough:

Given /^a clean slate$/ do
  Object.subclasses_of(ActiveRecord::Base).each do |model|
    next unless model.table_exists?
    model.connection.execute "TRUNCATE TABLE `#{model.table_name}`"
  end
end

(You can also use Database Cleaner, as noted by Thilo in the comments).

But adding that to the start and end of every single scenario isn’t particularly DRY.

Thankfully, there’s Before and After hooks in Cucumber, and they can be limited to scenarios marked with certain tags. Now we’re getting somewhere!

Before('@no-txn') do
  Given 'a clean slate'
end

After('@no-txn') do
  Given 'a clean slate'
end

And here’s a bonus step, to make indexing data a little easier:

Given /^the (\w+) indexes are processed$/ do |model|
  model = model.titleize.gsub(/\s/, '').constantize
  ThinkingSphinx::Test.index *model.sphinx_index_names
end
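The middle line of that step is doing a string-to-constant conversion via ActiveSupport; in plain Ruby the equivalent transformation looks roughly like this (the model names here are hypothetical):

```ruby
# "blog_post" (captured from the step text) => "Blog Post" => "BlogPost"
def step_capture_to_class_name(captured)
  captured.split(/[\s_]/).map(&:capitalize).join
end

p step_capture_to_class_name('blog_post') # => "BlogPost"
p step_capture_to_class_name('user')      # => "User"

# constantize is then essentially Object.const_get:
# Object.const_get('BlogPost') # => BlogPost (when such a class is defined)
```
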

So, how do things look now? Well, you can write your features normally – just flag them with @no-txn, and your database will be cleaned up both before and after each scenario.

My current preferred approach is adding a file named features/support/sphinx.rb, containing this code:

require 'cucumber/thinking_sphinx/external_world'

Before('@no-txn') do
  Given 'a clean slate'
end

After('@no-txn') do
  Given 'a clean slate'
end

And I put the step definitions in either features/step_definitions/common_steps.rb or features/step_definitions/search_steps.rb.

So, now you have no excuse to not use Thinking Sphinx with your Cucumber suite. Get testing!

03 Jan 2010

A Month in the Life of Thinking Sphinx

It’s just over two months since I asked for – and received – support from the Ruby community to work on Thinking Sphinx for a month. A review of this would be a good idea, hey?

I’m going to write a separate blog post about how it all worked out, but here’s a long overview of the new features.

Internal Cucumber Cleanup

This one’s purely internal, but it’s worth knowing about.

Thinking Sphinx has a growing set of Cucumber features to test behaviour with a live Sphinx daemon. This has made the code far more reliable, but there was a lot of hackery to get it all working. I’ve cleaned this up considerably, and it is now re-usable for other gems that extend Thinking Sphinx.

External Delta Gems

Of course, it was my own re-use that was driving that need: I wanted to use it in gems for the delayed job and datetime delta approaches.

There was a clear need for removing these two pieces of functionality from Thinking Sphinx: to keep the main library as slim as possible, and to make better use of gem dependencies, allowing people to use whichever version of delayed job they like.

So, if you’ve not upgraded in a while, it’s worth re-reading the delta page of the documentation, which covers the new setup pretty well.

Testing Helpers

Internal testing is all very well, but what’s much more useful for everyone using Thinking Sphinx is the new testing class. This provides a clean, simple interface for processing indexes and starting the Sphinx daemon.

There’s also a Cucumber world that simplifies things even further – automatically starting and stopping Sphinx when your features are run. I’ve been using this myself in a project over the last few days, and I’m figuring out a neat workflow. More details soon, but in the meantime, have a read through the documentation.

No Vendored Code for Gems

One of the uglier parts of Thinking Sphinx is the fact that it vendors Riddle and AfterCommit (and for a while, Delayed Job), two essential libraries. This is not ideal at all, particularly when gem dependencies can manage this for you.

So, Thinking Sphinx no longer vendors these libraries if you install it as a gem – instead, the riddle and after_commit gems will get brought along for the ride.

The one catch is that they’re still vendored for plugin installations. I recommend people use Thinking Sphinx as a gem, but there are valid reasons for going down the plugin path.

Default Sphinx Scopes

Thanks to some hard work by Joost Hietbrink of the Netherlands, Thinking Sphinx now supports default sphinx scopes. All I had to do was merge this in – Joost was the first contributor to Thinking Sphinx (and there’s now over 100!), so he knows the code pretty well.

In lieu of any real documentation, here’s a quick sample – define a scope normally, and then set it as the default:

class Article < ActiveRecord::Base
  # ...
  sphinx_scope(:by_date) {
    {:order => :created_at}
  }
  default_sphinx_scope :by_date
  # ...
end

Thread Safety

I’ve made some changes to improve the thread safety of Thinking Sphinx. It’s not perfect, but I think all critical areas are covered. Most of the dynamic behaviour occurs when the environment is initialised anyway.

That said, I’m anything but an expert in this area, so consider this a tentative feature.

Sphinx Select Option

Another community-sourced patch – this time from Andrei Bocan in Romania: if you’re using Sphinx 0.9.9, you can make use of its custom select statements:

Article.search 'pancakes',
  :sphinx_select => '*, @weight + karma AS superkarma'

This is much like the :select option in ActiveRecord – but make sure you use :sphinx_select (as the former gets passed through to ActiveRecord’s find calls).

Multiple Index Support

You can now have more than one index in a model. I don’t see this as being a widely needed feature, but there’s definitely times when it comes in handy (such as having one index with stemming, and one without). The one thing to note is that all indexes after the first one need explicit names:

define_index 'stemmed' do
  # ...
end

You can then specify explicit indexes when searching:

Article.search 'pancakes',
  :index => 'stemmed_core'

Article.search 'pancakes',
  :index => 'article_core,stemmed_core'

Don’t forget that the default index name is the model’s name in lowercase and underscores. All indexes are prefixed with _core, and if you’ve enabled deltas, then a matching index with the _delta suffix exists as well.
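A quick sketch of that naming scheme in code – this helper is my own, for illustration, not part of Thinking Sphinx:

```ruby
# Derive the default Sphinx index names from a model's class name,
# per the convention described above.
def default_index_names(class_name, deltas_enabled = false)
  base  = class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
  names = ["#{base}_core"]
  names << "#{base}_delta" if deltas_enabled
  names
end

p default_index_names('Article')        # => ["article_core"]
p default_index_names('BlogPost', true) # => ["blog_post_core", "blog_post_delta"]
```
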

Building on from this, you can also now have indexes on STI subclasses when superclasses are already indexed.

While the commits to this feature are mine, I was reading code from a patch by Jonas von Andrian – so he’s the person to thank, not me.

Lazy Initialisation

Thinking Sphinx needs to know which models have indexes for searching and indexing – and so it would load every single model when the environment is initialised, just to figure this out. While this was necessary, it was also slow for applications with more than a handful of models… and in development mode, this hit happened on every single page load.

Now, though, Thinking Sphinx only runs this load request when you’re searching or indexing. While this doesn’t make a difference in production environments, it should make life on your workstations a little happier.

Lazy Index Definition

In a similar vein, anything within the define_index block is now evaluated when it’s needed. This means you can have it anywhere in your model files, whereas before, it had to appear after association definitions, else Thinking Sphinx would complain that they didn’t exist.
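Deferred evaluation of this kind boils down to storing the block and only calling it on demand. A toy sketch of the idea (FauxModel and its methods are made up for illustration):

```ruby
# Toy sketch of deferred block evaluation: the define_index block is
# stored when the class body runs, but only evaluated when first used.
class FauxModel
  def self.define_index(&block)
    @index_block = block
  end

  def self.index_definition
    @index_definition ||= @index_block.call
  end

  # The block can reference some_association even though it's defined
  # further down the class body -- evaluation is deferred.
  define_index { some_association }

  def self.some_association
    :comments
  end
end

puts FauxModel.index_definition # => comments
```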

This feature actually introduced a fair few bugs, but (thanks to some patience from early adopters), it now runs smoothly. And if it doesn’t, you know where to find me.

Sphinx Auto-Version detection

Over the course of the month, Thinking Sphinx and Riddle went through some changes as to how they’d be required (depending on your version of Sphinx). First, there were separate gems for 0.9.8 and 0.9.9, and then single gems with different require statements. Neither of these approaches was ideal, which Ben Schwarz clarified for me.

So I spent a day or two working on a solution, and now Thinking Sphinx will automatically detect which version you have installed. You don’t need any version numbers in your require statements.
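The detection essentially comes down to running one of Sphinx’s command-line tools and parsing the version out of its banner. A rough illustration – the banner format and regex here are assumptions, and the real Riddle code differs:

```ruby
# Rough sketch of version detection: given the banner text printed by
# Sphinx's indexer tool, pull out the version number. The banner
# format shown below is an assumed example.
def sphinx_version(banner)
  banner[/^Sphinx (\d+\.\d+\.\d+)/, 1]
end

banner = 'Sphinx 0.9.9-release (r2117)'
puts sphinx_version(banner) # => 0.9.9
```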

The one catch with this is that you currently need Sphinx installed on every machine that needs to know about it, including web servers that talk to Sphinx on a separate server. There’s an issue logged for this, and I’ll be figuring out a solution soon.

Sphinx 0.9.9

This isn’t quite a Thinking Sphinx feature, but it’s worth noting that Sphinx 0.9.9 final release is now available. If you’re upgrading (which should be painless), the one thing to note is that the default port for Sphinx has changed from 3312 to 9312.
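If you’re scripting around both versions, the port difference is easy to capture in a helper – a hypothetical convenience, not part of any library:

```ruby
# The default searchd port changed between releases: 3312 for 0.9.8,
# 9312 for 0.9.9 onwards. A hypothetical helper for scripts that deal
# with both versions.
def default_sphinx_port(version)
  version.start_with?('0.9.8') ? 3312 : 9312
end

puts default_sphinx_port('0.9.9') # => 9312
```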


If you want to grab the latest and greatest Thinking Sphinx, then version 1.3.14 is what to install. And read the documentation on upgrading!

30 Dec 2009

Wandering Freelancer

At a recent Melbourne Ruby meet, I was asked to speak about my travelling freelancer lifestyle, and the talk was recorded. I feel a little self-conscious about the topic, but perhaps you’ll find it interesting.

Massive thanks to James Healy for not only recording the talks that night, but producing the neat slides-and-video output. I’m looking forward to the Melbourne Ruby channel building up a good collection of sessions.

Also: I’ll be posting a review of my month working on Thinking Sphinx soon.

28 Oct 2009

Funding Thinking Sphinx

Update: I’ve now hit my target. If you want to donate more, I won’t turn you away, but perhaps you should send those funds to other worthy open source projects, or a local charity. A massive thank you to all who have pitched in to the pledgie, your generosity and support is amazing.

Over the past two years, Thinking Sphinx has grown massively – in lines of code, in the numbers of users, in complexity, in time required to support it. I’m regularly amazed and touched by the recommendations I see on Twitter, and the feedback I get in conversations. The fact that there’s been almost one hundred contributors is staggering.

It’s not all fun and games, though… there’s still plenty of features that can be added, and bugs to be fixed, and documentation to write. So, what I’d really like to do is spend November working close to full-time on just Thinking Sphinx. I have a long task list. All I need is a bit of financial help to cover living expenses.

I have an existing pledgie tied to the GitHub project, currently sitting on $600. If I can get another $2000, then I won’t have to worry at all about how I’m going to pay bills or rent for November. Even $1400 will make it viable for me, albeit maybe with some help from my savings.

If you or your workplace can make a donation, that would be very much appreciated. I’m happy to provide weekly updates on where things are at if people request it – but of course, watching the GitHub projects for Thinking Sphinx itself and the documentation site is the most reliable way to keep an eye on my progress.

I’m hoping to get Thinking Sphinx to a point where the documentation is by far the best place for support, and it’s only the really tricky problems (and bug reports) that end up in my inbox.

I want it to be a model Ruby library that doesn’t get in your way, is as fast as possible, and plays nicely with other libraries.

I want the testing suite to be rock-solid. I’ve been much better at writing tests first over the last six months, and using Cucumber has made the test suite so much more reliable, but there’s still some way to go.

This is not a rewrite – it’s polishing.

I’ve been toying with this idea for a while, and it’s time to have a stab at it. Hopefully you can provide some assistance to do this.

05 Oct 2009

Better Gem Publishing with Gemcutter

If you’re working with Ruby and have been paying attention to Twitter or RSS feeds, then you’ve probably heard of Gemcutter. If not, it’s the latest flavour for publishing gems, and I’m finding the simplicity of it a delight.

Its appearance is doubly useful: since GitHub’s move to Rackspace, automated gem building from projects has been disabled, perhaps never to return.

Getting Started

If you’ve not clicked the link to Gemcutter yet, let’s run down how easy it is to get it set up on your machine.

sudo gem install gemcutter
gem tumble

That’s it. Any future gem installs will look at Gemcutter’s growing library.

This doesn’t replace RubyForge or GitHub in your sources list, but it does set Gemcutter as the top priority – which is fine, as it has almost all of RubyForge’s gems ready for you anyway.


Firstly, get yourself an account, click that confirmation email link, then hunt down a gem you want to publish, and run the following command:

gem push my-awesome-gem-0.0.1.gem

If it’s your first time, you’ll be asked for your login details, and then the gem is online and ready for anyone to download it. No waiting, no forms, no pain.

When you’ve got a new version, just run that same command again, pointing to the new gem file:

gem push my-awesome-gem-0.1.0.gem

One command. No authentication prompts. Available for everyone straight away. Awesome.


If you’ve already got gems on RubyForge that you’d like to take ownership of on Gemcutter, it’s another one-step process:

gem migrate my-legacy-gem

You’ll be prompted for your RubyForge account name and password, and then Gemcutter does the rest.

Pretty easy, hey?

My Gems

Over this past weekend, I made Gemcutter the definitive source for all of my gems.

Incoming Confusion

There’s been some discussion about whether Gemcutter should replace the gem hosting facilities provided by Rubyforge. This may or may not happen, but it is confirmed that Gemcutter will be moving to a new domain soon.

Everything will still work fine via the current address, though, so don’t let that hold you back from diving in head first.


The talented Nick Quaranto has been working hard on this for a while, and it’s great to see the Ruby community embrace Gemcutter so quickly. Here’s hoping it becomes the de facto gem source for all Ruby projects.

27 Sep 2009


This morning I decided to get Nginx and Passenger set up in my local dev environment. I needed an easier way to test Thinking Sphinx in such environments, but also, I find Nginx’s configuration syntax so much easier than Apache’s.

And of course, if I’ve got these components there, it would be great to use them to serve my development versions of Rails applications, much like script/server. So I’ve got a script/nginx file that manages that as well. Sit tight, and let’s run through how to make this happen on your machine.

Be Prepared to Think

Firstly, a couple of notes on my development machine – I’m running Snow Leopard, and I compile libraries from source. No MacPorts, no custom versions of Ruby (yet). So, you may need to tweak the instructions to fit your own setup.

Installing Passenger

Before we get to Nginx, you’ll want the Passenger gem installed first.

sudo gem install passenger

You’ll also need to compile Passenger’s nginx module (keep an eye on the file path below – yours may be different):

cd /Library/Ruby/Gems/1.8/gems/passenger-2.2.5/ext/nginx
sudo rake nginx

Installing Nginx

Nginx requires the PCRE library, so that adds an extra step, but it’s nothing too complex. Jump into Terminal or your shell application of choice, create a directory to hold all the source files, and step through the following commands (initially sourced from instructions by Wincent Colaiuta):

curl -O \
tar xjvf pcre-7.9.tar.bz2
cd pcre-7.9
./configure
make
make check
sudo make install

That should be PCRE taken care of – I didn’t have any issues on my machine, hopefully it’s the same for you. Next up: Nginx itself. Grab the source:

curl -O \
tar zxvf nginx-0.7.62.tar.gz
cd nginx-0.7.62

Let’s pause for a second before we configure things.

Even though the focus is having Nginx working in a local user setting, not system-wide, I wanted the default file locations to be something approaching Unix/OS X standards, so I’ve gone a bit crazy with configuration flags. You may want to alter them to your own personal tastes:

./configure \
  --prefix=/usr/local/nginx \
  --add-module=/Library/Ruby/Gems/1.8/gems/passenger-2.2.5/ext/nginx \
  --with-http_ssl_module \
  --with-pcre \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --pid-path=/var/nginx/ \
  --lock-path=/var/nginx/nginx.lock \
  --error-log-path=/var/nginx/error.log

And with that slightly painful step out of the way, let’s compile and install:

sudo make install

And just to test that Nginx is happy, run the following command:

nginx -v

Do you see the version details? Great! (If you don’t, then review the last couple of steps – did anything go wrong? Do you have the passenger module path correct?)

Configuring for a Rails App

The penultimate section – let’s create a simple configuration file for Rails applications, which can be used by our script/nginx file. I store mine at /etc/nginx/rails.conf, but you can put yours wherever you like.

daemon off;

events {
  worker_connections  1024;
}

http {
  include /etc/nginx/mime.types;

  # Assuming path has been set to a Rails application
  access_log            log/nginx.access.log;
  client_body_temp_path tmp/nginx.client_body_temp;
  fastcgi_temp_path     tmp/nginx.fastcgi_temp;
  proxy_temp_path       tmp/nginx.proxy_temp;

  passenger_root /Library/Ruby/Gems/1.8/gems/passenger-2.2.5;
  passenger_ruby /usr/bin/ruby;

  server {
    listen      3000;
    server_name localhost;

    root              public;
    passenger_enabled on;
    rails_env         development;
  }
}

The final piece of the puzzle – the script/nginx file, for the Rails app of your choice:

#!/usr/bin/env bash
nginx -p `pwd`/ -c /etc/nginx/rails.conf \
  -g "error_log `pwd`/log/nginx.error.log; pid `pwd`/log/;";

Don’t forget to make it executable:

chmod +x script/nginx

If you run the script right now, you’ll see a warning that Nginx can’t write to the global error log, but that’s okay. Even with that message, it uses a local error log. I’ve granted full access to the global log just to avoid the message, but if you know a Better Way, I’d love to hear it.

sudo chmod 666 /var/nginx/error.log

Head on over to localhost:3000 – and, after Passenger’s warmed up, your Rails app should load. Success!

Known Limitations

  • The environment is hard-coded to development. If this is annoying, the easiest way around it is to create multiple versions of rails.conf, one per environment, and then use the appropriate one in your script/nginx file.
  • You can’t specify a custom port either. Patches welcome.
  • You won’t see the log output. Either tail log/development.log when necessary, or suggest a patch for script/nginx. I’d prefer the latter.
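On the first limitation, one possible starting point is a variant of script/nginx that picks a per-environment configuration file. This is a hypothetical sketch – it assumes you’ve created files like /etc/nginx/rails.development.conf and /etc/nginx/rails.production.conf yourself:

```shell
#!/usr/bin/env bash
# Hypothetical variant of script/nginx: choose a configuration file
# per Rails environment, defaulting to development.
conf_for_env() {
  local environment="${1:-development}"
  echo "/etc/nginx/rails.${environment}.conf"
}

# Would then be wired into the nginx invocation as:
#   nginx -p "$(pwd)/" -c "$(conf_for_env "$1")" ...
conf_for_env "${1:-}"
```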

Beyond that, it should work smoothly. If I’m wrong, that’s what the comments form is for.

Also, you can find all of my config files, as well as other details of how I’ve set up my machine since installing Snow Leopard, on

14 Jul 2009

Rails Camps - Coming to a Country Near You

This weekend, there’s going to be a Rails Camp. In October, there’s going to be a Rails Camp. Then in November, there’s going to be a Rails Camp. That in itself is pretty freaking cool. What’s even cooler is that they’re in Maine, England and Australia respectively.


If you’re not quite sure what Rails Camps are – they’re unconference style events, held away from cities, generally without internet, on a weekend from Friday to Monday. The venues are usually scout halls or similar, so the name is slightly inaccurate – most people don’t bring tents, but sleep in dorm rooms instead.

Getting Down to Business

Also, they are events for Rubyists of all levels of experience – and not just focused on Rails either. Anything related to Ruby and development in general is a welcome topic for discussion.

Communal Hacking

The weekends are made up of plenty of hacking, socialising, talks, and partying. Alcohol and guitar hero usually feature. A ton of fun ensues.

Making Pizzas

Rails Camp New England

A quick rundown in chronological order: first up, from the 17th to 20th of July, is Rails Camp New England. This will (as far as I know) be the first Rails Camp in North America. We’ll be up in the middle of Maine, at the MountainView House (a bit different from most Rails Camp venues) in Bryant Pond.

Unfortunately, if you want to come to this camp, we’re all sold out. Let me know anyway, just in case someone drops out (although it is late notice).

Rails Camp UK 2

Building on the success of last year’s first UK Rails Camp, a second one has been put together by Tom Crinson out in Margate, Kent.


If you’re anywhere in the UK, or even Europe, you really should be keeping the weekend of the 16th to 19th of October free. In fact, go book your spot right now.

Rails Camp Australia 6

Last on this list is the original Rails Camp, that started back in June 2007, run by the inimitable Ben Askins. We’re returning to Melbourne (the host of the second camp, in November 2007), but this time we’re down by the beach in Somers.

John showing us how it's done

November 20th to 23rd are the dates for this, and going by the names of confirmed attendees, alongside what looks to be a fantastic venue, it’s going to rock just as much as the last five (and quite possibly even more). Feel like booking your place?

For all of these events, you should beg, borrow or steal to get your hands on a ticket. The energy, intelligence and passion of past camps has been amazing (which is why I do my best to spread the word), and they are a breath of fresh air compared to the staid and structured setup of RailsConf and most other technical conferences.

Thanks to John Barton, Max Muermann, and Jason Crane for the photos above.

Subscribe to the RSS feed

About Freelancing Gods

Freelancing Gods is written by Pat Allan, who works as a web developer in Melbourne, Australia, specialising in Ruby on Rails.

In case you're wondering what the likely content here will be about (besides code), keep in mind that Pat is passionate about the internet, music, politics, comedy, bringing people together, and making a difference. And pancakes.

His ego isn't as bad as you may think. Honest.

Here's more than you ever wanted to know.


Creative Commons Logo All original content on this site is available through a Creative Commons by-nc-sa licence.