Common Questions and Issues
Depending on how you have Sphinx set up, or what database you're using, you might come across some small issues and curiosities. Here are a few to be aware of.
- Editing the generated Sphinx configuration file
- Running multiple instances of Sphinx on one machine
- Viewing Result Weights
- Wildcard Searching
- Slow Indexing
- MySQL and Large Fields
- PostgreSQL with Manual Fields and Attributes
- Delta Indexing Not Working
- Running Delta Indexing with Passenger
- Can only access the first thousand search results
- Vendored Delayed Job, AfterCommit and Riddle
- Filtering on String Attributes
- Models outside of app/models
- Removing HTML from Excerpts
- Using other Database Adapters
- Using OR Logic with Attribute Filters
- Catching Exceptions when Searching
- Slow Requests (Especially in Development)
- Errors saying no fields are defined
- Using with Unicorn
- Alternatives to MVAs with Strings
- Indices not being processed
Editing the generated Sphinx configuration file
In most situations, you won’t need to edit this file yourself, and can rely on Thinking Sphinx to generate it reliably.
If you do want to customise the settings, you'll find most options can be set via config/thinking_sphinx.yml - many are mentioned on the Advanced Sphinx Configuration page. For those that aren't mentioned on that page, you can still try setting them, and there's a fair chance they will work.
On the off chance that you actually do need to edit the file, make sure you're running the ts:index task with the INDEX_ONLY environment variable set to true; otherwise the task will always regenerate the configuration file, overwriting your customisations.
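For example, to process your indices without regenerating the configuration file:

rake ts:index INDEX_ONLY=true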
Running multiple instances of Sphinx on one machine
You can run as many Sphinx instances as you wish on one machine, but each must be bound to a different port. You can do this via the config/thinking_sphinx.yml file - just add a port setting for the specific environment, using the mysql41 setting (or port for pre-v3 versions):
staging:
  mysql41: 9313
Other options are documented on the Advanced Sphinx Configuration page.
Viewing Result Weights
To retrieve the weights/rankings of each search result, you can enumerate through your matches using each_with_weight, once you've added the appropriate mask:
search = Article.search('pancakes', :select => '*, weight()')
search.masks << ThinkingSphinx::Masks::WeightEnumeratorMask
search.each_with_weight do |article, weight|
  # ...
end
If you want to access weights directly for each search result, you should add a weight pane to the search context:
search = Article.search('pancakes', :select => '*, weight()')
search.context[:panes] << ThinkingSphinx::Panes::WeightPane
search.each do |article|
  article.weight
end
Sphinx 2.0.x
Note: If you are using a version of Sphinx prior to 2.1.1, the ranking is instead available via the internal attribute `@weight`.
Wildcard Searching
Sphinx can support wildcard searching (for example: Austr*), but it is turned off by default. To enable it, you need to add two settings to your config/thinking_sphinx.yml file:
development:
  enable_star: 1
  min_infix_len: 1
test:
  enable_star: 1
  min_infix_len: 1
production:
  enable_star: 1
  min_infix_len: 1
You can set the min_infix_len value to something higher if you don't need single characters with a wildcard to be matched. This fine-tuning can be worthwhile, because the smaller the infixes, the larger your index files become.
Don’t forget to rebuild your Sphinx indexes after making this change.
rake ts:rebuild
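With those settings in place and the indexes rebuilt, wildcard queries should work as expected:

Article.search 'Austr*'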
Slow Indexing
If Sphinx is taking a while to process all your records, there are a few common reasons for this happening. Firstly, make sure you have database indexes on any foreign key columns and any columns you filter or sort by.
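For example, a minimal migration sketch for adding such an index (the table and column names here are purely illustrative):

class AddUserIdIndexToArticles < ActiveRecord::Migration
  def change
    # Index the foreign key used in Sphinx filters and joins.
    add_index :articles, :user_id
  end
end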
Secondly, are you using fixtures, or are there large gaps between primary key values for your models? Sphinx isn't set up to process disparate IDs efficiently by default - and Rails' fixtures have randomly generated IDs, which are usually extremely large integers. To get around this, you'll need to set sql_range_step in your config/thinking_sphinx.yml file for the appropriate environments:
development:
  sql_range_step: 10000000
MySQL and Large Fields
If you’ve got a field that is built off multiple values in one column from a MySQL database - ie: through a has_many association - then you may hit MySQL’s default limit for string concatenation: 1024 characters. You can increase the group_concat_max_len value by adding the following to your index definition:
set_property :group_concat_max_len => 8192
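In context, a sketch of a Thinking Sphinx v3 index definition with this property set (the model and association names are illustrative):

ThinkingSphinx::Index.define :article, :with => :active_record do
  indexes title
  # A field concatenated from many associated rows - the kind that
  # can exceed MySQL's default group_concat limit.
  indexes comments.body, :as => :comment_bodies

  set_property :group_concat_max_len => 8192
end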
If these fields get particularly large, though, there's another setting you may need to change in your MySQL configuration: max_allowed_packet, which has a default of sixteen megabytes. You can't set this option via Thinking Sphinx (it's a rare edge case).
PostgreSQL with Manual Fields and Attributes
If you’re using fields or attributes defined by strings (raw SQL) in SQL-backed indices, then the columns used in them aren’t automatically included in the GROUP BY clause of the generated SQL statement. To make sure the query is valid, you will need to explicitly add these columns to the GROUP BY clause.
A common example is converting latitude and longitude columns from degrees to radians via SQL:
has "RADIANS(latitude)", :as => :latitude, :type => :float
has "RADIANS(longitude)", :as => :longitude, :type => :float
group_by "latitude", "longitude"
Delta Indexing Not Working
Often people find delta indexing isn’t working on their production server. Sometimes, this is because Sphinx is running as one user on the system, and the Rails/Merb application is being served as a different user. Check your production.log and Apache/Nginx error log file for mentions of permissions issues to confirm this.
Indexing for deltas is invoked by the web user, and so that user needs access to the index files. The simplest way to ensure this is to run all Thinking Sphinx rake tasks as that web user.
If you’re still having issues, and you’re using Passenger, read the next hint.
Running Delta Indexing with Passenger
If you’re using Phusion Passenger on your production server, with delta indexing on some models, a common issue people find is that their delta indexes don’t get processed.
If it’s not a permissions issue (see the previous hint), another common cause is because Passenger has it’s own PATH set up, and can’t execute the Sphinx binaries (indexer and searchd) implicitly.
The way around this is to find out where your binaries are on the server:
which searchd
And then set the bin_path option in your config/thinking_sphinx.yml file for the production environment:
production:
  bin_path: '/usr/local/bin'
Can only access the first thousand search results
This is actually how Sphinx is supposed to behave. Have a read of the Large Result Sets section of the Advanced Configuration page to see why, and how to work around it if you really need to.
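If you genuinely need deeper access, the short version of the workaround covered there is to raise max_matches in config/thinking_sphinx.yml and pass the matching option when you search (a sketch - see that page for the full details):

development:
  max_matches: 10000

Article.search 'pancakes', :max_matches => 10000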
Vendored Delayed Job, AfterCommit and Riddle
If you’ve still got Delayed Job vendored as part of Thinking Sphinx and would rather use a more up-to-date version of the former, recent releases of Thinking Sphinx do not have it included any longer.
As for AfterCommit and Riddle, while they are still included for plugin installs, they’re no longer in the Thinking Sphinx gem (since 1.3.3). Instead, they are considered dependencies, and will be installed as separate gems.
Filtering on String Attributes
While you can have string columns as attributes in Sphinx, they cannot be filtered on (unless you’re using Sphinx 2.2.3 or newer).
To get around this, there’s three options: firstly, use integer attributes instead, if you possibly can. This works for small result sets (for example: gender). Secondly, you could just have that attribute is a field instead - which is fine in any case where it’s not a big deal if the words in that column influence search results.
Otherwise, you might want to consider manually converting the string to a CRC integer value:
has "CRC32(category)", :as => :category, :type => :integer
This way, you can filter on it like so:
Article.search 'pancakes', :with => {
  :category => 'Ruby'.to_crc32
}
Of course, this isn’t amazingly clean, especially since CRC32 encoding can have collisions. It’s most definitely not the perfect solution.
The best way forward, if it’s feasible, is to upgrade the version of Sphinx you’re using to 2.2.3 or newer.
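Once you're on Sphinx 2.2.3 or newer, string attributes can be filtered on directly, with no CRC workaround - assuming category is defined as a string attribute:

Article.search 'pancakes', :with => {:category => 'Ruby'}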
Models outside of `app/models`
Thinking Sphinx v1/v2
Note: This setting applies only to older versions of Thinking Sphinx. In version 3, indices are stored separately from models.
If you’re using plugins or other web frameworks (Radiant, Ramaze, etc) that don’t always store their models in app/models
, you can tell Thinking Sphinx to look in other locations when building the configuration file:
ThinkingSphinx::Configuration.instance.
  model_directories << "/path/to/models/dir"
By default, Thinking Sphinx will load all models in app/models and vendor/plugins/*/app/models.
Removing HTML from Excerpts
For a while, Thinking Sphinx auto-escaped excerpts. However, Sphinx itself can remove HTML entities for indexing and excerpts, which is a better way to approach this. So, you'll want to add the following setting to your config/thinking_sphinx.yml file:
html_strip: true
Using other Database Adapters
If you’re using Thinking Sphinx in combination with a database adapter that isn’t quite run-of-the-mill, you may need to add a snippet of code to a Rails initialiser or equivalent (This is only available in versions 1.4.0, 2.0.0 and 3.0.0 onwards).
For Thinking Sphinx v3, there’s just one way to do this, and it’s pretty simple:
ThinkingSphinx::ActiveRecord::DatabaseAdapters.default =
  ThinkingSphinx::ActiveRecord::DatabaseAdapters::MySQLAdapter
In v1 and v2, you can either supply a block:
ThinkingSphinx.database_adapter = lambda do |model|
  case model.connection.config[:adapter]
  when 'mysql', 'mysql2'
    :mysql
  when 'postgresql'
    :postgresql
  else
    raise "You can only use Thinking Sphinx with MySQL or PostgreSQL"
  end
end
Or, ThinkingSphinx.database_adapter accepts a symbol as well, if you just want to presume that you'll always be using either MySQL or PostgreSQL:
ThinkingSphinx.database_adapter = :postgresql
Using OR Logic with Attribute Filters
It is possible to filter on attributes using OR logic - although you need to be using Sphinx 0.9.9 or newer.
There’s two steps to it… firstly, you need to create a computed attribute while searching, using Sphinx’s select option, and then filter by that computed value. Here’s an example where we want to return all publicly visible articles, as well as articles belonging to the user with an ID of 5.
with_display = "*, IF(visible = 1 OR user_id = 5, 1, 0) AS display"
Article.search 'pancakes',
  :select => with_display,
  :with => {'display' => 1}
It’s important to note that you’ll want to include all existing attribute values by default (that’s the *
at the start of the select) if you’re using an old (pre-v3) version of Thinking Sphinx. It’s quite similar to standard SQL syntax.
Also, for those using pre-v3 versions of Thinking Sphinx, the :select option should be :sphinx_select.
Finally: if you’ve given your attributes aliases (using the :as
option in your index definition), then you must refer to those attributes by their aliases, not the original database columns. This applies generally to anything using those attributes (filtering, ordering, facets, etc).
For further reading, I recommend Sphinx’s documentation on both the select option and expression syntax.
Catching Exceptions when Searching
By default, Thinking Sphinx does not execute the search query until you examine your search results - which is usually in the view. This is so you can chain sphinx scopes without sending multiple (unnecessary) queries to Sphinx.
However, this means that exceptions will be fired from within the view - and most people put their exception handling in the controller. To force exceptions to fire when you actually define the search, all you need to do is to inform Thinking Sphinx that it should populate the results immediately:
Article.search 'pancakes', :populate => true
Obviously, if you’re chaining scopes together, make sure you add this at the end with a final search call:
Article.published.search :populate => true
Slow Requests (Especially in Development)
Thinking Sphinx v1/v2
Note: This setting applies only to older versions of Thinking Sphinx. In version 3, indices are stored separately from models, and so models are only loaded when necessary.
If you’re finding a lot of requests are quite slow (particularly in your local development environment), this could be because you have a lot of models. Thinking Sphinx loads all models to determine which ones are indexed by Sphinx (this is necessary to load search results), but you can make things much faster by setting out a list of indexed models in your config/sphinx.yml
file.
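A sketch of what that list looks like (I'm presuming the indexed_models setting name here - double-check against your version's documentation):

development:
  indexed_models:
    - Article
    - User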
Errors saying no fields are defined
Thinking Sphinx v1/v2
Note: This setting applies only to older versions of Thinking Sphinx. In version 3, BlankSlate is no longer used.
If you have defined fields (using the indexes method) but you're getting an error saying none are defined, it could be due to other gems packaging custom (and perhaps broken) versions of the BlankSlate gem. To get around this, add the proper BlankSlate gem to your Gemfile above thinking-sphinx:
gem 'blankslate', '2.1.2.4'
# ...
gem 'thinking-sphinx', '2.0.14'
Using with Unicorn
If you’re using Unicorn as your web server, you’ll want to ensure the connection pool is cleared after forking.
after_fork do |server, worker|
  # Add this to an existing after_fork block if needed.
  ThinkingSphinx::Connection.pool.clear
end
Alternatives to MVAs with Strings
Given Sphinx doesn’t support multi-value attributes, what are alternative ways to achieve similar functionality?
The easiest approach applies when the string values come from an association. In that case, use the foreign key ids instead, and translate string values to the underlying ids when filtering your searches.
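For instance, here's a sketch of that approach using Thinking Sphinx v3 syntax, with a hypothetical Article model that has tags through taggings:

ThinkingSphinx::Index.define :article, :with => :active_record do
  indexes title, content
  # Store the associated ids as a multi-value integer attribute.
  has taggings.tag_id, :as => :tag_ids
end

# When filtering, translate the string to its underlying id:
tag = Tag.find_by_name('ruby')
Article.search 'pancakes', :with => {:tag_ids => tag.id}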
Otherwise, you could look into using CRC’d integer values of strings, though there is the possibility of collisions.
Indices not being processed
If you’re finding indices aren’t being processed - particularly delta indices - it could be that guard files haven’t been cleaned up properly. They are located in the indices directory, and take the name pattern ts-INDEXNAME.tmp
.
Provided there is no indexing occurring, they can safely be deleted.
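For example, assuming the default indices location for a Rails app (db/sphinx/ENVIRONMENT in Thinking Sphinx v3), something like this would clear them out for production - just be sure nothing is indexing at the time:

rm db/sphinx/production/ts-*.tmp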