Archive for July, 2012

A little Ruby program to monitor Solr DIH imports

July 11, 2012

Solr is a text indexing package. All interaction with it happens over HTTP: you GET or POST to the service, and it answers with XML responses.
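
For example, you kick off a full import just by GETting a URL like this one (the hostname and handler path here match my setup; yours will differ):

http://hostname:8080/solr/db/dih?command=full-import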

After you do the GET to start an import with Solr’s DataImportHandler, you have to check a status URL, and Solr gives a response like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">busy</str>
    <str name="importResponse">A command is still running...</str>
    <lst name="statusMessages">
        <str name="Time Elapsed">0:0:4.545</str>
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">36262</str>
        <str name="Total Documents Processed">36261</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:31:03</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

Then, after a while, when the import has finished, the status URL’s response looks like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">idle</str>
    <str name="importResponse"/>
    <lst name="statusMessages">
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">1000000</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:23:30</str>
        <str name="">Indexing completed. Added/Updated: 1000000 documents. Deleted 0 documents.</str>
        <str name="Committed">2012-07-11 09:26:01</str>
        <str name="Total Documents Processed">1000000</str>
        <str name="Time taken">0:2:31.95</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

But when does it finish? There’s no way to tell other than hitting that status URL and watching for the response to change. I needed a tool to tell me when the import had finished, so I could use it in my makefile. It just has to poll the status until the import completes, and then exit.

So, I wrote a little program to do the monitoring, using Ruby and the Nokogiri library. Nokogiri is an XML/HTML parser with built-in XPath and CSS selector capabilities, and the open-uri library handles the fetching, which together give me roughly what Perl’s WWW::Mechanize would.

#!/usr/bin/ruby

require 'rubygems'
require 'nokogiri'
require 'open-uri'

while true
    doc = Nokogiri::XML( open( 'http://hostname:8080/solr/db/dih?command=status' ) )

    # If the import is still running, importResponse says something like
    # "A command is still running...".  It goes blank when the import has stopped.
    status = doc.xpath( '//response/str[@name="importResponse"]' ).inner_text
    if ( status == '' )
        break
    end

    # Get the import's elapsed time and document count, and display them.
    time_elapsed   = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Time Elapsed"]' ).inner_text
    docs_processed = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Total Documents Processed"]' ).inner_text
    puts docs_processed + ' documents in ' + time_elapsed

    sleep(2)
end
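
In the makefile, the monitor just runs right after the command that kicks off the import, and the build blocks until indexing is done. A sketch, with a made-up target name and script name:

reindex:
	curl 'http://hostname:8080/solr/db/dih?command=full-import'
	./dih-monitor.rb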

I’m not much of a Ruby guy, but this was pretty simple to write. Most of my time was spent looking at Nokogiri’s method listings and reacquainting myself with XPath syntax. The one Ruby gotcha I found: before Ruby 1.9, if your program uses any Ruby gems, you have to put require 'rubygems' before you require any other gems.

SELECT * is a bug waiting to happen

July 10, 2012

A SQL SELECT statement that uses * instead of an explicit column list is a bug waiting to happen. Beyond the quick-and-dirty prototyping stage, every SQL query in an application should explicitly specify the columns it needs, to protect against future changes.

Say you’ve got a table and code like this:

users table:
id integer NOT NULL
name varchar(100) NOT NULL
mail varchar(100)

my $query = perform_select( 'select * from users' );
while ( my $row = $query->fetch_next ) {
    if ( defined($row->{mail}) ) {
        # do something to send user mail
    }
}

Later on, someone goes and renames the users.mail column to users.email. Your program will never know. Since $row->{mail} is now always undef, the branch that sends mail simply never executes.
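
Contrast that with an explicit column list. Using the same hypothetical DB layer, the rename becomes a hard error the first time the query runs, instead of a silent no-op:

my $query = perform_select( 'select id, name, mail from users' );
# After the mail -> email rename, the database rejects this query
# with an unknown-column error, so the problem surfaces immediately.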

Here’s another example. Say you’ve got that users table joining to a dept table, like so:

users table:
id integer NOT NULL
name varchar(100) NOT NULL
email varchar(100)
deptid integer

dept table:
id integer NOT NULL
deptname varchar(100) NOT NULL

SELECT *
FROM users u JOIN dept d ON (u.deptid = d.id)

So your selects come back with id, name, email, deptid, id, deptname. You’ve got “id” in there twice. How does your DB layer handle that situation? Which “id” column takes precedence? That’s not something I want to have to spend brain cycles thinking about.
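
With Perl’s DBI, for instance, fetchrow_hashref keeps only one entry per column name, so one of the two ids silently vanishes, and which value survives is not something you want to rely on. A sketch, assuming $dbh is a connected database handle:

my $sth = $dbh->prepare(
    'SELECT * FROM users u JOIN dept d ON (u.deptid = d.id)'
);
$sth->execute;
my $row = $sth->fetchrow_hashref;
# $row->{id} exists exactly once; whether it holds u.id or d.id
# is an accident of column order, not something you chose.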

You should even specify which table each column comes from. For example, say you don’t want the IDs, so you list just the columns you want, like this:

SELECT name, email, deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

Later on, someone adds an email column to the dept table. Now your “SELECT name, email, deptname” makes an ambiguous column reference to “email”, and the query fails outright. If you specify everything fully:

SELECT u.name, u.email, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

then you’re future-proof.

Of course, this rule doesn’t apply to code that deals with columns in aggregate. If you’re writing a utility that handles all the columns in a row and transforms them somehow as a group, then no, you don’t need to specify columns.
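
For instance, a generic table dumper has no column list to give, because every column is its business. A sketch along the same lines, again assuming a connected $dbh and a $table name from a trusted source:

my $sth = $dbh->prepare( "SELECT * FROM $table" );
$sth->execute;
while ( my $row = $sth->fetchrow_hashref ) {
    # Emit every column, whatever they happen to be, in a stable order.
    print join( ',', map { defined $row->{$_} ? $row->{$_} : '' } sort keys %$row ), "\n";
}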

Aside from the potential bugs, I also think it’s important to be clear to the human reader of your code what exactly you’re pulling from the database. SELECT * makes it a guessing game. Which of these makes it more obvious to the reader what I’m doing?

SELECT * FROM users;

or

SELECT first_name, last_name, email_addr FROM users;

There are also all sorts of speed reasons to specify columns. You reduce the amount of work fetching data from disk, and your DBMS may not even have to touch the table at all if the data is covered by an index. For discussion of the performance issues, see this StackOverflow thread. One thing to remember: your code will never be slower if you specify columns. It can ONLY be faster.
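
For example, if an index covers exactly the columns you ask for, the database can answer the query from the index alone and never visit the table; SELECT * drags in every other column and forces the table visit. The index name and columns here are only illustrative:

CREATE INDEX users_names_email ON users (last_name, first_name, email_addr);

-- Can be satisfied entirely from the index:
SELECT last_name, first_name, email_addr FROM users WHERE last_name = 'Smith';

-- Has to visit the table rows as well:
SELECT * FROM users WHERE last_name = 'Smith';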

The speedups are secondary, however. I want to write my queries to be resistant to future change. I don’t mind making a few extra keystrokes to make that happen. That’s why I always specify columns in my SELECTs.