Archive for the ‘Programming’ Category

Make the Linux chkconfig service list easier to read

October 4, 2013

If you run a Linux box and you want to see which services start up at which runlevel, you use chkconfig:

$ chkconfig
acpid           0:off   1:off   2:on    3:on    4:on    5:on    6:off
atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off
auditd          0:off   1:off   2:on    3:off   4:off   5:off   6:off
blk-availability        0:off   1:on    2:on    3:on    4:on    5:on    6:off
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off
crond           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cups            0:off   1:off   2:on    3:off   4:off   5:off   6:off
...

Boy is that hard to read at a glance. All the “on” and “off” look very similar to each other, and that blk-availability service’s length screws up the tabbed columns. I decided I needed a better way, which I called cclist.

$ cclist
  2345  acpid
   345  atd
  2     auditd
 12345  blk-availability
 12345  cpuspeed
  2345  crond
  2     cups
...

Here’s the code to ~/bin/cclist:

#!/usr/bin/perl

use strict;
use warnings;

open( my $fh, '/sbin/chkconfig --list |' ) or die "Can't open chkconfig: $!";

while ( <$fh> ) {
    if ( /^(\S+)(\s+\d:o(n|ff)){7}/ ) {
        chomp;
        my @cols = split;
        my $service = shift @cols;

        # Print the runlevel digit if the service is on at that level,
        # or a space to keep the columns lined up if it's off.
        for ( @cols ) {
            my ( $level, $status ) = split /:/;
            print $status eq "on" ? $level : " ";
        }
        print "\t$service\n";
    }
    else {
        # Pass through any line that isn't a runlevel listing.
        print;
    }
}

ack 2.0 has been released

April 19, 2013

ack 2.0 has been released. ack is a grep-like search tool that has been optimized for searching large heterogeneous trees of source code.

ack has been around since 2005. Since then it has become very popular and is packaged by all the major Linux distributions. It is cross-platform and pure Perl, so it will run on Windows easily. See the “Why ack?” page for the top ten reasons, and dozens of testimonials.

ack 2.0 has many changes from 1.x, but here are four big differences and features that long-time ack 1.x users should be aware of.

  • By default all text files are searched, not just files with types that ack recognizes. If you prefer the old ack 1.x behavior of only searching files that ack recognizes, you can use the -k/--known-types option.
  • There is a much more flexible type identification system available. You can specify a file type based on extension (.rb for Ruby), exact filename (Rakefile is a Ruby file), a regex match on the first line (a file whose first line matches /#!.+ruby/ is a Ruby file), or a regex match on the filename itself; see the sketch after this list.
  • Greater support for ackrc files. You can have a system-wide ackrc at /etc/ackrc, a user-specific ackrc in ~/.ackrc, and ackrc files local to your projects.
  • The -x argument tells ack to read the list of files to search from stdin, much like xargs. This lets you do things like git ls-files | ack -x foo, and ack will search every file in the git repository, and only those files that appear in the repository.
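
To make that concrete, here is a rough sketch of what those type definitions might look like in an ackrc, using the filter names (ext, is, firstlinematch) as I understand them from the ack 2 documentation; double-check the exact syntax against your install before relying on it.

# Add to the built-in ruby type by extension.
--type-add=ruby:ext:rake

# Add to it by exact filename.
--type-add=ruby:is:Rakefile

# Add to it by a regex match against the first line of the file.
--type-add=ruby:firstlinematch:/#!.+ruby/

Then a pipeline like git ls-files | ack -x --type=ruby foo restricts the search to the Ruby files that git knows about.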

On the horizon, we see a framework that will let authors write ack plugins in Perl, for even more flexibility. You might create a plugin that allows searching through zip files, or reading text from an Excel spreadsheet or a web page.

ack has always thrived on numerous contributions from the ack community, but I especially want to single out Rob Hoelz for his work over the past year or two. If it were not for Rob, ack 2.0 might never have seen the light of day, and for that I am grateful.

A final note: In the past, ack’s home page was betterthangrep.com. With the release of ack 2.0, I’ve changed to beyondgrep.com. “Beyond” feels less adversarial than “better than”, and implies moving forward as well as upward. beyondgrep.com also includes a page of other tools that go beyond the capabilities of grep when searching source code.

For long-time ack users, I hope you enjoy ack 2.0 and that it makes your programming life easier and more enjoyable. If you’ve never used ack, give it a try.

Solr’s DataImportHandler can’t handle line-based SQL comments

September 13, 2012

At least twice now I’ve run into this problem where I try to comment my SQL code, but doing so makes my Solr data importer blow up.  I post it here for posterity.

Part of your DIH configuration will be at least one entity, probably with SQL code like this:

<entity name="nodes" dataSource="jdbc""
    query="
        SELECT
            foo,
            bar
        FROM blah_blah
    ">

And maybe part of the SQL query isn’t obvious, so you want to add a comment like

<entity name="nodes" dataSource="jdbc""
    query="
        SELECT
            foo, -- We need the foo so we can fribble the wibbitz
            bar
        FROM blah_blah
    ">

But that blows up because the DIH strips linefeeds from your SQL code before passing it to the server.  This means that the SQL code you’re passing looks like this:

SELECT foo, -- We need the foo so we can fribble the wibbitz bar FROM blah_blah

Your line-based comment has wiped out the rest of your SQL query.  So what you have to do is use C-style comments

<entity name="nodes" dataSource="jdbc""
    query="
        SELECT
            foo, /* We need the foo so we can fribble the wibbitz */
            bar
        FROM blah_blah
    ">

Chances are your database supports C-style comments, according to this post on StackOverflow:

C style comments are standard in SQL 2003 and SQL 2008 (but not in SQL 1999 or before). The following DBMS all support C style comments:

  • Informix
  • PostgreSQL
  • MySQL
  • Oracle
  • DB2
  • Sybase
  • Ingres
  • Microsoft SQL Server
  • SQLite (3.7.2 and later)

That is not every possible DBMS, but it is more or less every major SQL DBMS.

A little Ruby program to monitor Solr DIH imports

July 11, 2012

Solr is a text indexing package. All interaction with it is over HTTP: you GET or POST to the service, and it answers with XML responses.

After you do the GET to start an import with Solr’s DataImportHandler, you have to check a status URL, and Solr gives a response like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">busy</str>
    <str name="importResponse">A command is still running...</str>
    <lst name="statusMessages">
        <str name="Time Elapsed">0:0:4.545</str>
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">36262</str>
        <str name="Total Documents Processed">36261</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:31:03</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

And then after a while when you check the status URL, the response looks like this:

<response>
    <lst name="responseHeader">
        <int name="status">0</int>
        <int name="QTime">0</int>
    </lst>
    <lst name="initArgs">
        <lst name="defaults">
            <str name="config">jdbc.xml</str>
        </lst>
    </lst>
    <str name="command">status</str>
    <str name="status">idle</str>
    <str name="importResponse"/>
    <lst name="statusMessages">
        <str name="Total Requests made to DataSource">1</str>
        <str name="Total Rows Fetched">1000000</str>
        <str name="Total Documents Skipped">0</str>
        <str name="Full Dump Started">2012-07-11 09:23:30</str>
        <str name="">Indexing completed. Added/Updated: 1000000 documents. Deleted 0 documents.</str>
        <str name="Committed">2012-07-11 09:26:01</str>
        <str name="Total Documents Processed">1000000</str>
        <str name="Time taken">0:2:31.95</str>
    </lst>
    <str name="WARNING">This response format is experimental.  It is likely to change in the future.</str>
</response>

But when does it finish? There’s no way to tell other than hitting that status URL and watching for it to change. I needed a tool to tell me when importing had finished, so I could use it in my makefile. It just has to check the status until it’s completed, and then exit.

So, I wrote a little program to do the monitoring, using Ruby and the Nokogiri library. Nokogiri is an HTML/XML parser with built-in XPath and CSS selector capabilities; paired with open-uri to do the fetching, it fills roughly the same role for me that Perl’s WWW::Mechanize does.

#!/usr/bin/ruby

require 'rubygems'
require 'nokogiri'
require 'open-uri'

while true
    doc = Nokogiri::XML( open( 'http://hostname:8080/solr/db/dih?command=status' ) )

    # If it's still running, this status will say something like "A command is still running..."
    # The status turns blank when the process has stopped.
    status = doc.xpath( '//response/str[@name="importResponse"]' ).inner_text
    if ( status == '' )
        break
    end

    # Get the import process's elapsed time and record count and display them
    time_elapsed   = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Time Elapsed"]' ).inner_text
    docs_processed = doc.xpath( '//response/lst[@name = "statusMessages"]/str[@name = "Total Documents Processed"]' ).inner_text
    puts docs_processed + ' documents in ' + time_elapsed + ' seconds'

    sleep(2)
end

I’m not much of a Ruby guy, but this was pretty simple to write. Most of my time was spent looking at Nokogiri’s method listings and reacquainting myself with XPath syntax. The one Ruby gotcha I found was that before Ruby 1.9, if your program uses any Ruby gems, you have to put require 'rubygems' before you require any other gems.

SELECT * is a bug waiting to happen

July 10, 2012

A SQL SELECT statement that uses * instead of an explicit column list is a bug waiting to happen. Beyond the quick-and-dirty prototyping stage, every SQL query in an application should explicitly specify the columns it needs, to protect against future changes.

Say you’ve got a table and code like this:

USERS table:
id integer NOT NULL
name varchar(100) NOT NULL
mail varchar(100)

my $query = perform_select( 'select * from users' );
while ( my $row = $query->fetch_next ) {
    if ( defined($row->{mail}) ) {
        # do something to send user mail
    }
}

Later on, someone goes and renames the users.mail column to users.email. Your program will never know it. The mail-sending branch will just never execute.
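
An explicit column list turns that silent failure into a loud one. Here is a minimal sketch, using the same hypothetical perform_select() and fetch_next() helpers as above:

# Because the query names the mail column explicitly, the rename to
# users.email makes the database reject the query with an error at run
# time, instead of quietly handing back rows that never have a mail key.
my $query = perform_select( 'select id, name, mail from users' );
while ( my $row = $query->fetch_next ) {
    if ( defined($row->{mail}) ) {
        # do something to send user mail
    }
}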

Here’s another example. Say you’ve got that users table joining to departments, like so

users table:
id integer NOT NULL
name varchar(100) NOT NULL
email varchar(100)
deptid integer

dept table:
id integer NOT NULL
deptname varchar(100) NOT NULL

SELECT *
FROM users u JOIN dept d ON (u.deptid = d.id)

So your selects come back with id, name, email, deptid, id, deptname. You’ve got “id” in there twice. How does your DB layer handle that situation? Which “id” column takes precedence? That’s not something I want to have to spend brain cycles thinking about.
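
If you genuinely need both IDs, aliasing them by hand removes the ambiguity entirely. A sketch against the same two tables:

SELECT u.id AS user_id, d.id AS dept_id, u.name, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)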

You should even specify which table each column comes from. For example, say you don’t want the IDs, so you just specify the columns you want. You write something like this:

SELECT name, email, deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

Later on, someone adds an email column to the dept table. Now, your “SELECT name, email, deptname” is making an ambiguous column reference to “email”. If you specify everything fully:

SELECT u.name, u.email, d.deptname
FROM users u JOIN dept d ON (u.deptid = d.id)

then you’re future-proof.

Of course, this rule doesn’t apply to code that is dealing with columns in aggregate. If you’re writing a utility that deals with all columns in a row and transforms them somehow as a group, then no, you don’t need to specify columns.

Aside from the potential bugs, I also think it’s important to be clear to the human reader of your code what exactly you’re pulling from the database. SELECT * makes it a guessing game. Which of these makes it more obvious to the reader what I’m doing?

SELECT * FROM users;

or

SELECT first_name, last_name, email_addr FROM users;

There are also all sorts of speed reasons to specify columns. You reduce the amount of work fetching data from the disk, and your DBMS may not even have to fetch rows from disk if the data is covered by an index. For discussion of the performance issues, see this StackOverflow thread. One thing to remember: Your code will never be slower if you specify columns. It can ONLY be faster.
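
To illustrate the covering-index case, here is a hypothetical index over exactly the columns the last query above asks for; the index name is made up, and whether the optimizer actually skips the table depends on your DBMS:

-- A covering index over the columns the query needs.
CREATE INDEX users_name_email_idx ON users (last_name, first_name, email_addr);

-- This can often be answered from the index alone, with no table access.
SELECT first_name, last_name, email_addr FROM users WHERE last_name = 'Lester';

-- SELECT * with the same WHERE clause must visit the table for every
-- matching row, because the index doesn't contain every column.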

The speedups are secondary, however. I want to write my queries to be resistant to future change. I don’t mind making a few extra keystrokes to make that happen. That’s why I always specify columns in my SELECTs.

My Solr+Tomcat troubles, and how I fixed them

May 22, 2012

I’ve been working at getting Solr running under Tomcat, and spent most of a day fixing these problems. The fixes themselves didn’t take much time; most of it went to trying to grok the Java app ecosystem.

My Solr install worked well. I was able to import records and search them through the interface. Where I ran into trouble was with the Velocity search browser that comes with Solr.

I’m documenting my troubles and their solutions here because otherwise they won’t exist on the web for people to find. Putting solutions to problems on the web makes them findable for the next poor guy who has the same problem. I figure that if I spend a day working on fixing problems, I can spend another hour publishing them so others can benefit.

These are for Solr 3.5 running under Tomcat 6.0.24.

Unable to open velocity.log

Velocity tries to create a file velocity.log and gets a permission failure.

HTTP Status 500 - org.apache.velocity.exception.VelocityException:
Failed to initialize an instance of
org.apache.velocity.runtime.log.Log4JLogChute with the current
runtime configuration. java.lang.RuntimeException:
org.apache.velocity.exception.VelocityException: Failed to initialize
an instance of org.apache.velocity.runtime.log.Log4JLogChute with
the current runtime configuration. at
...
Caused by: java.io.FileNotFoundException: velocity.log
(Permission denied) at java.io.FileOutputStream.openAppend(Native
Method) at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
...

But where is it trying to create the file? What directory? Since no pathname was specified, it seemed that the file would be created in the current working directory of Tomcat. What would that be?

First I had to figure out which process Tomcat was running as:

frisbee:~ $ ps aux | grep tomcat
tomcat     498  0.6  1.3 6240056 214880 ?      Sl   09:27   0:10 /usr/lib/jvm/java/bin/java ....

In this case, it’s PID 498. So we go to the /proc/498 directory and see what’s in there.

frisbee:~ $ cd /proc/498
frisbee:/proc/498 $ ls -al
ls: cannot read symbolic link cwd: Permission denied
ls: cannot read symbolic link root: Permission denied
ls: cannot read symbolic link exe: Permission denied
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 ./
dr-xr-xr-x 173 root   root   0 May 17 11:33 ../
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr/
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd
...

We can see that cwd is a symlink to a directory, but we have to be root to see what the target directory is. I have to run ls again as root.

frisbee:/proc/498 $ sudo ls -al
[sudo] password for alester:
total 0
dr-xr-xr-x   7 tomcat tomcat 0 May 22 09:27 .
dr-xr-xr-x 174 root   root   0 May 17 11:33 ..
dr-xr-xr-x   2 tomcat tomcat 0 May 22 09:58 attr
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 autogroup
-r--------   1 tomcat tomcat 0 May 22 09:58 auxv
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cgroup
--w-------   1 tomcat tomcat 0 May 22 09:58 clear_refs
-r--r--r--   1 tomcat tomcat 0 May 22 09:56 cmdline
-rw-r--r--   1 tomcat tomcat 0 May 22 09:58 coredump_filter
-r--r--r--   1 tomcat tomcat 0 May 22 09:58 cpuset
lrwxrwxrwx   1 tomcat tomcat 0 May 22 09:58 cwd -> /usr/share/tomcat6

I could also have used the stat command.

frisbee:/proc/498 $ sudo stat cwd
File: `cwd' -> `/usr/share/tomcat6'
Size: 0               Blocks: 0          IO Block: 1024   symbolic link
Device: 3h/3d   Inode: 100017      Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (   91/  tomcat)   Gid: (   91/  tomcat)
Access: 2012-05-22 09:58:17.131009458 -0500
Modify: 2012-05-22 09:58:17.130009715 -0500
Change: 2012-05-22 09:58:17.130009715 -0500

So we find that the CWD is /usr/share/tomcat6. I don’t want the tomcat user to have write access to that directory, so instead I create a velocity.log file in a proper log directory and then symlink to it.

frisbee:/proc/498 $ cd /var/log/tomcat6
frisbee:/var/log/tomcat6 $ sudo touch velocity.log
frisbee:/var/log/tomcat6 $ sudo chown tomcat:tomcat velocity.log
frisbee:/var/log/tomcat6 $ cd /usr/share/tomcat6
frisbee:/usr/share/tomcat6 $ sudo ln -s /var/log/tomcat6/velocity.log velocity.log

Now the app is able to open /usr/share/tomcat6/velocity.log without error.

log4j error

Once I created a log file Velocity could write to, it started throwing an error from log4j, the Java logging package.

org.apache.log4j.Logger.setAdditivity(Z)V java.lang.NoSuchMethodError:
org.apache.log4j.Logger.setAdditivity(Z)V at
org.apache.velocity.runtime.log.Log4JLogChute.initAppender(Log4JLogChute.java:126) at
org.apache.velocity.runtime.log.Log4JLogChute.init(Log4JLogChute.java:85) at
org.apache.velocity.runtime.log.LogManager.createLogChute(LogManager.java:157) at
org.apache.velocity.runtime.log.LogManager.updateLog(LogManager.java:255) at
org.apache.velocity.runtime.RuntimeInstance.initializeLog(RuntimeInstance.java:795) at
org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:250) at
org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:107) at
org.apache.solr.response.VelocityResponseWriter.getEngine(VelocityResponseWriter.java:132) at
org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:40) at
org.apache.solr.core.SolrCore$LazyQueryResponseWriterWrapper.write(SolrCore.java:1774) at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:352) at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:273) at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555) at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:857) at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:679)

In searching the web for this error, I found this ticket in the Solr bug tracker that says that the log4j .jar files should be removed from the Solr tarball, because they can conflict with existing .jars on the system. That conflict was exactly the error I was getting.

I wanted to remove the extra .jar files, so I used locate to search my system for any log4j .jars. Indeed, there was one installed with Solr:

frisbee:~ $ locate log4j
...
/var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.jar
...

So I just changed the extension of the file so it wouldn’t get loaded as a .jar.

frisbee:~ $ sudo mv /var/lib/tomcat6/webapps/solr/WEB-INF/lib/log4j-over-slf4j-1.6.1.{jar,jarx}

Now Velocity loads beautifully, and the real work starts: configuring Velocity to understand the schema in my Solr core.

I hope this helps someone in the future!

Programmers, please take five minutes to provide some data for an experiment

April 19, 2012

Whenever people talk about ack, there’s always a discussion of whether ack is faster than grep, and how much faster, and people provide data points that show “I searched this tree with find+grep in 8.3 seconds, and it took ack 11.5 seconds”. Thing is, that doesn’t take into account the amount of time it takes to type the command.

How much faster is it to type an ack command line vs. a find+xargs line?

Inspired by this tweet by @climagic, I decided to time myself. I used time read to see how long it would take me to type three different command lines.

The three command lines are:
A: ack --perl foo
B: find . -name '*.php' | xargs grep foo
C: find . -name '*.pl' -o -name '*.pm' | xargs grep foo

So I tried it out using time read. Note that it’s not actually executing the command, but measuring how long it takes me to type the line and hit Enter.

$ time read
find . -name '*.pl' -o -name '*.pm' | xargs grep foo

real    0m8.648s
user    0m0.000s
sys     0m0.000s

For me, the timings averaged about 1.4s for A, 6.1s for B and 8.6s for C. That was with practice. I also found that it is nearly impossible for me to type the punctuation-heavy B and C lines without making typos and having to correct them.

So I ask of you, dear readers, would you please try this little experiment yourself, and then post your results in the comments? Just give me numbers for A, B and C and then also include the name of your favorite Beatle so I know you actually read this. Also, if you have any insights as to why you think your results came out the way they did, please let me know.

At this point I’m just collecting data. It’s imperfect, but I’m OK with that.

  • Yes, I’m sure there’s another way I could do this timing. It might even be “better”, for some values of “better”.
  • Yes, I know that I’m asking people to report their own data and there may be observational bias.
  • Yes, I know I’m excluding Windows users from my sample.
  • Yes, I know it’s possible to create shell aliases for long command lines.
  • Yes, I know that the find command lines should be using find -print0 and xargs -0.
  • Yes, I know that some shells have globbing like **/*.{pl,pm}.

Note: I’ve heard from a zsh user that time doesn’t work for this because it’s a shell function, but /usr/bin/time does work.

Thanks for your help! I’ll report on results in a future post.

The world’s two worst variable names

April 18, 2012

As programmers, assigning names makes up a big part of our jobs. Phil Karlton said “There are only two hard things in Computer Science: cache invalidation and naming things.” It’s a hard problem, and it’s something we deal with every time we write a line of code. Whether it’s a variable or a table or a column in that table or a file on the filesystem, or what we call our projects and products, naming is a big deal.

Bad variable naming is everywhere. Maybe you’ll find variables that are too short to be adequately descriptive. The programmer might as well have been working in TRS-80 BASIC, where only the first two characters of variable names were significant, and we had to keep a handwritten lookup chart of names in a spiral notebook next to the keyboard.

Sometimes you’ll find variables where all vowels have been removed as a shortening technique, instead of simple truncation, so you have $cstmr instead of $cust. I sure hope you don’t have to distinguish the customers from costumers! Worse, $cstmr is harder to type because of the lack of vowels, and is no longer pronounceable in conversation.

There are also intentionally bad variable names, where the writer was more interested in being funny than clear. I’ve seen $crap as a loop variable, and a colleague tells of overhauling old code with a function called THE_LONE_RANGER_RIDES_AGAIN(). That’s not the type of bad variable name I mean.

While I’m well aware that variable naming conventions can often turn into a religious war, I’m entirely confident when I declare The World’s Worst Variable Name is $data.

Of course it’s data! That’s what variables contain! That’s all they ever contain. It’s like if you were packing up your belongings in moving boxes, and on the side you labeled the box “matter.”

Variable names should say what type of data they hold. Asking the question “what kind” is an easy way to enhance your variable naming. I once saw $data used when reading a record from a database table. The code was something like:

$data = read_record();
print "ID = ", $data->{CUSTOMER_ID};

Asking the question “what kind of $data?” turns up immediate ideas for renaming. $record would be a good start. $customer_record would be better still.
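
Applied to the snippet above, the fix is nothing more than a rename (read_record() is still the same hypothetical helper):

$customer_record = read_record();
print "ID = ", $customer_record->{CUSTOMER_ID};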

Vague names are the worst, but right behind them is giving related variables nearly identical names that do nothing to distinguish them. Therefore the World’s Second Worst Variable Name is: $data2.

More generally, any variable that relies on a numeral to distinguish it from a similar variable needs to be refactored, immediately. Usually, you’ll see it like this:

$total = $price * $qty;
$total2 = $total - $discount;
$total2 += $total2 * $taxrate;

$total3 = $purchase_order_value + $available_credit;
if ( $total2 < $total3 ) {
    print "You can't afford this order.";
}

You can see this as an archaeological dig through the code. At one point, the code only figured out the total cost of the order, $total. If that’s all the code does, then $total is a fine name. Unfortunately, someone came along later, added code for handling discounts and tax rate, and took the lazy way out by putting it in $total2. Finally, someone added some checking against the total that the user can pay and named it $total3.

The real killer in this chunk of code is that if statement:

if ( $total2 < $total3 )

You can’t read that without going back to figure out how it was calculated. You have to look back up above to keep track of what’s what.

If you’re faced with naming something $total2, change the existing name to something more specific. Spend the five minutes to name the variables appropriately. This is one of the easiest, cheapest and safest refactorings you can make, especially if the naming is confined to a single subroutine.

Let’s do a simple search-and-replace on the coding horror above:

$order_total = $price * $qty;
$payable_total = $order_total - $discount;
$payable_total += $payable_total * $taxrate;

$available_funds = $purchase_order_value + $available_credit;
if ( $payable_total < $available_funds ) {
    print "You can't afford this order.";
}

The only thing that changed was the variable names, and already it’s much easier to read. Now there’s no ambiguity as to what each of the _total variables means. And look what we found: The comparison in the if statement was reversed. Effective naming makes it obvious.

There is one exception to the rule that all variables ending with numerals are bad. If the entity itself is named with a number, then keep that as part of the name. It’s fine to use $sha1 for a variable that holds a SHA-1 hash. It helps no one to rename it to $sha_one.

After I wrote the first version of this article, I created policies for Perl::Critic to check for these two naming problems. My add-on module Perl::Critic::Bangs includes two policies to check for these problems: ProhibitVagueNames and ProhibitNumberedNames.
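
If you use perlcritic, running just these two policies against a file looks something like the following sketch; the file path is a placeholder, and you should check your perlcritic version for the exact spelling of --single-policy:

perlcritic --single-policy Bangs::ProhibitVagueNames lib/MyModule.pm
perlcritic --single-policy Bangs::ProhibitNumberedNames lib/MyModule.pm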

What other naming sins drive you crazy? Have you created automated ways to detect them?

Today’s PostgreSQL indexing gotcha

February 16, 2012

At work, I have a big 14M-row production table with a bunch of indexes on it.  One of the indexes was bloated, so I built a new version of the index, and dropped the old bloated index.  Got back a gig of space on the filesystem.  Excellent.

Now, from what I understand, that should be all I have to do.  Postgres doesn’t need an ANALYZE on the table to use the new index.  All the column stats for the table in pg_stats are still there, so the query planner can use the index, and it should all Just Work.

Except that all of a sudden slow queries started showing up in the server log, and we were doing sequential scans. The planner wasn’t using the newly built index.

So I did an ANALYZE on the table, and suddenly the planner started using the index. Why was this?

This goes against what I knew. On this page, Robert Treat, Pg guru, says:

When adding indexes, it is not necessary to re-analyze the table so that postgres will “know” about the index; simply creating the index is enough for postgres to know about it.

So why didn’t it work for me? Turns out it was because the index I rebuilt was a functional index.

Apparently, Pg can’t make good use of a functional index until an ANALYZE gives the planner statistics for it. I’m guessing that somewhere there’s a pg_stats equivalent that holds histograms for the indexed expression, too.
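
As a sketch of the situation (the table and column names here are made up, not my real schema): an expression index keeps statistics on the computed expression itself, and those only get populated when ANALYZE runs.

-- A hypothetical functional (expression) index.
CREATE INDEX orders_lower_email_idx ON orders ( lower(email) );

-- The existing per-column stats in pg_stats don't cover lower(email);
-- ANALYZE gathers statistics for the expression so the planner can use it.
ANALYZE orders;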

If you have further insight on this, please let me know in the comments.

The most important pig in the office

January 26, 2012

In the world of devops, we have complex software packages like Jenkins to take care of continuous integration of source, and intricate monitoring systems like Nagios and Icinga to keep an eye on the state of your servers. For notification of problems, however, sometimes the low-tech solution is best. Ladies and gentlemen, I present to you: Olivia the pig.

Olivia normally sits on my bookshelf at the office. She’s a toy from when my daughter Quinn was much younger, the star of the Olivia picture books that she loved back then. When Quinn decided she no longer wanted to play with Olivia, I gave Olivia a new home at the office, where she serves two important purposes in the web development department.

First, Olivia is a warning flag. She’s a plush visual semaphore that says to all the developers “Something is wrong with the website source code.” Like most shops (I hope!), we practice continuous integration of our source. The rule is that we must be able to roll out trunk to the production website at any time. Projects are done on branches, so that we don’t have half-done work in progress committed to trunk. If code is committed to trunk, it’s ready to go live at any time.

Olivia is the visual indicator that says “we cannot push to production.” Normally, Olivia sits on my bookshelf, where she is only seen by me. When she is on top of my cube, she can be seen by everyone on our end of the building, including all of us developers. She can’t be missed. If Olivia is out, Job #1 is to fix trunk and get it stable.

Sometimes Olivia comes out intentionally, such as when we start a project merge to trunk. Olivia stays out until we’ve passed all our smoke tests, lest someone push out unproven code. Olivia tells everyone that things have not yet been proven to be stable.

Many organizations have physical semaphores like this. In the Perl 5 world, the person who is allowed to make changes to the master source tree is said to be “holding the patch pumpkin.” Why did the Perl community come up with the name “patch pumpkin”? Chip Salzenberg explains:

David Croy once told me that at a previous job, there was one tape drive and multiple systems that used it for backups. But instead of some high-tech exclusion software, they used a low-tech method to prevent multiple simultaneous backups: a stuffed pumpkin. No one was allowed to make backups unless they had the “backup pumpkin”.

In Chip’s example, the physical semaphore granted privileges. With Olivia, the semaphore of her being atop my cube indicates a problem. In both cases, they’re low-tech but they work.

One Saturday afternoon at the office I had Quinn with me, and she asked why I had her old Olivia stuffed animal at work. I explained Olivia’s purpose, and Quinn thought that was pretty cool. A little later she came back with the picture below. Change “Andy’s computer” to “mission-critical web application that drives all company revenue” and she’s got it right: If Olivia is out, the web app is broken.

The visual semaphore is so simple and unmistakable that a then-eight-year-old understands it.

But Olivia has a second purpose that I didn’t expect. She’s also a conversation starter, and so an ambassador for the department.

We web developers sit near a main entrance to the building, so many people walk by throughout the day. When Olivia is up on my cube, I’m often asked by a passerby why I have a stuffed pig up there. That’s my chance to do a little department PR. I can tell the person about automated testing, continuous integration, and how we make sure that everything is going well.

I’ve become pretty good at cramming everything into a minute. I’ll say something like “The source code to the website gets 100,000 tests run against it every hour on the hour, so we know we can always update it if we need to. If we discover that something is wrong, Olivia tells everyone that something is wrong. She’s like an oil light on your car’s dashboard.”

I’ve found that people are generally interested in hearing about how we do the magic of running the website. Talking to people helps them understand what it is we do as developers, and helps squash the idea that we just sit around all day typing. They find out just what sorts of processes go into keeping business-critical systems running.

Does your organization have similar low-tech tricks as part of your process? Tell me about them in the comments.