JavaScript Trie Performance Analysis

After my last post discussing dictionary lookups in JavaScript, the unanimous consensus seemed to be that utilizing a Trie would result in additional space savings and yield performance benefits.

A Trie is a relatively simple data structure. In its simplest form you’re building a tree-like structure where each path down to a final leaf spells out a complete word. This allows for some very efficient use of file size, since common word prefixes only need to be stored once.
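
For instance, here is roughly what a Trie holding just the words "tea" and "ten" might look like (using 0 to mark the end of a word, the convention used throughout this post) – the shared "te" prefix is only stored once:

var trie = {
	t: {
		e: {
			// "tea" and "ten" share the "t" and "e" branches
			a: 0,
			n: 0
		}
	}
};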

I looked through a number of JavaScript Trie solutions and wasn’t terribly pleased with them. They were either total overkill (including large object structures and functionality far beyond what I needed) or they weren’t compressing the resulting data structure enough. The closest in functionality to what I wanted was wordlistparser, written by Henrik Hinrichs, which creates a simple Trie structure using Node.js.

I’ve dumped my work-in-progress JavaScript Trie Generator up on Github (note that it requires Node.js to run).

Generating a Trie

The basic Trie generation functionality is as follows (note that I use ‘0’ or ‘$:0’ to indicate a complete word):

// Go through all the words in the dictionary
for ( var i = 0, l = words.length; i < l; i++ ) {
	// Get all the letters that we need
	var word = words[i], letters = word.split(""), cur = trie;

	// Loop through the letters
	for ( var j = 0; j < letters.length; j++ ) {
		var letter = letters[j], pos = cur[ letter ];

		// If nothing exists for this letter, create a leaf
		if ( pos == null ) {
			// If it's the end of the word, set a 0,
			// otherwise make an object so we can continue
			cur = cur[ letter ] = j === letters.length - 1 ? 0 : {};
		
		// If a final leaf already exists we need to turn it
		// into an object to continue traversing
		} else if ( pos === 0 ) {
			cur = cur[ letter ] = { $: 0 };

		// Otherwise there is nothing to be set, so continue on
		} else {
			cur = cur[ letter ];
		}
	}
}
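
For reference, here’s how that snippet might be wrapped up into a small self-contained function (the buildTrie name and the sample word list are just for illustration, not code from the repository):

// Illustrative wrapper around the snippet above
function buildTrie( words ) {
	var trie = {};

	for ( var i = 0, l = words.length; i < l; i++ ) {
		var word = words[i], letters = word.split(""), cur = trie;

		for ( var j = 0; j < letters.length; j++ ) {
			var letter = letters[j], pos = cur[ letter ];

			if ( pos == null ) {
				cur = cur[ letter ] = j === letters.length - 1 ? 0 : {};
			} else if ( pos === 0 ) {
				cur = cur[ letter ] = { $: 0 };
			} else {
				cur = cur[ letter ];
			}
		}
	}

	return trie;
}

// { b: { a: { r: { $: 0, s: 0 } } }, f: { o: { o: 0 } }, r: { a: { t: { $: 0, e: 0 } } } }
console.log( buildTrie( [ "bar", "bars", "foo", "rat", "rate" ] ) );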

Optimizing the Structure

While this is fine, I wanted to go a step further. Rather than having single letters at every leaf of the tree, I saw much room for optimization. Dave Ward posted a sample tree structure (https://johnresig.com/blog/dictionary-lookups-in-javascript/#comment-392044) which was rather similar to what was generated by Henrik's code (only Henrik's code used '$:1' instead of 'end:true' to save bytes). Looking at the tree structure provided by Dave, I was not terribly pleased with the number of objects or the resulting code length.

Code posted by Dave Ward

var trie = {
    b: {
        a: {
            r: {
                end: true,
                s: {
                    end: true
                }
            }
        }
    },
    f: {
        o: {
            o: {
                end: true
            }
        }
    },
    r: {
        a: {
            t: {
                end: true,
                e: {
                    end: true
                }
            }
        }
    }
};

I saw that chains of letters with only a single child could be collapsed into a single string: "r", "a", "t" could just become "rat". Additionally, there was no need to have a bulky {$:1} structure just to denote the end of a word when the number 1 could indicate that as well.

Possible Optimization

var trie = {
	bar: {
		$: 1,
		s: 1
	},
	foo: 1,
	rat: {
		$: 1,
		e: 1
	}
};

This change compresses the structure of the Trie down to its bare minimum, using only 3 object literals instead of the previous 12. Using this particular Trie structure makes lookups slightly more challenging (you need to compare against the entire string at the start of the word, rather than just the first character) but it still seems to perform well enough.

The code for optimization:

// Optimize a Trie structure
function optimize( cur ) {
	var num = 0;

	// Go through all the leaves in this branch
	for ( var node in cur ) {
		// If the leaf has children
		if ( typeof cur[ node ] === "object" ) {
			// Continue the optimization even deeper
			var ret = optimize( cur[ node ] );

			// The child leaf only had one child
			// and was "compressed" as a result
			if ( ret ) {
				// Thus remove the current leaf
				delete cur[ node ];
				
				// Remember the new name
				node = node + ret.name;
				
				// And replace it with the revised one
				cur[ node ] = ret.value;
			}
		}

		// Keep track of how many leaf nodes there are
		num++;
	}

	// If only one leaf is present, compress it
	if ( num === 1 ) {
		return { name: node, value: cur[ node ] };
	}
}
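
To make the effect concrete, here’s what the optimizer does to the small five-word Trie built earlier (using the illustrative buildTrie wrapper from above):

var trie = buildTrie( [ "bar", "bars", "foo", "rat", "rate" ] );

// optimize() mutates the structure in place
optimize( trie );

// Single-child chains are now collapsed into strings:
// { bar: { $: 0, s: 0 }, foo: 0, rat: { $: 0, e: 0 } }
console.log( trie );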

There’s still room for some additional improvement, though. I looked at the technique used by a directed acyclic word graph (DAWG) and realized that there was a minor tweak that could improve file size even more: reducing the number of common suffixes.

For example, if you had the following words: sliced, slicer, slices, diced, dicer, dices – you would note that they have common endings (d, r, s) that could be reduced. I added another simple improvement to my Trie implementation that takes this into account.

var trie = {
	slice: 1,
	dice: 1,
	$: [ 0, { d: 0, r: 0, s: 0 } ]
};

While it’s not as impressive with only two root words – it ends up working surprisingly well when you have over 100,000 words, reducing file size even more. The full code for finding and reducing the common suffixes can be found on Github.
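
The core idea behind that code can be sketched roughly as follows (this is an illustrative version, not the code from the repo): serialize every sub-object, find the ones that occur more than once, move each of those into a root-level $ array, and replace every occurrence with its index into that array (index 0 is reserved so that a real suffix index is always truthy).

// Rough sketch of common-suffix reduction (not the actual repo code)
function reduceSuffixes( trie ) {
	var counts = {};

	// First pass: count how often each serialized sub-object occurs
	(function count( cur ) {
		for ( var node in cur ) {
			if ( typeof cur[ node ] === "object" ) {
				var key = JSON.stringify( cur[ node ] );
				counts[ key ] = ( counts[ key ] || 0 ) + 1;
				count( cur[ node ] );
			}
		}
	})( trie );

	// Index 0 is reserved so that real suffix indices are always truthy
	var suffixes = [ 0 ], indexByKey = {};

	// Second pass: swap each repeated sub-object for an index into $
	(function replace( cur ) {
		for ( var node in cur ) {
			if ( typeof cur[ node ] === "object" ) {
				var key = JSON.stringify( cur[ node ] );

				if ( counts[ key ] > 1 ) {
					if ( !( key in indexByKey ) ) {
						indexByKey[ key ] = suffixes.length;
						suffixes.push( cur[ node ] );
					}
					cur[ node ] = indexByKey[ key ];
				} else {
					replace( cur[ node ] );
				}
			}
		}
	})( trie );

	trie.$ = suffixes;
	return trie;
}

// Applied to the optimized Trie for the six example words above this yields:
// { slice: 1, dice: 1, $: [ 0, { d: 0, r: 0, s: 0 } ] }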

Finding a Word

Finding a word in this Trie requires a little more work than a traditional Trie search:

function findTrieWord( word, cur ) {
	// Get the root to start from
	cur = cur || dict;
	
	// Go through every leaf
	for ( var node in cur ) {
		// If the start of the word matches the leaf
		if ( word.indexOf( node ) === 0 ) {
			// If it's a non-zero number (an index into the suffix array)
			var val = typeof cur[ node ] === "number" && cur[ node ] ?
				// Substitute in the removed suffix object
				dict.$[ cur[ node ] ] :

				// Otherwise use the current value
				cur[ node ];

			// If this leaf finishes the word
			if ( node.length === word.length ) {
				// Return 'true' only if we've reached a final leaf
				return val === 0 || val.$ === 0;

			// Otherwise continue traversing deeper
			// down the tree until we find a match
			} else {
				return findTrieWord( word.slice( node.length ), val );
			}
		}
	}
	
	return false;
}

Note that we have to do the indexOf and slice operations on the word in order to get at the proper substrings that we need for comparison. Additionally, we have to make sure that we substitute in the suffix objects that we’ve removed previously. In all, it’s likely to add some level of performance overhead compared with a traditional Trie, but considering how fast it already is, that hit ends up being negligible.
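
For example, running lookups against the small suffix-reduced structure shown earlier works like this (dict being the global that findTrieWord falls back to):

var dict = {
	slice: 1,
	dice: 1,
	$: [ 0, { d: 0, r: 0, s: 0 } ]
};

findTrieWord( "sliced" ); // true  ("slice" plus the shared "d" suffix)
findTrieWord( "dices" );  // true
findTrieWord( "slice" );  // false (only the suffixed forms were added)
findTrieWord( "dig" );    // false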

Further File Size Optimization

I made two final tweaks to save on file size. First, I remove all the quotes in the file. Yes, this makes it no longer valid JSON, but that’s OK for my particular situation (I’m loading it as a JavaScript file). Second, I go through and re-quote all the reserved words that are used as object property names. Thus { for: 0, in: 0, zoo: 0 } becomes { 'for': 0, 'in': 0, zoo: 0 }. This final result makes the best use of file size while still working properly in all browsers.
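
The actual build step is on Github, but the general shape of those two tweaks is something like the following sketch (the regular expressions and the reserved-word list here are illustrative only; the real list is longer):

// Illustrative sketch: strip the quotes from property names, then
// re-quote the ones that are JavaScript reserved words
var js = JSON.stringify( trie )
	// Safe here because the Trie contains no string values
	.replace( /"([^"]+)":/g, "$1:" )
	// A partial list of reserved words that must stay quoted
	.replace( /\b(do|if|in|for|new|try|var|case|else|this|void|with)(?=:)/g, "'$1'" );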

File Size Analysis

The biggest win provided by a Trie should be its decrease in file size. The plain string dictionary contained 112,429 words and clocked in at 916KB, so it seemed like there was plenty of room for improvement.

I did multiple levels of comparison: the basic string version of the dictionary (that I discussed in the last blog post), the simple Trie (no optimizations), the optimized Trie, and the suffix-optimized Trie.

[Chart: Dictionary File Size Comparison]

The result is quite interesting. The unoptimized, uncompressed Trie is actually slightly larger than the plain dictionary, but it shows some solid benefits once compressed. The further optimizations yield even better results. The final suffix-optimized, gzipped Trie comes in at 155KB, compared to 276KB for the gzipped plain string dictionary. Not bad for over 110,000 words!

Load Speed Analysis

If I were only interested in the download size of the dictionary I would’ve stopped there. However, I want to make sure that no other aspect of the application is hurt by making this switch. Specifically, I want to make sure that loading the dictionary (converting it from a serialized JS string into a usable form) is still fast. When I do an Ajax request to get the dictionary, or when I load it from localStorage, the resulting overhead of parsing all that code shouldn’t cripple the application.
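
To be clear about what that load step involves, it’s something along these lines (an illustrative sketch, not necessarily the exact code my test harness uses):

// Illustrative load step: turn the serialized dictionary text back
// into a usable JavaScript object
var raw = localStorage.getItem( "dict" ); // or the Ajax response text

// The dictionary is JavaScript syntax (not strict JSON), so it has to
// be evaluated rather than JSON.parse()d
var dict = new Function( "return " + raw )();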

[Chart: Dictionary Load Speed Comparison]

Unfortunately this is where we start to see some serious problems with using anything but a plain string. In the above chart I include the plain string dictionary and the Trie – but also the hash-lookup technique that I described in my last post.

Note that this load process is going to happen every time the application starts (for example, re-parsing the serialized Trie into a usable JavaScript object).

In the above chart I’m only analyzing the code in one of the fastest JavaScript environments in existence: an unhindered Node.js running on a Core i7 processor – and even then loading the Trie dictionary takes over 100ms. I fear how long it would take on a mobile phone or in an older Internet Explorer browser. If you’re wondering why this might be the case, think about what’s happening here: you’re loading and evaluating over 506KB (after being un-gzipped) of JavaScript code. This will bog down any system – especially a phone.

To drive the point home I ran some tests in a few browsers:

[Chart: Load Speed of Trie]

Sure enough – even one of the fastest mobile browsers (Mobile Safari 4.3) takes over 4 seconds to load the Trie data structure! In fact it crashed the browser on a few test runs.

Considering that Gmail delays nearly all execution of its JavaScript code on mobile devices, requiring 506KB of JavaScript to be parsed and executed up front becomes untenable.

Search Speed Analysis

I didn’t want to stop there, though. There were (justified) concerns in reply to the previous post regarding the lookup technique being employed to find words in the plain string dictionary (it was using a simple indexOf operation). This is a linear search through the string and will undoubtedly be much slower than using a hash or a Trie lookup.
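
For reference, the plain-string lookup amounts to roughly the following (a sketch; the exact delimiter handling in the previous post may differ):

// The dictionary is one long delimited string of words...
var dictString = " " + words.join( " " ) + " ";

// ...and every lookup is a linear scan through that string
function findStringWord( word ) {
	return dictString.indexOf( " " + word + " " ) > -1;
}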

[Chart: Dictionary Search Speed Comparison]

I ran some tests (for both finding valid words and invalid words) and confirmed that suspicion: doing a linear search across a plain string is fundamentally much slower than using a hash or a Trie. However, note the time scale. In the absolute worst case, searching the plain string dictionary for a word yielded lookup times of less than one millisecond. Considering that in my game dictionary lookups only happen every couple of seconds (and not thousands of times a second), I can get away with this performance hit. Presumably if you were building a game that required a multitude of dictionary lookups you might want to go with a faster hash or Trie technique.

Memory Usage Analysis

Finally, I decided to take a look at memory usage. This was one last factor that I needed to take into account with regard to mobile. Using too much memory will cause a web site to reload if the user switches tabs on an iOS device – and will likely cause similar problems on other mobile devices.

[Chart: Dictionary Memory Usage Comparison]

Once again, keeping it simple yields some impressive gains. Using just a plain string (even though it’s 916KB) still ends up using less than half the amount of memory that a Trie structure does – and almost 10x less than a naïve hash.

Raw Memory Usage Data:

(I loaded the dictionary 100 times in Node.js in each case in order to try and get an accurate reading from OS X’s Activity Monitor application.)

[Screenshot: baseline memory usage of an empty Node.js application]

[Screenshot: memory usage of the plain string technique]

(Note that even though I loaded the dictionary 100 times, it appears to have been smart and cut down on duplicate memory usage. I even tried to inject some randomness into the dictionary string but still ended up with similar results. I ended up counting this result only once, rather than dividing by 100 as with the other results.)

[Screenshot: memory usage of the hash technique]

[Screenshot: memory usage of the Trie technique]

Conclusion

So while utilizing a Trie structure could yield some significant file size and word-lookup performance improvements, it comes at a cost that is unacceptable for an application attempting to target older browsers or mobile devices (both in terms of initial load time and memory usage).

I hope the Trie JavaScript code that I’ve written can be useful to some of you either way.

If anything, though, I hope this communicates that just because something sounds better on paper doesn’t mean it will be in practice: when you’re deciding which technologies to use, you really should do some careful analysis of the solution before moving ahead.

Raw Data

The raw data for all my tests (which can be found in my Trie.js Github repo) is as follows:

Plain String
--------------
Normal: 916KB
Gzipped: 276KB
Build Speed: 1 (0.01ms per)
Find Speed: 47331 (0.42ms per)
Not Find Speed: 71902 (0.64ms per)
Private Memory: 10.3MB (1.2MB per)

Hash
--------------
Normal: 916KB
Gzipped: 276KB
Hash Build Speed: 6638 (66.4ms per)
Hash Find Speed: 19 (0.0002ms per)
Hash Not Find Speed: 34 (0.0003ms per)
Private Memory: 1014MB (10.05MB per)

Trie
--------------
Simple Trie: 1.0MB
Gzipped: 184KB

Optimized Trie: 761KB
Gzipped: 168KB

Suffix Optimized Trie: 506KB
Gzipped: 155KB
Build Speed: 11178 (111.8ms per)
Find Speed: 413 (0.004ms per)
Not Find Speed: 552 (0.005ms per)
Private Memory: 299.2MB (2.9MB per)

Update: Early this morning Mike Koss posted a follow-up to my previous blog post with a compressed Trie implementation. This is quite interesting and will certainly result in a smaller file size than what I’ve done. However, there’s a problem: the result will end up taking longer to load than the plain JavaScript structure that I employed. Since my solution is already written using JavaScript syntax, it loads as fast as the JavaScript engine can parse the code, whereas in Mike’s solution the corresponding JavaScript structure needs to be re-built from the text itself. Theoretically, if Mike comes up with a solution that keeps the compressed file size but leaves the data in its original string form, that would be the best of all worlds. I’ll be curious to see what he comes up with!

Posted: March 17th, 2011

