
How to Create a Table That Can Display One Million Records

If you were to write a PHP script that generates a plain HTML table with one million rows, you would immediately run into a few problems trying to load it in a browser:

  • It requires downloading hundreds of megabytes (depending on row size)
  • The browser will create millions and millions of DOM elements and run out of memory

As a result, it will freeze and crash.

Developers have long known that this is a foolish thing to do, which is why most long tables on the web are paginated.  From a usability point of view, a million rows seems absurd, since no user could ever reasonably view all of them.

So how (and why) did we build a table that supports 1 million rows?

Background

The fundamental strategy behind Nanigans' success is that we create a lot of ads (an “ad”, for this discussion, is a creative design plus some targeting parameters that are pushed to an ad network like Facebook).  By “a lot”, I mean sometimes over a million ads.  We use automation to distinguish ads that have good value (see predictive lifetime value) from ads that are likely throwing money away.  Good ads survive, and bad ads are killed.  We iterate on new ads, rinse, and repeat.

Our customers need to be able to see where their money is going, as well as learn what worked best, so that the next ads they create perform even better.

Current solution

Our current intelligence tool is effectively a thin wrapper over MySQL. There are a number of “views” (which are basically GROUP BYs) and a number of filters that can be applied (which are WHERE clauses).  Your view gets translated into a massive SQL statement, and the results are loaded onto the screen.  We render one screen worth of rows at a time, and every row is a fixed height.
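To make that concrete, here is a minimal sketch of how a “view” plus a set of filters might be translated into one aggregate SQL statement.  It is an illustration only: the table and column names (ad_stats, campaign_id, country, spend, and friends) are invented for the example and are not our actual schema.

    // Hypothetical sketch: turning a "view" (a GROUP BY dimension) and a set
    // of filters (WHERE clauses) into one aggregate SQL statement.
    // Table and column names are invented for the example.

    interface Filter {
      column: string;                       // e.g. "country"
      op: '=' | '>' | '<';
      value: string | number;
    }

    function buildViewQuery(groupBy: string, filters: Filter[]) {
      // Filter values become bind parameters; in real code the groupBy and
      // column names would be whitelisted, never interpolated from user input.
      const where = filters.length
        ? 'WHERE ' + filters.map(f => `${f.column} ${f.op} ?`).join(' AND ')
        : '';

      const sql = `
        SELECT ${groupBy},
               SUM(impressions) AS impressions,
               SUM(clicks)      AS clicks,
               SUM(spend)       AS spend
        FROM   ad_stats
        ${where}
        GROUP BY ${groupBy}
        ORDER BY spend DESC
        LIMIT 50`;                          // one screen worth of fixed-height rows

      return { sql, params: filters.map(f => f.value) };
    }

    // Example: spend aggregated by campaign, US traffic only.
    // buildViewQuery('campaign_id', [{ column: 'country', op: '=', value: 'US' }]);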

This approach has worked pretty well for a number of years. Customers dig into data by looking at it aggregated across one dimension, applying a filter, and then looking at it across a different dimension.  That works fine for medium-sized data sets, but for huge numbers of ads we started to see serious performance problems (the required joins hit a wall once we had millions and millions of rows).  To add to the fun, every “drill” into the data was another query, which compounded our performance issues.

Requirements

So we set out to build a better intelligence tool for our customers, with the following requirements:

  • Drilling into data cut a variety of ways should be incredibly easy, ideally in a multi-level table (much like a pivot table).
  • Results need to appear in less than a second, regardless of Internet connection (phones!).
  • If you happen to get 1,000,000 rows at any level, it shouldn’t break the browser.
  • Cells have to support variable height, and possibly complex contents (like a preview of an ad).
  • Scrolling should behave like a webpage, and should not lock to the top of each cell.  This is necessary if you have a really tall cell.
  • Changing the sorts on results needs to be incredibly fast.

We tried a few approaches that ultimately failed:

1. Although there are potentially a million records, much of what makes them up is repeated.  Creating a lookup map for the long strings reduces bandwidth dramatically, at the expense of transferring the lookup map in the first place.  Unfortunately, the lookup map for some customers hit 32MB, which is too much data to download before any records are displayed.  And this didn’t even begin to address the DOM issues.  (A sketch of the lookup-map idea appears after this list.)

2. We tried taking advantage of some nifty HTML5 local storage options.  We would sync record changes on load of the app, do basic aggregation in the browser, and pull performance data from the server as needed.  This was a fun experiment, but to sort the data by performance you really need the entire data set.  Again, this failed before we even tackled the DOM issue.

3. We tried an infinite scroll solution, which was better, but it still suffered from the problems most infinite scroll solutions have: an inadvertent action could remove all context of what you were looking at, the scrollbar gave no indication of how much data was actually there, and DOM elements were left in place above the viewport in case you scrolled back, which means you would still run out of memory if you made it to the bottom of the table.

4. We tried creating a “row group” widget for each set of 50 rows.  In its initial state, a row group is empty and approximates its own height.  If the browser’s scroll window came close to the widget, it would fetch its row data so that it was ready.  If the widget became visible on screen, it would fill itself with row data.  If it went offscreen again, it would adjust its height estimate and empty itself out.

This was real progress.  The bandwidth requirements were low, the scrollbar behaved like Excel’s, and we cut our DOM requirements by over 50x.  Unfortunately, with a million rows that still meant 20,000 row groups, and updating each of those on every scroll event was impossible.
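As promised above, here is a minimal sketch of the lookup-map idea from the first failed attempt: long, repeated strings are replaced by small integer keys, and the dictionary is shipped once alongside the compacted rows.  The field names (adName, imageUrl, spend) are invented for the example.

    // Minimal sketch of the lookup-map idea: long, frequently repeated strings
    // (ad names, image URLs, ...) are replaced by small integer keys so each
    // row on the wire stays tiny. Field names are invented for the example.

    interface Row { adName: string; imageUrl: string; spend: number; }

    function compact(rows: Row[]) {
      const dict: string[] = [];
      const index = new Map<string, number>();

      const keyFor = (s: string): number => {
        let k = index.get(s);
        if (k === undefined) {
          k = dict.length;
          dict.push(s);
          index.set(s, k);
        }
        return k;
      };

      // Each row becomes [adNameKey, imageUrlKey, spend].
      const packed = rows.map(r => [keyFor(r.adName), keyFor(r.imageUrl), r.spend]);
      return { dict, packed };
    }

    function expand(dict: string[], packed: number[][]): Row[] {
      return packed.map(([n, u, spend]) => ({ adName: dict[n], imageUrl: dict[u], spend }));
    }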

The Solution

Instead of building a massive flat list of “row groups”, we used the DOM to help us build a search tree.  Rather than one “row group” for every 50 rows, we build just 10 “row groups” and divide all of the rows evenly among them.  With a million rows, the first “row group” is responsible for rows 1-100,000, the second for rows 100,001-200,000, and so on.  Again, the row height estimator plays a very important role.

The big difference is that if a “row group” is visible and covers more than 10 rows, instead of filling itself with all 100,000 rows it subdivides into 10 smaller “row groups”.  Rinse and repeat.  By applying this algorithm recursively, we reduced our need for 20,000 DOM elements down to 60, and finding the row groups that should be shown is now a tree search, which is incredibly fast.

This reduces the front end of our table problem to a logarithmic one.  Increasing our data size 10x only adds about 10 DOM elements to the mix.  We are now limited only by our backend and by the maximum height of a page that the browser can render (roughly 32 million pixels).
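To make the recursion concrete, here is a rough, simplified sketch of the row-group tree.  It is an illustration of the idea described above, not our production widget: names like RowGroup, FANOUT, and fetchRows are invented, and the height bookkeeping is deliberately naive.  Each group knows its row range, shows an estimated-height placeholder while offscreen, and, once visible, either subdivides into ten children or renders its rows if it covers ten or fewer.

    const FANOUT = 10;                 // children per row group
    const EST_ROW_HEIGHT = 30;         // px guess used while a group is empty

    class RowGroup {
      private children: RowGroup[] = [];
      readonly el = document.createElement('div');

      constructor(private start: number, private end: number) {
        // While empty, approximate our height so the scrollbar stays honest.
        this.el.style.height = `${(end - start) * EST_ROW_HEIGHT}px`;
      }

      // Called on every scroll with the viewport range in page coordinates.
      update(viewTop: number, viewBottom: number): void {
        const { top, bottom } = this.pageBounds();
        const visible = bottom >= viewTop && top <= viewBottom;

        if (!visible) {
          // Offscreen: collapse back to an estimated-height placeholder.
          this.children = [];
          this.el.replaceChildren();
          this.el.style.height = `${(this.end - this.start) * EST_ROW_HEIGHT}px`;
          return;
        }

        if (this.end - this.start <= FANOUT) {
          this.renderRows();           // small enough: show the real rows
        } else {
          if (this.children.length === 0) this.subdivide();
          for (const child of this.children) child.update(viewTop, viewBottom);
        }
      }

      private subdivide(): void {
        this.el.replaceChildren();
        this.el.style.height = 'auto';
        const span = Math.ceil((this.end - this.start) / FANOUT);
        for (let s = this.start; s < this.end; s += span) {
          const child = new RowGroup(s, Math.min(s + span, this.end));
          this.children.push(child);
          this.el.appendChild(child.el);
        }
      }

      private renderRows(): void {
        if (this.el.childElementCount > 0) return;   // already filled
        this.el.style.height = 'auto';
        for (const text of fetchRows(this.start, this.end)) {
          const row = document.createElement('div');
          row.textContent = text;
          this.el.appendChild(row);
        }
      }

      private pageBounds() {
        const r = this.el.getBoundingClientRect();
        return { top: r.top + window.scrollY, bottom: r.bottom + window.scrollY };
      }
    }

    // Stand-in for the real backend call that returns rendered row data.
    function fetchRows(start: number, end: number): string[] {
      return Array.from({ length: end - start }, (_, i) => `row ${start + i}`);
    }

    // Usage: one root group covering a million rows, updated on scroll.
    // const root = new RowGroup(0, 1_000_000);
    // document.body.appendChild(root.el);
    // window.addEventListener('scroll', () =>
    //   root.update(window.scrollY, window.scrollY + window.innerHeight));

With a fanout of 10, a million rows need only log10(1,000,000) = 6 levels of subdivision, so roughly 60 row groups exist at any moment no matter where you scroll.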

Conclusion

Of course, we didn’t go into any details on how we actually render rows.  There are many solutions to this including a traditional table element, sized divs, or something more extravagant like display:box.  This approach is really a way to build a massive list of arbitrary elements and it just happens to work great for a table.

All of this assumes that there is a backend that can serve any number of pre-computed rows incredibly fast, as well as build queries based on arbitrary aggregations.  We’ll explore more on this in an upcoming post. Stay tuned!
