If you were to write a PHP script that emits a plain HTML table with one million rows, you would immediately run into a problem trying to load it in a browser: the browser has to parse, lay out, and hold in memory millions of DOM nodes. As a result, it will freeze and crash.
Developers have long known that this is a foolish thing to do. This is why most long tables on the web are paginated. From a usability point of view, a million rows seems absurd, since no user can reasonably view all of them.
So how (and why) did we build a table that supports 1 million rows?
The fundamental strategy behind Nanigans' success is that we create a lot of ads (an "ad", for this discussion, is a creative design plus some targeting parameters that are pushed to an ad network like Facebook). By "a lot", I mean sometimes over a million ads. We use automation to distinguish between ads that deliver good value (see predictive lifetime value) and ads that are likely throwing money away. Good ads survive, and bad ads are killed. We iterate on new ads and rinse and repeat.
Our customers need to be able to see where their money is going, as well as learn insights about what worked best, so that the next ads they create can perform even better.
Our current intelligence tool is effectively a thin wrapper over MySQL. There are a number of "views" (which are basically GROUP BYs) and a number of filters that can be applied (which are WHERE clauses). Your view gets translated into a massive SQL statement, and the results are loaded onto the screen. We render one screen's worth of rows at a time, and every row is a fixed height.
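The view-plus-filters translation might look something like this (a minimal sketch; the interfaces, table, and column names here are illustrative, not the actual Nanigans schema):

```typescript
// Hypothetical view/filter model -- names are illustrative, not the real schema.
interface View {
  groupBy: string;    // dimension to aggregate across, e.g. "campaign_id"
  metrics: string[];  // aggregate expressions, e.g. ["SUM(spend) AS spend"]
}

interface Filter {
  column: string;
  op: "=" | ">" | "<";
  value: string | number;
}

// A view becomes a GROUP BY, each filter becomes a WHERE clause, and the
// whole thing is pasted into one massive SELECT against MySQL.
function buildQuery(view: View, filters: Filter[]): string {
  const where = filters.length
    ? " WHERE " +
      filters
        .map(f => `${f.column} ${f.op} ${typeof f.value === "number" ? f.value : `'${f.value}'`}`)
        .join(" AND ")
    : "";
  return (
    `SELECT ${view.groupBy}, ${view.metrics.join(", ")}` +
    ` FROM ad_stats${where}` +
    ` GROUP BY ${view.groupBy}`
  );
}
```

For example, a "by campaign" view filtered to one network would produce `SELECT campaign_id, SUM(spend) AS spend FROM ad_stats WHERE network = 'facebook' GROUP BY campaign_id`.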
This has worked pretty well for a number of years. Customers dig into the data by looking at it aggregated across one dimension, applying a filter, and then looking at it across a different dimension. This works well for medium-sized data sets, but for huge numbers of ads we started to see serious performance problems (the required joins hit a wall once we had millions and millions of rows). To add to the fun, every "drill" into the data was another query, which compounded our performance issues.
So we set out to build a better intelligence tool for our customers with the following requirements.
We tried a couple of methods that ultimately failed:
1. We tried taking advantage of some nifty HTML5 local storage options. We would sync up record changes on load of the app, do basic aggregation in the browser, and pull performance data from the server as needed. This was a fun experiment, but to sort the data by performance you really need the entire data set, so it failed before we ever tackled the DOM issue.
2. We tried an infinite scroll solution, which was better, but it still suffered from the problems most infinite scroll solutions have: an inadvertent action could wipe out all context of what you were looking at; the scrollbar gave no indication of how much data was actually there; and DOM elements were left in place above the viewport in case you scrolled back up, which meant you still ran out of memory if you made it to the bottom of the table.
3. We tried creating a "row group" widget for each set of 50 rows. In its initial state, a widget was empty and approximated its own height. If the browser's scroll window came close to a widget, the widget would fetch its row data so it was ready. If the widget became visible on screen, it would actually fill itself with row data. If the widget went offscreen again, it would adjust its height estimate and empty itself out.
This was real progress. The bandwidth requirements were low, the scrollbar behaved like Excel's, and we reduced our DOM requirements by over 50x. Unfortunately, with a million rows that still meant 20,000 row groups, and updating each of them on every scroll event was impossible.
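The lifecycle of one of those row-group widgets can be sketched roughly like this (a simplified model with made-up names and a made-up 25px row estimate, not the production code):

```typescript
// Simplified model of approach #3: one widget per 50 rows that only holds
// row data while it is needed. Names and constants are illustrative.
const ROWS_PER_GROUP = 50;
const ESTIMATED_ROW_HEIGHT = 25; // px; refined after a real render

class RowGroup {
  rows: string[] | null = null;         // null = emptied-out placeholder
  measuredHeight: number | null = null; // learned once the group renders

  constructor(readonly firstRow: number) {}

  // Height the scrollbar sees: an approximation until measured for real.
  height(): number {
    return this.measuredHeight ?? ROWS_PER_GROUP * ESTIMATED_ROW_HEIGHT;
  }

  // Called when the scroll window gets close: fetch row data to be ready.
  prefetch(fetchRows: (start: number, count: number) => string[]): void {
    if (this.rows === null) this.rows = fetchRows(this.firstRow, ROWS_PER_GROUP);
  }

  // Called when the group goes offscreen: keep the measured height so the
  // estimate improves, then drop the row data to keep memory flat.
  evict(renderedHeight: number): void {
    this.measuredHeight = renderedHeight;
    this.rows = null;
  }
}
```

The key trade is that an offscreen group costs one placeholder element plus one number (its height), rather than 50 rendered rows.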
Instead of building a massive flat list of "row groups", we tried using the DOM to help us build a search tree. Rather than one "row group" for every 50 rows, we build 10 "row groups" and divide all rows evenly among them. The first "row group" is responsible for rows 1-100,000, the second for rows 100,001-200,000, and so on. Again, the row height estimator played a very important role.
The big difference is that if a "row group" was visible and it covered more than 10 rows, then instead of filling itself with all 100,000 rows, it would subdivide into 10 "row groups" again. Rinse and repeat. By applying this algorithm recursively, we reduced our need for 20,000 DOM elements down to about 60. Finding the row groups that should be shown is now a tree search, and incredibly fast.
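The subdivision step above can be sketched as a pure function: given a group's row range and the visible row window, it returns the set of ranges that need real DOM elements (a simplified model of the idea, assuming a fan-out of 10 and a 10-row leaf size; the real widgets also track heights and fetch data):

```typescript
interface Range { start: number; end: number } // inclusive row indices

const FANOUT = 10;    // children created when a group subdivides
const LEAF_SIZE = 10; // groups covering <= 10 rows render real rows

// Walk the implicit 10-ary tree: any group overlapping the visible window
// subdivides until it is small enough to hold real rows; offscreen groups
// stay as single height-estimating placeholders. Returns one Range per
// DOM element needed.
function visibleGroups(range: Range, window: Range): Range[] {
  const overlaps = range.start <= window.end && range.end >= window.start;
  const size = range.end - range.start + 1;
  if (!overlaps || size <= LEAF_SIZE) return [range];
  const childSize = Math.ceil(size / FANOUT);
  const groups: Range[] = [];
  for (let s = range.start; s <= range.end; s += childSize) {
    const child = { start: s, end: Math.min(s + childSize - 1, range.end) };
    groups.push(...visibleGroups(child, window));
  }
  return groups;
}
```

For a million rows and a one-screen window, each level of the tree contributes roughly 10 elements across about 5 levels, which is where the figure of around 60 DOM elements comes from.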
This reduced the front end of our table problem to a logarithmic solution: increasing our data size 10x only adds about 10 DOM elements to the mix. We are limited only by our backend and by the maximum element height a browser can render (roughly 32 million pixels, give or take).
Of course, we haven't gone into any detail on how we actually render rows. There are many options, including a traditional table element, sized divs, or something more exotic like display:box. This approach is really a way to build a massive list of arbitrary elements; it just happens to work great for a table.
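As one example of the sized-divs option, fixed-height rows can be absolutely positioned inside a single tall container whose height keeps the scrollbar honest (a sketch, assuming a made-up 25px row height):

```typescript
// Fixed-height rows as absolutely positioned divs inside one tall
// container. The 25px row height is an illustrative assumption.
const ROW_HEIGHT = 25; // px

// Inline style for the div rendering row `rowIndex`.
function rowStyle(rowIndex: number): { position: string; top: string; height: string } {
  return {
    position: "absolute",
    top: `${rowIndex * ROW_HEIGHT}px`,
    height: `${ROW_HEIGHT}px`,
  };
}

// The container only needs an explicit total height so the scrollbar
// reflects the full data set, rendered rows or not.
function containerHeight(totalRows: number): number {
  return totalRows * ROW_HEIGHT;
}
```

At 25px per row, a million rows needs a 25-million-pixel container, comfortably under the ~32-million-pixel browser ceiling mentioned above.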
All of this assumes that there is a backend that can serve any number of pre-computed rows incredibly fast, as well as build queries based on arbitrary aggregations. We’ll explore more on this in an upcoming post. Stay tuned!