OffLog: High-volume reporting on a limited system

  • How do you log a lot of data without killing your limited setup?
  • How can you scale the system or change components easily, i.e. have a good architecture?
  • How do you avoid overloading the production database, so that logging doesn't kill the very system you're running?
These are questions I've dealt with a few times, and a pet project for logging on a game I'm building gave me a solution I thought I'd share.

The situation

You've been told to report on "everything", though you haven't been given the kind of back-end (or cash) that Facebook, Google, or Amazon have for systems like MapReduce or Pentaho.

You have at least one server (virtual or real) for "reporting", or at least an allowance of processing time and storage somewhere.

And if the system gets big, you want to be able to swap out the back-end or add more capacity without having to change anything in the application.

There are likely one or more front-end machines doing some kind of scripted work for an application or game (LAMP in this example), and a database back-end for the actual application.

But the main thing you have is a LOT of data to log and process.

Given how much time and effort has gone into building the application and making it efficient, the thought of slowing it down with logging irks you.

Lots of tight code and APC/memcache, and now you're going to slow it all down by logging to a database, dropping everything to disk speed and synchronous calls? Hell no.

And rightly so.

Personally I believe:
  1. Reporting shouldn't go in the "main database".
  2. Reporting shouldn't slow down the application; otherwise, as one grows so does the other, and it all grinds to a halt.
  3. Good architecture at the start helps avert problems along the way; it can not only mitigate some nasty issues, but also helps the long-term maintenance and evolution of the system.

The components

This example is LAMP (and memcache), with the addition of Gearman. That's it.

And if you haven't played with Gearman yet, the sooner you do, the better.

The set up

I'm using two servers, though you can do everything on one and move to two when you need or want to, transparently to the client machines.


  • Cn - Client script uses GearmanClient::doBackground to hand off logging actions to Gearman, so as not to interrupt the main script (see the sketch after this list)
  • Wn - Worker; maintains a persistent connection to memcache on the logging server, to update the action counts
  • 0-5 = 6 * 10 min time slices in memcache (the ring buffer); workers log to the current time slice
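
To make the handoff concrete, here is a minimal sketch of the client side. The 'log_action' job name and the JSON payload shape are my assumptions for illustration, not fixed parts of OffLog:

// Sketch of the Cn handoff; job name and payload shape are assumptions.
$gmc = new GearmanClient();
$gmc->addServer('127.0.0.1', 4730); // the Gearman job server

// doBackground() queues the job and returns immediately,
// so the main script never waits on logging.
$gmc->doBackground('log_action', json_encode(array(
    'userid'   => 1,
    'actionid' => ACTION_USER_LOGON, // action constant from the config
    'detail'   => 'test string',
)));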

So what happens?

The key part is that the Gearman workers are filling a ring buffer in memcache. In this example there are 6 time slices in the hour, i.e. 10-minute sections.

As a logging action is created (or incremented): at minute 6 of the hour it goes in slice 0 (the slices are numbered 0-5), at 15 past the hour it goes in slice 1, and so on.

Much like the wedges of a Trivial Pursuit playing piece.

As one slice fills and the clock ticks on, logging moves to the next slice every 10 minutes.

This is the first place we balance accuracy with "getting the job done": we're pre-aggregating. We can break activity down to 10-minute sections, but beyond that the detail is lost.

A fair price, I think, for getting a good enough idea of what is going on.
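
As a rough sketch of the worker side, here is how the slice arithmetic and counting could look. The key scheme is invented, the 'log_action' job name matches the sketch above, and the 35-minute TTL comes from the collection section below:

// Sketch of a Wn worker; key scheme and job name are assumptions.
$mc = new Memcache();
$mc->pconnect('logserver', 11211); // persistent connection to the logging server

function offlog_count($mc, $userid, $actionid)
{
    $slice = (int) (date('i') / 10); // minute 0-59 -> slice 0-5
    $key   = "offlog:{$slice}:{$userid}:{$actionid}";

    // increment() fails if the key doesn't exist yet; add() creates it with the TTL.
    if ($mc->increment($key) === false && $mc->add($key, 1, 0, 35 * 60) === false) {
        $mc->increment($key); // another worker created the key first
    }
}

$gmw = new GearmanWorker();
$gmw->addServer('127.0.0.1', 4730);
$gmw->addFunction('log_action', function (GearmanJob $job) use ($mc) {
    $data = json_decode($job->workload(), true);
    offlog_count($mc, $data['userid'], $data['actionid']);
});
while ($gmw->work());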

If there are users or actions that must always be logged in full, then have the workers log just those data points separately, as in the sketch below. As it's all going through Gearman, the client doesn't know nor care (and shouldn't) what happens.
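
As a hedged illustration, carrying on from the worker sketch above (the watch lists and the log file destination are assumptions):

// Sketch: forward "watched" users/actions in full detail; destination assumed.
if (in_array($data['userid'], $watchedUsers) || in_array($data['actionid'], $watchedActions)) {
    $line = date('c') . " {$data['userid']} {$data['actionid']} {$data['detail']}\n";
    file_put_contents('/var/log/offlog/watched.log', $line, FILE_APPEND);
}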

The use of Gearman here allows the front-end scripts to hand off logging data and keep running at maximum speed. Gearman then buffers and passes off data as quickly as the workers can process it. At this point, if memcache is on another machine, throughput is down to network speed, though only for logging, not for your main application.

The collection & the register

So, as we've just read, the Gearman workers on the front-end servers are filling up the slices in memcache (with persistent connections, to avoid connection set-up and tear-down costs).

But now we have to collect that data and store it.

Each of the workers does two things: it takes log action data from a front-end script and puts the details in memcache, and it also makes sure that the userid and actionid are recorded in a register.

This is because we can't "SELECT *" from memcache to get all the users that have done something in that 10-minute section.
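
The post doesn't pin down where this register lives; one naive illustration keeps two per-slice lists in memcache (note the get/set pair has a race window under concurrent workers, so treat this purely as a sketch):

// Naive register sketch: remember which userids/actionids appeared in a slice.
// Storage choice is an assumption; the get/set pair has a race window.
function offlog_register($mc, $slice, $userid, $actionid)
{
    foreach (array('users' => $userid, 'actions' => $actionid) as $list => $id) {
        $key  = "offlog:register:{$slice}:{$list}";
        $seen = $mc->get($key);
        if ($seen === false) {
            $seen = array();
        }
        if (!in_array($id, $seen)) {
            $seen[] = $id;
            $mc->set($key, $seen, 0, 35 * 60); // same 35-minute TTL as the counts
        }
    }
}

In this sketch the worker's 'log_action' callback would compute the slice once and call offlog_register() alongside offlog_count().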

This is where cron comes in, running once every slice interval; that's 10 minutes in this example and in the default implementation.

First, it gets the list of users that have done something in that slice, and the list of actions. Then, looping through the users and all the actions, it grabs all the counts from memcache.

There will be some misses here: given we have a list of all the users and all the actions, not every user has performed every action. We'll accept the misses to make sure we collect everything.
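
Putting it together, a sketch of the collector cron could look like this; the table name, credentials, and slice timestamping are assumptions that match the key scheme used above:

// Sketch of the collector cron; names and schema are assumptions.
$mc = new Memcache();
$mc->connect('logserver', 11211);

$db  = new PDO('mysql:host=reporting;dbname=offlog', 'user', 'pass');
$ins = $db->prepare(
    'INSERT INTO action_counts (slice_start, userid, actionid, cnt)
     VALUES (?, ?, ?, ?)'
);

$slice      = ((int) (date('i') / 10) + 5) % 6;   // the slice that just closed
$sliceStart = date('Y-m-d H:i:00', time() - 600); // rough start time of that slice

$users   = $mc->get("offlog:register:{$slice}:users");
$actions = $mc->get("offlog:register:{$slice}:actions");
if ($users === false)   { $users = array(); }
if ($actions === false) { $actions = array(); }

foreach ($users as $userid) {
    foreach ($actions as $actionid) {
        $cnt = $mc->get("offlog:{$slice}:{$userid}:{$actionid}");
        if ($cnt !== false) { // misses are expected; skip them
            $ins->execute(array($sliceStart, $userid, $actionid, $cnt));
        }
    }
}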

After that we have all the data from the slice, and it can be left where it is: the memcache timeout is 35 minutes, so it will have evaporated before the workers come around to that slice again with fresh data.

Also, given the ring buffer size and the time dimensions, each cron run has up to 35 minutes (25 in the worst case, since a counter created at the start of a slice expires 35 minutes after creation) to finish aggregating and pushing the data to the database before it times out in memcache.

Now, in processing-time terms, that is a veritable ice age, and if you run over it there is something else going on!


Integration into an app

Using the PHP client, we instantiate the logging client at the beginning of our main application script, so we can log multiple items throughout the script if needed.


// Set up the client once at the start of the application script.
$loggingClient = new offLogClient($offLogCfg);
// Params: userid, action constant, arbitrary detail string.
$loggingClient->logSimpleAction(1, ACTION_USER_LOGON, 'test string');


The first parameter is the userid. The second is the ACTION_USER_LOGON constant, which is defined in the config along with all the other action constants needed.

The third is an arbitrary string that can carry all manner of extra information, though for the purposes of action counting it is ignored. It is used for a watched action or watched user, where we want specifics.

Aggregation

The aggregation happens in the slices of the buffer, of course, each piece of data being a userid and an actionid (key) pointing to a count (value):
  • Userid X did action Y, Z times in this time slice.
Once the cron job has gone through, collected all the data from the time slice, and put it in the database, you're back in the realm of SQL, where you can generate reports, archive data, mine it, spot trends, and graph it out.
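
For instance, assuming the action_counts table from the collector sketch above, a "busiest actions in the last 24 hours" report is only a few lines:

// Sketch: a simple report over the aggregated slices (schema as assumed above).
$db  = new PDO('mysql:host=reporting;dbname=offlog', 'user', 'pass');
$sql = 'SELECT actionid, SUM(cnt) AS total
          FROM action_counts
         WHERE slice_start >= NOW() - INTERVAL 1 DAY
         GROUP BY actionid
         ORDER BY total DESC';

foreach ($db->query($sql) as $row) {
    echo "{$row['actionid']}: {$row['total']}\n";
}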

On the topic of graphs, I use http://jpgraph.net, and I will be adding a few simple scripts to the code by way of example as well.

Future

As long as you have a LAMP environment, all you need to install is a Gearman server and its PHP support; there is an overview of that here.

The only other thing needed is the code, and I've put that on GitHub here:

https://github.com/JeremyHutchings/OffLog

Obviously this is a work in progress and an idea I'm playing with and developing; now that I've thought it through and written the blog piece, I can get on with coding it up!

Given the use of Gearman, if the logging requirements change in level of detail, volume, etc., swapping in a different back-end system is easily doable.

It's an example of why I think components in a system should exhibit the same two qualities that good OO code does: good encapsulation and loose coupling.

@JeremyHutchings
