
Data in different resolutions


The normal way to do this on a low-latency data warehouse application is to have a partitioned table with a leading partition containing something that can be updated quickly (i.e. without having to recalculate aggregates on the fly) but with trailing partitions backfilled with the aggregates. In other words, the leading partition can use a different storage scheme to the trailing partitions.

Most commercial and some open-source RDBMS platforms (e.g. PostgreSQL) can support partitioned tables, which can be used to do this type of thing one way or another. How you populate the database from your logs is left as an exercise for the reader.

Basically, the structure of this type of system goes something like this:

  • You have a table partitioned on some sort of date or date-time value, partitioned by hour, day or whatever grain seems appropriate. The log entries get appended to this table.

  • As the time window slides off a partition, a periodic job indexes or summarises it and converts it into its 'frozen' state. For example, a job on Oracle may create bitmap indexes on that partition or update a materialized view to include summary data for that partition.

  • Later on, you can drop old data, summarize it or merge partitions together.

  • As time goes on, the periodic job back-fills behind the leading edge partition. The historical data is converted to a format that lends itself to performant statistical queries while the front edge partition is kept easy to update quickly. As this partition doesn't have so much data, querying across the whole data set is relatively fast.
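
As a rough illustration of the table layout (a sketch only, using PostgreSQL 10+ declarative partitioning; the table, partition and column names are all hypothetical):

-- Hypothetical log table, partitioned by day (PostgreSQL 10+ declarative partitioning)
CREATE TABLE activity_log (
    log_time   timestamptz NOT NULL,
    user_id    bigint,
    button_id  int,
    detail     text
) PARTITION BY RANGE (log_time);

-- Leading partition: a plain heap that is cheap to append to
CREATE TABLE activity_log_2009_06_15 PARTITION OF activity_log
    FOR VALUES FROM ('2009-06-15') TO ('2009-06-16');

-- Trailing partition that has slid out of the window: once it is no longer
-- being written to, the periodic job indexes and/or summarises it
CREATE TABLE activity_log_2009_06_14 PARTITION OF activity_log
    FOR VALUES FROM ('2009-06-14') TO ('2009-06-15');
CREATE INDEX ON activity_log_2009_06_14 (button_id, log_time);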

The exact nature of this process varies between DBMS platforms.

For example, table partitioning on SQL Server is not all that good, but this can be done with Analysis Services (an OLAP server that Microsoft bundles with SQL Server). This is done by configuring the leading partition as pure ROLAP (the OLAP server simply issues a query against the underlying database) and then rebuilding the trailing partitions as MOLAP (the OLAP server constructs its own specialised data structures, including persistent summaries known as 'aggregations'). Analysis Services can do this completely transparently to the user: it can rebuild a partition in the background while the old ROLAP one is still visible to the user, and once the build is finished it swaps the new partition in. The cube is available the whole time, with no interruption of service to the user.

Oracle allows partition structures to be maintained independently, so indexes can be constructed per partition, or a partition of a materialized view can be built. With query rewrite, the query optimiser in Oracle can work out that aggregate figures requested from a base fact table can instead be obtained from a materialized view. The query will read the aggregate figures from the materialized view where partitions are available and from the leading edge partition where they are not.
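
A minimal sketch of the materialized-view side of this, with a fact table and summary grain invented purely for illustration:

-- Hypothetical Oracle materialized view holding daily click counts.
-- ENABLE QUERY REWRITE lets the optimiser answer matching aggregate
-- queries against activity_log from this summary instead.
CREATE MATERIALIZED VIEW mv_clicks_daily
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE
AS
SELECT TRUNC(log_time) AS log_day,
       button_id,
       COUNT(*)        AS click_count
FROM   activity_log
GROUP  BY TRUNC(log_time), button_id;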

PostgreSQL may be able to do something similar, but I've never looked into implementing this type of system on it.
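
For what it's worth, a rough and untested sketch of the same idea on PostgreSQL (names again hypothetical; assumes a version recent enough to have materialized views and REFRESH ... CONCURRENTLY). PostgreSQL has no automatic query rewrite, so reports would have to query the summary explicitly:

-- Hypothetical daily summary; reports query mv_clicks_daily directly.
CREATE MATERIALIZED VIEW mv_clicks_daily AS
SELECT date_trunc('day', log_time) AS log_day,
       button_id,
       COUNT(*)                    AS click_count
FROM   activity_log
GROUP  BY date_trunc('day', log_time), button_id;

-- Refreshed by the periodic job; CONCURRENTLY avoids blocking readers
-- but requires a unique index on the materialized view.
CREATE UNIQUE INDEX ON mv_clicks_daily (log_day, button_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_clicks_daily;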

If you can live with periodic outages, something similar can be done explicitly by doing the summarisation yourself and setting up a view over the leading and trailing data. This allows this type of analysis to be done on a system that doesn't support partitioning transparently. However, the system will have a transient outage as the view is rebuilt, so you could not really do this during business hours; in most cases it would run overnight.
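
A sketch of that explicit approach, with hypothetical table names: a pre-summarised history table for the trailing data plus the raw rows for the current day, stitched together by a view.

-- clicks_history holds pre-aggregated trailing data (maintained by the
-- overnight job); activity_log holds today's raw rows.
CREATE VIEW clicks_all AS
SELECT log_day, button_id, click_count
FROM   clicks_history
UNION ALL
SELECT CAST(log_time AS date) AS log_day,
       button_id,
       COUNT(*)               AS click_count
FROM   activity_log
WHERE  log_time >= CURRENT_DATE
GROUP  BY CAST(log_time AS date), button_id;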

Edit: Depending on the format of the log files or what logging options are available to you, there are various ways to load the data into the system. Some options are:

  • Write a script using your favourite programming language that reads the data, parses out the relevant bits and inserts it into the database. This could run fairly often, but you have to have some way of keeping track of where you are in the file. Be careful of locking, especially on Windows. Default file locking semantics on Unix/Linux allow you to do this (this is how tail -f works), but the default behaviour on Windows is different; the logger and the reader would have to be written to play nicely with each other.

  • On a unix-oid system you could write your logs to a pipe and have a process similar to the one above reading from the pipe. This would have the lowest latency of all, but failures in the reader could block your application.

  • Write a logging interface for your application that directly populates the database, rather than writing out log files.

  • Use the bulk load API for the database (most if not all have this type of API available) and load the logging data in batches. Write a similar program to the first option, but use the bulk-load API. This would use fewer resources than populating it line by line, but has more overhead to set up the bulk loads. It would be suitable for a less frequent load (perhaps hourly or daily) and would place less strain on the system overall.
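
As an illustration only, a bulk load of a parsed log batch might look like this on PostgreSQL (the table and file path are hypothetical; BULK INSERT on SQL Server or SQL*Loader on Oracle play the same role):

-- Hypothetical: load an hourly batch of parsed log lines in one statement
-- instead of inserting row by row.
COPY activity_log (log_time, user_id, button_id, detail)
FROM '/var/log/app/parsed/clicks_batch.csv'
WITH (FORMAT csv);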

In most of these scenarios, keeping track of where you've been becomes a problem. Polling the file to spot changes might be infeasibly expensive, so you may need to set the logger up so that it works in a way that plays nicely with your log reader.

  • One option would be to change the logger so it starts writing to a different file every period (say every few minutes). Have your log reader start periodically and load any new files that it hasn't already processed. For this to work, the naming scheme for the files should be based on the time, so the reader knows which file to pick up. Dealing with a file still in use by the application is more fiddly (you would then need to keep track of how much has been read), so you would want to read only the files from completed periods.

  • Another option is to move the file and then read it. This works best on filesystems that behave like Unix ones, but should work on NTFS. You move the file, then read it at leisure. However, it requires the logger to open the file in create/append mode, write to it and then close it - not keep it open and locked. This is definitely Unix behaviour - the move operation has to be atomic. On Windows you may really have to stand over the logger to make this work.


Take a look at RRDTool. It's a round-robin database. You define the metrics you want to capture, and you also define the resolution at which you store them.

For example, you can specify that for the last hour you keep every second's worth of information; for the past 24 hours, every minute; for the past week, every hour; and so on.

It's widely used to gather stats in systems such as Ganglia and Cacti.


When it comes to slicing and aggregating data (by time or something else), the star schema (Kimball star) is a fairly simple yet powerful solution. Suppose that for each click we store time (to second resolution), the user's info, the button ID, and the user's location. To enable easy slicing and dicing, I'll start with pre-loaded lookup tables for properties of objects that rarely change -- so-called dimension tables in the DW world.

(diagram pagevisit2_model_02: the dimension tables)

The dimDate table has one row for each day, with a number of attributes (fields) that describe a specific day. The table can be pre-loaded for years in advance, and should be updated once per day if it contains fields like DaysAgo, WeeksAgo, MonthsAgo, YearsAgo; otherwise it can be “load and forget”. dimDate allows for easy slicing by date attributes, like

WHERE [YEAR] = 2009 AND DayOfWeek = 'Sunday'

For ten years of data the table has only ~3650 rows.
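
A minimal sketch of dimDate, limited to the fields used in the examples below (the exact attribute list and types are up to you):

-- Sketch only: one row per calendar day
CREATE TABLE dimDate (
    DateKey    int          NOT NULL PRIMARY KEY,  -- surrogate key, e.g. 20091231
    FullDate   date         NOT NULL,
    [Year]     int          NOT NULL,
    DayOfWeek  varchar(10)  NOT NULL,              -- 'Sunday', 'Monday', ...
    DaysAgo    int          NULL,                  -- maintained by the daily update job
    WeeksAgo   int          NULL,
    MonthsAgo  int          NULL,
    YearsAgo   int          NULL
);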

The dimGeography table is preloaded with the geographic regions of interest -- the number of rows depends on the “geographic resolution” required in reports. It allows for data slicing like

WHERE Continent = 'South America'

Once loaded, it is rarely changed.

For each button of the site, there is one row in the dimButton table, so a query may have

WHERE PageURL = 'http://…/somepage.php'

The dimUser table has one row per registered user; it should be loaded with a new user's info as soon as the user registers, or at least the new user's info should be in the table before any other transaction for that user is recorded in fact tables.

To record button clicks, I’ll add the factClick table.

(diagram pagevisit2_model_01: the factClick table and its dimensions)

The factClick table has one row for each click of a button from a specific user at a point in time. I have used TimeStamp (second resolution), ButtonKey and UserKey in a composite primary key to filter out clicks from a specific user arriving faster than one per second. Note the Hour field: it contains the hour part of the TimeStamp, an integer in the range 0-23, to allow for easy slicing per hour, like

WHERE [HOUR] BETWEEN 7 AND 9
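
A sketch of factClick along those lines (types are illustrative; the composite key is the one described above):

-- Sketch only: at most one row per user, button and second.
-- UserKey, ButtonKey, GeographyKey and DateKey are foreign keys to
-- dimUser, dimButton, dimGeography and dimDate respectively.
CREATE TABLE factClick (
    TimeStamp     datetime NOT NULL,  -- click time, second resolution
    UserKey       int      NOT NULL,
    ButtonKey     int      NOT NULL,
    GeographyKey  int      NOT NULL,
    DateKey       int      NOT NULL,
    [Hour]        int      NOT NULL,  -- 0-23, hour part of TimeStamp
    CONSTRAINT PK_factClick PRIMARY KEY (TimeStamp, ButtonKey, UserKey)
);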

So, now we have to consider:

  • How to load the table? Periodically -- maybe every hour or every few minutes -- from the weblog using an ETL tool, or, for lower latency, using some kind of event-streaming process.
  • How long to keep the information in the table?

Regardless of whether the table keeps information for a day only or for a few years -- it should be partitioned; ConcernedOfTunbridgeW has explained partitioning in his answer, so I'll skip it here.

Now, a few examples of slicing and dicing by different attributes (including day and hour).

To simplify queries, I’ll add a view to flatten the model:

/* To simplify queries flatten the model */
CREATE VIEW vClicks
AS
SELECT  f.*
       ,d.FullDate, d.[Year], d.DayOfWeek, d.DaysAgo
       ,b.ButtonName, b.PageURL
       ,u.Email
       ,g.Continent
FROM    factClick AS f
        JOIN dimDate      AS d ON d.DateKey      = f.DateKey
        JOIN dimButton    AS b ON b.ButtonKey    = f.ButtonKey
        JOIN dimUser      AS u ON u.UserKey      = f.UserKey
        JOIN dimGeography AS g ON g.GeographyKey = f.GeographyKey

A query example

/* Count number of times specific users clicked any button
   today between 7 and 9 AM (7:00 - 9:59) */
SELECT  [Email]
       ,COUNT(*) AS [Counter]
FROM    vClicks
WHERE   [DaysAgo] = 0
        AND [Hour] BETWEEN 7 AND 9
        AND [Email] IN ('dude45@somemail.com', 'bob46@bobmail.com')
GROUP BY [Email]
ORDER BY [Email]

Suppose that I am interested in data for User = ALL. dimUser is a large table, so I'll make a view without it, to speed up queries.

/* Because dimUser can be a large table, it is good to have a view
   without it, to speed up queries when user info is not required */
CREATE VIEW vClicksNoUsr
AS
SELECT  f.*
       ,d.FullDate, d.[Year], d.DayOfWeek, d.DaysAgo
       ,b.ButtonName, b.PageURL
       ,g.Continent
FROM    factClick AS f
        JOIN dimDate      AS d ON d.DateKey      = f.DateKey
        JOIN dimButton    AS b ON b.ButtonKey    = f.ButtonKey
        JOIN dimGeography AS g ON g.GeographyKey = f.GeographyKey

A query example

/* Count number of times a button was clicked on a specific page
   today and yesterday, for each hour. */
SELECT  [FullDate]
       ,[Hour]
       ,COUNT(*) AS [Counter]
FROM    vClicksNoUsr
WHERE   [DaysAgo] IN ( 0, 1 )
        AND PageURL = 'http://...MyPage'
GROUP BY [FullDate], [Hour]
ORDER BY [FullDate] DESC, [Hour] DESC



Suppose that for aggregations we do not need to keep specific user info, but are only interested in date, hour, button and geography. Each row in the factClickAgg table has a counter for each hour a specific button was clicked from a specific geography area.

(diagram pagevisit2_model_03: the factClickAgg table and its dimensions)
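
A sketch of factClickAgg, limited to the columns used in the load and the queries below (types are illustrative):

-- Sketch only: one row per day, hour, button and geography.
-- DateKey, ButtonKey and GeographyKey are foreign keys to the dimensions.
CREATE TABLE factClickAgg (
    DateKey       int NOT NULL,
    [Hour]        int NOT NULL,
    ButtonKey     int NOT NULL,
    GeographyKey  int NOT NULL,
    ClickCount    int NOT NULL,
    CONSTRAINT PK_factClickAgg PRIMARY KEY (DateKey, [Hour], ButtonKey, GeographyKey)
);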

The factClickAgg table can be loaded hourly, or even at the end of each day -- depending on requirements for reporting and analytics. For example, let's say that the table is loaded at the end of each day (after midnight); then I can use something like:

/* At the end of each day (after midnight) aggregate data. */
INSERT  INTO factClickAgg
        SELECT  DateKey
               ,[Hour]
               ,ButtonKey
               ,GeographyKey
               ,COUNT(*) AS [ClickCount]
        FROM    vClicksNoUsr
        WHERE   [DaysAgo] = 1
        GROUP BY DateKey
               ,[Hour]
               ,ButtonKey
               ,GeographyKey

To simplify queries, I'll create a view to flatten the model:

/* To simplify queries for aggregated data */
CREATE VIEW vClicksAggregate
AS
SELECT  f.*
       ,d.FullDate, d.[Year], d.DayOfWeek, d.DaysAgo
       ,b.ButtonName, b.PageURL
       ,g.Continent
FROM    factClickAgg AS f
        JOIN dimDate      AS d ON d.DateKey      = f.DateKey
        JOIN dimButton    AS b ON b.ButtonKey    = f.ButtonKey
        JOIN dimGeography AS g ON g.GeographyKey = f.GeographyKey

Now I can query aggregated data, for example by day:

/* Number of times a specific button was clicked in year 2009, by day */
SELECT  FullDate
       ,SUM(ClickCount) AS [Counter]
FROM    vClicksAggregate
WHERE   ButtonName = 'MyBtn_1'
        AND [Year] = 2009
GROUP BY FullDate
ORDER BY FullDate

Or with a few more options:

/* Number of times specific buttons were clicked in year 2008, on Saturdays,
   between 9:00 and 11:59 AM by users from Africa */
SELECT  SUM(ClickCount) AS [Counter]
FROM    vClicksAggregate
WHERE   [Year] = 2008
        AND [DayOfWeek] = 'Saturday'
        AND [Hour] BETWEEN 9 AND 11
        AND Continent = 'Africa'
        AND ButtonName IN ( 'MyBtn_1', 'MyBtn_2', 'MyBtn_3' )