How to Audit Database Activity without Performance and Scalability Issues?



I'm not sure if it's a mature enough approach for a production system, but I had quite a lot of success with monitoring database traffic using a network traffic sniffer.

Send the raw data between the application and database off to another machine and decode and analyse it there.

I used PostgreSQL, and decoding the traffic and turning it into a stream of database operations that could be logged was relatively straightforward. I imagine it'd work on any database where the packet format is documented, though.
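To give a feel for how little work the decoding step is, here is a minimal sketch in Python. It assumes a client-to-server capture of the PostgreSQL v3 wire protocol after session startup (each message is a 1-byte type, a 4-byte big-endian length that counts itself but not the type byte, then the payload) and only extracts Simple Query ('Q') messages; a real tool would also need TCP stream reassembly and handling of the untyped startup packet.

```python
import struct

def extract_queries(stream: bytes):
    """Pull SQL text out of a captured client->server PostgreSQL byte stream.

    Assumes protocol v3 framing after startup: 1-byte message type, 4-byte
    big-endian length (includes itself, excludes the type byte), payload.
    Simple Query ('Q') payloads are NUL-terminated SQL strings.
    """
    queries = []
    i = 0
    while i + 5 <= len(stream):
        msg_type = stream[i:i + 1]
        (length,) = struct.unpack("!I", stream[i + 1:i + 5])
        payload = stream[i + 5:i + 1 + length]
        if msg_type == b"Q":
            queries.append(payload.rstrip(b"\x00").decode("utf-8", "replace"))
        i += 1 + length  # skip to the next message
    return queries
```

Feed it the reassembled client side of a session and you get back the SQL statements in order, ready to be timestamped and logged on the monitoring machine.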

The main point was that it put no extra load on the database itself.

Also, it was passive monitoring: it recorded all activity but couldn't block any operations, so it might not be quite what you're looking for.


There is no need to "roll your own". Just turn on auditing:

  1. Set the database parameter AUDIT_TRAIL = DB.
  2. Start the instance.
  3. Log in with SQL*Plus.
  4. Enter the statement
    audit all;
    This turns on auditing for many critical DDL operations, but DML and some other DDL statements are still not audited.
  5. To enable auditing on these other activities, try statements like these:
    audit alter table; -- DDL audit
    audit select table, update table, insert table, delete table; -- DML audit

Note: All "as sysdba" activity is ALWAYS audited to the O/S. In Windows, this means the Windows event log. In UNIX, this is usually $ORACLE_HOME/rdbms/audit.
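If you want to keep an eye on those O/S audit files on UNIX, a small sketch like this can list the most recent ones. It assumes the conventional *.aud file naming under $ORACLE_HOME/rdbms/audit; the exact naming and location vary by version and platform, so treat this as illustrative.

```python
import glob
import os

def list_os_audit_files(audit_dir: str, limit: int = 10):
    """Return the most recently modified *.aud files in audit_dir.

    On UNIX the default location is $ORACLE_HOME/rdbms/audit, where each
    "as sysdba" session typically leaves a *.aud text file (naming is
    version-dependent; this is a sketch, not a guarantee).
    """
    paths = glob.glob(os.path.join(audit_dir, "*.aud"))
    paths.sort(key=os.path.getmtime, reverse=True)  # newest first
    return paths[:limit]
```

Point it at your audit directory (e.g. `list_os_audit_files(os.path.join(os.environ["ORACLE_HOME"], "rdbms", "audit"))`) to see which sessions wrote audit records most recently.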

Check out the Oracle 10g R2 Audit Chapter of the Database SQL Reference.

The database audit trail can be viewed in the SYS.DBA_AUDIT_TRAIL view.
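As a small sketch of reading that view from application code, the helper below just builds a query for recent audit entries; the column names (USERNAME, ACTION_NAME, OBJ_NAME, TIMESTAMP) are standard DBA_AUDIT_TRAIL columns, but check them against your Oracle version. You would execute the returned SQL through your usual Oracle driver (e.g. cx_Oracle), which is not shown here.

```python
def recent_audit_sql(days: int = 1) -> str:
    """Build a query for audit-trail rows from the last `days` days.

    Queries SYS.DBA_AUDIT_TRAIL; column names are the standard ones but
    should be verified against your Oracle version's documentation.
    """
    return (
        "SELECT username, action_name, obj_name, timestamp "
        "FROM sys.dba_audit_trail "
        f"WHERE timestamp > SYSDATE - {int(days)} "  # int() guards the interpolation
        "ORDER BY timestamp DESC"
    )
```

Run the resulting statement as a user with the appropriate privileges (e.g. SELECT on the view) to review recent audited activity.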

It should be pointed out that the internal Oracle auditing will be high-performance by definition. It is designed to be exactly that, and it is very hard to imagine anything else rivaling it for performance. Also, there is a high degree of "fine-grained" control of Oracle auditing. You can get it just as precise as you want it. Finally, the SYS.AUD$ table along with its indexes can be moved to a separate tablespace to prevent filling up the SYSTEM tablespace.

Kind regards,
Opus


If you want to record copies of changed records on a target system, you can do this with GoldenGate software without incurring much in the way of source-side resource drain. Also, you don't have to make any changes to the source database to implement this solution.

Golden Gate scrapes the redo logs for transactions referring to a list of tables you are interested in. These changes are written to a 'Trail File' and can be applied to a different schema on the same database, or shipped to a target system and applied there (ideal for reducing load on your source system).

Once you get the trail file to the target system, there are some configuration tweaks you can set to perform auditing, and if needed you can invoke two GoldenGate functions to get info about the transaction:

1) Set the INSERTALLRECORDS replication parameter to insert a new record in the target table for every change operation made to the source table. Beware that this can eat up a lot of space, but if you need comprehensive auditing that is probably expected.

2) If you don't already have a CHANGED_BY_USERID and CHANGED_DATE attached to your records, you can use the GoldenGate functions on the target side to get this info for the current transaction. Check out the following functions in the GG Reference Guide: GGHEADER("USERID") and GGHEADER("TIMESTAMP").

So no, it's not free (it requires licensing through Oracle) and will require some effort to spin up, but probably a lot less effort/cost than implementing and maintaining a custom roll-your-own solution, and you have the added benefit of shipping the data to a remote system so you can guarantee minimal impact on your source database.