Oracle: How to find the timestamp of the last update (any table) within a schema?

I'm not aware of any built-in functionality in Oracle that does this directly. See below for some workarounds.

The best solution I can come up with is to create a trigger on each of your tables that updates a one-row table or context with the current date/time. Such triggers can be statement-level (as opposed to row-level), so they wouldn't carry as much overhead as most triggers. A minimal sketch is below.
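For example, assuming a one-row table named last_update and a monitored table your_table (both names are placeholders; each table you want to track would get its own trigger):

create table last_update (update_date date not null);
insert into last_update values (sysdate);
commit;

-- statement-level trigger: no FOR EACH ROW clause, so it fires once per statement
create or replace trigger your_table_last_upd
after insert or update or delete on your_table
begin
   update last_update set update_date = sysdate;
end;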

Incidentally, Oracle can't keep a transaction ID per schema, as one transaction could affect multiple schemas. It might be possible to use the V$ views to trace a transaction back to the objects it affected, but it wouldn't be easy and it would almost certainly perform worse than the trigger scheme.

It turns out that if you're on 10g, you can use Oracle's flashback functionality to get this information. However, you'd need to enable flashback (which carries some overhead of its own) and the query is ridiculously slow (presumably because it's not really intended for this use):

select max(commit_timestamp)
  from flashback_transaction_query
 where table_owner = 'YOUR_SCHEMA'
   and operation in ('INSERT','UPDATE','DELETE','MERGE');

In order to avoid locking issues in the "last updated" table, you'd probably want to put that update into a procedure that uses an autonomous transaction, such as:

create or replace procedure log_last_update as
   pragma autonomous_transaction;
begin
   update last_update set update_date = greatest(sysdate, update_date);
   commit;
end log_last_update;
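With this in place, each table's trigger would shrink to a single call to the procedure (again, your_table is a placeholder):

create or replace trigger your_table_last_upd
after insert or update or delete on your_table
begin
   log_last_update;
end;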

This will cause your application to serialize to some degree: each statement that needs to call this procedure will have to wait until the previous one finishes. The "last updated" table may also get out of sync, because the update on it will persist even if the update that fired the trigger is rolled back. Finally, if you have a particularly long transaction, the application could pick up the new date/time before the transaction completes, defeating the purpose. The more I think about this, the more it seems like a bad idea.


The better solution to avoid these issues is simply to insert a row from the triggers. This wouldn't lock the table, so there would be no serialization, and the inserts wouldn't need to be made in an autonomous transaction, so they could be rolled back along with the actual data (and wouldn't be visible to your application until the data is visible as well). The application would take the max, which should be very fast if the table is indexed (in fact, this table would be an ideal candidate for an index-organized table). The only downside is that you'd want a job that runs periodically to clean out old values so the table doesn't grow too large. A sketch follows.
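A minimal sketch, again with placeholder names. Note that an index-organized table requires a primary key, and concurrent statements could insert identical timestamps, so this sketch uses a plain indexed heap table instead:

create table update_log (update_date date not null);
create index update_log_idx on update_log (update_date);

create or replace trigger your_table_log_upd
after insert or update or delete on your_table
begin
   insert into update_log (update_date) values (sysdate);
end;

-- the application reads the latest change time:
select max(update_date) from update_log;

-- periodic cleanup job, e.g. keep only the last day:
delete from update_log where update_date < sysdate - 1;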


dbms_stats.gather_table_stats might also help: http://forums.oracle.com/forums/thread.jspa?threadID=607610

   4. Statistics are considered stale when the change is over 10% of
   current rows. (As of 11g, this value can be customized per object.
   Cool feature.)

exec dbms_stats.gather_table_stats(user, 'T_STAT');

select * from sys.dba_tab_modifications where table_name = 'T_STAT';

no rows selected

select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';

NO

insert into t_stat select rownum from all_objects where rownum <= 20;

select * from sys.dba_tab_modifications where table_name = 'T_STAT';

no rows selected   <-- Oops

select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';

NO   <-- Oops

exec dbms_stats.flush_database_monitoring_info;

select * from sys.dba_tab_modifications where table_name = 'T_STAT';

TABLE_OWNER  TABLE_NAME  PARTITION_NAME  SUBPARTITION_NAME  INSERTS  UPDATES  DELETES  TIMESTAMP               TRUNCATED  DROP_SEGMENTS
UKJA         T_STAT                                         20       0        0        2008-01-18 PM 11:30:19  NO         0

select stale_stats from sys.dba_tab_statistics where table_name = 'T_STAT';

YES
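Building on the above, a hedged sketch of a schema-wide check (assuming table monitoring is enabled and you can query the DBA views; the timestamps are only as fresh as the last flush, and the granularity is coarse):

exec dbms_stats.flush_database_monitoring_info;

select max(timestamp)
  from sys.dba_tab_modifications
 where table_owner = 'YOUR_SCHEMA';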