Procedure to find the cause of excessive archive log generation in Oracle


Finding the reason for heavy archive log generation in a particular time period

We had a situation where redo generation was consistently high, which kept Data Guard permanently out of sync. So we decided to identify which objects and schemas were responsible for the high volume of archive logs.

Here is the procedure to capture the details behind the high archive log generation.


1. Get the history of archive log generation. In my case I am reporting on a one-week period.


col day for a12
set lines 1000
set pages 999
col "00" for a3
col "01" for a3
col "02" for a3
col "03" for a3
col "04" for a3
col "05" for a3
col "06" for a3
col "07" for a3
col "08" for a3
col "09" for a3
col "10" for a3
col "11" for a3
col "12" for a3
col "13" for a3
col "14" for a3
col "15" for a3
col "16" for a4
col "17" for a3
col "18" for a4
col "19" for a3
col "20" for a3
col "21" for a3
col "22" for a3
col "23" for a3
SELECT
to_char(first_time,'DD-MON-YYYY') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99')  "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23",
count(*) Tot
from
v$log_history
WHERE first_time > sysdate -7
GROUP by
to_char(first_time,'DD-MON-YYYY'),trunc(first_time) order by trunc(first_time);

Output:


DAY          00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16   17  18   19  20  21  22  23         TOT
------------ --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---- --- ---- --- --- --- --- --- ----------
25-JUN-2020    0   0   0   0   0   0   0   0   0   0  11  22  32  31  27  33   30  29   28  31  32  33  31  42        412
26-JUN-2020   55  30  42  38  40  33  24  21  15  13  25  17  32  28  32  24   26  38   34  42  31  26  32  35        733
27-JUN-2020   31  17  40  38  39  27  21   9  10   8   9  11  14  10  20  20   11   9   10   8  11   9   6  10        398
28-JUN-2020    9   6  32  33  31  17  12   5   7   6   8   8  13   6  10   7    8  11   10  11  20  13   9  30        322
29-JUN-2020   18  13  34  36  35  29  24  19  13  13  10  20  48  33  40  33   30  34   42  33  34  27  27  58        703
30-JUN-2020   54  19  40  43  47  33  28  22  13  13  21  20  35  29  32  28   29  32   27  30  31  34  29  42        731
01-JUL-2020   57  34  38  48  43  37  19  16  16  14  19  20  32  23  36  28   33  26   27  28  32  25  26  34        711
02-JUL-2020   44  44  51  66 135  42  4  16  15  13   20  48  33  40  33   30  34   42  33  34  27        588

From the above output, we can see high archive log generation through almost the entire day, which was expected given our situation.
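Note that counting log switches can mislead when the online redo log size differs between databases or has changed over time. As a complementary check, the archived volume in bytes can be summed per day from v$archived_log (a minimal sketch; the dest_id = 1 filter assumes the local archive destination, adjust as needed):

```sql
-- Archived redo volume in GB per day. blocks * block_size is the size
-- of each archived log in bytes; dest_id = 1 avoids double-counting
-- when logs are archived to multiple destinations.
SELECT trunc(completion_time) day,
       round(sum(blocks * block_size)/1024/1024/1024, 2) gb
FROM   v$archived_log
WHERE  completion_time > sysdate - 7
AND    dest_id = 1
GROUP  BY trunc(completion_time)
ORDER  BY 1;
```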

The next step is to find which objects are causing the high archive log generation.

2. Find the table(s)/object(s) with a high number of database block changes in the period. These block changes are what drive the redo, and hence the archive log, generation.


set pages 9999
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
dhsso.object_name,sum(db_block_changes_delta) as maxchanges
FROM dba_hist_seg_stat dhss,dba_hist_seg_stat_obj dhsso,
dba_hist_snapshot dhs WHERE dhs.snap_id = dhss.snap_id
AND dhs.instance_number = dhss.instance_number
AND dhss.obj# = dhsso.obj# AND dhss.dataobj# = dhsso.dataobj#
AND begin_interval_time BETWEEN to_date('2020-06-25 00:00:00','YYYY-MM-DD HH24:MI:SS')
AND to_date('2020-07-02 00:00:00','YYYY-MM-DD HH24:MI:SS')
GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
dhsso.object_name order by maxchanges asc;

Output

SNAP_TIME        OBJECT_NAME                    MAXCHANGES
---------------- ------------------------------ ----------
2020_06_25 23:00 CUSTOMER_DOCUMENT_PK             22379024
2020_06_25 23:00 CUSTOMER_DOCUMENT_IX1            22392448
2020_06_29 12:00 CUSTOMER_DOCUMENT                23737056
2020_06_25 23:00 CUSTOMER_DOCUMENT                26203552
2020_06_29 23:00 CUSTOMER_DOCUMENT_PK             27740512
2020_06_29 23:00 CUSTOMER_DOCUMENT_IX1            27752224
2020_06_26 00:00 CUSTOMER_DOCUMENT_PK             30618832
2020_06_26 00:00 CUSTOMER_DOCUMENT_IX1            30669520
2020_06_30 00:00 CUSTOMER_DOCUMENT_PK             30983744
2020_07_01 00:00 CUSTOMER_DOCUMENT_IX1            31129632
2020_06_30 00:00 CUSTOMER_DOCUMENT_IX1            31184512
2020_07_01 00:00 CUSTOMER_DOCUMENT_PK             31495232
2020_06_29 23:00 CUSTOMER_DOCUMENT                31515440
2020_06_25 00:00 CUSTOMER_DOCUMENT_IX1            33564896
2020_06_25 00:00 CUSTOMER_DOCUMENT_PK             33714800
2020_06_30 00:00 CUSTOMER_DOCUMENT                35457264
2020_07_01 00:00 CUSTOMER_DOCUMENT                35822656
2020_06_26 00:00 CUSTOMER_DOCUMENT                36009904
2020_06_25 00:00 CUSTOMER_DOCUMENT                39829696

   
From the above output, you can see a high number of database block changes for a few objects: a table and its indexes. The next step is to find the SQL statements that touch those objects.
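To rank the segments over the whole window in one list, rather than per snapshot, the same AWR views can be aggregated (a sketch; FETCH FIRST requires 12c or later, on older releases wrap the query and filter on ROWNUM):

```sql
-- Total block changes per segment over the whole window, to rank the
-- top redo-generating objects across all snapshots at once.
SELECT dhsso.owner, dhsso.object_name,
       sum(dhss.db_block_changes_delta) total_changes
FROM   dba_hist_seg_stat dhss
JOIN   dba_hist_seg_stat_obj dhsso
  ON   dhss.obj# = dhsso.obj# AND dhss.dataobj# = dhsso.dataobj#
JOIN   dba_hist_snapshot dhs
  ON   dhs.snap_id = dhss.snap_id
 AND   dhs.instance_number = dhss.instance_number
WHERE  dhs.begin_interval_time BETWEEN to_date('2020-06-25','YYYY-MM-DD')
                                   AND to_date('2020-07-02','YYYY-MM-DD')
GROUP  BY dhsso.owner, dhsso.object_name
ORDER  BY total_changes DESC
FETCH FIRST 10 ROWS ONLY;
```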


3. Find the list of SQLs using the object.

col SQL_TEXT for a60
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
 --dbms_lob.substr(sql_text,5000,1),
 dhss.instance_number,
 dhss.sql_id,executions_delta,rows_processed_delta
 FROM dba_hist_sqlstat dhss,
 dba_hist_snapshot dhs,
 dba_hist_sqltext dhst
 WHERE upper(dhst.sql_text) LIKE '%CUSTOMER_DOCUMENT%'
 AND dhss.snap_id=dhs.snap_id
 AND dhss.instance_Number=dhs.instance_number
 AND upper(dhst.sql_text) not like 'SELECT%'
 AND begin_interval_time BETWEEN to_date('2020-06-25 00:00:00','YYYY-MM-DD HH24:MI:SS')
 AND to_date('2020-07-02 00:00:00','YYYY-MM-DD HH24:MI:SS')
 AND dhss.sql_id = dhst.sql_id
 order by ROWS_PROCESSED_DELTA asc; 


Output



TIMESTAMP        INST SQL_ID        EXECUTIONS ROWS_PROCESSED
---------------- ---- ------------- ---------- --------------
2020_09_30 05:00    4 fcmggaabhhb4s     129577         129574
2020_09_29 04:00    4 fcmggaabhhb4s     130059         130058
2020_07_01 03:00    4 fcmggaabhhb4s     132149         132145
2020_09_25 03:00    4 fcmggaabhhb4s     141314         141310
2020_09_27 04:00    4 fcmggaabhhb4s     141959         141952
2020_09_27 03:00    4 fcmggaabhhb4s     143939         143935
2020_07_01 04:00    4 fcmggaabhhb4s     144049         144049
2020_09_28 03:00    4 fcmggaabhhb4s     190459         190454
2020_09_27 17:00    4 925vv3u9jvjh9     170829         170833
2020_09_28 04:00    4 fcmggaabhhb4s     172320         172317
2020_07_01 14:00    4 925vv3u9jvjh9    1290998        1290998


From the above output, you can see the SQL statements responsible. The executions-to-rows ratio is roughly 1:1, which suggests each row is processed in its own transaction, with a commit per execution.
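To inspect the full statement text behind those sql_ids, it can be pulled from the AWR text repository (a minimal sketch):

```sql
-- sql_text in dba_hist_sqltext is a CLOB, so dbms_lob.substr is used
-- to render the first 4000 characters of each statement.
SELECT sql_id,
       dbms_lob.substr(sql_text, 4000, 1) sql_text
FROM   dba_hist_sqltext
WHERE  sql_id IN ('fcmggaabhhb4s', '925vv3u9jvjh9');
```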


4. Find the number of times the SQL executed.


set lines 200 pages 200
col PARSING_SCHEMA_NAME for a12
col END_INTERVAL_TIME for a25
set line 200
col BEGIN_INTERVAL_TIME for a40
select sql_id,sum(nvl(executions_delta,0)) execs
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id in ('fcmggaabhhb4s','925vv3u9jvjh9')  
and ss.snap_id = S.snap_id and ss.instance_number = S.instance_number   
and executions_delta > 0  
group by sql_id;

Output


SQL_ID             EXECS
------------- ----------
fcmggaabhhb4s  153449677
925vv3u9jvjh9  199072006


You can additionally find the user and program details using the SQL query below.


SELECT to_char(sample_time,'yyyy_mm_dd hh24:mi:ss'), user_id, program
FROM dba_hist_active_sess_history
WHERE sql_id in ('fcmggaabhhb4s','925vv3u9jvjh9')
AND to_char(sample_time,'YYYY-MM-DD HH24:MI:SS')
    between '2020-06-25 00:00:00' and '2020-07-02 00:00:00'
ORDER BY 1;
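Since dba_hist_active_sess_history reports only the numeric user_id, a join to dba_users (sketch below) resolves it to a username:

```sql
-- ASH stores only the numeric user_id; dba_users maps it to a username.
SELECT to_char(h.sample_time,'yyyy_mm_dd hh24:mi:ss') sample_time,
       u.username,
       h.program
FROM   dba_hist_active_sess_history h
JOIN   dba_users u ON u.user_id = h.user_id
WHERE  h.sql_id IN ('fcmggaabhhb4s','925vv3u9jvjh9')
AND    h.sample_time BETWEEN to_date('2020-06-25','YYYY-MM-DD')
                         AND to_date('2020-07-02','YYYY-MM-DD')
ORDER  BY 1;
```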





