Question
Hi gurus,
Thank you for helping us whenever we need you.
I have a task to import a 350 GB table. I will create the indexes later. It is a 12.2 CDB with a single PDB, and there is a standby DB with force_logging enabled.
Is there a way I can predict the archive log generation before doing the import?
Regards,
Lanserik.
Answer
We can do a small test to get an estimate.
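(An aside, and an assumption on my part rather than part of the demo below: the "redo size" delta we measure from v$sysstat is a system-wide proxy for archive generation, so on a busy system it is worth cross-checking it against the archived logs actually produced during the test window, for example:)

select sum(blocks * block_size) as archived_bytes
from   v$archived_log
where  first_time >= sysdate - interval '1' hour;   -- adjust to cover the test window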
SQL> create table t ( x char(1000));
Table created.
SQL>
SQL> insert /*+ APPEND */ into t
2 select rownum
3 from
4 ( select 1 from dual connect by level <= 1000 ),
5 ( select 1 from dual connect by level <= 1000 );
1000000 rows created.
SQL> exec dbms_stats.gather_table_stats('','T');
PL/SQL procedure successfully completed.
SQL> select * from user_tables
2 where table_name = 'T'
3 @pr
==============================
TABLE_NAME : T
TABLESPACE_NAME : LARGETS
CLUSTER_NAME :
IOT_NAME :
STATUS : VALID
PCT_FREE : 10
PCT_USED :
INI_TRANS : 1
MAX_TRANS : 255
INITIAL_EXTENT : 65536
NEXT_EXTENT : 1048576
MIN_EXTENTS : 1
MAX_EXTENTS : 2147483645
PCT_INCREASE :
FREELISTS :
FREELIST_GROUPS :
LOGGING : NO
BACKED_UP : N
NUM_ROWS : 1000000
BLOCKS : 143486
EMPTY_BLOCKS : 0
AVG_SPACE : 0
CHAIN_CNT : 0
AVG_ROW_LEN : 1001
AVG_SPACE_FREELIST_BLOCKS : 0
NUM_FREELIST_BLOCKS : 0
DEGREE : 1
INSTANCES : 1
CACHE : N
TABLE_LOCK : ENABLED
SAMPLE_SIZE : 1000000
LAST_ANALYZED : 14-DEC-18
PARTITIONED : NO
IOT_TYPE :
TEMPORARY : N
SECONDARY : N
NESTED : NO
BUFFER_POOL : DEFAULT
FLASH_CACHE : DEFAULT
CELL_FLASH_CACHE : DEFAULT
ROW_MOVEMENT : DISABLED
GLOBAL_STATS : YES
USER_STATS : NO
DURATION :
SKIP_CORRUPT : DISABLED
MONITORING : YES
CLUSTER_OWNER :
DEPENDENCIES : DISABLED
COMPRESSION : DISABLED
COMPRESS_FOR :
DROPPED : NO
READ_ONLY : NO
SEGMENT_CREATED : YES
RESULT_CACHE : DEFAULT
CLUSTERING : NO
ACTIVITY_TRACKING :
DML_TIMESTAMP :
HAS_IDENTITY : NO
CONTAINER_DATA : NO
INMEMORY : DISABLED
INMEMORY_PRIORITY :
INMEMORY_DISTRIBUTE :
INMEMORY_COMPRESSION :
INMEMORY_DUPLICATE :
DEFAULT_COLLATION : USING_NLS_COMP
DUPLICATED : N
SHARDED : N
EXTERNAL : NO
CELLMEMORY :
CONTAINERS_DEFAULT : NO
CONTAINER_MAP : NO
EXTENDED_DATA_LINK : NO
EXTENDED_DATA_LINK_MAP : NO
INMEMORY_SERVICE :
INMEMORY_SERVICE_NAME :
CONTAINER_MAP_OBJECT : NO
PL/SQL procedure successfully completed.
SQL> select 143486 * 8192 from dual;
143486*8192
-----------
1175437312
SQL> select sum(bytes) from dba_segments
2 where segment_name = 'T';
SUM(BYTES)
----------
1218510848
1 row selected.
So our table occupies roughly 1.2 GB of space. Now we will export it, drop it, and re-import it, and check the incremental redo consumption system-wide.
C:\>expdp mcdonac/****** dumpfile=t.dmp directory=TEMP tables=T
Export: Release 12.2.0.1.0 - Production on Fri Dec 14 10:45:00 2018
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "MCDONAC"."SYS_EXPORT_TABLE_01": mcdonac/******** dumpfile=t.dmp directory=TEMP tables=T
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MCDONAC"."T" 960.3 MB 1000000 rows
Master table "MCDONAC"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MCDONAC.SYS_EXPORT_TABLE_01 is:
C:\TEMP\T.DMP
Job "MCDONAC"."SYS_EXPORT_TABLE_01" successfully completed at Fri Dec 14 10:45:32 2018 elapsed 0 00:00:31
SQL> drop table t purge;
Table dropped.
SQL> select * from v$sysstat
2 where name like 'redo size%';
STATISTIC# NAME CLASS VALUE
---------- -------------------------------------------------- ---------- ----------
288 redo size 2 409354092
290 redo size for lost write detection 2 0
291 redo size for direct writes 2 549264
3 rows selected.
C:\>impdp mcdonac/****** dumpfile=t.dmp directory=TEMP
Import: Release 12.2.0.1.0 - Production on Fri Dec 14 10:50:21 2018
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "MCDONAC"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "MCDONAC"."SYS_IMPORT_FULL_01": mcdonac/******** dumpfile=t.dmp directory=TEMP
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MCDONAC"."T" 960.3 MB 1000000 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "MCDONAC"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 14 10:50:46 2018 elapsed 0 00:00:24
SQL>
SQL> /
STATISTIC# NAME CLASS VALUE
---------- -------------------------------------------------- ---------- ----------
288 redo size 2 1589530412
290 redo size for lost write detection 2 0
291 redo size for direct writes 2 1177370524
3 rows selected.
So we can see that the redo consumed is almost the same as the amount of data imported.
SQL> select 1589530412 - 409354092 from dual;
1589530412-409354092
--------------------
1180176320
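A rough extrapolation from this (my arithmetic, not from the original answer): a ~1.2 GB table generated about 1.18 GB of redo, i.e. close to a 1:1 ratio, so the 350 GB table import can be expected to produce on the order of 350 GB of redo, and hence archive:

-- ~1.18 GB of redo for a ~1.22 GB segment, scaled up to 350 GB of data
select round(350 * (1180176320 / 1218510848)) as projected_redo_gb from dual;
-- works out to roughly 339 GB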
If that is going to be excessive, you could look at performing the operation in nologging mode and then doing the normal steps on your standby (copying across the affected datafiles and recovering them).
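If you go down that path, here is a hedged sketch (my assumption about the approach, not part of the original answer) of how it might look with Data Pump in 12.2. The impdp transform below only suppresses logging for the data load, and it is ignored while the database is in FORCE LOGGING mode, so force logging would have to be switched off for the duration and the affected datafiles refreshed on the standby afterwards.

-- on the primary, before the import (this invalidates the standby for the affected files)
ALTER DATABASE NO FORCE LOGGING;

-- run the import with logging suppressed for the data load
-- impdp mcdonac/****** dumpfile=t.dmp directory=TEMP transform=disable_archive_logging:y

-- after the import: re-enable force logging, back up the affected datafiles,
-- copy them to the standby and recover them
ALTER DATABASE FORCE LOGGING;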