Problem Description
Hi Tom,
We are struggling to load 400 million records from one table (a staging table) into the main table. We previously received 200 million rows from upstream and could load them in about 3 hours using the DBMS_PARALLEL_EXECUTE approach. The volume has now grown to 400 million rows, and the load takes 6 to 7 hours. Is there another way to insert the data faster? Below is the code currently used.
BEGIN
DBMS_PARALLEL_EXECUTE.DROP_TASK (L_TASK_NAME);
EXCEPTION
WHEN OTHERS
THEN
NULL;
END;
-- CREATE_TASK registers a task, which can be seen in the USER_PARALLEL_EXECUTE_TASKS view
DBMS_PARALLEL_EXECUTE.CREATE_TASK (L_TASK_NAME);
--This statement chunks the data based on Rowid
DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID (L_TASK_NAME,
'TIREC',
'TMP_PROFILE_OFFER_ORG',
TRUE,
500000);
L_SQL :=
'INSERT INTO T_PROFILE_OFFER (PROFILE_ID,
ENROLLMENT_ID,
MEM_OFFER_ID,
OFFER_ID,
KIT_ID,
OFFER_CODE,
START_DATE,
END_DATE,
OFFER_PRIORITY,
ACTIVATION_STATUS,
STATUS,
CREATE_USER,
UPDATE_USER)
SELECT PROFILE_ID,
ENROLLMENT_ID,
MEM_OFFER_ID,
OFFER_ID,
KIT_ID,
OFFER_CODE,
START_DATE,
END_DATE,
OFFER_PRIORITY,
ACTIVATION_STATUS,
STATUS,
CREATE_USER,
UPDATE_USER
FROM TMP_PROFILE_OFFER_ORG
WHERE ROWID BETWEEN :start_id AND :end_id
LOG ERRORS INTO ERR$_T_PROFILE_OFFER_MO
(''B109: Insert'')
REJECT LIMIT UNLIMITED';
--The following statement runs multiple sessions based on the parallel level supplied
DBMS_PARALLEL_EXECUTE.RUN_TASK (TASK_NAME => L_TASK_NAME,
SQL_STMT => L_SQL,
LANGUAGE_FLAG => DBMS_SQL.NATIVE,
PARALLEL_LEVEL => L_DEGREE);
--If the task has not finished, resume it until it finishes or the retry limit is reached.
LOOP
EXIT WHEN DBMS_PARALLEL_EXECUTE.TASK_STATUS (L_TASK_NAME) =
DBMS_PARALLEL_EXECUTE.FINISHED
OR L_ATTEMPTS > L_RETRIES;
L_ATTEMPTS := L_ATTEMPTS + 1;
DBMS_PARALLEL_EXECUTE.RESUME_TASK (L_TASK_NAME);
END LOOP;
Thanks,
Jagannath K
Expert Answer
Before you consider changing anything, I think you should do some tracing, because it should not take hours to load that volume of data.
Here is a small reproduction running on my laptop.
SQL> select count(*) from TMP_PROFILE_OFFER_ORG;

  COUNT(*)
----------
 200000000          -- 200 million

SQL> set timing on
SQL> insert into T_PROFILE_OFFER (
  2  PROFILE_ID,
  3  ENROLLMENT_ID,
  4  MEM_OFFER_ID,
  5  OFFER_ID,
  6  KIT_ID,
  7  OFFER_CODE,
  8  START_DATE,
  9  END_DATE,
 10  OFFER_PRIORITY,
 11  ACTIVATION_STATUS,
 12  STATUS,
 13  CREATE_USER,
 14  UPDATE_USER
 15  )
 16  SELECT PROFILE_ID,
 17  ENROLLMENT_ID,
 18  MEM_OFFER_ID,
 19  OFFER_ID,
 20  KIT_ID,
 21  OFFER_CODE,
 22  START_DATE,
 23  END_DATE,
 24  OFFER_PRIORITY,
 25  ACTIVATION_STATUS,
 26  STATUS,
 27  CREATE_USER,
 28  UPDATE_USER
 29  FROM TMP_PROFILE_OFFER_ORG
 30  /

200000000 rows created.

Elapsed: 00:06:10.66

SQL> set timing off
SQL> truncate table T_PROFILE_OFFER reuse storage;

Table truncated.

SQL> set timing on
SQL> insert /*+ APPEND */ into T_PROFILE_OFFER (
  2  PROFILE_ID,
  3  ENROLLMENT_ID,
  4  MEM_OFFER_ID,
  5  OFFER_ID,
  6  KIT_ID,
  7  OFFER_CODE,
  8  START_DATE,
  9  END_DATE,
 10  OFFER_PRIORITY,
 11  ACTIVATION_STATUS,
 12  STATUS,
 13  CREATE_USER,
 14  UPDATE_USER
 15  )
 16  SELECT PROFILE_ID,
 17  ENROLLMENT_ID,
 18  MEM_OFFER_ID,
 19  OFFER_ID,
 20  KIT_ID,
 21  OFFER_CODE,
 22  START_DATE,
 23  END_DATE,
 24  OFFER_PRIORITY,
 25  ACTIVATION_STATUS,
 26  STATUS,
 27  CREATE_USER,
 28  UPDATE_USER
 29  FROM TMP_PROFILE_OFFER_ORG
 30  /

200000000 rows created.

Elapsed: 00:02:00.80

SQL> commit;

Commit complete.

Elapsed: 00:00:00.05

SQL>
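The difference between the two runs above is the direct-path /*+ APPEND */ hint on the second insert. If direct-path loading is acceptable for the real job, it can also be combined with parallel DML. What follows is a minimal sketch only: it assumes the session is allowed to run parallel DML and that a degree of 8 suits the hardware (both assumptions, not taken from the original post), and it omits the LOG ERRORS clause from the question for brevity (it can be added back if still required).

-- Sketch only: direct-path insert combined with parallel DML.
-- The degree of 8 is an assumed value; tune it to the available CPU and I/O.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 8) */ INTO T_PROFILE_OFFER t
       (PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID,
        OFFER_CODE, START_DATE, END_DATE, OFFER_PRIORITY,
        ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER)
SELECT /*+ PARALLEL(s, 8) */
       PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID,
       OFFER_CODE, START_DATE, END_DATE, OFFER_PRIORITY,
       ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER
FROM   TMP_PROFILE_OFFER_ORG s;

-- A direct-path insert must be committed before the table can be queried again in the same session.
COMMIT;

Whether parallel DML actually helps depends on the hardware, which is exactly what tracing should establish first.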
Even on modest hardware, you should be able to load hundreds of millions of rows in minutes rather than hours.
So it is time to trace your load. Don't go via dbms_parallel_execute; just run a simple serial insert of (say) 1 million rows and see where the time is being lost.
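One way to do that is to enable SQL trace for the session, load a limited sample with a plain serial insert, and run the resulting trace file through tkprof. A minimal sketch, assuming the schema can execute DBMS_MONITOR; the tracefile identifier and the tkprof output file name are illustrative only.

-- Sketch only: trace a 1,000,000-row serial insert and see where the time goes.
ALTER SESSION SET tracefile_identifier = 'load_test';   -- illustrative name

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);

INSERT INTO T_PROFILE_OFFER
       (PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID,
        OFFER_CODE, START_DATE, END_DATE, OFFER_PRIORITY,
        ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER)
SELECT PROFILE_ID, ENROLLMENT_ID, MEM_OFFER_ID, OFFER_ID, KIT_ID,
       OFFER_CODE, START_DATE, END_DATE, OFFER_PRIORITY,
       ACTIVATION_STATUS, STATUS, CREATE_USER, UPDATE_USER
FROM   TMP_PROFILE_OFFER_ORG
WHERE  ROWNUM <= 1000000;

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- Then format the trace file from the diagnostic directory, for example:
--   tkprof <trace_file>.trc load_test_report.txt sys=no sort=exeela

The tkprof report shows where the time is actually going (for example I/O, index maintenance, or error-logging overhead), which is what to address before adding more parallelism.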




