Faster way to insert via script in Oracle?
I'm working with a C# .NET 4.0 application that uses ODP.NET 11.2.0.2.0 with an Oracle 11g database. The application pre-loads a few look-up tables with data, and since most have fewer than 20 records, the scripts run pretty quickly. However, one of the scripts has 802 records and takes 248.671 seconds to insert them, which seems excessive for such a small amount of data on a database that advertises fast operations with large volumes of data.
So I'm wondering, is there a faster way to insert data, via script, than the current way the script is written?
The table being inserted into is defined as follows:
CREATE TABLE FileIds
(
Id NUMERIC(38) NOT NULL
,Name NVARCHAR2(50) DEFAULT 'Unknown' NOT NULL
,FileTypeGroupId NUMERIC(38) NOT NULL
,CONSTRAINT FK_FileIds_FileTypeGroups FOREIGN KEY ( FileTypeGroupId ) REFERENCES FileTypeGroups ( Id )
)
And the script to insert looks like the following:
BEGIN
INSERT ALL
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1152,5)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1197,10)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1200,6)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1143,3)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1189,9)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1109,7)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1166,4)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (0,8)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1149,2)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1400,1)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1330,11)
INTO FileIds ( Id, FileTypeGroupId ) VALUES (1000,0)
-- 790 Records removed for example purposes.
SELECT * FROM DUAL;
COMMIT;
END;
The FileTypeGroups table, referenced in the foreign key, is pre-loaded before the FileIds table is loaded. There are no sequences or triggers associated with the FileIds table, and no indexes have been created for it yet.
Accepted Answer
Problem
Parsing time may increase exponentially with certain types of statements, especially INSERT ALL. For example:
--Clear any cached statements, so we can consistently reproduce the problem.
alter system flush shared_pool;
alter session set sql_trace = true;
--100 rows
INSERT ALL
INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
...
repeat 100 times
...
select * from dual;
--500 rows
INSERT ALL
INTO FileIds(Id,FileTypeGroupId) VALUES(1, 1)
...
repeat 500 times
...
select * from dual;
alter session set sql_trace = false;
Run the trace file through tkprof, and you can see the Parse time increases dramatically for a large number of rows. For example:
100 Rows:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.05 0 1 0 0
Execute 1 0.00 0.00 0 100 303 100
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.06 0.05 0 101 303 100
500 Rows:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 14.72 14.55 0 0 0 0
Execute 1 0.01 0.02 0 502 1518 500
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 14.74 14.58 0 502 1518 500
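If you want to reproduce the tkprof output above, a rough sketch of locating and formatting the trace file follows; the placeholder file names are assumptions, not values from the traced session:
-- Find the trace file written for the current session (11g and later).
SELECT value
FROM   v$diag_info
WHERE  name = 'Default Trace File';
-- Then format it from the operating-system shell, for example:
--   tkprof <trace_file_from_query_above> insert_all.prf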
Solutions
- Break your large statement into several smaller statements. It's difficult to find the optimal size. On some versions of Oracle there's a magic number of rows that will cause the problem. I usually go for about 100 rows - enough to get most of the benefits of grouping statements, but low enough to avoid the parsing bug. OR...
- Try the insert into ... select ... from dual union all ... method instead (a sketch follows this list). It usually runs much faster, although its parsing performance may also degrade significantly with size.
- Upgrade Oracle. Parsing performance has improved in newer versions. I can no longer reproduce this issue in version 12.2.
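As a sketch of the second option, the same kind of load can be written as a single INSERT ... SELECT built from UNION ALL; the rows shown are just the first few from the original script:
INSERT INTO FileIds ( Id, FileTypeGroupId )
SELECT 1152, 5  FROM DUAL UNION ALL
SELECT 1197, 10 FROM DUAL UNION ALL
SELECT 1200, 6  FROM DUAL UNION ALL
SELECT 1143, 3  FROM DUAL;
-- ... remaining rows follow the same pattern.
COMMIT;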
Warning
Don't learn the wrong lesson from this. If you're worried about SQL performance, 99% of the time you're better off grouping similar things together instead of splitting them apart. You're doing things the right way, you just ran into a weird bug. (I searched My Oracle Support but couldn't find an official bug for this.)