How does the speed of temporary tables compare to physical tables in SQL?

2021-09-10 00:00:00 sql sql-server-2005 tsql sql-server

I have a script that needs to extract data temporarily in order to perform extra operations on it, but it doesn't need to keep the data once the script has run. I currently hold the data in question in a series of local temporary tables (CREATE TABLE #table), which are dropped as soon as their use is complete. I am considering switching to physical tables treated the same way (CREATE TABLE table), if that would improve the speed of the script (or bring other advantages, maybe?).
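
For context, here is a minimal sketch of the two patterns being compared; the table and column names are hypothetical, just for illustration:

    -- Local temporary table: created in tempdb, visible only to this session,
    -- and dropped explicitly once its use is complete.
    CREATE TABLE #staging (id INT PRIMARY KEY, amount DECIMAL(10, 2));
    INSERT INTO #staging (id, amount) VALUES (1, 10.50);
    INSERT INTO #staging (id, amount) VALUES (2, 99.00);
    -- ... extra operations against #staging ...
    DROP TABLE #staging;

    -- The physical-table alternative under consideration: same lifecycle,
    -- but the object lives in the user database rather than tempdb.
    CREATE TABLE staging (id INT PRIMARY KEY, amount DECIMAL(10, 2));
    INSERT INTO staging (id, amount) VALUES (1, 10.50);
    INSERT INTO staging (id, amount) VALUES (2, 99.00);
    -- ... extra operations against staging ...
    DROP TABLE staging;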

...So, is there a difference in performance between temporary tables and physical tables? From what I'm reading, temporary tables are just physical tables that only the session running the script can see (which cuts down on locking issues).
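
One way to see that a local temporary table really is a physical object, just created in tempdb and scoped to the creating session, is to look it up in tempdb's catalog. A minimal sketch, assuming a temp table named #table:

    CREATE TABLE #table (id INT);

    -- The temp table exists as a real object in tempdb; SQL Server suffixes
    -- the stored name so that each session's #table stays distinct.
    SELECT name, create_date
    FROM tempdb.sys.objects
    WHERE name LIKE '#table%';

    -- Other sessions cannot reference this session's #table by name;
    -- if they create their own #table, it is a separate object.
    DROP TABLE #table;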

I should point out that I'm talking about physical tables vs. temporary tables. There is a lot of information available about temporary tables vs. table variables, e.g. http://sqlnerd.blogspot.com/2005/09/temp-tables-vs-table-variables.html.

Recommended answer

Temporary tables are a big NO in SQL Server.

  • They cause expensive query plan recompilations.
  • Creating and dropping the table are also costly operations that you add to your process.
  • If a large amount of data goes into the temporary table, your operations will be slow for lack of indexes. You can create indexes on temporary tables (see the sketch below), but I would never recommend a temporary table for anything holding a large number of records.
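
That said, if you do stay with temporary tables, indexes are added to them exactly as to any other table. A minimal sketch with hypothetical names:

    CREATE TABLE #orders (
        order_id    INT            NOT NULL,
        customer_id INT            NOT NULL,
        amount      DECIMAL(10, 2) NOT NULL
    );

    -- Indexes on a temp table are declared just like on a permanent table.
    CREATE CLUSTERED INDEX IX_orders_order_id ON #orders (order_id);
    CREATE NONCLUSTERED INDEX IX_orders_customer ON #orders (customer_id);

    -- ... populate and query #orders ...
    DROP TABLE #orders;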

Your other approach, creating and then dropping regular tables, just incurs the same overhead.

Another approach: use existing (permanent) tables, augmenting the rows with an additional column to differentiate which rows belong to each user/session. This removes the burden of creating/dropping the tables, but you will need to be paranoid about the code that generates the value used to differentiate the rows, AND you will have to develop a way to maintain the table for the cases where a session ends prematurely and there are leftovers (rows that were not removed at the end of processing). See the sketch below.
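
A rough sketch of that shared-table pattern, with hypothetical names (staging_work, session_id) and an assumed one-day threshold for purging abandoned rows:

    -- One permanent table shared by all sessions; session_id tells the rows apart.
    CREATE TABLE staging_work (
        session_id UNIQUEIDENTIFIER NOT NULL,
        id         INT              NOT NULL,
        amount     DECIMAL(10, 2)   NOT NULL,
        created_at DATETIME         NOT NULL DEFAULT GETDATE()
    );

    DECLARE @session UNIQUEIDENTIFIER;
    SET @session = NEWID();  -- value that distinguishes this run's rows

    INSERT INTO staging_work (session_id, id, amount) VALUES (@session, 1, 10.50);
    INSERT INTO staging_work (session_id, id, amount) VALUES (@session, 2, 99.00);

    -- ... work only against rows WHERE session_id = @session ...

    -- Normal cleanup at the end of the run.
    DELETE FROM staging_work WHERE session_id = @session;

    -- Maintenance for sessions that ended prematurely: purge stale leftovers.
    DELETE FROM staging_work WHERE created_at < DATEADD(DAY, -1, GETDATE());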

I recommend that you rethink your processing strategy. Some alternatives are as simple as using correlated queries, derived tables or table variables. Take a look at: http://www.sql-server-performance.com/articles/per/temp_tables_vs_variables_p1.aspx
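
A minimal sketch of two of those alternatives, a table variable and a derived table; the names (@work, dbo.orders) are hypothetical:

    -- Table variable: scoped to the batch, no explicit DROP needed.
    DECLARE @work TABLE (id INT PRIMARY KEY, amount DECIMAL(10, 2));

    INSERT INTO @work (id, amount) VALUES (1, 10.50);
    INSERT INTO @work (id, amount) VALUES (2, 99.00);

    SELECT id, amount FROM @work WHERE amount > 50;

    -- Derived table: the intermediate result exists only inside the query.
    SELECT t.customer_id, t.total
    FROM (
        SELECT customer_id, SUM(amount) AS total
        FROM dbo.orders
        GROUP BY customer_id
    ) AS t
    WHERE t.total > 100;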

Both the approach of creating and dropping regular tables and the approach of reusing a regular table augmented with an additional field will generate query plan recompilations, because the amount of data changed will trigger a re-evaluation of the table statistics. Again, your best approach is to find alternative ways to process your data.
