I need to update a very large (300M records) and fairly wide table, TABLE1, every day. The source data for the update sits in another table, UTABLE, which has only 10%-25% as many rows as TABLE1 but is much narrower. Both tables use record_id as their primary key.
Currently, I am recreating TABLE1 with the following approach:
<!-- language: sql -->
1) SELECT (required columns) INTO TMP_TABLE1
   FROM TABLE1 T JOIN UTABLE U ON T.record_id = U.record_id
2) DROP TABLE TABLE1
3) EXEC sp_rename 'TMP_TABLE1', 'TABLE1'
However, this takes nearly 40 minutes on my server (SQL Server using 60GB of RAM), and I am hoping for a 50% performance improvement. What other options can I try?
MERGE and UPDATE: something like the code below works faster only for a very small UTABLE; at full size, everything just hangs:

<!-- language: sql -->

    MERGE TABLE1 AS target
    USING UTABLE AS source
    ON target.record_id = source.record_id
    WHEN MATCHED THEN
        UPDATE SET target.columns = source.columns
I have heard that I can perform a batched MERGE by using ROWCOUNT, but I don't think that will be fast enough for a 300M-row table.
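For reference, a batching pattern that does not rely on SET ROWCOUNT is to walk UTABLE in key ranges, so each UPDATE touches a bounded number of rows and each chunk commits on its own. This is only a sketch under assumptions not stated in the question: record_id is assumed to be numeric, @batch is a tuning knob, and col1 is a placeholder for the real column list:

<!-- language: sql -->

    DECLARE @minId BIGINT;
    DECLARE @maxId BIGINT;
    DECLARE @batch BIGINT = 500000;  -- rows of key space per chunk; tune for your log/IO

    SELECT @minId = MIN(record_id), @maxId = MAX(record_id) FROM UTABLE;

    WHILE @minId <= @maxId
    BEGIN
        UPDATE T
        SET T.col1 = U.col1          -- placeholder: list the real columns here
        FROM TABLE1 T
        JOIN UTABLE U ON T.record_id = U.record_id
        WHERE U.record_id >= @minId
          AND U.record_id < @minId + @batch;

        SET @minId = @minId + @batch;
    END

In autocommit mode each iteration is its own transaction, which avoids one huge log-heavy statement; if record_id values are sparse, batching on ROW_NUMBER() over the keys instead of arithmetic ranges keeps chunk sizes even.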
Are there any SQL query hints that could help?