Hello,
We are currently migrating an existing clustered SQL Server 2008 R2 instance over to a clustered SQL Server 2012 instance as we phase out Windows Server 2008 and SQL Server 2008 R2.
The setup of the SQL Server 2012 instance matches the SQL Server 2008 R2 instance (the RAM and CPU are the same or better on the SQL Server 2012 instance).
We are migrating by moving a few databases over to the new SQL Server 2012 instance each night. What we've noticed is that CPU usage is much higher on the SQL Server 2012 instance than on the previous SQL Server 2008 R2 instance, even though only half of the databases have been migrated to the 2012 instance so far.
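For reference, a rough per-database CPU breakdown can also be pulled from the plan cache along the following lines. This is only a sketch and an approximation: it covers cached plans only, and the dbid plan attribute reflects each plan's database context.

-- Approximate CPU consumed per database, based on cached plans
SELECT  DB_NAME(CONVERT(INT, pa.value)) AS database_name,
        SUM(qs.total_worker_time) / 1000 AS cpu_ms   -- total_worker_time is in microseconds
FROM    sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) pa
WHERE   pa.attribute = 'dbid'
GROUP BY DB_NAME(CONVERT(INT, pa.value))
ORDER BY cpu_ms DESC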
Running the following script:
;WITH cte ([totalCPU]) AS
(
    SELECT SUM(cpu) FROM master.dbo.sysprocesses
)
SELECT  tblSysprocess.spid,
        tblSysprocess.cpu,
        CONVERT(BIGINT, (tblSysprocess.cpu * CONVERT(BIGINT, 100))) / CONVERT(BIGINT, cte.totalCPU) AS [percentileCPU],
        tblSysprocess.physical_io,
        tblSysprocess.memusage,
        tblSysprocess.cmd,
        tblSysprocess.lastwaittype
FROM    master.dbo.sysprocesses tblSysprocess
CROSS APPLY cte
ORDER BY tblSysprocess.cpu DESC

Produces the following results:
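Since sysprocesses is deprecated as of SQL Server 2012, roughly the same breakdown could also be taken from the newer DMVs. The column mapping below (sys.dm_exec_sessions joined to sys.dm_exec_requests) is my best-guess equivalent, not an exact match:

-- Sketch: rough DMV-based equivalent of the sysprocesses query above.
-- cpu_time is in milliseconds, memory_usage is in 8-KB pages.
;WITH cte (totalCPU) AS
(
    SELECT SUM(CONVERT(BIGINT, cpu_time)) FROM sys.dm_exec_sessions
)
SELECT  s.session_id,
        s.cpu_time,
        CONVERT(BIGINT, s.cpu_time) * 100 / NULLIF(cte.totalCPU, 0) AS [percentileCPU],
        s.reads + s.writes AS physical_io,
        s.memory_usage AS memusage,
        r.command,
        r.last_wait_type
FROM    sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
CROSS JOIN cte
ORDER BY s.cpu_time DESC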
Is this normal in a clustered environment? If not, does anyone know what this means or how to reduce the CPU usage?
Thanks.