Channel: SQL Server Database Engine forum
Viewing all 12963 articles
Browse latest View live

Who will be announced as the next SQL Server Database Engine Guru? Read more about the July 2019 competition!



What is TechNet Guru Competition?

Each month, the Microsoft TechNet Wiki council organizes a contest for the best articles posted that month. This is your chance to be announced as MICROSOFT TECHNOLOGY GURU OF THE MONTH!

One winner in each category will be selected each month for glory and adoration by the MSDN/TechNet Ninjas and the community as a whole. Winners will be announced in a dedicated blog post published on the Microsoft Wiki Ninjas blog and in a tweet from the Microsoft Wiki Ninjas Twitter account; links will be published in the Microsoft TNWiki group on Facebook, and other acknowledgement from the community will follow.

Some of our biggest community voices and many MVPs have passed through these halls on their way to fame and fortune.

If you have already made a contribution in the forums or gallery, or you have published a nice blog post, you can simply convert it into a shared wiki article, reference the original post, and register the article for the TechNet Guru Competition. The articles must be written in July 2019 and must be in English. However, the original blog or forum content can be from before July 2019.

Come and see who is making waves in all your favorite technologies. Maybe it will be you!


Who can join the Competition?

Anyone who has basic knowledge and the desire to share it is welcome. Articles can appeal to beginners or discuss advanced topics. All you have to do is add your article to TechNet Wiki in your own specialty category.


How can you win?

  1. Copy or write up your Microsoft technical solutions and revelations on TechNet Wiki.
  2. Add a link to your new article on THIS WIKI COMPETITION PAGE (so we know you've contributed)
  3. (Optional but Recommended) Add a link to your article at the TechNetWiki group on Facebook to get feedback and tips from the council members and from the community. The group is very active and people love to help. You can even get direct improvements to your article before the contest starts.

Do you have any questions or want more information?

Feel free to ask any questions below, or join us at the official Microsoft TechNet Wiki groups on Facebook. Read more about the TechNet Guru Awards.

If you win, people will sing your praises online and your name will be raised as Guru of the Month.

PS: The banner above came from James van den Berg.


Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
----------------------------
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page


Azure SQL Server VM CPU spikes every minute


In a SQL VM, the CPU spikes every minute or so. To reproduce, just create a SQL VM, connect with RDP, and watch Task Manager or Performance Monitor.

This comes from a "Free SQL Server License: SQL Server 2017 Express on Windows Server 2016" instance; the VM size is Standard B2s (2 vCPUs, 4 GiB memory).





I see these events in system_health on a similar cycle:

name                                                  timestamp
scheduler_monitor_system_health_ring_buffer_recorded  2019-07-04 18:11:54.9572837
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552314
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552381
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552402
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552417
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552432
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:32.8552441
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613111
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613161
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613178
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613195
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613255
memory_broker_ring_buffer_recorded                    2019-07-04 18:12:37.9613263
scheduler_monitor_system_health_ring_buffer_recorded  2019-07-04 18:12:55.0391458
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639521
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639583
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639606
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639618
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639630
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:39.4639639
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485663
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485715
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485737
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485756
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485804
memory_broker_ring_buffer_recorded                    2019-07-04 18:13:44.4485813
scheduler_monitor_system_health_ring_buffer_recorded  2019-07-04 18:13:55.1468387
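For reference, a query like the one below (a sketch, assuming the default system_health session and its ring_buffer target) can list the recent ring buffer events with their timestamps, which is one way to correlate them with the CPU spikes:

```sql
-- Sketch: list recent events recorded by the system_health ring buffer target.
SELECT TOP (50)
    xed.event_data.value('(@timestamp)[1]', 'datetime2') AS event_time,
    xed.event_data.value('(@name)[1]', 'varchar(100)')   AS event_name
FROM (
    SELECT CAST(t.target_data AS xml) AS target_xml
    FROM sys.dm_xe_session_targets AS t
    JOIN sys.dm_xe_sessions AS s ON s.address = t.event_session_address
    WHERE s.name = 'system_health' AND t.target_name = 'ring_buffer'
) AS src
CROSS APPLY src.target_xml.nodes('//RingBufferTarget/event') AS xed(event_data)
ORDER BY event_time DESC;
```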


Running SQL Server instances on different ports


Hi All,

     I have installed 2 SQL Server instances on the same server (1 default and 1 named instance). My question is: which port do these 2 instances listen on? Is it the default port 1433 (I haven't configured any port in Configuration Manager)?

     Is it possible to make the named instance listen on a different port? If so, please let me know how to do it, as I could not find the procedure.
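As a starting point, each instance can report the TCP port it is actually listening on (a sketch; run it while connected to each instance over TCP):

```sql
-- Sketch: show the TCP port the current instance is listening on.
-- local_tcp_port is NULL for shared-memory or named-pipe connections.
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE net_transport = 'TCP';
```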

Regards,

V

Replacing the SQL Server and SQL Server Agent service login with a new login in an AAG


All,

We have a SQL Server 2016 AAG which runs under a domain AD login, call it "DomainName\SQLHost_ID". We would like to replace this login with a different login named "DomainName\SQLAAG_ID" with the same rights as the old one.

On a standalone server we would change it from SQL Server Configuration Manager. But considering that the changes will be made in an AAG environment, which also has CNO entries in AD, is there anything specific that I have to keep in mind?
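One AAG-specific detail worth remembering: the new service account needs CONNECT permission on the database mirroring endpoint on every replica. A sketch, with the endpoint name assumed to be the default Hadr_endpoint:

```sql
-- Sketch (endpoint name assumed): grant the new service account access to the
-- availability group endpoint on each replica after switching the service account.
CREATE LOGIN [DomainName\SQLAAG_ID] FROM WINDOWS;
GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [DomainName\SQLAAG_ID];
```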

rgn

Plan Cache - Health


We have been having some performance issues lately, and I've been asked to look into them, and also into how healthy (or not) our plan cache is, as running sp_Blitz showed a high number of plans for some queries.

I'm looking into the plan cache on a SQL 2014 instance to check its health.

I ran the following query

SELECT TOP 50
    creation_date = CAST(creation_time AS date),
    creation_hour = CASE
                        WHEN CAST(creation_time AS date) <> CAST(GETDATE() AS date) THEN 0
                        ELSE DATEPART(hh, creation_time)
                    END,
    SUM(1) AS plans
FROM sys.dm_exec_query_stats
GROUP BY CAST(creation_time AS date),
         CASE
             WHEN CAST(creation_time AS date) <> CAST(GETDATE() AS date) THEN 0
             ELSE DATEPART(hh, creation_time)
         END
ORDER BY 1 DESC, 2 DESC



There are a lot of ad hoc queries used only once, but the total size of those single-use plans is only 577 MB, whereas the total size of prepared statements with a use count of 1 is just over 8 GB.
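The figures above can be broken down by plan type with a query like this (a sketch against the plan cache DMV):

```sql
-- Sketch: size of single-use plans by object type, to gauge ad hoc vs. prepared bloat.
SELECT objtype,
       COUNT(*) AS plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype
ORDER BY size_mb DESC;
```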

Can anyone give any advice on what to look into based on the figures above?

How healthy or unhealthy would you describe the cache?

Should I look into tuning, or into clearing out the ad hoc plans periodically?

Do you think the high number of prepared statements with a use count of 1 could be an issue? They are taking up the most space in the cache.

I would appreciate any advice at all, thanks. 


ilikefondue


How to use OPENQUERY to properly execute a Stored Procedure that updates a linked server table?


Hello there guys, I'm having a hard time trying to figure this out. I'm using OPENQUERY to execute stored procedures on a linked server.
I managed to find a way to do it, using the following:

SELECT * FROM OPENQUERY([LINKEDSRV\SQLSRV], 'SET FMTONLY OFF; SET NOCOUNT ON; EXEC [Database_Name_Example].Data.usp_GetNames 5, ''S''')
GO

The reason I use SET FMTONLY OFF and SET NOCOUNT ON is because when I tried the code above without them, this message appeared:

Msg 7357, Level 16, State 1, Line 1
  Cannot process the object "EXEC
[Database_Name_Example].Data.usp_GetNames 5,'S'". The OLE DB provider
"SQLNCLI" for linked server "LINKEDSRV\SQLSRV" indicates that either the
object has no columns or the current user does not have permissions on
that object.


Reading a lot online helped me find the answer.

So far so good, until I stumbled on a stored procedure that uses SELECT and UPDATE on some tables on the linked server. When I execute it through OPENQUERY I get the results from the SELECT statement, but I noticed that it does not update the table.

The Stored Procedure goes like this:

CREATE PROCEDURE Data.usp_GetNames (@year tinyint = NULL, @online char(1) = NULL)
AS
IF @online = 'S'
BEGIN
    SELECT nam_codig, nam_desc
    FROM Names
    WHERE nam_status = 'P'
    ORDER BY nam_date

    UPDATE Names
    SET nam_status = 'S'
    WHERE nam_status = 'P'
END

When I execute it directly on the linked server it updates the table, so the second time I execute it there are no more rows with nam_status = 'P'.

But if I execute it with OPENQUERY the results stay the same; it doesn't UPDATE, it just selects the data.

What can I do to get the update to go through?
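One alternative worth trying: OPENQUERY is meant for queries that return a rowset, while EXEC ... AT runs the batch directly on the remote server, side effects included. A sketch (requires the "rpc out" option to be enabled on the linked server definition):

```sql
-- Sketch: execute the procedure remotely instead of wrapping it in OPENQUERY.
-- Needs: EXEC sp_serveroption 'LINKEDSRV\SQLSRV', 'rpc out', 'true';
EXEC ('EXEC [Database_Name_Example].Data.usp_GetNames 5, ''S''') AT [LINKEDSRV\SQLSRV];
```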

Thank you all!!!

Event 17890 on Windows application log


Hi,

After SQL Server 2014 update KB4491539, our server has this issue: "A significant part of sql server process memory has been paged out. This may result in a performance degradation." I can't figure out what is causing this behaviour. The server has enough RAM (16 GB), of which 14 GB is assigned to SQL Server. Even simple queries take about 7 minutes to run, while before the update the same queries ran instantly.

I had the same issue half a year ago, but it disappeared after some SQL Server update; I can't remember which one. Now it has come back. I tried granting Lock Pages in Memory to the service account that runs the WID service (NT SERVICE\MSSQL$MICROSOFT##WID), and also set the LowMemoryThreshold registry value to 512, but it kept happening.

After I found out that it started happening again after the last update, I tried to uninstall it but could not. I could not figure out which part of the SQL Server update needed to be uninstalled first; whichever one I clicked to uninstall, it said I needed to uninstall something else first, so I just cancelled.

Can somebody help me with this issue? The server is almost useless; if I run a slightly more complicated query, it almost hangs.
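To see whether Lock Pages in Memory is actually in effect for the instance, and how much of its memory is paged, a query like this can help (a sketch; locked_page_allocations_kb = 0 means the working set can still be paged out by the OS):

```sql
-- Sketch: check the instance's memory state and locked-page usage.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       page_fault_count
FROM sys.dm_os_process_memory;
```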

Antanas

Execution plan of one specific application call (scan) differs from the cached plan (seek). There was no recompile; the cached plan (seek) still exists and is reused.


*Related to SQL Server 2016
Hi,

I have a case where a specific SP execution call (a regular call from the application) behaved very badly, with tons of CPU and IO (the call date is 2019-07-01).
I looked at the database's cached plan and saw it has existed since 2019-06-16, which means it was not recompiled with a wrong plan as I suspected (first screenshot).
But then I looked in Redgate SQL Monitor, which stores the plans of executed calls, and it shows a totally different plan, with an index scan that seems correlated with the high IO from this call. According to SQL Monitor, this plan's creation date is the same as the cached plan's (?) (second screenshot).
If it helps, I'm logging server activity with sp_whoisactive, and it shows that for the badly performing call from 2019-07-01 the plan was NULL (third screenshot).
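To compare what is actually in the cache against what the monitoring tool captured, the cached plan and its creation time can be pulled directly (a sketch; the procedure name filter is a placeholder):

```sql
-- Sketch (procedure name assumed): fetch the cached plan and its creation time.
SELECT qs.creation_time, qs.execution_count, qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
WHERE st.text LIKE '%usp_YourProcName%';
```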

Is there any good explanation for this strange case?

Thank you
Asaf


Database Backup Not Working Properly


Hi Sir

I have an issue with database backups. I back up 53 databases across more than 8 servers daily, using the same method below.

A-Auto tool (Server 1) ---> connects to batch server (Server 2) --> connects to database server (Server 3) and takes the backup on a common file server.

I am executing the command below on the batch server (Server 2):

sqlcmd -S DatabaseServer -b -E -l 300 -Q "Exec msdb.dbo.DatabaseBackup 'DB Name','File Server path','FULL','Y',48" START

Basically it works fine every day, but some nights it fails with the message below on the batch server:

Error Message: An internal error occurred in Execute SqlCmd (reason: error can not be identified).

There is no error on the database instances. I doubt whether this request is even reaching the database instance.

Can anyone help me to fix this issue?

Regards,

Deepak

Debug option is not there in SQL Server Management Studio


Hi,

we have installed SQL Server 2016 Standard Edition.
Developers want to debug stored procedures, but in Management Studio we don't see a 'Debug' option at all. I tried the add/remove buttons customization, but nothing worked. Please help me add the Debug option to SQL Server 2016 Management Studio.


Thanks,
Jo


pols

Difference between a query and a view - performance is STARKLY different. Why?


I have the following statement, which references sub-views.

If I create a query, paste this in, and execute it, it takes only a couple of seconds to generate the resulting 691 rows.

If I put the contents below into a view and do a "Select Top 1000 Rows", it takes over TWO MINUTES to generate!

Also, if I use this view in PowerApps, it acts like it's using the latter method, because the app shows the data-load progress for several minutes.

What is going on with this? I use other tables and views in PowerApps that don't take that kind of time, even with 4x as many rows.

SELECT        TOP (100) PERCENT dbo.Documents.DocumentID, dbo.VIEW_SUB_PDM_DWGREVISION_LATEST.DwgRevision, dbo.VIEW_SUB_PDM_DESCRIPTION_LATEST.Description, 
                         dbo.VIEW_SUB_PDM_PROJECTNAME_LATEST.ProjectName, dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST.PartNumber, dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST.CurrentStatusID, 
                         CASE WHEN dbo.Documents.CurrentStatusID <> 10 THEN 1 ELSE 0 END AS Pending, dbo.Documents.LatestRevisionNo, dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST.Filename, 
                         dbo.VIEW_SUB_PDM_RvTbl_DESCRIPTION_LATEST.RvTbl_Description, dbo.VIEW_SUB_PDM_RvTbl_DwgDate_LATEST.RvTbl_DwgDate, dbo.VIEW_SUB_PDM_RvTbl_Approved_LATEST.RvTbl_Approved, 
                         dbo.VIEW_SUB_PDM_RvTbl_Revision_LATEST.RvTbl_Revision, dbo.DocumentsInProjects.ProjectID
FROM            dbo.VIEW_SUB_PDM_RvTbl_DESCRIPTION_LATEST RIGHT OUTER JOIN
                         dbo.Documents INNER JOIN
                         dbo.DocumentsInProjects ON dbo.Documents.DocumentID = dbo.DocumentsInProjects.DocumentID INNER JOIN
                         dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST.DocumentID ON 
                         dbo.VIEW_SUB_PDM_RvTbl_DESCRIPTION_LATEST.DocumentID = dbo.Documents.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_DESCRIPTION_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_DESCRIPTION_LATEST.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_DWGREVISION_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_DWGREVISION_LATEST.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_PROJECTNAME_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_PROJECTNAME_LATEST.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_RvTbl_Approved_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_RvTbl_Approved_LATEST.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_RvTbl_DwgDate_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_RvTbl_DwgDate_LATEST.DocumentID LEFT OUTER JOIN
                         dbo.VIEW_SUB_PDM_RvTbl_Revision_LATEST ON dbo.Documents.DocumentID = dbo.VIEW_SUB_PDM_RvTbl_Revision_LATEST.DocumentID
ORDER BY dbo.Documents.DocumentID DESC



I am adding the structure of the SQL sub-tables below. It shows how the data is organized, to hopefully bring some more understanding of the structure of my statement above.

As shown above, for every DocumentID there are several VariableIDs that represent properties I need separated into columns of "latest" values. The largest RevisionNo represents the latest value. Thus, for Part Number (VariableID = 54) I need the latest ValueText; in this example, the value I need for Part Number is "204069". That logic needs to be applied for every VariableID value. It is being done one VariableID at a time in those sub-views, and then everything is assembled with the statement above.

The statement that does the logic just described is in this sub-view (dbo.VIEW_SUB_PDM_PARTNUMBER_LATEST):

SELECT        TOP (100) PERCENT DocumentID, VariableID, ExtensionID, PartNumber, RevisionNo, CurrentStatusID, Date, TransitionNr, RevNr, LatestRevisionNo, Filename, Deleted
FROM            (SELECT        dbo.VariableValue.DocumentID, dbo.VariableValue.VariableID, dbo.Documents.ExtensionID, dbo.VariableValue.ValueText AS PartNumber, dbo.VariableValue.RevisionNo, dbo.Documents.CurrentStatusID, 
                                                    dbo.TransitionHistory.Date, dbo.TransitionHistory.TransitionNr, dbo.TransitionHistory.RevNr, dbo.Documents.LatestRevisionNo, dbo.Documents.Filename, dbo.Documents.Deleted, ROW_NUMBER() 
                                                    OVER (PARTITION BY dbo.VariableValue.DocumentID
                          ORDER BY dbo.VariableValue.RevisionNo DESC, dbo.TransitionHistory.TransitionNr DESC) AS Seq
FROM            dbo.Documents INNER JOIN
                         dbo.VariableValue ON dbo.Documents.DocumentID = dbo.VariableValue.DocumentID INNER JOIN
                         dbo.TransitionHistory ON dbo.Documents.DocumentID = dbo.TransitionHistory.DocumentID
WHERE        (dbo.Documents.Deleted = 0) AND (dbo.Documents.ExtensionID = 3) AND (dbo.Documents.CurrentStatusID <> 0) AND (dbo.VariableValue.VariableID = 54)) t
WHERE        Seq = 1
ORDER BY DocumentID DESC, RevisionNo DESC, TransitionNr DESC

The result (for the same example) looks like this in the final view:


The above shows DwgRevision (VariableID=49), Description (VariableID=47), ProjectName (VariableID=45), and PartNumber (VariableID=54).

By the way, DwgRevision is not the same as RevisionID in the other view.  

PowerApps needs this data in this simple columnar form. I'm not married to the SQL I used above; in fact, people MUCH smarter than me had a great deal to do with how this came about. If there is a better way to reorganize this data into columns showing the latest values for these VariableIDs, I'm certainly open to it!
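For what it's worth, the per-variable sub-views could in principle be collapsed into a single pass with conditional aggregation, which avoids joining seven views. A sketch, using the VariableID values mentioned in the post and assuming each variable's latest value comes from the same ROW_NUMBER logic as the sub-views:

```sql
-- Sketch: one pass over VariableValue, pivoting the latest value per VariableID.
WITH latest AS (
    SELECT vv.DocumentID, vv.VariableID, vv.ValueText,
           ROW_NUMBER() OVER (PARTITION BY vv.DocumentID, vv.VariableID
                              ORDER BY vv.RevisionNo DESC) AS Seq
    FROM dbo.VariableValue AS vv
)
SELECT DocumentID,
       MAX(CASE WHEN VariableID = 54 THEN ValueText END) AS PartNumber,
       MAX(CASE WHEN VariableID = 49 THEN ValueText END) AS DwgRevision,
       MAX(CASE WHEN VariableID = 47 THEN ValueText END) AS Description,
       MAX(CASE WHEN VariableID = 45 THEN ValueText END) AS ProjectName
FROM latest
WHERE Seq = 1
GROUP BY DocumentID;
```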


Clarify space needed for recreating a clustered index


SQL 2014 SP2
OS version = Windows 2012 R2 Standard  = version 6.3.9600

Database size is 857.38 GB

Our primary problem is DBCC CheckDB process fails 60% of the time with Error Message:
"D:\path\db_name.mdf_MSSQL_DBCC12: Operating system error 665(The requested operation could not be completed due to a file system limitation) encountered. "

My experience is this error was fixed at customer sites by upgrading them from SQL 2012 to 2014. I am surprised to see this problem still cropping up at SQL 2014 sites.

https://support.microsoft.com/en-us/help/2002606/os-errors-1450-and-665-are-reported-for-database-data-files
Recommends:
1.) Rule out operating-system-level file fragmentation.
2.) Split the database into multiple files instead of the single MDF file it uses currently.

        We will do this using article     https://blogs.msdn.microsoft.com/sqlserverfaq/2011/08/02/moving-data-from-the-mdf-file-to-the-ndf-files/

My question is please clarify the instruction:

"3a.) Create a new file group and a file (in the new file group) on a different disk with a large initial size, at least twice the size of the data you are moving onto that file."

So if our table is 700 GB and is currently within the MDF file, does this mean the new NDF file needs to be created at 1400 GB?

The database is 857 GB, all in one MDF file on logical drive D; one large table is 700 GB. When I drop/recreate the index, it will spread the data evenly between the MDF (drive D) and the NDF file (drive N), so 350 GB of this table into each file. Do I need 700 GB of extra space on drive D and 700 GB of extra space on drive N, totaling 1557 GB on D and 1050 GB on N?

Or do I need 350 GB of free space on each of drives D and N?
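For reference, the move itself can be done in a single operation once the new filegroup exists. A sketch, with the table, index, and filegroup names assumed:

```sql
-- Sketch (names assumed): rebuild the clustered index onto the new filegroup.
-- DROP_EXISTING moves the data in one operation; SORT_IN_TEMPDB shifts the
-- sort workspace off the data drives.
CREATE UNIQUE CLUSTERED INDEX [PK_BigTable]
ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
ON [SecondaryFG];
```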

Thanks.

Massive Initial Size Log File


Hi guys, I have a database whose transaction log file has a massive initial size; it's eating all the disk space.

What happens if I manually decrease the file by 50%? Just slower performance, I believe (if the log grows beyond the initial size)...
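If shrinking turns out to be appropriate, it is done per file with DBCC SHRINKFILE. A sketch, with the database and logical file names assumed; the logical name can be found in sys.database_files:

```sql
-- Sketch (names assumed): shrink the log file to a 10 GB target rather than zero,
-- so the log doesn't immediately have to grow again.
USE YourDb;
DBCC SHRINKFILE (YourDb_log, 10240);  -- target size in MB
```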

DBCC CHECKDB & tempdb: space not freed up after SQL job finishes


Good Morning All,  

I have a 500 GB tempdb in a shared SQL Server 2012 environment, and its data files fill up nearly completely every time the DBCC CHECKDB process runs through a maintenance plan biweekly; I noticed that the space is not freed after the SQL job finishes.

During the run I checked, and it most likely starts filling up when CHECKDB reaches one database of over 500 GB on the server. That database has multiple data files.

We have another shared database server with a larger database than the one above, but CHECKDB runs fine there, especially tempdb-space-wise, and we haven't had an issue with it filling up.

I have multiple tempdb files created on the server, on a separate drive.

Any suggestions on what needs to be done or checked to reclaim the space automatically?
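To see what kind of allocation is actually holding the tempdb space during or after the CHECKDB run, the file space usage DMV breaks it down (a sketch):

```sql
-- Sketch: break down tempdb space by allocation type (values in MB).
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;
```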

Thank you. 




Thank you very much for your time and effort to answer this post. Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach Best -Ankit


SQL DB Mail shut down notification or alert


With server restarts and SQL service restarts, the Database Mail process shuts down. Recently we had an issue where tempdb filled up and the Database Mail process shut off. Though we have alerts to tell us about insufficient resources or an inactive node, it is often forgotten that Database Mail needs to be started after everything is back online. I am trying to find a way to generate an alert or notification that Database Mail is shut off, or to turn it back on automatically. Has anyone done this before?

I guess I could query the log and run a SQL Agent job, but I do not want the job running every hour, so I am looking for alternatives.
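One possible approach is a small check that restarts Database Mail if its queues are stopped, run from an Agent job or after failover. A sketch using the msdb status procedures:

```sql
-- Sketch: restart Database Mail if it is not running.
-- sysmail_help_status_sp returns a single Status row (STARTED or STOPPED).
DECLARE @status sysname;
CREATE TABLE #mail_status (status sysname);
INSERT #mail_status EXEC msdb.dbo.sysmail_help_status_sp;
SELECT @status = status FROM #mail_status;
IF @status <> 'STARTED'
    EXEC msdb.dbo.sysmail_start_sp;
DROP TABLE #mail_status;
```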


TDE enabled Database


Hi,

  We have enabled TDE in our environment and have taken backups of the master key, certificate, and private key. We have tested restoring the TDE-enabled DB in the dev environment by restoring the master key and private key successfully. I have a couple of questions regarding the master key; please help me understand the following.

1) With TDE enabled, what is the use of the master key backup, given that we create a new master key on the other instance and are able to restore the certificate and private key along with the encrypted user database?

2) For changing the master key password I found the command below (googled); please let me know if there is any other command for changing the password, other than regenerating.

ALTER MASTER KEY REGENERATE WITH ENCRYPTION BY PASSWORD = 'new_password';

3) If we change the master key password on the source database with the above command, should we create a new backup of the certificate and private key, or would the old backup suffice? Ideally, I want to know the best practice to follow when changing the master key password.
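For question 2, there is an alternative to REGENERATE: a database master key can carry more than one password encryption, so a new password can be added and the old one dropped without regenerating the key material itself. A sketch (passwords are placeholders):

```sql
-- Sketch: change the master key password without regenerating the key.
ALTER MASTER KEY ADD ENCRYPTION BY PASSWORD = 'new_password';
ALTER MASTER KEY DROP ENCRYPTION BY PASSWORD = 'old_password';
```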

Regards,

Raj



Create a table in an MSSQL DB using a stored procedure on a schedule


Hi,

I already have two data tables in the DB and have joined them using a stored procedure, so I got one table built by joining the two tables. But will this table (the one built from the join) receive updated data?

If it does get updated data, how can I get it from Excel?


How should I index a lookup table where I need to check a value is between 2 other values


https://social.msdn.microsoft.com/Forums/sqlserver/en-US/e569ad54-24dc-462d-92d1-2056bc3aee33/how-to-create-index-for-date-range-columns?forum=sqldatabaseengine

This was a similar question. In my case instead of a date range, my lookup is of bank card products (credit and debit cards). The lookup table has low and high card number values, for example Card_Range_Low = 5045950000000000000 and Card_Range_High = 5045959999999999999 which is a Debit card. I then join my day's card transactions (around 3 million) to this table using the actual card numbers to derive the product.

So for example if my card number was 5045 9532 0028 3264 then the product would be that described above.

Sometimes ranges overlap, usually because the Chinese card scheme UPI can have ranges that are traditionally MasterCard and Visa, so I also need to include Scheme in my join. On top of that, a product can expire and its range be reused for a new product, so there are start and end dates that need to be compared to the posting date of the transaction.

Could someone suggest an indexing strategy for the lookup table that will improve the matching performance? It is awfully slow and hard on my server at present.
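One common pattern for range lookups like this is to index on the low bound (plus Scheme) and probe with TOP (1) per transaction instead of an open BETWEEN join, so the optimizer gets a seek. A sketch; table and column names beyond those in the post are assumed:

```sql
-- Sketch (names assumed): index the ranges so a seek finds the candidate row.
CREATE INDEX IX_CardRange
ON dbo.CardProductLookup (Scheme, Card_Range_Low)
INCLUDE (Card_Range_High, Product, Start_Date, End_Date);

-- Probe: take the highest Card_Range_Low <= card number, then verify the high bound.
SELECT t.CardNumber, p.Product
FROM dbo.Transactions AS t
CROSS APPLY (
    SELECT TOP (1) l.Card_Range_High, l.Product
    FROM dbo.CardProductLookup AS l
    WHERE l.Scheme = t.Scheme
      AND l.Card_Range_Low <= t.CardNumber
    ORDER BY l.Card_Range_Low DESC
) AS p
WHERE t.CardNumber <= p.Card_Range_High;
```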


Nick Ryan MIS Programmer Analyst, ANZ Bank

GetDate() in SQL Server on Linux and UTC


Hi,

Is my understanding correct that in SQL Server on Linux we will always get UTC time when running SELECT GETDATE()?

I can't find any documentation related to this at all.
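A quick way to check on any given server is to compare the local and UTC functions side by side (a sketch; GETDATE() follows the host's timezone setting, which in many Linux container images defaults to UTC):

```sql
-- Sketch: compare server-local time, UTC, and the local time with its offset.
SELECT GETDATE()           AS server_local_time,
       GETUTCDATE()        AS utc_time,
       SYSDATETIMEOFFSET() AS local_with_offset;
```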

Thanks


