
shrinking a SQL Server 2014 database - an experiment


I have read so much about the evils of shrinking SQL Server databases, but here's our predicament.

We have a DB that supports a message broker.  On a given day it may temporarily store 10-20 million messages (or more) received from hundreds to thousands of endpoints and then distributed to many applications.  Each message can be up to a few megabytes in length.  At rest, the database holds only a few megabytes of valid data, but it can "blossom" to over a hundred GB during the day.  The lifetime of a message in this database might be as short as a fraction of a second or as long as several weeks or months, depending on the health and speed of the receiver.

You might think this is the best argument ever for not shrinking, but I have found that before very long, this DB starts to slow down.

Here is an experiment I did.  I used the same DB to deliver the same set of test data each time.  (We have a test database containing 1.4 million records.  This data looks like real data in that it was created from US census data with the Red Gate Data Generator, so it is varied in content and size; we didn't want to get caught with bad test results because SQL Server was able to cache more of our test data than it could in real life.)  Caveat: this test was done only on development machines, not on production servers; on production machines we would see much higher throughput numbers.

---

Test 1: We delivered test data to 8 endpoints for 1,945 seconds (the point at which the first endpoint had received all the data in its sample).

Results: Average throughput 477.5 messages/second; 927,949 messages delivered in total.

---

Then we took a backup copy (BACKUP A) and ran a maintenance plan that did the following (sketched in T-SQL below):

 - Checked all databases involved in the test
 - Rebuilt any index that had over 25% fragmentation and at least 20 segments
 - Updated statistics using full scans (probably wasn't needed after rebuilding the indexes)
 - Did a full backup (probably wasn't needed either; BACKUP B)
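
For reference, here is roughly what those steps look like in T-SQL.  This is a minimal sketch, not the actual maintenance plan: the database name MessageBroker and the table dbo.Messages are placeholders, the query interprets "20 segments" as the fragment-count threshold, and the backup path is invented.

    USE MessageBroker;  -- placeholder database name

    -- 1. Integrity check (the "check DB" step)
    DBCC CHECKDB (MessageBroker) WITH NO_INFOMSGS;

    -- 2. Find indexes over the thresholds (25% fragmentation, 20 fragments)
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.fragment_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.index_id > 0  -- skip heaps
      AND ips.avg_fragmentation_in_percent > 25
      AND ips.fragment_count >= 20;

    -- 3. Rebuild the qualifying indexes (shown for one table;
    --    the plan does this for every index the query above returns)
    ALTER INDEX ALL ON dbo.Messages REBUILD;

    -- 4. Update statistics with a full scan
    UPDATE STATISTICS dbo.Messages WITH FULLSCAN;

    -- 5. Full backup (BACKUP B)
    BACKUP DATABASE MessageBroker
        TO DISK = N'X:\Backups\MessageBroker_B.bak' WITH INIT;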

---

Test 2:  Ran the exact same test again for 1,945 seconds (the same amount of time), delivering the same test data to the same endpoints.

Results: Average throughput 369.7 messages/second; 718,677 messages delivered in total.

---

Then we restored the source database from BACKUP A.

Then we added a Database Shrink task to the maintenance plan after the "check DB" step and before the "rebuild indexes" step, and ran the plan on the restored database.  NOTE: I requested that 40% free space be left in the database after shrinking, we have the threshold for shrinking set pretty high, and our autogrowth increment is quite large.
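
In T-SQL terms, the shrink step amounts to something like the following (again a sketch; the database and logical file names are placeholders, and 40 is the target percentage of free space to leave behind):

    -- Shrink the database, leaving 40% free space in the files
    DBCC SHRINKDATABASE (MessageBroker, 40);

    -- A large autogrowth increment keeps growth events infrequent
    -- (logical file name and increment are illustrative)
    ALTER DATABASE MessageBroker
        MODIFY FILE (NAME = MessageBroker_Data, FILEGROWTH = 4GB);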

---

Test 3: Ran the exact same test again for 1,945 seconds, delivering the same test data to the same endpoints.  After reading all those articles, I half expected this test to be the worst by far.

Results: Average throughput 495.3 messages/second; 962,874 messages delivered in total.

---

Any idea why my results were the complete opposite of what everyone seems to say so emphatically (shrinking is evil)?  They also seem to say there are no exceptions to the rule, ever.

Thanks,
Rob Hutchison

