For specific inquiries please contact SATS RDBA Services or call 650 288-1701.
We have all been there: a huge data file is sitting somewhere on a disk drive and you regret ever allowing it to grow so much in the first place. The disk is almost out of space, you have found some objects that can be dropped (or some rows that can be deleted), and a glance at the free space makes you smile and imagine a world where hard drives are empty and databases maintain themselves while you enjoy a nice day out in the park (or, more likely, go straight back to sleep, because it is 3AM and your pager will not stop ringing). All those dreams shatter, however, when nothing can be shrunk even though you have plenty of space to reclaim.
Transaction logs are quite often the culprit of space issues. Make sure you back up transaction logs frequently (a log backup allows the log to be truncated and its space reused) and instruct the developers to COMMIT (or CHECKPOINT) frequently in their code.
If the business allows it, consider running the database in *Simple recovery model* (by executing the SQL statement below):
ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;
The simple recovery model does not allow recovery to an arbitrary point in time afterwards; however, it is sometimes the only way out, since hard drives don't grow on trees.
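You can verify which recovery model a database is currently using (and confirm that the switch took effect) with a quick query against the sys.databases catalog view; the database name below is a placeholder:

```sql
/* Check the current recovery model ('MyDatabase' is a placeholder name) */
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';
```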
The transaction log can also be truncated and shrunk relatively quickly, if necessary, using the following statements (note that WITH TRUNCATE_ONLY was removed in SQL Server 2008; on later versions, switch the database to the simple recovery model instead):
BACKUP LOG [MyDatabase] WITH TRUNCATE_ONLY
DBCC SHRINKFILE (MyDatabase_Log, 1)
Notice that doing this invalidates your transaction log backup chain, so it is recommended that you take a FULL backup straight afterwards.
Disable auto-growth, switch to static data files and install some sort of monitoring (like Cerberus) that will notify you when space is running low. It is even recommended to create several filegroups and grow them manually yourself. Also make sure the monitoring alerts you while there is still something you can do (e.g. when the disk is 90% full, not when there is no space left at all).
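As a starting point for such monitoring, a sketch along these lines reports the free space inside each file of the current database (FILEPROPERTY returns the used size in 8KB pages, hence the division by 128 to convert to MB):

```sql
/* Free vs. used space per file in the current database.
   size and SpaceUsed are both counted in 8KB pages; /128 converts to MB. */
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files;
```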
Many (lazy) DBAs create jobs to shrink a database whenever there is some free space to reclaim. I have even seen cases where a database was set to auto-grow and auto-shrink simultaneously, causing a cycle of pain and agony for all resources involved. In general, shrinking is not good for your database: it hogs a lot of resources and creates fragmentation.
So before you go any further in this post:
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
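If you are not sure which databases on the instance still have the option enabled, a quick check against sys.databases will tell you (this catalog view exists on SQL Server 2005 and later):

```sql
/* List every database that still has auto-shrink enabled */
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;
```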
Don't put all your eggs in one basket
Make sure to have a few partitions and, most importantly, separate the data files from the log files and from the operating system. A number of programs allow this to be done even after installation, with minimal downtime, or you can do it manually with Transact-SQL commands.
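For example, moving the log file of a user database to another drive can be sketched roughly as follows; the target path is a placeholder, and the physical file must be copied at the OS level while the database is offline:

```sql
/* Point SQL Server at the log file's new location
   (the path is a placeholder - adjust to your environment) */
ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = MyDatabase_Log,
                 FILENAME = 'E:\SQLLogs\MyDatabase_Log.ldf');
ALTER DATABASE [MyDatabase] SET OFFLINE;
/* ...now copy the .ldf file to the new path at the OS level... */
ALTER DATABASE [MyDatabase] SET ONLINE;
```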
Shrinking is the act of taking pages from the end of the data file and moving them as close to the beginning of the file as possible; during this process all the links between pages are fixed up.
Sounds too good to be true? Well, it is. As long as no indexes (clustered or non-clustered) are involved, this works perfectly. However, when those small pieces of SQL data that make our lives better (and our queries faster) are being tampered with, the result is usually heavy index fragmentation. In fact, every type of shrinking (other than TRUNCATE_ONLY) is bad for us in this respect - but we can always take the few hours needed and rebuild all the indexes afterwards.
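To see how badly a shrink has fragmented things, something along these lines (using the sys.dm_db_index_physical_stats DMV, available since SQL Server 2005) lists the worst offenders in the current database:

```sql
/* Indexes with more than 30% fragmentation in the current database;
   LIMITED mode keeps the scan itself cheap */
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 30
ORDER BY s.avg_fragmentation_in_percent DESC;
```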
The problem occurs when we try to recover a large amount of space (let's say 50GB): SQL Server will attempt to move 50GB worth of pages and only then will it try to truncate the end of the file. This could take days - and since we are low on space, we need a solution fast!
Let's say our file MyDatabase_Data is around 300GB right now and we need to release about 50GB. Assume you already tried truncating the file by running the following:
DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);
At this point you have probably freed very little (if any) space, so you resort to the following statement:
DBCC SHRINKFILE (MyDatabase_Data, 250000);
This will run for a very long time, during which you will not regain any space on the drive, so I offer you the following T-SQL script, which shrinks the data in small leaps instead and lets you reclaim space as it goes:
declare @from int
declare @leap int
declare @to int
declare @datafile varchar(128)
declare @cmd varchar(512)

set @from = 300000                 /* Current size in MB */
set @to = 250000                   /* Goal size in MB */
set @datafile = 'MyDatabase_Data'  /* Data file name */
set @leap = 1000                   /* Size of each leap in MB */

print '--- SATS SHRINK SCRIPT START ---'
while ((@from - @leap) > @to)
begin
    set @from = @from - @leap
    set @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + cast(@from as varchar(20)) + ')'
    exec(@cmd)
    print '==> SATS SHRINK SCRIPT - ' + cast((@from - @to) as varchar(20)) + 'MB LEFT'
end
/* Final shrink down to the exact goal size */
set @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + cast(@to as varchar(20)) + ')'
exec(@cmd)
print '--- SATS SHRINK SCRIPT COMPLETE ---'
Instructions - edit the settings section to fit your needs and execute against the database where the data file is located. It will still take about the same total time; however, you will be able to see the progress of the operation and use the freed space more quickly. After you have shrunk the data, I recommend rebuilding the fragmented indexes on the database and taking the necessary steps to keep this from ever happening again. Please do not shrink in very small steps - keep the total number of steps under 100.
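The rebuild itself can be as simple as the statement below, run per table (the table name is a placeholder); for moderately fragmented indexes, a REORGANIZE is lighter and runs online:

```sql
/* Rebuild all indexes on one table ('dbo.MyBigTable' is a placeholder) */
ALTER INDEX ALL ON dbo.MyBigTable REBUILD;

/* Lighter alternative for moderate fragmentation */
ALTER INDEX ALL ON dbo.MyBigTable REORGANIZE;
```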
Questions/Comments? - feel free to contact Cerberus (which is me) on
RDBA.net forums under Oracle DBA corner.