Upgrade to Vault 4.1.1

Gary_Berg
Posts: 29
Joined: Fri Oct 05, 2007 11:34 am
Location: Dayton, OH

Post by Gary_Berg » Tue Apr 29, 2008 1:07 pm

I'm looking at upgrading my Vault 4.0.4 clients and server since I want to move to VS2008 programming. This is running on a Windows XP workstation with SQL Server; it's not running under Windows Server 2003. I've got a few questions:

1) I'm really not certain what account I used for the service. The processModel section of machine.config says autoConfig="true". I do find that when I go into the Security/Logins section of SQL Server Management, "machinename\ASPNET" is present and was created at about the right time. How can I confirm or determine this?

2) In looking at the machine, which has been in use for about six months, I find that sgvault.mdf is 98 MB, but sgvault_log.ldf is about 4.25 GB! I did a search in the forums, but the responses seemed to be written for SQL admins, which I'm not. I've got a maintenance plan running that does a backup, then a shrink-database task, then a rebuild-index task. So how do I shrink the LDF file, which is obviously way too large?

3) Is there a step-by-step upgrade script for going from 4.0.x to 4.1.x?

4) From what I've read, I need to upgrade the server and clients simultaneously?

lbauer
Posts: 9736
Joined: Tue Dec 16, 2003 1:25 pm
Location: SourceGear

Post by lbauer » Tue Apr 29, 2008 2:23 pm

I could use a couple more bits of info:

What version of SQL Server are you using? It sounds like SQL Server is on the same machine; is that correct?
Linda Bauer
SourceGear
Technical Support Manager

Gary_Berg
Posts: 29
Joined: Fri Oct 05, 2007 11:34 am
Location: Dayton, OH

Post by Gary_Berg » Tue Apr 29, 2008 2:51 pm

SQL Server 2005 (not Express), on the same machine.

Gary_Berg
Posts: 29
Joined: Fri Oct 05, 2007 11:34 am
Location: Dayton, OH

Post by Gary_Berg » Wed Apr 30, 2008 2:24 pm

Linda - did this get overlooked?

lbauer
Posts: 9736
Joined: Tue Dec 16, 2003 1:25 pm
Location: SourceGear

Post by lbauer » Thu May 01, 2008 9:43 am

If you look in the Vault server log where there has been a restart (search for "System Started"), you'll see Identity=(some account). This is the account the Vault service is running under. If you are using Windows authentication for SQL Server, this is also the user Vault uses for SQL Server. If you are using SQL Server authentication, the account would be sgvaultuser.
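
You can also cross-check from the SQL side. A minimal sketch (the LIKE filter just targets the ASPNET login you noticed in Security/Logins; drop it to see every login):

-- List server logins and creation dates to match against
-- the Identity= account from the Vault server log.
SELECT name, type_desc, create_date
FROM sys.server_principals
WHERE name LIKE '%ASPNET%'
GO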

To shrink your log file, run this SQL command:

USE [sgvault]  -- SHRINKFILE resolves the logical file name in the current database, not master
GO

BACKUP LOG sgvault WITH TRUNCATE_ONLY  -- clear the inactive log records (SQL Server 2005 syntax)
DBCC SHRINKFILE (sgvault_log)          -- then shrink the physical log file
GO


To upgrade from 4.0 to 4.1, back up your Vault databases, both sgvault and sgmaster, then install 4.1. You will need to upgrade clients at the same time.
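
If you'd rather script the pre-upgrade backups than rely on the maintenance plan, something like this would do (the paths are placeholders; point them wherever you keep backups):

BACKUP DATABASE sgvault TO DISK = N'D:\Backups\sgvault.bak' WITH INIT
GO
BACKUP DATABASE sgmaster TO DISK = N'D:\Backups\sgmaster.bak' WITH INIT
GO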
Linda Bauer
SourceGear
Technical Support Manager

Gary_Berg
Posts: 29
Joined: Fri Oct 05, 2007 11:34 am
Location: Dayton, OH

Post by Gary_Berg » Thu May 01, 2008 11:38 am

If I want to move the Vault SQL databases to another location (another drive letter), can I use SQL Server Management Studio to detach them, move the files, and then reattach them? Or does Vault store the path to the files too?

I've got a second 20 GB partition on this system which would be a great place to store the files.

Also, I have a maintenance plan running on this machine which does a backup and a few tasks such as index rebuilds. Is there something I can do, as part of the backup task or other maintenance tasks, that will truncate the database log? Or do I just have to remember to trash the database log from time to time?

lbauer
Posts: 9736
Joined: Tue Dec 16, 2003 1:25 pm
Location: SourceGear

Post by lbauer » Sat May 03, 2008 7:56 pm

You can detach the databases, move them to another location and then reattach them.
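
A minimal sketch of those steps in T-SQL (E:\Data is just an example destination; repeat for sgmaster, and stop the Vault server first so no one is connected):

USE [master]
GO
-- Detach while no connections are open.
EXEC sp_detach_db @dbname = N'sgvault'
GO
-- Move sgvault.mdf and sgvault_log.ldf to the new drive, then reattach:
CREATE DATABASE sgvault
ON (FILENAME = N'E:\Data\sgvault.mdf'),
   (FILENAME = N'E:\Data\sgvault_log.ldf')
FOR ATTACH
GO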

You should be able to incorporate this into your maintenance plan. After you have a valid database backup, look at running BACKUP LOG sgvault WITH TRUNCATE_ONLY, and then DBCC SHRINKDATABASE('sgvault', 0, TRUNCATEONLY) to remove the unused portion of the database's log.
Linda Bauer
SourceGear
Technical Support Manager

Mike Dimmick
Posts: 28
Joined: Tue Feb 17, 2004 7:42 am
Location: UK

Post by Mike Dimmick » Mon May 12, 2008 3:00 am

The transaction log is a fundamental part of SQL Server's operation. It records every transaction that is made, in order to redo or undo the operations in the case of a database failure. (This allows SQL Server to perform write-back caching on the actual data pages, and to support rolling back a transaction without having to keep details of all active transactions in memory.) The exact behaviour of what is logged depends on your Recovery Model setting.

In Full mode, the default on production editions of SQL Server, the before and after image of every row on every update is logged. Each row inserted is logged, and the data of every row deleted is also logged. In addition, indexes built are logged, and allocations of file space to specific objects are logged. This is the most extensive logging model but it allows for the log to be backed up even if the database is damaged; the backed-up log can then be restored over a restored database to bring you back to point-of-failure.

In Bulk-Logged mode, bulk operations are not logged completely. Instead, SQL Server marks in a bitmap which pages of the database have been modified by a bulk operation. When the log is backed up, the pages that were modified by bulk operations are also backed up, so the log backup can be bigger than the actual active part of the log. If any of these pages are damaged or unreadable the log backup will fail, preventing recovery to point of failure. You can see this option - backing up the log - as an incremental backup.

In both of the above options, as soon as you make a full database backup, SQL Server assumes that you're keeping a rolling backup setup and retains the log. The only way to clear the existing log records is to back up the log. That's why you use the BACKUP LOG statement to clear it with the TRUNCATE_ONLY option. Backing up the database does not clear the log - you must back up the log as well.
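
For the rolling setup described here, the routine log backup — which clears inactive log records while keeping the recovery chain intact, unlike TRUNCATE_ONLY — is a one-liner (the path is a placeholder):

BACKUP LOG sgvault TO DISK = N'D:\Backups\sgvault_log.trn' WITH NOINIT
GO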

The Simple recovery model works like bulk-logged, but as soon as a transaction has committed, its log records are marked inactive. You cannot make log backups in this mode, so you cannot recover to point-of-failure, only to your last full backup.

The log file will grow if the end of the current file is reached and the beginning of the file is still marked active. This can still happen on a Simple database if there is a long outstanding transaction.
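
Two standard diagnostics for that situation, both available on SQL Server 2005:

DBCC SQLPERF(LOGSPACE)    -- how full each database's log currently is
GO
DBCC OPENTRAN('sgvault')  -- the oldest active transaction holding the log
GO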

If you want the ability to recover to point of failure, I would recommend leaving the recovery model set to Full. You will need to commit to backing up the transaction log as well as the data - transaction logs form a chain and you need all the log backups since the full backup to recover.

If you don't want to make that commitment or you're happy with only being able to recover to your last good full backup, I recommend setting the recovery model to Simple, and you'll never have this problem again.
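
Checking and changing the recovery model is a one-liner each (a minimal sketch):

SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'sgvault'
GO
-- Only if you accept recovering to the last full backup:
ALTER DATABASE sgvault SET RECOVERY SIMPLE
GO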

lbauer
Posts: 9736
Joined: Tue Dec 16, 2003 1:25 pm
Location: SourceGear

Post by lbauer » Wed May 14, 2008 9:14 am

Thanks for the information. This would make a great KB article. Can I use your description?
Linda Bauer
SourceGear
Technical Support Manager

Mike Dimmick
Posts: 28
Joined: Tue Feb 17, 2004 7:42 am
Location: UK

Post by Mike Dimmick » Fri May 16, 2008 10:41 pm

Sure, go ahead.

The only other thing I would add is that for availability reasons, I would normally put my transaction log on a separate physical disk, so that if the disk or filesystem containing the data files failed you'd still have the log files available to back up. This also has the advantage of keeping the log I/O separate from the database I/O.

Transaction log I/O is 100% sequential write-only unless transactions are rolled back (and that can normally be done from the log buffer), except when recovering the database from a dirty shutdown, in which case it's 100% sequential read-only. Sequential I/O is fastest when the head is already at or near the right place to write to or read from (the bounding factor in most disk I/O is head location). You need transaction log I/O to be fast because the log records must be fully committed to disk before SQL Server will return from committing the transaction (at that point it guarantees that the effects of the transaction are permanent even if a failure occurs). Mixing data and log I/Os on the same disk causes the head to move to a location that isn't optimal for writing the log records.

Because transaction log I/O is always sequential, it does not benefit at all from striping, so if you're using a RAID setup, just use RAID 1 (mirroring) for your transaction log disks. It should also be evident that you should keep transaction logs for different databases on different disks, otherwise the head has to keep moving to different locations as the different logs are written to. This property is the same whether you're dealing with physical hard disks in the same box, external arrays, SAN fabric or virtual machine virtual hard disks - they have to ultimately point to separate physical spindles.

If you are going to use dedicated disks, it's a good idea to create your data files and log files close to the size of the disk. Autogrowth is a problem because it leads to the file becoming fragmented, so what appears to SQL Server to be a sequential scan can in fact be random access. You might want to keep the files a few GB smaller than the volume, set a small autogrow size, and watch for autogrow events to know when to upgrade your storage. Never enable autoshrink or regular shrinking on a live database; it just wastes time.

In terms of sizing, the log file even for the Simple recovery model should be a few GB. It should be big enough that the expected rate of logging won't cause it to have to wrap around to the start for at least a few minutes. Remember that absolutely every change is logged to the transaction log, and that can include adding a new large file to Vault. (From memory, doesn't Vault store compressed diffs between file versions?)
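
As a concrete illustration of pre-sizing the log with a small fixed autogrow increment (the sizes are examples, not recommendations for any particular workload; sgvault_log is the logical file name used earlier in this thread):

ALTER DATABASE sgvault
MODIFY FILE (NAME = sgvault_log, SIZE = 4GB, FILEGROWTH = 256MB)
GO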

Since Vault is designed for small enterprises, the database setup is likely to be a compromise for most people. We're running Vault on a 2 x RAID-1 configuration, sharing a database server with database-based applications under development and also with Windows Server Update Services. The database server is also the web server. The database files are on one RAID-1 array, and the log files for all databases are on the other.

We use the Full recovery model for sgmaster and sgvault, and some other production databases, and the Simple model for everything under development so that log file expansion doesn't get out of control. Our source tree is currently a bit less than 1 GB and we have fewer than ten developers making relatively few check-ins per day, so commit time is insignificant. We rarely have more than three developers working on the same solution concurrently, but everything's currently all in one repository, although I may review this now that we have the Folder Move feature after upgrading to Vault 4.1.
