Monday, March 19, 2012

Can the transaction log be turned off?

I have a couple of databases that see a lot of updates (WEB user session state stuff) and I don't care at all if I lose any or all of the data. They don't get backed up and fresh copies can be created in seconds. Is there a way I can turn off the transaction logging to cut down on overhead?

Thanks|||No.

Why, what's the problem?

What's your recovery model set to?|||Recovery model is 'Simple'. Just thought I'd be able to cut down overhead. I've been getting timeouts and am grasping at straws.|||Originally posted by grahamt
Recovery model is 'Simple'. Just thought I'd be able to cut down overhead. I've been getting timeouts and am grasping at straws.

When you say timeouts, what do you mean by that?

From QA, from the application layer?

Are you blocking yourself (sp_lock)

How many connections are open (sp_who2)

How big is the database?

How often do you take backups?|||The recovery model has nothing to do with blocking or timeouts on the database. Check for issues with the network, client connections, and the queries involved, which may be causing the timeouts.
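For reference, the recovery model can be checked and changed with T-SQL (syntax valid on SQL Server 2000; the database name below is a placeholder, not from this thread):

```sql
-- Check the current recovery model (replace SessionDB with your database name)
SELECT DATABASEPROPERTYEX('SessionDB', 'Recovery')

-- Set it to Simple, which minimizes log retention. Note that logging itself
-- cannot be turned off entirely; SQL Server always writes to the transaction log.
ALTER DATABASE SessionDB SET RECOVERY SIMPLE
```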

Whatever the recovery model, make sure to maintain and schedule full backups of the database.|||The SQL Server timeouts are reported by the ASP.
A bit of config info.

Hardware

SQL Server 2000
Dual Xeon 2.4GHz CPU
Ultra 320 SCSI controller
Two Ultra 320 18GB, 15000 RPM Hard Disks (C: and D:)
Two GB RAM (1.5GB assigned to SQL Server)
Windows 2000 Server

WEB Server connected to SQL server by Gigabit Ethernet.

Two databases involved. One has a single table of perhaps 2000 records (Session state info for WEB users) and is heavily used (Insert, Delete, and Select). It resides on drive C:.

The second DB, residing on drive D:, has one Master table and 26 others (A-Z). The Master table has about 16,000 records of roughly 6K each, with a primary key on a varchar(50) and a char(2) column. The other tables run from 1,000 to 50,000 records at about 70 bytes/record (PK on a varchar(50) field). No updates are ever done (well, every few months maybe) and the Selects are done using stored procedures: one for the Master table (2 parameters) and 26 for the other tables (1 parameter each).

Every few minutes (time varies as does the DB with the DB on drive D: getting the most) the WEB app logs an SQL Server timeout. With this horsepower driving these small DBs (and nothing else running) I wouldn't expect any timeouts at all.
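One way to see what is happening at the moment a timeout is logged is to run the standard system procedures mentioned earlier in the thread; a minimal sketch (both ship with SQL Server 2000):

```sql
-- Run these when the ASP layer logs a timeout, to see whether
-- blocking is the cause.
EXEC sp_who2   -- active connections; the BlkBy column shows who is blocking whom
EXEC sp_lock   -- current locks, listed by spid
```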

Since I don't care about the data in one table and the data in the other never changes, I don't bother with backups.

Any thoughts?|||What are the timeout settings in SQL and in the ASP script?
It may be worth running DBCC DBREINDEX and other maintenance-plan checks on the database, which will help performance.
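As a sketch of what that maintenance might look like, assuming the heavily used table is called SessionState (the name is illustrative, not from this thread):

```sql
-- Rebuild all indexes on the table, with a 90% fill factor
-- ('' means all indexes on the table)
DBCC DBREINDEX ('SessionState', '', 90)

-- Refresh the optimizer's statistics afterwards
UPDATE STATISTICS SessionState
```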

And also check the network settings; the network admin may be able to help.|||http://vyaskn.tripod.com/sql_odbc_timeout_expired.htm - a useful guide for troubleshooting timeouts when using SQL & ASP.|||I've been monitoring network traffic with 3Com's Network Monitor and everything seems fine. The WEB server showed high FTP traffic, so I've shut that service down, but there's no perceptible load on the SQL server. Performance Monitor shows Disk Idle Time averaging 98% or better on both drives, and the CPU load never exceeds 3-4%. I've run the SPs through QA, and because they are so basic the execution cost is almost nil. The two SPs are:

CREATE PROCEDURE [dbo].[prGetOrigins_A]
@SomeVar varchar(50)
AS

SET NOCOUNT ON

SELECT Code, Master FROM A WHERE SomeVar = @SomeVar
GO

and

CREATE PROCEDURE [dbo].[prGetMasterRec]
@master varchar(20),
@code varchar(3)
AS
SET NOCOUNT ON

SELECT id, Code, SomeData
FROM Masters WHERE Master = @master AND Code = @code
GO

and still I get random timeouts.

Personally, I am not concerned. A timeout every 5 minutes or so with 250 users on the WEB server probably shouldn't happen, but I can live with it. It's the boss who panics and worries that he may have lost a $5.00 sale. 8-)
