Redheadatabase

About me: My name is Maria Zakourdaev. I have more than 10 years' experience with SQL Server (starting with SQL Server 6.5). The last five years have been spent mostly on benchmarking different SQL Server features and flows, such as data import, the impact of indexes on DML flows, star transformations in RDBMSs, hierarchical queries and custom OLAP-like aggregations. I was a speaker on the SQL Server track at Microsoft TechEd (Israel). I am also an active member of the Israel SQL Server Group.

Monday, February 27, 2012

I'm moving !!!!!

I am truly excited to move this blog to SQLblog.com!

http://sqlblog.com/blogs/maria_zakourdaev/default.aspx

You are more than welcome to drop by.

Stay tuned

Over and out ;)


Wednesday, November 2, 2011

DBCC LOGINFO on SQL 2005 causes “non-yielding processes on scheduler” errors and server timeouts

If you are using DBCC LOGINFO in the production environment and you are lucky enough to still have an instance of SQL Server 2005 which, suddenly out of the blue, started throwing “non-yielding processes on scheduler” errors into the error log and becomes unresponsive, then you should know that there can be a connection between the two.
It is most probably a bug in DBCC LOGINFO, reproducible only on SQL Server 2005 instances.
It’s been fixed for SQL 2008 and up.
Maybe this is one of the reasons why this DBCC is undocumented…

I have the following procedure, which loops through all the databases and checks the number of VLFs (virtual log files: logical sections of the transaction log file, allocated each time a log growth occurs).
It uses DBCC LOGINFO, which is undocumented but very useful for this purpose. Unfortunately, there was a database snapshot on the server and the procedure got executed against it as well as against the other databases (not on purpose, of course; I just used the sp_msforeachdb procedure without too much thinking):



ALTER PROCEDURE [ResourceMonitoring].[CheckVLFnumber]
AS
BEGIN
    CREATE TABLE #vlf(field int, filesize bigint, startoffset bigint, fseqno bigint,
                      status int, parity int, createlsn varchar(50))
    CREATE TABLE #countVLF(dbname varchar(256), VLFcount int, LogSize int, LogGrowth int)

    EXEC sp_msforeachdb '
        TRUNCATE TABLE #vlf;
        DECLARE @LogSize int, @LogGrowth int
        SELECT @LogSize   = (size * 8.) / 1024.,
               @LogGrowth = (growth * 8.) / 1024.
        FROM [?].dbo.sysfiles
        WHERE groupid = 0   -- log files only, not data files
        INSERT INTO #vlf
            EXEC(''DBCC LOGINFO (''''?'''') WITH NO_INFOMSGS'');
        INSERT INTO #countVLF
            SELECT ''?'', count(*), @LogSize, @LogGrowth
            FROM #vlf; '

    IF EXISTS (SELECT * FROM #countVLF WHERE VLFcount > 300)
    BEGIN
        SELECT *
        FROM #countVLF
        WHERE VLFcount > 300
        ORDER BY VLFcount DESC
    END
    ELSE
    BEGIN
        SELECT 'None' AS [Output]
    END
END



According to my investigation, when you issue DBCC LOGINFO against a database snapshot (which, again, can only happen by mistake, because a database snapshot has no transaction log file anyway), the session gets stuck with ‘spinloop’ status in sys.dm_exec_requests and never finishes. As soon as any DDL/update/insert/delete statement is issued against the snapshot's source database, you get a nice mini-dump together with the following error in the error log (143 is the session_id issuing the DBCC):
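A quick way to spot a session stuck this way is to look for the spinloop status directly; a minimal sketch against the standard requests DMV:

```sql
-- Hypothetical diagnostic: find requests stuck in spinloop status,
-- such as a DBCC LOGINFO issued against a database snapshot.
SELECT session_id, status, command, wait_type, total_elapsed_time
FROM sys.dm_exec_requests
WHERE status = 'spinloop';
```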

Process 143:0:0 (0x684) Worker 0x000000008086A1C0 appears to be non-yielding on Scheduler 0. Thread creation time: 12963945475631. Approx Thread CPU Used: kernel 0 ms, user 359 ms. Process Utilization 0%. System Idle 99%. Interval: 70015 ms.

Your server then starts to suffer from log waits on ALL databases:










At this point mysterious system ‘TASK MANAGER’ processes start coming into the picture, taking exclusive locks on the system tables sysdbreg and sysobjvalues. Take a look below at what the locking picture looks like:




It is interesting that these system processes take part in flushing changes to the log file (the WRITELOG wait type indicates a worker thread waiting for the log writer to flush log blocks).
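To see how heavily an instance is waiting on log flushes overall, the cumulative statistics for this wait type can be checked; a minimal sketch using the standard wait-stats DMV:

```sql
-- Cumulative WRITELOG waits since the last restart (or last stats clear).
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';
```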



All locks stay in place and are never released, and all other statements against the source database are suspended. My favorite part: when I tried to restart the server, SQL Server Agent wouldn't go down properly because it, too, was impacted by the metadata locking, and without stopping it the SQL Server service won't go down either. So, no failover. Restart.




In my opinion, this command shouldn't be allowed against a database snapshot in the first place. Well, they already do that with DBCC LOG.





Adding a filter that ignores database snapshots avoids this problem, and many other potential issues, if you like using sp_msforeachdb for a database loop:

EXEC sp_msforeachdb '
    IF EXISTS ( SELECT * FROM sys.databases
                WHERE source_database_id IS NULL AND name = ''?'' )
    BEGIN
        TRUNCATE TABLE #vlf;
        DECLARE @LogSize int, @LogGrowth int
        SELECT @LogSize   = (size * 8.) / 1024.,
               @LogGrowth = (growth * 8.) / 1024.
        FROM [?].dbo.sysfiles
        INSERT INTO #vlf
            EXEC(''DBCC LOGINFO (''''?'''') WITH NO_INFOMSGS'');
        INSERT INTO #countVLF
            SELECT ''?'', count(*), @LogSize, @LogGrowth
            FROM #vlf;
    END
'



During server monitoring we usually filter out system processes and look at user connections only. It is important to know that TASK MANAGER system processes take an active part in user activity: they are involved when user processes execute distributed queries, extended stored procedures and CLR routines, and they also participate in user transactions, playing an active role in the transaction logging process.
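If you want to include these system tasks in your monitoring, a sketch like the following lists them (is_user_process = 0 filters to system sessions):

```sql
-- System sessions, including the TASK MANAGER workers mentioned above.
SELECT s.session_id, s.status, r.command, r.wait_type
FROM sys.dm_exec_sessions s
LEFT JOIN sys.dm_exec_requests r ON r.session_id = s.session_id
WHERE s.is_user_process = 0;
```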


Tuesday, October 18, 2011

Easy load testing using Management Studio

There is an easy way to open several connections to your SQL Server in order to perform load testing: the Management Studio "Multiple Server Query Execution" feature. This feature is usually used to run the same query against multiple servers, but we can also use it for load or stress testing.
In the Registered Servers tab, create a group with the same name as the server you are planning to use for the load testing. Inside the group create 10 or more connections to the same server, making sure that in the server alias field you add a number to the server name; otherwise Management Studio will not allow you to create two server connections with the same alias. An easy way to add multiple connections is to edit the Management Studio connections XML file, which you can find at "C:\Users\<> \AppData\Roaming\Microsoft\Microsoft SQL Server\110\Tools\Shell\RegSrvr.xml": simply copy the connection entry multiple times, changing the alias name for each new connection.

After all the connections have been created, right-click on the group name and choose the "New Query" option. Management Studio will open a single window, but everything you execute through this window will run in 10 different connections simultaneously against your server.
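As a sketch of what you might run from the group query window, the batch below keeps each of the connections busy for about a minute (the SELECT is a placeholder; substitute your own workload query):

```sql
-- Runs in every connection of the group simultaneously.
DECLARE @stop datetime
SET @stop = DATEADD(minute, 1, GETDATE())
WHILE GETDATE() < @stop
BEGIN
    SELECT COUNT(*) FROM sys.objects;  -- placeholder workload
END
```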




As you can see in the status bar, it looks like you are connected to 10 different servers, which are, in fact, one.



In the results of any executed query, Management Studio will by default append an extra column with the server name alias, as defined in the connection settings. Below you can see how many connections, with different SPIDs, you are holding.



You can change this behavior by going to Tools -> Options and choosing to stop adding the server name to the results of each query.


In case you need more than 10 connections, you can define more connections under the server group or open several Management Studio query windows.


Monday, October 17, 2011

PASS 2011 summit excitement

The PASS Summit is an amazing event; it is fantastic to see more than 4000 SQL Server DBAs and BI developers from all over the world in one place. Everyone here is talking SQL, joking SQL and breathing SQL. The most terrific thing about SQL Server DBAs is that crazy, passionate light in their eyes when they talk about their job and about SQL Server in general.

I'm very new to the Twitter world, but here everything that happens at the summit gets commented on Twitter (note the big screen in the photo: it's all tweets): all the tricky questions and funny remarks during the sessions, the mentors' critical notes and, of course, "thank you very much for a great session" comments, as well as all the activities: #sqlrun in the morning, #sqlawards, #sqlquiz and #sqlclinic during the day, and #sqlkaraoke in the evening. The second day of PASS was #sqlkilt day; many #sqlpeople were walking around in kilts.



It's quite difficult to choose which lecture to attend, because there are 15 in each timeslot and always at least 3 that look like "the must". I attended many; some of them were breathtaking, some were more interesting, some less, and some were just a chance to meet the mentors who actually write the books that we read and the SQL blogs and articles that we discuss. Towards the end of the day, more and more people with melted brains could be seen around. The absolute top winners of all the sessions were:

  1. Pre-conf "All about execution plans" by Gail Shaw and Grant Fritchey
  2. Pre-conf "An enlightened approach to performance tuning" by Adam Machanic
  3. "Inside Tempdb" by Bob Ward
  4. "Important trace flags that every DBA should know" by Victor Isakov
  5. "More DBA mythbusters" by Paul Randal
  6. "Zen and Art of workspace memory" by Adam Machanic

The time flew by so fast; the summit is over and I'm back home. Now I somehow need to stop my Twitter addiction: I cannot make myself stop following the #sqlpass tweets, because that way I feel I'm still back there among this incredible #sqlfamily.


Monday, March 7, 2011

DDL Triggers and SET options

I have just had an unforgettable experience with DDL triggers and, as far as I can see from a Google search, I'm not alone.

DDL triggers, as you know, fire in response to server or database DDL events, such as metadata or security changes. We use them to track changes on the publishing server (the changes that replication cannot take care of) and apply them on the subscriber: the DDL trigger writes the statement into a statements table, and another process later executes those statements on the subscriber.

Here is the code of the DDL trigger and the statement-logging table:

################################################################################

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE dbo.DDLOutput(
    ID int IDENTITY(1,1) NOT NULL,
    DateAdded datetime NULL,
    EventType nvarchar(255) NULL,
    DatabaseName sysname NULL,
    SchemaName sysname NULL,
    ObjectName sysname NULL,
    ObjectType nvarchar(255) NULL,
    Statement nvarchar(max) NULL,
    LoginName sysname NULL,
    CONSTRAINT PK_DDLOutput PRIMARY KEY NONCLUSTERED ( ID )
)
GO

################################################################################

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [TRG_DDLServer]
ON ALL SERVER
FOR DDL_EVENTS
AS
BEGIN TRY
    DECLARE @PostTime datetime
    DECLARE @EventType nvarchar(max)
    DECLARE @DatabaseName nvarchar(max)
    DECLARE @SchemaName nvarchar(max)
    DECLARE @ObjectName nvarchar(max)
    DECLARE @ObjectType nvarchar(max)
    DECLARE @CommandText nvarchar(max)
    DECLARE @LoginName nvarchar(max)
    DECLARE @error nvarchar(2048)

    SELECT
        @PostTime     = EVENTDATA().value('(/EVENT_INSTANCE/PostTime)[1]','datetime'),
        @EventType    = EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]','nvarchar(max)'),
        @DatabaseName = EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]','nvarchar(max)'),
        @SchemaName   = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','nvarchar(max)'),
        @ObjectName   = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','nvarchar(max)'),
        @ObjectType   = EVENTDATA().value('(/EVENT_INSTANCE/ObjectType)[1]','nvarchar(max)'),
        @CommandText  = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'),
        @LoginName    = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]','nvarchar(max)')

    INSERT INTO DDLOutput(DateAdded,EventType,DatabaseName,SchemaName,ObjectName,ObjectType,Statement,LoginName)
    VALUES(@PostTime,@EventType,@DatabaseName,@SchemaName,@ObjectName,@ObjectType,@CommandText,@LoginName)
END TRY
BEGIN CATCH
    SET @error = 'Failed execute DDL trigger:' + ERROR_MESSAGE()
    ROLLBACK;
    RAISERROR (@error,18,1) WITH LOG
END CATCH
GO

################################################################################

A nice thing worth mentioning: after the table and the DDL trigger have been created, you cannot drop or rename the table; you will get an “Invalid object” error message. Only after disabling the trigger can you drop or rename it.
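For example, to rename the logging table you would first disable the trigger and re-enable it afterwards; a sketch (the new table name is an arbitrary example):

```sql
DISABLE TRIGGER [TRG_DDLServer] ON ALL SERVER;
GO
EXEC sp_rename 'dbo.DDLOutput', 'DDLOutput_old';
GO
ENABLE TRIGGER [TRG_DDLServer] ON ALL SERVER;
GO
```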

After creating the trigger and the table, try to execute the following statement:

CREATE TABLE test1 ( i int)

GO

Now you can see all the details of your statement in the logging table: the script itself, the login name, the database where the statement was executed, and so on.
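For instance, the most recently captured statements can be viewed with:

```sql
-- Latest DDL events recorded by the trigger, newest first.
SELECT TOP 10 DateAdded, DatabaseName, EventType, LoginName, Statement
FROM dbo.DDLOutput
ORDER BY ID DESC;
```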

Now try to execute the following:

SET ANSI_PADDING OFF

GO

CREATE TABLE test2 ( i int)

GO

You will get an error message:

Msg 3609, Level 16, State 2, Line 1

The transaction ended in the trigger. The batch has been aborted.

Failed execute DDL trigger:SELECT failed because the following SET options have incorrect settings: 'ANSI_PADDING'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

The reason for this failure (I must admit it took me hours to get it to work properly) is that the trigger uses XQuery, and in order to use XQuery the following settings must be ON: QUOTED_IDENTIFIER, ANSI_WARNINGS, ANSI_PADDING, ANSI_NULLS and CONCAT_NULL_YIELDS_NULL, while NUMERIC_ROUNDABORT must be OFF.

I tried setting ANSI_DEFAULTS ON, which should cover the first four settings, but, for some weird reason, the trigger still fails if I do not explicitly add each setting separately.
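One way to check which of these options are actually in effect for the current session is the @@OPTIONS bitmask (bit values per the SQL Server documentation: 8 = ANSI_WARNINGS, 16 = ANSI_PADDING, 32 = ANSI_NULLS, 256 = QUOTED_IDENTIFIER, 4096 = CONCAT_NULL_YIELDS_NULL, 8192 = NUMERIC_ROUNDABORT); a sketch:

```sql
-- Decode the session SET options relevant to XQuery.
SELECT CASE WHEN @@OPTIONS & 8    > 0 THEN 'ON' ELSE 'OFF' END AS ANSI_WARNINGS,
       CASE WHEN @@OPTIONS & 16   > 0 THEN 'ON' ELSE 'OFF' END AS ANSI_PADDING,
       CASE WHEN @@OPTIONS & 32   > 0 THEN 'ON' ELSE 'OFF' END AS ANSI_NULLS,
       CASE WHEN @@OPTIONS & 256  > 0 THEN 'ON' ELSE 'OFF' END AS QUOTED_IDENTIFIER,
       CASE WHEN @@OPTIONS & 4096 > 0 THEN 'ON' ELSE 'OFF' END AS CONCAT_NULL_YIELDS_NULL,
       CASE WHEN @@OPTIONS & 8192 > 0 THEN 'ON' ELSE 'OFF' END AS NUMERIC_ROUNDABORT;
```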

Your trigger should look like this:

################################################################################

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [TRG_DDLServer]
ON ALL SERVER
FOR DDL_EVENTS
AS
BEGIN TRY
    DECLARE @PostTime datetime
    DECLARE @EventType nvarchar(max)
    DECLARE @DatabaseName nvarchar(max)
    DECLARE @SchemaName nvarchar(max)
    DECLARE @ObjectName nvarchar(max)
    DECLARE @ObjectType nvarchar(max)
    DECLARE @CommandText nvarchar(max)
    DECLARE @LoginName nvarchar(max)
    DECLARE @error nvarchar(2048)

    SET ANSI_PADDING ON
    SET ANSI_NULLS ON
    SET ANSI_WARNINGS ON
    SET QUOTED_IDENTIFIER ON
    SET CONCAT_NULL_YIELDS_NULL ON
    SET NUMERIC_ROUNDABORT OFF

    SELECT
        @PostTime     = EVENTDATA().value('(/EVENT_INSTANCE/PostTime)[1]','datetime'),
        @EventType    = EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]','nvarchar(max)'),
        @DatabaseName = EVENTDATA().value('(/EVENT_INSTANCE/DatabaseName)[1]','nvarchar(max)'),
        @SchemaName   = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','nvarchar(max)'),
        @ObjectName   = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]','nvarchar(max)'),
        @ObjectType   = EVENTDATA().value('(/EVENT_INSTANCE/ObjectType)[1]','nvarchar(max)'),
        @CommandText  = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)'),
        @LoginName    = EVENTDATA().value('(/EVENT_INSTANCE/LoginName)[1]','nvarchar(max)')

    INSERT INTO DDLOutput(DateAdded,EventType,DatabaseName,SchemaName,ObjectName,ObjectType,Statement,LoginName)
    VALUES(@PostTime,@EventType,@DatabaseName,@SchemaName,@ObjectName,@ObjectType,@CommandText,@LoginName)
END TRY
BEGIN CATCH
    SET @error = 'Failed execute DDL trigger:' + ERROR_MESSAGE()
    ROLLBACK;
    RAISERROR (@error,18,1) WITH LOG
END CATCH
GO

################################################################################
