SQL DBA SCHOOL

Founder Abubeker Refaw

If you want to always capture certain trace events when SQL Server is running, such as auditing events, you can create a stored procedure that uses the sp_trace stored procedures to create a trace and specify the events to be captured. You can use the code listed above as a basis for creating the stored procedure. Then you can mark the procedure as a startup procedure by using the sp_procoption procedure to set the autostart option. The trace automatically starts when SQL Server is started, and it continues running in the background. Just be aware that although using server-side traces is less intrusive than using the SQL Profiler client, some overhead is necessary to run a trace. You should limit the number of events and columns captured to minimize the overhead as much as possible.

Monitoring Running Traces

SQL Server 2008 provides some additional built-in user-defined functions to get information about currently running traces. Like the fn_trace_gettable function discussed previously, these functions return the information as a tabular result. The available functions are as follows:

  •  fn_trace_getinfo(trace_id)—This function is passed a trace ID, and it returns information about the specified trace. If passed the value of default, it returns information about all existing traces. An example of the output from this function is shown below.
  •  fn_trace_geteventinfo(trace_id)—This function returns a list of the events and data columns being captured for the specified trace. Only the event and column ID values are returned. You can use the information provided in Tables 6.2 and 6.3 to map the IDs to the more meaningful event names and column names.
  •  fn_trace_getfilterinfo(trace_id)—This function returns information about the filters being applied to the specified trace. Again, the column ID and logical and comparison operator values are returned as integer IDs that you need to decipher. See below for a listing of the column operator values.
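Rather than looking the IDs up by hand, you can decode the output of fn_trace_geteventinfo by joining it against the catalog views covered later in this chapter. The following query is a sketch that assumes a running trace with an ID of 1 (such as the default trace):

```sql
-- Map the raw event and column IDs captured by trace ID 1
-- to their readable names
SELECT e.name AS 'Event Name', c.name AS 'Column Name'
FROM fn_trace_geteventinfo(1) ti
JOIN sys.trace_events e ON ti.eventid = e.trace_event_id
JOIN sys.trace_columns c ON ti.columnid = c.trace_column_id
ORDER BY e.name, c.name
```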


An Example of Using the Built-in User-Defined Functions for Monitoring Traces

SELECT * FROM ::fn_trace_getinfo(default)

traceid     property    value
----------- ----------- ---------------------------------------------------------------------
1           1           2
1           2           C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log_376.trc
1           3           20
1           4           NULL
1           5           1
2           1           0
2           2           c:\trace\mytrace.trc.trc
2           3           5
2           4           NULL
2           5           1


SELECT * FROM ::fn_trace_getfilterinfo(2)

columnid    logical_operator    comparison_operator    value
----------- ------------------- ---------------------- ----------
3           0                   0                      6
10          0                   7                      Profiler
10          0                   7                      SQLAgent


You may be wondering why there is always a trace with a traceid value of 1 running when you run the fn_trace_getinfo function. This is the default trace that SQL Server automatically initiates when it starts. The default trace is enabled by default. You can identify which trace is the default by selecting from the sys.traces catalog view and examining the is_default column. The default trace captures a number of different types of events, including object creates and drops, errors, memory and disk changes, security changes, and more. You can disable this default trace, but it is generally lightweight and should be left enabled.
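For example, the following simple query returns the ID and output file path of the default trace so you can distinguish it from your own traces:

```sql
-- Identify the default trace and where it is writing its output
SELECT id, path, max_size
FROM sys.traces
WHERE is_default = 1
```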


The output from the functions that return trace information is relatively cryptic because many of the values returned are numeric. For example, the property values returned by fn_trace_getinfo are specified as integer IDs. Table 6.5 describes each of these property IDs.


Property ID    Description
-------------  ------------------------------------------------
1              Trace options specified in sp_trace_create
2              Trace filename
3              Maximum size of trace file, in MB
4              Date and time the trace will be stopped
5              Current trace status
-------------  ------------------------------------------------

Stopping Server-Side Traces

It is important to keep track of the traces you have running and to ensure that “heavy” traces are stopped. Heavy traces are typically traces that capture a lot of events and are run on a busy SQL Server. These traces can affect the overall performance of your SQL Server machine and write a large amount of information to the trace output file. If you specified a stop time when you started the trace, it automatically stops and closes when the stop time is reached. For example, in the SQL script shown above, if you wanted the trace to run for 15 minutes instead of indefinitely, you would set the value for the stoptime variable at the beginning of the script, using a command similar to the following:


                 set @stoptime = dateadd(minute, 15, getdate())
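For the stop time to take effect, the variable must also be passed to sp_trace_create when the trace is defined; it is the procedure's fifth parameter. A minimal sketch (the filename placeholder is from the listing later in this chapter):

```sql
declare @rc int, @TraceID int
declare @maxfilesize bigint, @stoptime datetime
set @maxfilesize = 5
set @stoptime = dateadd(minute, 15, getdate())

-- The stop time is the fifth parameter of sp_trace_create
exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere',
     @maxfilesize, @stoptime
```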


To otherwise stop a running server-side trace, you use the sp_trace_setstatus stored procedure and pass it the trace ID and a status of 0. Stopping a trace only stops gathering trace information and does not delete the trace definition from SQL Server. Essentially, it pauses the trace. You can restart the trace by passing sp_trace_setstatus a status value of 1.


After you stop a trace, you can close the trace and delete its definition from SQL Server by passing sp_trace_setstatus the ID of the trace you want to stop and a status value of 2. After you close the trace, you must redefine it before you can restart it.


If you don’t know the ID of the trace you want to stop, you can use the fn_trace_getinfo function or the sys.traces catalog view to return a list of all running traces and select the appropriate trace ID. The following example shows how to stop and close a trace with a trace ID of 2:


-- Set the trace status to stop

exec sp_trace_setstatus 2, 0

go

-- Close and delete the trace

exec sp_trace_setstatus 2, 2

go


If you want to stop and close multiple traces, you must call sp_trace_setstatus twice for each trace. The script below provides an example of a system stored procedure that you can use to stop and close running traces.


--A Sample System Stored Procedure to Stop Profiler Traces

use master
go

if object_id('sp_stop_profiler_trace') is not null
    drop proc sp_stop_profiler_trace
go

create proc sp_stop_profiler_trace @TraceID int = null
as

if @TraceID is not null
begin
    -- Set the trace status to stop
    exec sp_trace_setstatus @TraceID, 0

    -- Delete the trace
    exec sp_trace_setstatus @TraceID, 2
end
else
begin
    -- the following cursor does not include the default trace
    declare c1 cursor for
        SELECT distinct traceid FROM ::fn_trace_getinfo(DEFAULT)
        WHERE traceid not in (select id from sys.traces where is_default = 1)

    open c1
    fetch c1 into @TraceID
    while @@fetch_status = 0
    begin
        -- Set the trace status to stop
        exec sp_trace_setstatus @TraceID, 0

        -- Delete the trace
        exec sp_trace_setstatus @TraceID, 2

        fetch c1 into @TraceID
    end
    close c1
    deallocate c1
end
go



Analyzing Slow Stored Procedures or Queries

After you identify that a particular stored procedure is running slowly, what should you do? You might want to look at the estimated execution plan for the stored procedure, looking for table scans and sections of the plan that have a high cost percentage. But what if the execution plan has no obvious problems? This is the time you should consider using the SQL Profiler.


You can set up a trace on the stored procedure that captures the execution of each statement within it, along with its duration, in milliseconds. Here’s how:

  1. Create a new trace, using the TSQL_Duration template.
  2. Add the SP:StmtCompleted event from the stored procedure event class to the trace.
  3. Add a filter on the Duration column with the duration not equal to 0. You can also set the filter to a larger number to exclude more of the short-running statements.


If you plan to run the procedure from SSMS, you might want to add a filter on the SPID column as well. Set it equal to the process ID for your session; the SPID is displayed at the bottom of the SSMS window next to your username, in parentheses. This traces only those commands that are executed from your SSMS query editor window.
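The same trace can also be defined server-side with the system procedures covered later in this chapter. The following is a sketch, assuming the numeric IDs from the catalog views (event 45 is SP:StmtCompleted; columns 1, 12, and 13 are TextData, SPID, and Duration; comparison operator 1 is “not equal”) and an illustrative output path:

```sql
declare @rc int, @TraceID int, @on bit
declare @maxfilesize bigint, @duration bigint
set @on = 1
set @maxfilesize = 5
set @duration = 0

exec @rc = sp_trace_create @TraceID output, 0, N'c:\trace\slowproc',
     @maxfilesize, NULL

-- Capture TextData, SPID, and Duration for SP:StmtCompleted (event 45)
exec sp_trace_setevent @TraceID, 45, 1, @on
exec sp_trace_setevent @TraceID, 45, 12, @on
exec sp_trace_setevent @TraceID, 45, 13, @on

-- Duration (column 13) not equal (1) to 0; logical operator 0 = AND
exec sp_trace_setfilter @TraceID, 13, 0, 1, @duration

-- Start the trace
exec sp_trace_setstatus @TraceID, 1
```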


When you run the trace and execute the stored procedure, you see only those statements in the procedure that have nonzero duration. The statements are listed in ascending duration order. You need to look to the bottom of the Profiler output window to find your longer-running statements. You can isolate these statements, copy them to SSMS, and perform a separate analysis on them to determine your problem.


You can also add showplan events to your Profiler trace to capture the execution plan as the trace is running. SQL Server now has showplan events that capture the showplan results in XML format. Traces with this type of XML output can have a significant impact on server performance while they are running but make the identification of poorly performing statements much easier. When you are tracing stored procedure executions, it is a good idea to add a filter on the specific stored procedure you are targeting to help minimize the impact on performance.


After you run a trace with an XML showplan event, you can choose to extract the showplan events to a separate file. To do so, in the SQL Server Profiler you select File, Export, Extract SQL Server Events, Extract Showplan Events. At this point, you can save the showplan events in a single file or to a separate file for each event. The file(s) is saved with a SQLPlan file extension. This file can then be opened in SSMS, and the graphical query execution plan is displayed.


Deadlocks


Deadlocks are a common occurrence in database management systems (DBMSs). In simple terms, deadlocks occur when a process (for example, SPID 10) has a lock on a resource that another process (for example, SPID 20) wants. In addition, the second process (SPID 20) wants the resource that the first process has locked. This cyclic dependency causes the DBMS to kill one of the processes to resolve the deadlock situation. Resolving deadlocks and identifying the deadlock participants can be difficult. In SQL Server 2008 and earlier versions, trace flag 1204 can be set to capture the processes involved in the deadlock.


The output is text based but provides valuable information about the types of locks and the statements that were executing at the time of the deadlock. In addition to this approach, SQL Server 2008 offers the capability to capture detailed deadlock information via the SQL Server Profiler. This type of tracing can be accomplished as follows:


  1. Create a new trace, using a Blank template; this leaves the selection of all the events, data columns, and filters to you.
  2. Add the Locks:Deadlock graph event to the trace from the Locks category. An additional tab named Event Extraction Settings appears on the Trace Properties window.
  3. Click the Save Deadlock XML Events Separately check box. This causes the deadlock information to be written to a separate file. You could also export the results after the trace has been run by using the File, Export option.

When you run this trace, it captures any deadlock event that occurs and writes it to the XML file specified. To test this, you can open two query editor windows and execute the following statements, in the order listed, and in the query window specified:


-- In Query Window #1
-- Step 1
USE AdventureWorks2008
GO
BEGIN TRAN
    UPDATE HumanResources.Employee SET ModifiedDate = GETDATE()

-- In Query Window #2
-- Step 2
USE AdventureWorks2008
GO
BEGIN TRAN
    UPDATE HumanResources.Department SET ModifiedDate = GETDATE()
    SELECT * FROM HumanResources.Employee

-- In Query Window #1
-- Step 3
    SELECT * FROM HumanResources.Department


When the deadlock occurs, the results pane for one of the query windows contains a message similar to the following:

Msg 1205, Level 13, State 51, Line 3 

Transaction (Process ID 55) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.


Identifying Ad Hoc Queries

One problem that can plague a production system is the execution of ad hoc queries against the production database. If you want to identify ad hoc queries, the application, and the users who are running them, SQL Profiler is your tool. You can create a trace as follows:


  1. Create a new trace, using the SQLProfilerStandard template.
  2. Add a new ApplicationName filter with Like Microsoft%.


When this trace is run, you can identify database access that is happening via SSMS or Microsoft Access. The user, the duration, and the actual SQL statement are captured. An alternative would be to change the ApplicationName filter to trace application access for all application names that are not like the name of your production applications, such as Not Like MyOrderEntryApp%.


Identifying Performance Bottlenecks

Another common problem with database applications is identifying performance bottlenecks. For example, say that an application is running slow, but you’re not sure why. You tested all the SQL statements and stored procedures used by the application, and they were relatively fast. Yet you find that some of the application screens are slow. Is it the database server? Is it the client machine? Is it the network? These are all good questions, but what is the answer? SQL Profiler can help you find out. You can start with the same trace definition used in the preceding section. For this scenario, you need to specify an ApplicationName filter with the name of the application you want to trace.


You might also want to apply a filter to a specific NTUserName to further refine your trace and avoid gathering trace information for users other than the one that you have isolated. After you start your trace, you use the slow-running application’s screens. You need to look at the trace output and take note of the duration of the statements as they execute on the database server. Are they relatively fast? How much time was spent on the execution of the SQL statements and stored procedures relative to the response time of the application screen? If the total database duration is 1,000 milliseconds (1 second), and the screen takes 10 seconds to refresh, you need to examine other factors, such as the network or the application code. With SQL Server 2008, you can also combine Windows System Monitor (Perfmon) output with trace output to identify performance bottlenecks.


This feature helps unite system-level metrics (for example, CPU utilization, memory usage) with SQL Server performance metrics. The result is a very impressive display that is synchronized based on time so that a correlation can be made between system-level spikes and the related SQL Server statements. To try out this powerful new feature, you open the Perfmon application and add a new performance counter log. For simplicity, you can just add one counter, such as % Processor Time. Then you choose the option to manually start the log and click OK. Now, you want to apply some kind of load to the SQL Server system. The following script does index maintenance on two tables in the AdventureWorks2008 database and can be used to apply a sample load:


USE [AdventureWorks2008]
GO
ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
    ON [Sales].[SalesOrderDetail]
    REORGANIZE WITH ( LOB_COMPACTION = ON )
GO
PRINT 'FIRST INDEX IS REORGANIZED'

WAITFOR DELAY '00:00:05'

USE [AdventureWorks2008]
GO
ALTER INDEX [PK_Person_BusinessEntityID]
    ON [Person].[Person] REBUILD WITH
    ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON,
      SORT_IN_TEMPDB = OFF )
GO
PRINT 'SECOND INDEX IS REBUILT'

Next, you open the script in SSMS, but you don’t run it yet. You open SQL Profiler and create a trace by using the Standard Profiler template. This template captures basic SQL Server activity and also includes the StartTime and EndTime columns that are necessary to correlate with the Perfmon counters. Now you are ready to start the performance log and the SQL Server Profiler trace. When they are running, you can run the sample load script. When the script has completed, you stop the performance log and Profiler trace. You save the Profiler trace to a file and then open the file in the Profiler application.


The correlation of the Perfmon log to the trace output file is accomplished from within the Profiler application. To do this, you select File, Import Performance Data. Then you select the performance log file that was just created; these files are located by default in the c:\perflogs folder. After you import the performance data, a new performance graph and an associated grid with the performance counters are displayed in the Profiler.








Defining Server-Side Traces
Much of the SQL Server Profiler functionality can also be initiated through a set of system stored procedures. Through these procedures, you can define a server-side trace that can be run automatically or on a scheduled basis, such as via a scheduled job, instead of through the Profiler GUI. Server-side traces are also useful if you are tracing information over an extended period of time or are planning on capturing a large amount of trace information. The overhead of running a server-side trace is less than that of running a client-side trace with Profiler.


To start a server-side trace, you need to define the trace by using the trace-related system procedures. These procedures can be called from within a SQL Server stored procedure or batch. You define a server-side trace by using the following four procedures:

  • sp_trace_create—This procedure is used to create the trace definition. It sets up the trace and defines the file to store the captured events. sp_trace_create returns a trace ID number that you need to reference from the other three procedures to further define and manage the trace.
  • sp_trace_setevent—You need to call this procedure once for each data column of every event that you want to capture.
  • sp_trace_setfilter—You call this procedure once for each filter you want to define on an event data column.
  • sp_trace_setstatus—After the trace is defined, you call this procedure to start, stop, or remove the trace. You must stop and remove a trace definition before you can open and view the trace file.

You will find that manually creating procedure scripts for tracing can be rather tedious. Much of the tedium is due to the fact that many numeric parameters drive the trace execution. For example, the sp_trace_setevent procedure accepts an eventid and a columnid that determine what event data will be captured. Fortunately, SQL Server 2008 provides a set of catalog views that contain these numeric values and what they represent. The sys.trace_categories catalog view contains the event categories. The sys.trace_events catalog view contains the trace events, and sys.trace_columns contains the trace columns. The following SELECT statement utilizes two of these system views to return the available events and their related categories:


select e.trace_event_id, e.name 'Event Name', c.name 'Category Name'
    from sys.trace_events e
    join sys.trace_categories c on e.category_id = c.category_id
    order by e.trace_event_id


For a list of the trace events and their related categories, and the trace columns available for a server-side trace, see https://msdn.microsoft.com/en-us/library/ms186265.aspx
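The trace columns can be listed the same way from the sys.trace_columns catalog view; this sketch returns the column IDs needed for sp_trace_setevent and sp_trace_setfilter:

```sql
-- List all trace columns with their IDs and data types
select trace_column_id, name, type_name
    from sys.trace_columns
    order by trace_column_id
```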


​You have to call the sp_trace_setevent procedure once for each data column you want captured for each event in the trace. Based on the number of events and number of columns, you can see that this can result in a lot of executions of the sp_trace_setevent procedure for a larger trace definition. To set up filters, you must pass the column ID, the filter value, and numeric values for the logical operator and column operator to the sp_trace_setfilter procedure. The logical operator can be either 0 or 1. A value of 0 indicates that the specified filter on the column should be ANDed with any other filters on the column, whereas a value of 1 indicates that the OR operator should be applied. 
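For example, to reproduce the ApplicationName filters shown in the fn_trace_getfilterinfo output earlier (column 10 is ApplicationName, logical operator 0 is AND, and comparison operator 7 is "not like"), the calls might look like this sketch, assuming a trace has already been created and its ID is in @TraceID:

```sql
declare @app1 nvarchar(128), @app2 nvarchar(128)
set @app1 = N'Profiler'
set @app2 = N'SQLAgent'

-- Exclude events raised by Profiler and SQL Agent themselves
exec sp_trace_setfilter @TraceID, 10, 0, 7, @app1
exec sp_trace_setfilter @TraceID, 10, 0, 7, @app2
```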


Fortunately, there is an easier way of generating a trace definition script. You can set up your traces by using the SQL Profiler GUI and script the trace definition to a file. After you define the trace and specify the events, data columns, and filters you want to use, you select File, Export, Script Trace Definition. The SQL commands (including calls to the aforementioned system stored procedures) to define the trace, start the trace, and write the trace to a file are generated into one script file. You have the option to generate a script that works with SQL Server 2000, 2005 or 2008. Listing 6.2 shows an example of a trace definition exported from the Profiler. It contains the trace definitions for the TSQL trace template. You must replace the text InsertFileNameHere with an appropriate filename, prefixed with its pathname, before running this script.


A SQL Script for Creating and Starting a Server-Side Trace

/****************************************************/
/* Created by: SQL Server 2008 Profiler             */
/* Date: 05/10/2009 07:20:54 PM                     */
/****************************************************/


-- Create a Queue

declare @rc int

declare @TraceID int

declare @maxfilesize bigint

set @maxfilesize = 5


-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from a
-- remote server to a local drive, please use a UNC path and make sure the
-- server has write access to your network share


exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere',
     @maxfilesize, NULL

if (@rc != 0) goto error


-- Client side File and Table cannot be scripted


-- Set the events

declare @on bit

set @on = 1

exec sp_trace_setevent @TraceID, 10, 2, @on

exec sp_trace_setevent @TraceID, 10, 12, @on

exec sp_trace_setevent @TraceID, 10, 13, @on

exec sp_trace_setevent @TraceID, 12, 1, @on

exec sp_trace_setevent @TraceID, 12, 12, @on

exec sp_trace_setevent @TraceID, 12, 13, @on


-- Set the Filters

declare @intfilter int

declare @bigintfilter bigint


-- Set the trace status to start

exec sp_trace_setstatus @TraceID, 1


-- display trace id for future references

select TraceID=@TraceID

goto finish

error:

select ErrorCode=@rc

finish:

go



SQL Server Profiler continued 

Now the fun begins! If you click one of the statements captured in the Profiler grid, a vertical red line appears in the Perfmon graph that reflects the time at which the statement was run. Conversely, if you click a location in the graph, the corresponding SQL statement that was run at that time is highlighted in the grid. If you see a spike in CPU in the Perfmon graph, you can click the spike in the graph and find the statement that may have caused it. This can help you quickly and efficiently identify bottlenecks and the processes contributing to them.


Monitoring Auto-Update Statistics

SQL Profiler can be used to monitor auto-updating of statistics as well as automatic statistics creation. To monitor auto-updating of statistics, you create a trace and include the AutoStats event from the Performance event category. Then you select the TextData, IntegerData, Success, and ObjectID columns.


When the AutoStats event is captured, the IntegerData column contains the number of statistics updated for a given table, ObjectID is the ID of the table, and the TextData column contains the names of the columns, together with either an Updated: or Created: prefix. The Success column indicates success or failure. If you see an excessive number of AutoStats events on a table or index, and the duration is high, it could be affecting system performance.
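If you save such a trace to a file, you can query it afterward with fn_trace_gettable. A sketch (the file path is illustrative; 58 is the Auto Stats event class ID):

```sql
-- Summarize captured AutoStats events from a saved trace file
SELECT TextData, IntegerData, ObjectID, StartTime
FROM fn_trace_gettable('c:\trace\autostats.trc', default)
WHERE EventClass = 58
ORDER BY StartTime
```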


You might want to consider disabling auto-update for statistics on that table and schedule statistics to be updated periodically during nonpeak periods. You may also want to utilize the AUTO_UPDATE_STATISTICS_ASYNC database setting, which allows queries that utilize affected statistics to compile without having to wait for the update of statistics to complete.
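Both adjustments can be made with ALTER DATABASE or, for a single table, with sp_autostats. A sketch of the two settings described above (the table name is illustrative):

```sql
-- Allow queries to compile without waiting for statistics updates
ALTER DATABASE AdventureWorks2008
SET AUTO_UPDATE_STATISTICS_ASYNC ON
GO

-- Disable auto-update of statistics for a single table
EXEC AdventureWorks2008..sp_autostats 'HumanResources.Employee', 'OFF'
```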

Monitoring Application Progress

The 10 user-configurable events can be used in a variety of ways, including for tracking the progress of an application or procedure. For instance, perhaps you have a complex procedure that is subject to lengthy execution. You can add debugging logic in this procedure to allow for real-time benchmarking via SQL Profiler.


The key to this type of profiling is the use of the sp_trace_generateevent stored procedure, which enables you to launch the user-configurable events. The procedure needs to reference one of the user-configurable event IDs (82 to 91), which correspond to user-configurable events 0 to 9.


If you execute the procedure with eventid = 82, then user-configurable event 0 catches these events. The listing below contains a sample stored procedure that (in debug mode) triggers trace events that SQL Profiler can capture.
 

--A Stored Procedure That Raises User-Configurable Events for SQL Profiler

CREATE PROCEDURE SampleApplicationProc (@debug bit = 0)
as
declare @userinfoParm nvarchar(128)

-- if in debug mode, then launch event for Profiler
--    indicating start of application proc
if @debug = 1
begin
    SET @userinfoParm = 'Proc Start: ' + convert(varchar(30), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end

-- Real world would have complex proc code executing here
-- The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05'

-- if debug mode, then launch event indicating next significant stage
if @debug = 1
begin
    SET @userinfoParm = 'Proc Stage One Complete: '
                        + convert(varchar(20), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end

-- Real world would have more complex proc code executing here
-- The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05' -- 5 second delay

-- if debug mode, then launch event indicating next significant stage
if @debug = 1
begin
    SET @userinfoParm = 'Proc Stage Two Complete: '
                        + convert(varchar(30), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83, @userinfo = @userinfoParm
end
GO

-----------------------------------------------------------------------------------------------------------------------------------------------------

Now you need to set up a new trace that includes the UserConfigurable:1 event. To do so, you choose the TextData data column to capture the User configurable output and any other data columns that make sense for your specific trace. After this task is complete, you can launch the sample stored procedure from Listing 6.5 and get progress information via SQL Profiler as the procedure executes. You can accumulate execution statistics over time with this kind of trace and summarize the results. The execution command for the procedure follows:


EXEC  SampleApplicationProc @debug = 1


There are many other applications for User configurable events. How you use them depends on your specific need. As is the case with many Profiler scenarios, there are seemingly endless possibilities.


Summary
Whether you are a developer or database administrator, you should not ignore the power of the SQL Profiler. It is often one of the most
underused applications in the SQL Server toolkit, yet it is one of the most versatile. Its auditing capabilities and ability to unravel complex server processes define its value. This chapter wraps up the introduction to the tools and utilities available with SQL Server. Now you should be equipped to start administering and working with SQL Server.
 

---------------------------------------------------------------------------------------------------------------------------------------------------------