Archive for category Tracing

Reading extended event files using client-side tools only

For any serious tracing work you’ll be using extended events and sending the output to XEL target files. Various options exist for reading these files. You can open them using the file open function in Management Studio, or you can use the sys.fn_xe_file_target_read_file function to have your SQL Server database engine read them.
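As a minimal sketch of the latter approach (the path is just an example; the wildcard picks up any rollover files):

-- Read the raw events; event_data arrives as text, so cast it to XML for shredding
SELECT
    object_name AS event_name,
    CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file('c:\temp\test*.xel', NULL, NULL, NULL);

Once cast to XML, individual fields and actions can be extracted with the XML value() method.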

If you want to read the XEL files programmatically and don’t want to be constrained by needing a database connection, you can write a .NET program that uses the extended event reader functionality in Microsoft.SqlServer.XEvent.Linq.dll. For example, this post describes an approach using C#:

http://sqlblog.com/blogs/extended_events/archive/2011/07/20/introducing-the-extended-events-reader.aspx

If you prefer to use PowerShell, here’s one solution:

# If doing this on a client that just has the client tools and no database engine then you have to use 32-bit PowerShell, as Microsoft
# have not provided a 64-bit assembly for the client tools distribution.
# If you are running this on a server that has the database engine then you don't need to use 32-bit PowerShell, as the 64-bit assembly
# is present in the C:\Program Files\Microsoft SQL Server\120\Shared directory.

# For SQL 2014 you have to add the reference to Microsoft.SqlServer.XE.Core.dll. You don't need this for SQL 2012.

Add-Type -Path 'C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Extensions\Application\Microsoft.SqlServer.XE.Core.dll'
Add-Type -Path 'C:\Program Files (x86)\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Extensions\Application\Microsoft.SqlServer.XEvent.Linq.dll'

$events = New-Object Microsoft.SqlServer.XEvent.Linq.QueryableXEventData("c:\temp\test.xel")

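# Count the captured events for each client_hostname action value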
$events | Foreach-Object { $_.Actions | Where-Object { $_.Name -eq 'client_hostname' } } | Group-Object Value

This can be used if, for example, you’re tracing the sql_batch_completed event and have added the client_hostname action. The script outputs the counts for each host name in the XEL file.
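If the session captures more than one event type, you can narrow the pipeline by event name first. A sketch along the same lines, assuming sql_batch_completed is among the captured events:

# Count host names for sql_batch_completed events only
$events |
    Where-Object { $_.Name -eq 'sql_batch_completed' } |
    Foreach-Object { $_.Actions | Where-Object { $_.Name -eq 'client_hostname' } } |
    Group-Object Value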

As the comments in the script note, it’s unfortunate that for the client-side-only solution we have no choice but to use the 32-bit assemblies. This is still the case for the SQL 2014 client tools.


Time spent in app, time spent in SQL Server

Application teams will sometimes believe they have a database issue when a recurring feed starts to perform poorly. Typically they will call on the DBA to perform a trace to identify the poorest performing SQL calls. When the trace is run and the analysis completed you may find that there are no obvious SQL issues: all the statements perform well. The performance problem may not reside on the database tier at all but on the application tier, which may be doing a large amount of number crunching or string processing between the recurring SQL calls. Those routines may have suddenly become the slow step in the feed.

Having performed the SQL trace it would be great if, in addition to the duration of the SQL calls, we could determine the elapsed time between SQL calls. One method for obtaining this figure is to subtract the end time of the previous SQL call from the start time of the current one. If you export the trace data to a table, filter on, for example, the application name, SPID, host name or proc name that is doing the load, and order by start time, you can use the LAG function (available from SQL Server 2012) to obtain the elapsed time between SQL calls.
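As a sketch of the export step, assuming a server-side trace file at an example path (Profiler’s Save As > Trace Table achieves the same thing):

-- Load the relevant trace columns into a table for analysis
SELECT StartTime AS starttime,
       EndTime AS endtime,
       ObjectName AS objectname
INTO dbo.tracetbl
FROM sys.fn_trace_gettable('c:\temp\mytrace.trc', DEFAULT);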

In the following query against the exported trace table, the procedure performing the load is named myproc. The last two columns of the final result set show the total time spent doing SQL work and the total time spent in the application between the SQL calls.


WITH ctetracetimes AS
(
    -- For each execution of myproc, pair its start time with the end time of the previous execution
    SELECT
        starttime,
        endtime,
        LAG(endtime) OVER (ORDER BY starttime) AS previousendtime
    FROM dbo.tracetbl
    WHERE objectname = 'myproc'
),
cteappandsqltimes AS
(
    -- Time in SQL = end - start; time in the app = this start - previous end
    SELECT
        starttime,
        endtime,
        previousendtime,
        DATEDIFF(millisecond, starttime, endtime) AS timespentsql,
        DATEDIFF(millisecond, previousendtime, starttime) AS timespentapp
    FROM ctetracetimes
)
SELECT *,
    SUM(timespentsql) OVER () AS sumtimespentsql,
    SUM(timespentapp) OVER () AS sumtimespentapp
FROM cteappandsqltimes
ORDER BY starttime

The results for repeated execution of the myproc stored procedure were as follows:

(Screenshot: timespentsqlandapp – the query results, one row per execution of myproc, with the sumtimespentsql and sumtimespentapp totals)

You can see that 50,000 milliseconds was spent doing the SQL-related work (processing the stored procedure ten times) but over 150,000 milliseconds was spent in the application tier between the SQL calls.

This type of analysis may be quite enlightening even for what you consider to be your well-performing feeds. You may be assuming that the vast majority of your feed time is associated with database processing when, in fact, it is associated with application processing. For example, one of my own feeds had a total elapsed time of twenty minutes. When the above analysis was performed the SQL time was only two minutes: eighteen minutes was being spent in the application tier. A poorly performing number crunching routine was identified and then optimised to reduce the overall run-time of the feed.
