Database structure to store performance metrics

In this post I will continue the previous one, What performance metrics are important and how to collect them, and describe the database structure for storing performance statistics. I will take as a basis the structure I designed myself and generalize it for common use.

The database can be divided into two parts:

  • Log tables, where every activity in the system is written down. Depending on the record type, the data is stored from two days to three months. These data are used to track errors and analyze the details of system behavior over short periods of time.
  • Report table (in the future I’m planning to move it to Analysis Services), which stores aggregated information per day. This table is used for reports, and we never delete data from it.

Below I will describe some basic structures that can be used to store performance metrics, and in the next posts I will discuss in detail how to use each table to calculate system performance metrics.

Log Errors table

Let’s start with the table in which we’ll put all errors that occur in the system. Its structure can be the following:


The description of fields:

  • ErrorTime – date and time when the error occurred;
  • User – name of the user who received the error;
  • Message – error message text;
  • Source, StackTrace, etc. – any number of fields to store additional information about the error;
  • Type – it is useful to log the error type as well, e.g. system exception, business logic error, SQL Server error, etc.
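As a sketch, such an errors table could be created like this (the table name, field names, and types below are my assumptions, not a fixed schema):

```sql
CREATE TABLE dbo.LogErrors
(
    LogErrorID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ErrorTime  datetime      NOT NULL DEFAULT (getdate()), -- when the error occurred
    [User]     nvarchar(128) NOT NULL,                     -- user who received the error
    [Message]  nvarchar(max) NOT NULL,                     -- error message text
    Source     nvarchar(512) NULL,                         -- additional error details
    StackTrace nvarchar(max) NULL,
    [Type]     tinyint       NOT NULL                      -- e.g. 1 = system exception, 2 = business logic error, 3 = SQL Server error
);
```

An index on ErrorTime also makes purging records older than the retention period cheap.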

Reports log table

In the next table we will store data about the execution of reports.


The description of fields:

  • ReportName – unique name of the report;
  • User – name of the user who executed the report;
  • DateBegin – date and time when the report was launched;
  • DateEnd – date and time when the report result was received;
  • Parameters – the filters that were passed to the report procedure.
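A possible DDL sketch for this table (names and types are assumptions):

```sql
CREATE TABLE dbo.LogReports
(
    LogReportID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ReportName  nvarchar(128) NOT NULL, -- unique name of the report
    [User]      nvarchar(128) NOT NULL, -- user who executed the report
    DateBegin   datetime      NOT NULL, -- when the report was launched
    DateEnd     datetime      NOT NULL, -- when the result was received
    Parameters  xml           NULL      -- filters passed to the report procedure
);
```

The report duration is then simply datediff(ms, DateBegin, DateEnd), which can be aggregated per report and per day.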

Duration of operations on system objects


The description of fields:

  • ObjectName – unique name of the object;
  • User – name of the user who performed the save/open operation;
  • DateBegin – date and time when the operation was started on the server;
  • DateEnd – date and time when the operation was finished on the server;
  • Type – type of the operation (in our case SAVE, OPEN, LIST OPEN);
  • LogXML – additional XML field for extra logging. I will dwell on how this field can be used in future posts.
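A DDL sketch of this table as well (again, the concrete names and types are my assumptions):

```sql
CREATE TABLE dbo.LogObjectOperations
(
    LogID      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ObjectName nvarchar(128) NOT NULL, -- unique name of the object
    [User]     nvarchar(128) NOT NULL, -- user who performed the operation
    DateBegin  datetime      NOT NULL, -- operation start on the server
    DateEnd    datetime      NOT NULL, -- operation end on the server
    [Type]     varchar(20)   NOT NULL, -- SAVE, OPEN, LIST OPEN
    LogXML     xml           NULL      -- additional logging
);
```

An index on (ObjectName, DateBegin) helps both daily aggregation and ad-hoc investigation of a specific object.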

General table to be used for reports


The description of fields:

  • LogType – type of the record (e.g. SAVE operation, number of timeouts, etc.);
  • ObjectName – name of the object (null for global records such as the number of timeouts);
  • PeriodDate – date without time;
  • LogXML – all information needed for reports, in XML format. It will differ for each record type, so XML is the best option.
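To give an idea of how the log tables can feed this report table, here is a sketch that aggregates yesterday’s SAVE operations into one XML record per object (the table and column names here are hypothetical):

```sql
;WITH daily AS
(
    SELECT ObjectName,
           cast(DateBegin as date)               as PeriodDate,
           count(*)                              as Cnt,
           avg(datediff(ms, DateBegin, DateEnd)) as AvgMs,
           max(datediff(ms, DateBegin, DateEnd)) as MaxMs
    FROM dbo.LogObjectOperations                -- hypothetical log table name
    WHERE [Type] = 'SAVE'
      and cast(DateBegin as date) = cast(getdate() - 1 as date)
    GROUP BY ObjectName, cast(DateBegin as date)
)
INSERT INTO dbo.StatReports (LogType, ObjectName, PeriodDate, LogXML) -- hypothetical report table name
SELECT 'SAVE operation',
       d.ObjectName,
       d.PeriodDate,
       (SELECT d.Cnt as 'Cnt', d.AvgMs as 'AvgMs', d.MaxMs as 'MaxMs'
        FOR XML PATH(''), TYPE, ROOT('root'))
FROM daily d;
```

Such a job can run nightly, so the report table stays small while the log tables are purged on their own schedule.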

This is how I structure the storage of performance metrics. I’ve listed only the basic fields here; these tables can be extended with additional fields as required by each system.

In the next post I will talk about custom reports that indicate problem directions.

Posted in Database development, SQL Server, Technology | Tagged , , | 1 Comment

What performance metrics are important and how to collect them

In the previous post, How to measure your system performance, I gave a general overview of measuring the performance of your own ERP system. Now I would like to concentrate on which performance metrics we can collect.

First of all, let’s list the metrics that are important from the users’ point of view and that cannot be measured by the built-in SQL Server and Windows Server mechanisms:

  • Time required to open lists/journals of documents and reference books.
  • Time required to open documents for viewing and editing.
  • Time required to save changed documents.
  • Number of database timeouts.
  • Number of internal system exceptions.
  • Report creation time.
  • System response time.

To collect the metrics listed above, the architecture of the ERP system must fulfill some requirements:

  • The system must have a thin client – all logic must be on the server side.
  • The system must have basic methods for all operations with business documents, so that all custom logic of specific documents inherits from these basic methods.
  • The system must have a common procedure for processing all exceptions.

Following these principles, the basic architecture will look like this:


The class structure can be presented like this:


So in this case there is just one place where we must implement the collection of performance metrics. Now let’s consolidate all the performance counters listed above and determine where we’re going to collect them.



  • Time to open lists/journals – in the basic procedure that is responsible for getting the data of the required list/journal and returning it to the user.
  • Time to open documents – in the basic procedure that is responsible for getting the data of the required document and returning it to the user.
  • Time to save documents – in the basic procedure that is responsible for checking the document data and storing it in the database.
  • Number of exceptions – in the module that is responsible for exception processing. Before sending an exception to the user, it must be stored in the database.
  • Number of database timeouts – a special case of the previous point.
  • Report creation time – the structure of the reports module must be the same as the open/save one, so the logic is similar.
  • System response time – can be measured by external tools (such as an IP monitor) or by self-written programs. It’s important to provide a proper method that such a utility can call.
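As an illustration of the single collection point idea, the basic save procedure could time itself and write to the operations log roughly like this (the procedure and table names are hypothetical, a sketch only):

```sql
CREATE PROCEDURE dbo.BaseDocumentSave
      @ObjectName nvarchar(128),
      @UserName   nvarchar(128)
AS
BEGIN
      DECLARE @DateBegin datetime = getdate();

      -- ... the custom save logic of the concrete document type runs here ...

      -- the metric is collected in one place for all document types
      INSERT INTO dbo.LogObjectOperations (ObjectName, [User], DateBegin, DateEnd, [Type])
      VALUES (@ObjectName, @UserName, @DateBegin, getdate(), 'SAVE');
END
```

Because every document inherits from this basic method, no per-document instrumentation is needed.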

To sum up, I have described common architecture principles that are worth following in order to build a scalable and easy-to-maintain system for collecting performance metrics. I have also listed basic metrics that cannot be caught by server tools but are quite important for understanding when something has gone wrong. And finally, I have identified the places where we can capture these metrics.

In the next post of the series I’m planning to talk about the common statistics database structure – how to store all the collected data.


How to measure your system performance? First look

I would like to start a new series of posts about system performance. I will not write about Windows Server performance or database performance as such, but about how to measure the work of your ERP system. If you face situations when the server is OK and the database is OK, but users are complaining that everything is slow and they cannot work, and you need to find the bottleneck – I hope this series of posts can help you.

So, I will not talk about how to know that your service is down – there are plenty of applications that can help you with that. I will talk about all aspects of YOUR system’s performance: how to measure it and how to find problem areas when everything is working but users are not happy. I will also not cover hardware or complex network problems – this blog is about SQL, so I will concentrate on the database side. Considering that, in my experience, ERP systems with large databases (millions of records) have 90% of their performance issues on the SQL Server side, this looks logical. Of course, I’ll pay attention to other aspects of ERP system work as well: when speaking about complex solutions, you cannot divide the system into several parts – it’s important to have a common vision and take all components of the system into consideration.

To sum up, most of the techniques will be about the database side, but I will also try to cover other aspects of ERP system work.

First of all, let’s draw a basic ERP system scheme and try to mark all the places where performance issues may arise.


Let’s list all the places where issues can affect our system performance:

  1. The user himself ))). Sometimes users say that everything is bad when it actually works fine. Maybe the user is in a bad mood or has other problems – from time to time everybody faces such situations. These cases are usually handled by the HelpDesk group by connecting to the user’s screen and making sure that everything is OK.
  2. Something wrong with the user’s desktop/laptop. Such issues must be solved by field support.

I will not speak about points (1) and (2) in this blog )))

  3. Performance issues with the channel between user and server. I will touch on this point, but not in much detail.
  4. Performance issues on the server side. This will be the main theme of future posts. I deliberately did not break this block into several parts: performance problems can be found anywhere, but in my practice this “anywhere” in most cases was SQL Server.

So, let’s create some kind of agenda for future posts, to set up a logical sequence:

  • Suggest which metrics are important and how to set performance targets.
  • Describe the basic system architecture and how we can measure the work of each element of the system (network + server).
  • Some examples of custom reports that indicate problem areas.
  • Problem solving: looking for the bottleneck.
    • Description of some custom methods and SQL Server tools that can be used for investigating performance issues.
    • Some examples of solving performance issues.

Maybe I will add something to this list, but in any case I now have a plan that I can follow ))).



Service Broker: add message to queue

In this post let’s dwell on adding messages to queues. In our system we prefer to do it inside triggers. Why triggers? There are several reasons:

  • You cannot miss a trigger event, whether you perform the update through a web service or directly in the database.
  • A trigger is always inside your transaction.
  • You can also get the state of the record before the update, which can be very useful.

On the other hand, triggers are not really a good place to store business logic; but if a trigger contains only the adding of messages to a queue, it may be a rather good option. In any case, we’ve decided to use triggers.

I will describe an AFTER UPDATE trigger; an AFTER INSERT trigger is the same, only simpler, as only newly inserted records pass through it.

-- The head of the script makes it executable whether the trigger
-- already exists in the database or not
SET NOEXEC OFF
GO
-- Creating an empty object if the trigger does not exist yet
IF OBJECT_ID(N'dbo.<TriggerName>', N'TR') IS NULL
   EXEC (N'CREATE TRIGGER dbo.<TriggerName> ON dbo.<TableName> AFTER UPDATE AS
           RAISERROR (''Object is empty. Error!'', 16, 1)')
GO

ALTER TRIGGER dbo.<TriggerName>
ON dbo.<TableName>
AFTER UPDATE
AS
BEGIN
       DECLARE @conversationHandle UNIQUEIDENTIFIER
       DECLARE @documentID int
       DECLARE @actionid int
       DECLARE @xml xml

       -- Declare a cursor for adding messages:
       -- on a multi-row update the inserted table will have data
       -- for all updated records, and the trigger is executed once
       -- at the end of the batch update, not after each record
       -- (here ID stands for the primary key of <TableName>)
       DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
       SELECT inserted.ID,
              -- some additional logic
              case
                    when (inserted.IsDMark > deleted.IsDMark) then 3
                    when (inserted.IsDMark < deleted.IsDMark) then 4
                    when (inserted.IsDMark = deleted.IsDMark) then 1
              end as actionid
       FROM inserted
       left join deleted on inserted.ID = deleted.ID
       -- some conditions
       WHERE (inserted.IsPosted > deleted.IsPosted)
         and inserted.IsReturned = 0

       OPEN cur
       FETCH NEXT FROM cur INTO @documentID, @actionid
       WHILE (@@fetch_status = 0)
       BEGIN
              -- open the dialog
              BEGIN DIALOG CONVERSATION @conversationHandle
                     FROM SERVICE [//TMS/ServiceBroker/WarehouseManagementProcessing/Init]
                     TO SERVICE '//TMS/ServiceBroker/WarehouseManagementProcessing/Target'
                     ON CONTRACT [//TMS/ServiceBroker/WarehouseManagementProcessing]
                     WITH ENCRYPTION = OFF;

              -- create the XML for the Service Broker message
              set @xml = (select
                              @documentID as 'ObjectID'
                            , @actionid   as 'ActionID'
                            , getdate()   as 'VerDateTime'
                          for xml path(''), type, root('root'));

              -- send the message
              SEND ON CONVERSATION @conversationHandle
                     MESSAGE TYPE [//TMS/ServiceBroker/WarehouseManagementProcessing/StandardMessage]
                     (@xml);

              -- close our side of the dialog
              END CONVERSATION @conversationHandle

              FETCH NEXT FROM cur INTO @documentID, @actionid
       END
       CLOSE cur
       DEALLOCATE cur
END


After the trigger executes, the queue will contain one or several messages waiting for another service to process them. You can view the unprocessed messages in the queue:

select * from dbo.<Queue name>
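Since message_body is stored as varbinary, a cast makes the queue content readable; a sketch for queues that carry XML messages, as ours do:

```sql
SELECT conversation_handle,
       message_type_name,
       cast(message_body as xml) as message_xml
FROM dbo.<Queue name>;
```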


If you want to clear the queue, you can execute the following script:

declare @conversation uniqueidentifier
while exists (select * from <Queue name>)
begin
      set @conversation = (select top 1 conversation_handle from <Queue name>)
      end conversation @conversation with cleanup
end


It is also important to note that when you deactivate a queue, messages are still received but not processed, and when you activate the queue again they all appear in it.


Why Service Broker? + Service Broker customization.

First, let’s start with the advantages of Service Broker (SB) – why it can be useful. I will list the main points with examples of how we use it.

  1. SB provides an asynchronous queue mechanism. This gives us an opportunity to move some heavy SQL procedures out of users’ transactions.
    1. E.g. we had stored procedures that log changes of an object to special history tables. If an object contains a head table and several line tables, this procedure can take as long as the document save itself. We moved the execution of this stored procedure to Service Broker, thus improving system performance.
  2. The most important thing is that SB creates its messages inside the main transaction. In case of any error, everything is rolled back.
    1. Database integrity is really everything. If we could not trust Service Broker, it would be of no use.
  3. SB gives us the possibility to transfer data to other systems and applications without any additional scheduling.
    1. Our systems have a huge number of interfaces to other databases and applications (about 100). Before SB we had to create a job for each interface (or add a new step to an existing job). Jobs were scheduled to execute at fixed intervals. This was not very good for the business, which needs data transferred ASAP; it also added extra load on the server, as we had to look into history tables to find out whether anything had changed. SB solves all these problems: data is transferred just several minutes after a document is saved, and this activity is performed only when the document is modified.
  4. In case of an error, SB automatically stops processing. No endless attempts.
    1. Before the SB implementation, jobs were sending errors every time a problem occurred. We had to create complex algorithms to stop processing a record after a number of unsuccessful attempts. They were unique for each object, so they were hard to maintain and extend. SB has solved this problem.

Below I would like to dwell on some disadvantages of SB.

  1. There is no simple way to view the error log of message processing.
  2. It is practically impossible to restore a message that failed with an error so that it can be processed again.
  3. There is no mechanism to call an external web method as message processing (we found only the way through CLR with the UNSAFE option, but it was banned by our admins).

So, in order to maximize the advantages and get rid of the disadvantages, we have created a special log table which contains all the information we require to analyze the work of SB and, if needed, to reinitialize a message for reprocessing.


The description of fields:

  • Contract – contract that is used to process the queue;
  • TargetService – target service;
  • MessageType – message type;
  • MessageBody – message body, packed in XML. We usually put there information that directly identifies the processed record (ID, type of operation, date and time when it was put into the queue, etc.);
  • ConversationHandle – dialog handle;
  • MessageSequenceNumber – we do not use this field at the moment )));
  • MessageError – if an error occurs while processing the message, we store it in this field;
  • StateId – result of the message processing (1 – ok, 2 – error, 3 – the message was excluded from the queue);
  • VerDateTime – message processing date and time;
  • InitService – service that initiates message queuing;
  • QueueStatID – reference to the table in which we store queue names.
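Put together, the log table could be declared like this (the types and sizes below are my assumptions):

```sql
CREATE TABLE dbo.ServiceBrokerLog
(
    LogID                 int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Contract              nvarchar(256)    NULL,
    TargetService         nvarchar(256)    NULL,
    MessageType           nvarchar(256)    NULL,
    MessageBody           xml              NULL,     -- identifies the processed record
    ConversationHandle    uniqueidentifier NULL,
    MessageSequenceNumber bigint           NULL,
    MessageError          nvarchar(max)    NULL,     -- error text, if any
    StateId               tinyint          NOT NULL, -- 1 = ok, 2 = error, 3 = excluded from the queue
    VerDateTime           datetime         NOT NULL DEFAULT (getdate()),
    InitService           nvarchar(256)    NULL,
    QueueStatID           int              NULL      -- reference to the queue names table
);
```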

So, in this table we have all the information required for analysis and re-queuing. In the next post I’m planning to talk about how we put messages into queues.


Service Broker (SB): first look

In this post I would like to start a series about how we use Service Broker in our enterprise system. Today I would like to talk about the target scheme that we planned to achieve several years ago; later I will dwell on how we developed it. I will not describe how Service Broker works or which objects it uses, as all this information can be found in MSDN.

It was planned to use Service Broker to minimize the time spent on the server side to save a document. Before the SB implementation, during the save process we made modifications to some other tables, transferred the modified records to other databases, etc. During all these operations users had to wait until the system finished all the required activities. This caused documents to be saved slowly and users to be unhappy.

So, the basic target scheme can be presented somewhat like this:


A few words about the scheme:

  • Changes in a document trigger the addition of a message to an SB queue. All information needed to process the message is packed into XML.
  • 90% of messages are processed by the web service, which polls the message queues and performs specific activities depending on the queue.
  • The results are logged in a separate table, so at any time we can look at the results of each message’s processing. We have also created a form in our application giving users the ability to put messages that failed with an error back into the queue. The form looks like the following and gives us the possibility to manage queues through the user interface, without going down to the database level:


In the next post I will concentrate on the advantages and disadvantages of SB and how we deal with them at the database structure level; I will also start posting some code examples.


Job creation template

Not so long ago we faced the following problems while deploying SQL jobs to test and production environments:

  • A job is developed by a developer but deployed to test or production by a DBA.
  • E-mail distribution lists and folders are different in each environment.

So the process was the following: the developer, together with the DBA, tried to recreate the job on test/production without missing a thing. Of course, as this was done manually, it regularly caused errors in the jobs’ configuration. As we change and create SQL jobs regularly, it was quite a big problem.

So, it was decided to create a SQL job template and to create jobs from a script, designed in such a way that it produces the correct result on any of our environments.

First, we have to define parameters for all our servers and put them into variables:

-- Set parameters
-- Job owner
declare @joboperator sysname
-- operator email
declare @email_operator_BU   sysname
declare @email_operator_IS   sysname
-- path to java scripts
declare @localPath varchar(1000)
-- job identity
declare @jobid binary(16)
-- var for the @command parameter
declare @path nvarchar(3201)
-- additional parameters
declare @Job_Enable bit
declare @Delete_Job bit
declare @database_user_name sysname
declare @returncode int
select @returncode = 0

BEGIN TRANSACTION

select
      -- Setting the job owner
      @joboperator =
      case @@servername
            when 'PROD_1' then 'sqljoboperator'
            when 'PROD_2' then 'sqla'
            when 'PROD_3' then 'sqlaedd'
            when 'TEST_4' then 'devsqljoboper'
            when 'TEST_3' then 'devsqljoboper'
            when 'TEST_1' then 'sqlaeddd'
            when 'TEST_2' then 'sqlaeddd'
      end,
      -- Setting the path to java scripts
      @localPath =
      case @@servername
            when 'PROD_3' then 'D:\'
            when 'PROD_2' then 'not applicable'
            when 'PROD_1' then 'not applicable'
            when 'DEV_1' then 'D:\DEV1\'
            when 'DEV_2' then 'D:\DEV2\'
            when 'DEV_3' then 'D:\DEV3\'
            when 'TEST_1' then 'D:\TEST1\'
            when 'TEST_2' then 'D:\TEST2\'
            when 'TEST_4' then 'not applicable'
            when 'TEST_3' then 'not applicable'
      end,
      -- Setting the operators' emails
      @email_operator_BU =
      case @@servername
            when 'PROD_3' then 'Job BU'
            when 'PROD_1' then 'Job BU'
            when 'PROD_2' then 'Job IS'
            else 'Job IS'
      end,
      @email_operator_IS = 'Job IS',
      -- Enable or disable the job depending on the server name
      @Job_Enable =
      case @@servername
            when 'PROD_1' then 1
            when 'PROD_3' then 1
            when 'PROD_2' then 1
            when 'DEV_1' then 0
            when 'DEV_2' then 0
            when 'DEV_3' then 0
            when 'TEST_4' then 0
            when 'TEST_3' then 0
            when 'TEST_1' then 0
            when 'TEST_2' then 0
      end,
      -- Delete the job depending on the server name
      @Delete_Job =
      case @@servername
            when 'PROD_1' then 1
            when 'PROD_3' then 1
            when 'PROD_2' then 1
            when 'DEV_1' then 0
            when 'DEV_2' then 0
            when 'DEV_3' then 0
            when 'TEST_4' then 1
            when 'TEST_3' then 1
            when 'TEST_1' then 1
            when 'TEST_2' then 1
      end,
      @database_user_name =
      case @@servername
            when 'PROD_1' then N''
            when 'PROD_2' then N''
            when 'PROD_3' then N''
            when 'TEST_4' then N''
            when 'TEST_3' then N''
            when 'TEST_1' then N''
            when 'TEST_2' then N''
      end

-- If the server name is not listed – rollback with an error
if @localPath is null or @email_operator_IS is null or @email_operator_BU is null
begin
      RAISERROR (N'Parameters are not set for the specified server', 16, 1)
      GOTO QuitWithRollback
end

Then we have to delete the job if it already exists on the server. We do not do this for developer servers, as developers can delete and create SQL jobs in the dev environment themselves. [ Job Name ] must be replaced by the real job name in the template.

IF @Delete_Job = 1
BEGIN
      -- Delete the job with the same name (if it exists)
      SELECT @jobid = job_id FROM msdb.dbo.sysjobs with(nolock) WHERE (name = N'[ Job Name ]')
      IF (@jobid IS NOT NULL)
      BEGIN
            -- Check if the job is a multi-server job
            IF (EXISTS (SELECT * FROM msdb.dbo.sysjobservers with(nolock) WHERE (job_id = @jobid) AND (server_id <> 0)))
            BEGIN
                  -- It is, so abort the script
                  RAISERROR (N'Unable to import job ''[ Job Name ]'' since there is already a multi-server job with this name.', 16, 1)
                  GOTO QuitWithRollback
            END
            -- Delete the [local] job
            EXECUTE msdb.dbo.sp_delete_job @job_name = N'[ Job Name ]'
            SELECT @jobid = null
      END
END

Now we are ready for job creation.

-- Add the job
EXECUTE @ReturnCode = msdb.dbo.sp_add_job @job_id = @JobID OUTPUT,
@job_name = N'[ Job Name ]',
@owner_login_name = @joboperator,
@description = N'[ Job Description ]',
@category_name = N'Application jobs',
@enabled = @Job_Enable,
@notify_level_email = 2,
@notify_level_page = 0,
@notify_level_netsend = 0,
@notify_level_eventlog = 0,
@delete_level = 0,
@notify_email_operator_name = @email_operator_IS
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

Now let’s add the job steps. In our case the template provides all the variants that can be used by our team:

  • executing a Java script,
  • executing a SQL command,
  • executing an SSIS package.
-- Add the job steps
-- Job step template for @subsystem = N'CmdExec'
-- @on_[success/fail]_action: 1 – exit without error, 2 – exit with error, 3 – go to next step
set @path = N'cscript ' + @localPath + 'WScripts\[ Script Name ].js
echo %errorlevel%'
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID,
            @step_id = 1,
            @step_name = N'Send WebMethod.[ Web Method Name ]',
            @command = @path,
            @database_name = N'',
            @server = N'',
            @database_user_name = @database_user_name,
            @subsystem = N'CmdExec',
            @cmdexec_success_code = 0,
            @flags = 0,
            @retry_attempts = 0,
            @retry_interval = 1,
            @output_file_name = N'',
            @on_success_step_id = 0,
            @on_success_action = 3,
            @on_fail_step_id = 0,
            @on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Job step template for @subsystem = N'TSQL'
-- @on_[success/fail]_action: 1 – exit without error, 2 – exit with error, 3 – go to next step
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID,
            @step_id = 1,
            @step_name = N'[STEP NAME]',
            @command = N'SQL Script',
            @database_name = N'database name',
            @server = N'',
            @database_user_name = @database_user_name,
            @subsystem = N'TSQL',
            @cmdexec_success_code = 0,
            @flags = 0,
            @retry_attempts = 0,
            @retry_interval = 1,
            @output_file_name = N'',
            @on_success_step_id = 0,
            @on_success_action = 3,
            @on_fail_step_id = 0,
            @on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

-- Job step template for @subsystem = N'SSIS'
-- @on_[success/fail]_action: 1 – exit without error, 2 – exit with error, 3 – go to next step
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobstep @job_id = @JobID,
            @step_id = 1,
            @step_name = N'[STEP NAME]',
            @command = @path,
            @database_name = N'',
            @server = N'',
            @database_user_name = @database_user_name,
            @subsystem = N'SSIS',
            @cmdexec_success_code = 0,
            @flags = 0,
            @retry_attempts = 0,
            @retry_interval = 1,
            @output_file_name = N'',
            @on_success_step_id = 0,
            @on_success_action = 1,
            @on_fail_step_id = 0,
            @on_fail_action = 2
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback


Next, add the job schedule:

-- Add the job schedule
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id = @jobId,
            @name = N'[ Schedule Name ]'
            -- add the required frequency parameters here (@freq_type, @freq_interval, @active_start_time, ...)
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

And the final step – add the target server and commit or roll back the transaction:

-- Add the target server
EXECUTE @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @JobID,
@server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
      IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

Now I can say that deploying or correcting jobs on different servers is really not a problem in our day-to-day activities. All job scripts are stored in Team Foundation Server.

Also, if you are restaging a test environment for UAT and want to restore there all the jobs that exist on production, you just send the list of scripts to the DBA, he executes them, and that’s all. No manual work, no errors, no human factor. From my point of view, it’s a very elegant solution.
