PowerPivot in Excel. First Look. Part 2

This post continues PowerPivot in Excel. First Look. Part 1. Today I will get the currency rate for each transaction that is not in rubles. Rates are usually entered monthly (though there can be exceptions), and the rate entered for the last day of a month determines the rate for the whole month. For example:

SQL0025.1

All transactions for the period from Feb 24 till Feb 28 in EURO must be converted using rate 40.1; from Feb 1 till Feb 23 – rate 40.11; from Jan 1 till Jan 31 – rate 40.71, etc.

Let's calculate the correct rates using the DAX language available in Excel 2010 PowerPivot. Instead of jumping straight to the final formula, let's build it step by step.

First we need the minimum rate date for each transaction. For this purpose we have to filter the Rate table on two columns, Date and Currency. We can do it with the CALCULATE function:

=CALCULATE(
    MIN(Rate[Date]);
    FILTER(Rate; Rate[Date] >= MainData[Date]);
    FILTER(Rate; Rate[Currency] = MainData[Currency])
)

We create a formula that returns the minimum rate date where the rate date is greater than or equal to the transaction date and the rate currency equals the transaction currency. After these filters are applied, we get the minimum rate date for each transaction, as can be seen below:

SQL0025.2

Now we need to get the rate for the date determined in the previous calculation. For this purpose we use the CALCULATE function one more time:

=CALCULATE(
    MAX(Rate[Rate]);
    FILTER(Rate; Rate[Date] = MainData[Calculation1]);
    FILTER(Rate; Rate[Currency] = MainData[Currency])
)

The formula is quite similar: we just get the rate for the specified date and the specified currency. MAX is used only because CALCULATE requires an expression that returns a single value; we could use MIN here and the result would be the same, since the filters leave exactly one rate row.

To sum up, here is the final formula, which avoids creating the intermediate calculated column:

=CALCULATE(
    MAX(Rate[Rate]);
    FILTER(
        Rate;
        Rate[Date] = CALCULATE(
            MIN(Rate[Date]);
            FILTER(Rate; Rate[Date] >= MainData[Date]);
            FILTER(Rate; Rate[Currency] = MainData[Currency])
        )
    );
    FILTER(Rate; Rate[Currency] = MainData[Currency])
)

And here are the final columns in PowerPivot:

SQL0025.3

Summa rub (the amount in rubles) is calculated quite simply:

=[Rate]*[Sum]
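
For comparison, the same lookup is easy to express in T-SQL. The following is only an illustrative sketch; the tables dbo.MainData and dbo.Rate and their columns are assumptions that mirror the PowerPivot tables above:

-- Hedged sketch: pick the first rate on or after the transaction date, per currency.
-- dbo.MainData and dbo.Rate are hypothetical tables mirroring the PowerPivot model.
SELECT
    m.[Date],
    m.[Currency],
    m.[Sum],
    r.[Rate],
    r.[Rate] * m.[Sum] AS SummaRub            -- same as the =[Rate]*[Sum] column
FROM dbo.MainData AS m
OUTER APPLY (
    SELECT TOP (1) rt.[Rate]
    FROM dbo.Rate AS rt
    WHERE rt.[Currency] = m.[Currency]
      AND rt.[Date] >= m.[Date]               -- minimum rate date on or after the transaction date
    ORDER BY rt.[Date]
) AS r;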

How to create currency conversion in Microsoft SQL Server Analysis Services (SSAS)

In this post I would like to talk about currency rate calculations in the standard multidimensional model; in the next post I will return to the Tabular model and the DAX language and perform the same calculations there.

Providing the business with prices and amounts in different currencies is quite a common task, and it is especially relevant for business intelligence systems based on SSAS.

You can create a currency conversion model using the Business Intelligence Wizard or manually. This exercise is described in detail in Christian Wade's blog: http://consultingblogs.emc.com/christianwade/archive/2006/08/24/currency-conversion-in-analysis-services-2005.aspx. But following this guide you will face a problem: the sum of converted values at the day level does not match the month/year level, because at those levels the last currency rate is used.

SQL0025.1

For example:

  • Converted price for 2013-01-01 is 90.9;
  • converted price for 2013-01-02 is 1666.7;
  • the sum is 90.9 + 1666.7 = 1757.6.

But the pivot shows 1750, because at the aggregated level the total original price is converted using only the last rate, 1.2. Let's make the situation more complex: each position in the fact table gets three different dates, and the task is to calculate the converted sum for each of these dates using the currency rate valid on that date. The fact table contains data in different currencies, and we have to calculate all sums in USD.

First of all, we have to create a view based on the fact table, in which all the rates and converted sums are calculated. The text of the view can look like the following:

TFS0027.2
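
In addition to the screenshot, here is a hedged sketch of what such a view might look like. All object and column names (dbo.FactSales, dbo.CurrencyRate, the three date columns) are assumptions for illustration; the idea is simply to convert the original price with the rate valid on each of the three dates (conversion is shown as division, matching the example above):

-- Hedged sketch only: object and column names are assumptions for illustration.
CREATE VIEW dbo.vFactSalesUSD
AS
SELECT
    f.ForecastBasisID,
    f.Date1,
    f.Date2,
    f.Date3,
    f.CurrencyCode,
    f.Price,
    f.Price / r1.Rate AS PriceUSD_Date1,      -- converted with the rate valid on Date1
    f.Price / r2.Rate AS PriceUSD_Date2,      -- converted with the rate valid on Date2
    f.Price / r3.Rate AS PriceUSD_Date3       -- converted with the rate valid on Date3
FROM dbo.FactSales AS f
JOIN dbo.CurrencyRate AS r1 ON r1.CurrencyCode = f.CurrencyCode AND r1.RateDate = f.Date1
JOIN dbo.CurrencyRate AS r2 ON r2.CurrencyCode = f.CurrencyCode AND r2.RateDate = f.Date2
JOIN dbo.CurrencyRate AS r3 ON r3.CurrencyCode = f.CurrencyCode AND r3.RateDate = f.Date3;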

In this case the cube structure looks like this:

TFS0027.3

Forecast basis is a dimension used to analyze the facts. Also, we do not need the currency rate table in the cube structure, because all the required calculations have already been performed in the view.

If we had only one date dimension, the task would already be solved, but in our case we have only just started. If we create a single measure group, users will have to select a different price measure every time they want to analyze by another date; if they forget to change the measure, the calculation will be incorrect.

TFS0027.4

This is not user friendly and carries a high risk of incorrect analysis. On the other hand, we cannot create a new measure group based on the same fact table in the designer – we get the following blocking message:

TFS0027.5

But what is blocked in the designer can be done in XML. This is a rather flexible approach, and you can (with some caution, of course) create the required measure groups. I duplicated the code of the existing measure group three more times, changed the GUID and name of each copy, and then moved the price measures from the original measure group into the new groups. Back in the designer, I see the following structure:

TFS0027.6

Now let's go to the Dimension Usage tab and set up the relationships to get the following structure:

TFS0027.7

As you can see, I link each measure group only with the date dimension that determines its currency rate. Also, to prevent measures from showing top-level values for unrelated dimensions, the IgnoreUnrelatedDimensions property must be set to False.

TFS0027.8

We have eliminated the risk of analysis based on wrong values (the data will simply be empty), but users still have to select a different measure when they switch date dimensions. That's not good, so to fix this let's create a new calculation, place it in the main measure group, and then make all measures in the manually created groups invisible.

TFS0027.9

The same kind of calculation must be created for the Rate measure.

And that's it – at last we've got the required result:

TFS0027.9.1


PowerPivot in Excel. First Look. Part 1.

In this post let's have a look at the PowerPivot add-in for Excel 2010–2013. Be aware that PowerPivot is not available in Office 2013 (365) Home Premium, so it was preferable for me to stay on Office 2010 rather than migrate to the new version of the product.

So, what is PowerPivot and what does it bring us? In a few words, it makes it possible to work with moderately sized data from different data sources directly in Excel, without a warehouse database and Analysis Services. To me that sounds really great, and I will show it with one example from my personal practice.

I will talk about my private Excel file, into which I load my financial operations (income and expenses) every month and then perform some analysis. Before PowerPivot the process was rather awkward and involved too much manual work. To optimize it I would have had to create an application or Excel macros, or move the data to Analysis Services. I did not want to do any of that for my private finances, so I kept doing the following:

  • For each line, calculate Year and Month;
  • enter the currency rate (if the operation in the line was not in rubles, my country's currency);
  • calculate the total operation amount in rubles.

Part of the sheet can be seen below. The manually filled and calculated columns are marked in green.

SQL0023.1

Also, I had to fill in all the data for the balance report manually, as it contains some calculations and categories that are not present in the source data. The simplified result was:

SQL0023.2

I was doing the same work for another report, which shows which projects the money on the key accounts was intended for. Excel also helps me with several pivot tables for detailed analysis.

To sum up, I was spending about 30–40 minutes each month preparing the data and finishing the required analysis. As this work was done in my private time, I was not always attentive: I made mistakes, missed some data, and so on.

Now let's turn to PowerPivot and how it has changed the process described above. The main idea: I was able to create a database schema right in Excel, without any external tools.

First I created the following additional tables on separate Excel sheets:

  • Calendar table (Date),
  • Currency rate table (Rate),
  • Hierarchy for balance report (Categories).

SQL0023.3

And then I linked these tables to my source data:

SQL0023.4

Everything described here was done in MS Excel without any other tools or programs. It was really flexible and easy. To add a table to PowerPivot you just have to click one button:

SQL0023.5

And to create relationships between tables you just click a button and select the tables and fields:

SQL0023.6

In the next post I will finish the PowerPivot topic, showing some usage of the DAX language (a real Analysis Services language inside Excel) and the final look of my financial analysis process.


Date Range in SQL

When creating SQL queries for reports, it is often necessary to get a continuous list of dates for a given period, e.g. all dates from 01 Jan 1980 till 31 Dec 2013 (12,419 records). In this post I would like to describe several ways this can be done, along with the pros and cons of each method.

First I will briefly describe each method, and then I will bring them all together in one table to draw some conclusions.

Use system tables (or large tables in your database)

The query looks like:

DECLARE @startDate datetime, @endDate datetime;
SET @startDate = {d '1980-01-01'};
SET @endDate = {d '2013-12-31'};
select DATEADD(d, n, @startDate)
from (
select 0 n
union all
select row_number() over (order by a.ID) n
from sysobjects a with(nolock)
cross join sysobjects b with(nolock)
) tab1
group by n
having DATEADD(d, n, @startDate) <= @endDate
order by n

Advantages:

  • Small amount of code.
  • Great performance: CPU 63, Reads 1500, Duration 0.09 seconds.

Disadvantages:

  • We are working with system tables and Microsoft can change their names or structure in future versions of SQL Server.
  • Depends on table size – for small databases this method cannot be used, as there would be too few records in the sysobjects table.

Use a large union query

DECLARE @startDate datetime, @endDate datetime;
SET @startDate = {d '1980-01-01'};
SET @endDate = {d '2013-12-31'};
with
t0(n) as
(
select 1
union all
select 1
),
t1(n) as
(
select 1
from t0 as a
cross join t0 as b
),
t2(n) as
(
select 1
from t1 as a
cross join t1 as b
),
t3(n) as
(
select 1
from t2 as a
cross join t2 as b
),
t4(n) as
(
select 1
from t3 as a
cross join t3 as b
),
t5(n) as
(
select 1
from t4 as a
cross join t4 as b
),
Numbers(n) as
(
select row_number() over (order by n) as n
from t5
)
select dateadd(d, n - 1, @startDate) as n
from Numbers
where n <= datediff(d, @startDate, @endDate) + 1

Advantages:

  • Ideal performance: CPU – 16, Reads – 0, Duration – 0.009 seconds.

Disadvantages:

  • Code is rather bulky.

Use a recursive CTE

DECLARE @startDate datetime, @endDate datetime;
 
SET @startDate = {d '1980-01-01'};
SET @endDate   = {d '2013-12-31'};
 
WITH [dates] ([Sequence], [date]) AS
   (SELECT 1 AS [Sequence]
          ,@startDate AS [date]
    UNION ALL
    SELECT Sequence + 1 AS Sequence
          ,DATEADD(d, 1, [date]) AS [date]
    FROM [dates]
    WHERE [date] < @endDate)
 
SELECT [Sequence]
      ,[date]
FROM [dates]
OPTION (MAXRECURSION 32747);

Advantages:

  • Code is ok – it’s rather compact.

Disadvantages:

  • Performance is rather poor, especially the number of reads: CPU – 172, Reads – 111,000, Duration – 0.3 seconds.
  • We cannot use it on SQL Server 2000.
  • You are limited to about 89 years (32,747 days with this MAXRECURSION setting) – but I think for most cases this will be enough.

To sum up, here is a small table with all three methods in one place, together with some thoughts about each of them.

Method: System tables
  • Advantages: small amount of code; good performance.
  • Disadvantages: depends on internal SQL Server objects; depends on the size of the database.
  • Conclusion: from my point of view, not a reliable method, and its performance lead over the other methods is not that big.

Method: Large union query
  • Advantages: the best performance.
  • Disadvantages: the code is rather bulky.
  • Conclusion: a great method. It is flexible enough to cover any date range and works very quickly.

Method: Recursive CTE
  • Advantages: small amount of code.
  • Disadvantages: poor performance (especially the number of reads).
  • Conclusion: can be used for small date ranges (it shows quite good performance for a 1–2 year period), but in the general case the second method is much better.

ERP System Performance: availability and network metrics.

In this post I will finish the ERP system performance series. Today's topic is not directly connected with SQL Server, but it is rather important for building a complete picture of the process.

So, we have analyzed in detail how to look for performance bottlenecks inside the system business logic, but how do we quickly detect problems with the server software (e.g. SQL Server) or with the network? Do not forget that I'm speaking about monitoring from the user's side. In order to track system availability you have to ping it from the client location regularly, using IP Monitor or another similar tool.

But how do you check availability and network speed properly? I'll share my experience and try to answer this question.

Availability

Let's start with availability. As an example I will use a two-tier system with business logic in web services and a database on SQL Server. You can adapt this practice to other systems.

First you have to create an additional web method that simply returns "0" and ping it with IP Monitor. It can look like this:

SQL0022.1

But this method only shows us whether IIS is down; it would still return "0" when SQL Server does not respond. Let's make some modifications to fix this:

SQL0022.2

The new method does not create locks on SQL Server objects; it simply indicates when the server is completely down. Of course you can extend it, but in my view this is enough, as more complex metrics can be monitored by SQL Server itself. The method gives us the ability to quickly understand whether users at the current location can access the system or not. Also, using the log of these pings, we can easily calculate the percentage of time the system was available to users – quite an important metric for IT.
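
On the database side the check can stay as light as possible. A minimal version of the query behind such a check could be as simple as the following (just a sketch for illustration, not the exact code used here):

-- Hedged sketch: a probe that touches no user tables and takes no locks.
-- The availability web method is assumed to run this and return 0 on success,
-- and a non-zero value if the connection or the query fails.
SELECT @@SERVERNAME AS ServerName, GETDATE() AS ProbeTime;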

Network speed

For this metric we are not interested in simply pinging the production server and getting a response; we want to measure how long the network takes to deliver, from the server to the user's location, a volume of data equal to a standard document. Let's suppose the business logic server returns a document to the Windows Forms client as XML in a string. I'm taking the simplest case, just to describe the idea.

So, the web method for IP Monitor looks like this:

SQL0022.3

We do not connect to SQL Server and do not perform any operations – we just return the string variable as if it were a real document. Simply by monitoring the time needed to receive this text, we can measure the network speed for our application. Now, if you have SAVE and OPEN document metrics (as described in "What performance metrics are important and how to collect them"), you can add the network time to them and get the total time a user has to wait while opening or saving a document. And if it takes 3 seconds to get the data across the network and 1 second to produce it on the server side, you do not need to optimize your SQL procedures or build indexes – you have to start solving the network problem that is holding your system performance down.

So, that's all I wanted to blog about ERP system performance metrics. I hope this was useful.


ERP system performance. How it works in practice.

As promised, this post is dedicated to some real-life practice. We'll try to find the bottleneck in one document's save process.

The principles and structures for ERP performance measurement were described in the previous posts; you can find them in my blog.

The work starts when we receive a report and see that we’ve got some problems:

SQL0020.3

Document 1 is OK – its trend is falling. But Document 2 is a real issue – it has problems with save duration nearly every day. So we need to drill through to find the bottleneck in the save logic of this document. All tests will be performed in the development environment with the help of automated unit tests that open, modify, and save the problem document about 500 times. We care about the percentage each method contributes to the total save duration, so we do not need a real production load or the real production environment: if we fix the longest-running procedure in the development environment, in 90% of cases it will bring a positive change in production as well.

So, let's start. The detailed log is collected in the same way for every iteration and looks like the following:

  • Each method must be wrapped in time measurement sections:

SQL0021.1

  • The resulting XML is stored in the database, so each logged operation in the statistics database has an XML field with all the details needed for further analysis. For example, for an Open operation (the attribute values are durations written with a comma as the decimal separator):
      <ROOT>
            <Open
                  AfterClearCashes="0"
                  AfterPreviousObjectOpen="312,5"
                  AfterOnPreOpen="0"
                  AfterOpenItem="62,5"
                  AfterGetRights="15,625"
                  AfterOnPostOpen="125" />
      </ROOT>
 
  • Then the results are transferred to MS Excel, where a pivot table makes it easy to find the method that takes the largest share of the total save duration (a query sketch for shredding the logged XML follows this list).
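
Once the timing XML is in the statistics database, it can also be shredded into rows with T-SQL before being copied to Excel. The following is only a sketch under assumptions: a hypothetical dbo.OperationStatistics table with an xml column DetailsXml, and durations written with a comma as the decimal separator, as in the sample above:

-- Hedged sketch: unpivot the timing attributes stored in the XML column into rows.
-- dbo.OperationStatistics and its columns are assumptions for illustration.
SELECT
    os.OperationId,
    a.value('local-name(.)', 'nvarchar(128)') AS MethodName,
    CAST(REPLACE(a.value('.', 'nvarchar(50)'), ',', '.') AS float) AS Duration
FROM dbo.OperationStatistics AS os
CROSS APPLY os.DetailsXml.nodes('/ROOT/Open/@*') AS t(a);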

So, let's try to explore our document and find the bottleneck.

In the first iteration we log the highest level of the save procedure. In our case all the logging logic is already implemented, and we only have to turn on a special flag. We start the unit tests, get the results, transfer them to Excel, and below you can see what we've got in the pivot table:

SQL0021.2

Great, we've got one candidate that takes about 45% of the document save time. Let's drill through and log the AfterSaveItem method. For this purpose we have to wrap all the methods inside the after-save procedure, run the unit tests one more time, move the results to Excel, and analyze:

SQL0021.3

Here it is: 31 of those 45 percentage points are taken by a single method, WriteComments. It looks like we have found our issue. After analyzing the code, we see that the method calls a SQL stored procedure, and it seems the procedure does not work as well as it should. You could start optimizing it right away, but I prefer first to collect the stored procedure executions with SQL Server Profiler (to find the heaviest combinations of parameters) and only then start the optimization work.
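
A collected trace can then be analyzed directly in T-SQL. This is a hedged sketch only: the trace file path is hypothetical, and Duration in a saved trace is reported in microseconds.

-- Hedged sketch: load a saved Profiler trace and find the heaviest WriteComments calls.
SELECT TOP (20)
    CAST(TextData AS nvarchar(max)) AS CallText,   -- the EXEC statement with its parameter values
    Duration / 1000 AS DurationMs,
    Reads,
    CPU
FROM sys.fn_trace_gettable(N'C:\traces\WriteComments.trc', DEFAULT)
WHERE TextData LIKE N'%WriteComments%'
ORDER BY Duration DESC;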

In this case the issue was not in the stored procedure but in the logic: the method was executed on every save of the document, while it only needs to be called when a document is created or a position is deleted from it. Making this change reduced the save duration, and the document was back within its targets.

To sum up, SQL Server and Windows Server tools alone are usually not enough to explain why some parts of the system perform poorly. You need internal mechanisms to look for issues and performance bottlenecks. In this series I've tried to describe all aspects of ERP system performance measurement, and I hope this information is useful.

In the next post I will speak briefly about how to check system availability and close this topic.


ERP system performance metrics

In this post I'd like to show which performance reports we use in our everyday work to track whether the system is OK or not.

The first section covers the reports we receive on a daily basis.

  • Timeout Report.

SQL0020.1

The report consists of two parts: a chart of the timeout coefficient and a list of the timeout errors for the last day. The coefficient is calculated using the formula: SQL0020.5.png.

For our system it must be no more than 20, or no more than 10 timeouts on a normal day.
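
As an illustration only (the table and column names below are assumptions), the raw daily count behind such a report could be collected like this:

-- Hedged sketch: daily number of timeout errors from a hypothetical error log table.
SELECT
    CAST(e.ErrorTime AS date) AS ErrorDay,
    COUNT(*) AS TimeoutCount
FROM dbo.ErrorLog AS e
WHERE e.ErrorText LIKE N'%timeout%'
GROUP BY CAST(e.ErrorTime AS date)
ORDER BY ErrorDay;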

  • Trends of users' operations with documents (journal open, document open, document save). Below is an example for the journal open operation.

SQL0020.2

The logic of the report is the following: we take all objects of the system whose average operation time for the last day is higher than the target metric and show how it has changed over the last month. Our targets are: journal open and document open – no more than 1 second; document save – no more than 2 seconds. With the help of this report we know every object that performed badly during the last day and can easily tell whether it was a one-day problem or a permanent issue with a specific document.
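
A hedged sketch of how such a selection could be made from an operations log (dbo.OperationStatistics and its columns are assumptions for illustration):

-- Hedged sketch: documents whose average "journal open" time yesterday exceeded the 1-second target.
DECLARE @yesterday date = DATEADD(day, -1, CAST(GETDATE() AS date));

SELECT
    os.DocumentName,
    AVG(os.DurationSec) AS AvgDurationSec
FROM dbo.OperationStatistics AS os
WHERE os.OperationType = N'JournalOpen'
  AND os.OperationDate >= @yesterday
  AND os.OperationDate <  DATEADD(day, 1, @yesterday)
GROUP BY os.DocumentName
HAVING AVG(os.DurationSec) > 1.0;   -- the 1-second target for journal open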

The next section covers the reports we receive on a weekly basis:

  • Pivot results for users’ operations with documents:

SQL0020.3

This report uses the same information as the previous one but shows it a little differently. It calculates how many problem days each document has had over the last 3 months and, for each problem day, shows the average value that exceeded the target. If a document has more than 30 problem days, it is colored red and is the first candidate for optimization; documents with 10 to 20 problem days are orange, and all others are green. We use this report to decide which operation on which document requires optimization.

And the last section covers the monthly reports:

  • Reports monitor

SQL0020.4

The sections of the report have quite detailed descriptions, so I will only dwell on its main features. Document print forms are rather small reports, usually print forms of a specific document. In general they must be quick, which is why we calculate the average time for them. For large reports, average time cannot be used, as a report can be run without filters and return 100,000 rows, or with restrictions and return 10 rows. So for large reports we catch only the cases when they run for more than ten minutes, since they put a high load on server resources and make the user wait an uncomfortably long time.
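
A hedged sketch of the kind of check this implies (dbo.ReportStatistics is a hypothetical log of report executions):

-- Hedged sketch: last month's report runs that took longer than ten minutes.
SELECT
    rs.ReportName,
    rs.StartTime,
    rs.DurationSec
FROM dbo.ReportStatistics AS rs
WHERE rs.DurationSec > 600                              -- longer than ten minutes
  AND rs.StartTime >= DATEADD(month, -1, GETDATE())
ORDER BY rs.DurationSec DESC;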

So, that's all for today; in the next post I will show how this whole performance control system works on a real-life example.
