20 February 2017

I have struggled with one client on queries dealing with date ranges, type ranges, or many filters, on very large tables with low density. In many of those cases you knew one index would really improve things, but it would have to be based on a filter that was not deterministic.

Let's take a classic example: you have a website or an API that queries the orders that are not yet completed, per account, for the last day.

On SQL Server, many indexes can help, and data partitioning can as well, but in some cases you have a query with so many filters that only cares about the last day, or even more precisely... the last hour.

You know this index would be huge without a filter, and your DML operations would suffer greatly!

Additionally, your table is not very dense, with tons of columns (well, you did not design the table and you are surely frustrated, but you have to deal with the current design).

Finally, the query is one of the most frequently executed on your platform.

Is there a solution?

YES!

 

 

The solution is filtered indexes! It is nice because it is a Standard and Web edition feature, so you can use it freely.

The index stays very small because it only cares about the most recent rows (for example 100K rows instead of 40 million), so the rebuild takes about 2-3 seconds online (Enterprise edition) and 1 second offline.

So even on Standard edition, the blocking would not be dramatic.

So let's do an example.

Let's assume I have an orders table.

First let's prepare the table:

CREATE DATABASE DynFilteredIndexes
GO
USE DynFilteredIndexes
GO
CREATE Schema Orders authorization dbo
GO
CREATE TABLE Orders.Header 
(OrderId int identity(1,1) not null,
DateOrdered datetime not null,
AccountId smallint not null,
OrderTotal decimal(18,2) not null,
OrderStatusId tinyint not null
)
GO
ALTER TABLE Orders.Header ADD CONSTRAINT PK_OrdersHeader PRIMARY KEY CLUSTERED (OrderId)
GO
SELECT top 20000
CONVERT(SMALLINT,ROW_NUMBER() OVER (Order by A.Object_Id)) AccountId
INTO #AccountId
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO
SELECT top 30000
CONVERT(SMALLINT,ROW_NUMBER() OVER (Order by A.Object_Id)) Day
INTO #DayId
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO
SELECT top 1000
CONVERT(DECIMAL(18,2),ROW_NUMBER() OVER (Order by A.Object_Id))*100.0 OrderTotal 
INTO #OrderTotal
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO

-- Insert pseudo-random batches of orders (about 5,000 batches)
DECLARE @Rand int, @Row int, @AccountIdFrom int, @AccountIdTo int, @OTFrom int, @OTTo int
DECLARE @Cnt int = 1

WHILE @Cnt < 5000
BEGIN
SET @Rand = RAND() * 1000
SET @Row = RAND() * 20000
SET @AccountIdFrom = RAND() * @Rand
SET @AccountIdTo = RAND() * @Row
SET @OTFrom = RAND() * @AccountIdTo
SET @OTTo = RAND() * @AccountIdTo
-- Debug output: the random bounds used for this batch
SELECT @Rand, @Row, @AccountIdFrom, @AccountIdTo, @OTFrom, @OTTo, @Rand % 2 + @Row % 2 + @AccountIdFrom % 2

INSERT INTO Orders.Header
(AccountId, DateOrdered, OrderTotal, OrderStatusId)
SELECT TOP (@Row) AccountId, DATEADD(DAY, -Day, GETDATE()) DateOrdered, OrderTotal,
       @Rand % 2 + @Row % 2 + @AccountIdFrom % 2 OrderStatusId
FROM #AccountId A CROSS JOIN #DayId B CROSS JOIN #OrderTotal C
WHERE AccountId BETWEEN @AccountIdFrom AND @AccountIdTo
AND OrderTotal BETWEEN @OTFrom AND @OTTo
SET @Cnt = @Cnt + 1
END
GO
 

Let's see how many rows we have per date, account id, order total and status, to check how randomly we inserted into the orders table:
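The post shows the distribution as screenshots; queries along these lines (a sketch, not necessarily the exact ones behind the screenshots) produce similar counts:

-- Distribution of rows per day, per status and per account
SELECT CONVERT(DATE, DateOrdered) AS OrderDate, COUNT(*) AS Cnt
FROM Orders.Header
GROUP BY CONVERT(DATE, DateOrdered)
ORDER BY OrderDate DESC

SELECT OrderStatusId, COUNT(*) AS Cnt
FROM Orders.Header
GROUP BY OrderStatusId

SELECT AccountId, COUNT(*) AS Cnt
FROM Orders.Header
GROUP BY AccountId
ORDER BY Cnt DESC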

 

 

 

Counts: statistical distribution of the randomization

Let's now add indexes:

Classic indexes:

CREATE INDEX IX_OrdersHeaderDateOrdered ON Orders.Header (Dateordered)
CREATE INDEX IX_OrdersHeaderAccountId ON Orders.Header (AccountId)
 

The filtered index:

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > '20170219'

is created instantaneously, thanks to the index on DateOrdered created previously.

Note that this index cannot be created:

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > DATEADD(DAY,-2,GETDATE()) 
 

Msg 10735, Level 15, State 1, Line 4
Incorrect WHERE clause for filtered index 'IX_FilteredLastDay' on table 'Orders.Header'.

 

 

When I calculate the size of the indexes:

-- DB_ID() resolves the database id (the original hard-coded value 23)
SELECT
    @@SERVERNAME                       AS ServerName,
    DIPS.database_id                   AS DatabaseId,
    'DynFilteredIndexes'               AS DatabaseName,
    DIPS.object_id                     AS TableId,
    O.name                             AS TableName,
    DIPS.index_id                      AS IndexId,
    I.name                             AS IndexName,
    S.name                             AS SchemaName,
    DIPS.partition_number              AS PartitionNo,
    DIPS.alloc_unit_type_desc          AS AllocUnitTypeDesc,
    DIPS.page_count                    AS PageCount,
    P.rows                             AS RowsCount,
    DIPS.avg_fragmentation_in_percent  AS Frag,
    GETDATE()                          AS DateChecked,
    DIPS.index_type_desc               AS IndexType
FROM sys.dm_db_index_physical_stats (DB_ID('DynFilteredIndexes'), NULL, NULL, NULL, 'DETAILED') AS DIPS
INNER JOIN [DynFilteredIndexes].sys.partitions AS P ON DIPS.object_id = P.object_id
                                                   AND DIPS.index_id = P.index_id
                                                   AND DIPS.partition_number = P.partition_number
INNER JOIN [DynFilteredIndexes].sys.indexes AS I ON P.object_id = I.object_id
                                                AND P.index_id = I.index_id
INNER JOIN [DynFilteredIndexes].sys.objects AS O ON I.object_id = O.object_id AND O.type = 'U'
INNER JOIN [DynFilteredIndexes].sys.schemas AS S ON O.schema_id = S.schema_id
WHERE DIPS.page_count < 10000

 

The result is clear: 

The filtered index has only 2,161 rows over 8 data pages; no wonder it was created super fast!

 

 

 

index size (rows and pages)

Then we can look at the performance:

Here are three SELECTs (using "*" to include all columns, though explicitly listing the columns is of course better practice):

SELECT * FROM Orders.Header WITH (INDEX = IX_FilteredLastDay) WHERE DateOrdered > '20170219' AND AccountId = 18
GO
SELECT * FROM Orders.Header WITH (INDEX= IX_OrdersHeaderAccountId) WHERE DateOrdered > '20170219' AND AccountId = 18
GO
SELECT * FROM Orders.Header WITH (INDEX= IX_OrdersHeaderDateOrdered) WHERE DateOrdered > '20170219' AND AccountId = 18
GO

Here is the profiler result:

Profiler result

In the profiler result, the filtered index is clearly faster than the date index, which in turn beats the account index; that makes sense, as it follows the order of discrimination.

 

As you can see, the filtered index footprint is very, very tiny: 78 IO reads and 0 ms of CPU.

It is even clearer if I also add OrderStatusId to the WHERE clause: 26 IO reads.
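The exact statement is not shown in the post, but the extra filter would look roughly like this (OrderStatusId = 1 is just an illustrative value):

SELECT * FROM Orders.Header WITH (INDEX = IX_FilteredLastDay)
WHERE DateOrdered > '20170219' AND AccountId = 18 AND OrderStatusId = 1
GO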

 

 

profiler result with the added filter

I could improve performance even further by adding DateOrdered to the included columns, since the current plan generates a key lookup:

exec plan with key lookup
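A variant that removes the lookup could look like the following sketch (not shown in the original post); it simply adds DateOrdered to the included columns and recreates the index in place:

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal, DateOrdered)        -- now covers every column the query returns
WHERE DateOrdered > '20170219'
WITH (DROP_EXISTING = ON)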

Well, you can see that the index is really fast. The next question is: how do you maintain the index?

Well, you create a job that runs this script once every morning:

USE DynFilteredIndexes
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
DECLARE @Date DATE = DATEADD(DAY, -1, GETDATE())
DECLARE @DateStr VARCHAR(16) = CONVERT(VARCHAR(8), @Date, 112)   -- yyyymmdd
DECLARE @Stmt VARCHAR(MAX) =
'
CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > ''' + @DateStr + '''
WITH (ONLINE = ON, DROP_EXISTING = ON)   -- ONLINE = ON requires Enterprise edition; drop it on Standard
'
PRINT @Stmt
EXEC (@Stmt)

It runs in about a second and gives you a very nice index (no real DML impact and super fast querying)!

 

The full T-SQL script is included in this article.

 

I hope you enjoyed the read!

Best!

15 January 2013

Hello Guys again,

Although my blog is mainly about SQL Server, I would like to share my experience in the telecommuting field: misconceptions and reality.

1. Telecommuters work more efficiently than on-site people: TRUE

I believe hiring telecommuters is ultimately more efficient than hiring on-site people. A telecommuter comes in periodically for meetings and does client-facing work, but communicates daily via web conferences. Whether he works or trains people at the client site, a lot is done over the internet. Security with VPN technologies has been strengthened as well.

Most outsourcing to India is done via Skype, TeamViewer, WebEx, etc. The internet is also the main advantage for those Indian teams, besides cost.

Add to that flexibility, project-based budgets and cost efficiency. The telecommuter is happy to be at home most of the time and can therefore accept a pay cut, OR is fine being independent with less paid leave, OR simply costs less in unnecessary extras like food, travel reimbursement, a professional phone, car, desk, computer, etc.

I have successfully worked with many consulting companies and direct clients this way. None of them thought I was not up to the task, and they easily got their money's worth.

2. Key to social progress

Another big aspect for the telecommuter is that TIME IS MONEY! The first thing you gain is your commute: 1 to 2 hours on average PER DAY, plus the transport cost; you might even argue that telecommuting is green!

But not only that: it is more difficult to accept part-time jobs when working on-site, as your commute becomes proportionally more expensive in time and transport cost. Working from home lets you accept flexible working hours. A telecommuter can easily ask to work part-time or three-quarter time instead of full-time. The company has an interest in this as well, since it cuts costs.

Finally, I think it is a key to social progress. Why do we need full-time people? Why a full-time DBA? Isn't it much better to get three part-time DBAs, so that when one goes on vacation, another is there, and if one does not know the answer, another one does? Why not have talent for less cost, and several talents at once? Everybody is happy, and this is why I call it social progress!

Telecommuters usually do not care too much about career or money. They care about their well-being. They accept earning less to spend more time with their family and/or their hobbies.

I believe it is the future of social progress! I hope people agree with me, and that in the future companies will think this way!

Telecommuting does not mean outsourcing to India. It just means getting more of the talent that chose well-being over career!

I am open to any comments on that.

Best!

 

 

 

15 January 2013

Hello guys,

Today's topic relates to a series of trainings I gave at one of the top 10 e-commerce companies in Europe, where I worked as a database/ETL architect for two years. The idea of the series was to train junior to senior developers on how to optimize their application code for database-related operations.

One of the sessions I particularly liked was about sending arrays of values to the database in order to perform a series of DML operations.

I still have the video series and it has no relation to anything done at this e-commerce company. The copyright is mine, so I can definitely share it. However, the videos are way too big to share on a blog.

I have also seen similar training sessions on the net, but I would be happy to discuss your projects and how to implement this in ColdFusion, .NET, PHP... or any other kind of web language.

In this article I will summarize my findings and my advice regarding this need, which is quite often overlooked in the web industry.

1. Ad-hoc versus stored procedure

As a DBA, there is absolutely no controversy here for me. Application code should call stored procedures whenever it reaches the database, for any DML operation or even for SELECTs.

Some DBAs will say it is because of performance. Well, that reason is shaky. It is true that stored procedures help the SQL Server engine reuse execution plans. However, performance can sometimes be an issue when dealing with searches.

Many searches like "give me the orders for specific dates and/or with specific customer names and/or specific cities, etc." have what we call dynamic portions of (hopefully) SARGable conditions, and a stored procedure full of CASE WHEN conditions will not optimize those queries.

There has been a lot of debate here, but the truth and the solution are somewhat simple: you will eventually use parameterized queries.

Parameterized queries give you the performance you need, increase security and avoid SQL injection, with proper parameter checking at the application layer or at the stored procedure layer.

And in that last sentence I gave my preference: stored procedures.

Yes, indeed! Even though some queries might be very dynamic, for security reasons (and only a little for performance reasons), stored procedures are unequivocal for me.

That said, if you want to update a table on several rows (for example, you would like to update the shopping cart in one stored procedure call), you end up passing several values for the same parameter (for example, CartDetailIds) to the database.

2. Fetch stored procedure calls vs one stored procedure call

Keeping the same example, you can decide, in one click of a button, to update the shopping cart (adding quantities, removing products, etc.). As I said, there are three ways to update your cart:

The first is to build your DML operation dynamically and execute it in one dynamic call: as I stated, this is not great for security reasons.

Another one is to call the same stored procedure n times, updating each cart detail one at a time, like fetching stored procedure calls. This solution is not good either, for two reasons. The first is that you will need to deal with a transaction block at the application layer, and I would rather keep transactions as short as possible. The second is that performance is significantly degraded. Do the test; I did, and I could see significant differences (I can share the tests upon request).

Finally, you can pass all the details in one stored procedure call and let the stored procedure deal with the transaction and the update operation at once. In this example, the trick is to pass some kind of array of data with CartDetailId and Quantity values, where quantity 0 means removing the line. We will discuss later how to pass such arrays, but first, conceptually, I want to point out that you can definitely improve performance by using a MERGE statement (to deal with inserts, updates and deletes of cart details at the same time) together with the OUTPUT clause to return the results of the changes to the application (notably the inserts, as new CartIds get created), as sketched below. And if you had no other table to update (which is probably not true, as you would likely update the Cart table for a total amount or last-update-date column), you would get an implicit transaction and would not even need a transaction block.
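Here is a minimal sketch of that idea. The Orders.CartDetail table, its columns and the @CartDetails variable are hypothetical; they only illustrate the MERGE + OUTPUT pattern.

-- Sketch only: Orders.CartDetail and its columns are hypothetical
-- (CartDetailId is assumed to be an identity column on the target table)
DECLARE @CartDetails TABLE (CartDetailId INT NULL, ProductId INT NOT NULL, Quantity INT NOT NULL)

MERGE Orders.CartDetail AS T
USING @CartDetails AS S
    ON T.CartDetailId = S.CartDetailId
WHEN MATCHED AND S.Quantity = 0 THEN
    DELETE                                        -- quantity 0 removes the cart line
WHEN MATCHED THEN
    UPDATE SET T.Quantity = S.Quantity            -- existing line: update the quantity
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductId, Quantity)
    VALUES (S.ProductId, S.Quantity)              -- new line: insert it
OUTPUT $action, inserted.CartDetailId, inserted.Quantity;  -- return the changes (including new ids) to the caller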

3. Solution to pass arrays of data to the stored procedures

There are four solutions for this. I tested them all and I will comment on them from worst to best performance.

The worst one is calling a stored procedure several times, but this one was already discarded for the reasons stated above. On a performance scale, this solution is a 1/10.

The second one is passing arrays as delimited lists (text with a separator). For example, in our case we could pass @CartDetailIds = '541001,541002,541004' with @CartDetailQuantities = '1,0,5'. The stored procedure then parses the text and matches each CartDetailId with its corresponding quantity. To do so, you need a parsing function. This is pretty classic, and this kind of function can be written in any SQL Server version or any database engine (Oracle, MySQL, etc.), which makes the solution universal and therefore useful if you are not committed to a specific technology. You then use this function to gather the pieces and insert all the data into a single table variable; once that preparation is done, the DML operation becomes possible (see the sketch below). In terms of performance I would rate this solution a good 7/10. It does the job nicely and answers all the concerns: security, performance, scalability.
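A sketch of this approach; dbo.fn_SplitList is a hypothetical helper (on SQL Server 2016+ the built-in STRING_SPLIT could replace it):

-- Hypothetical splitter returning each item with its position in the list
CREATE FUNCTION dbo.fn_SplitList (@List VARCHAR(MAX), @Sep CHAR(1))
RETURNS @Items TABLE (Position INT IDENTITY(1,1), Item VARCHAR(100))
AS
BEGIN
    DECLARE @Pos INT = CHARINDEX(@Sep, @List)
    WHILE @Pos > 0
    BEGIN
        INSERT INTO @Items (Item) VALUES (LEFT(@List, @Pos - 1))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(@Sep, @List)
    END
    IF LEN(@List) > 0
        INSERT INTO @Items (Item) VALUES (@List)
    RETURN
END
GO
-- Pair the two lists by position into one table variable, then run the DML once
DECLARE @CartDetailIds VARCHAR(MAX) = '541001,541002,541004'
DECLARE @CartDetailQuantities VARCHAR(MAX) = '1,0,5'
DECLARE @Cart TABLE (CartDetailId INT, Quantity SMALLINT)
INSERT INTO @Cart (CartDetailId, Quantity)
SELECT CONVERT(INT, I.Item), CONVERT(SMALLINT, Q.Item)
FROM dbo.fn_SplitList(@CartDetailIds, ',') I
JOIN dbo.fn_SplitList(@CartDetailQuantities, ',') Q ON Q.Position = I.Position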

The third one is passing XML. You can pass an array as an XML fragment like <CartDetail id="541001" Quantity="1"/><CartDetail id="541002" Quantity="0"/><CartDetail id="541004" Quantity="5"/>. Once passed to the stored procedure as an XML parameter @CartDetails, you can rebuild a table with XQuery and then proceed as above (see the sketch below). The performance is good and I would rate it a bit higher than the previous solution: it beats it, but not by much, so I would give it an 8/10. Note that XML is quite verbose. The advantage is certainly the universality of the format; however, the text is longer to pass, and if you have to pass thousands of rows, that can be a concern.
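A sketch of the XML shredding for the same cart example:

-- Shred the XML fragment into a table variable with nodes()/value()
DECLARE @CartDetails XML = '<CartDetail id="541001" Quantity="1"/><CartDetail id="541002" Quantity="0"/><CartDetail id="541004" Quantity="5"/>'
DECLARE @Cart TABLE (CartDetailId INT, Quantity SMALLINT)
INSERT INTO @Cart (CartDetailId, Quantity)
SELECT X.Detail.value('@id', 'INT'),
       X.Detail.value('@Quantity', 'SMALLINT')
FROM @CartDetails.nodes('/CartDetail') AS X(Detail)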

The fourth solution is passing a table-valued parameter (TVP). This solution is by far the most efficient one. Again, in my tests, performance was very good and much better than the previous solution (if you push the stress to hundreds of thousands of rows at once). Instantiating the table can be done from .NET framework 3.5 and above. Another advantage is the referential integrity of the data passed into the stored procedure, as you decide the exact data type of each column. In our example, quantity could be a tinyint, allowing values from 0 to 255, reducing storage and eventually improving performance. Your main concern with TVPs is how to change the table type once it is used by stored procedures: you cannot alter a table type that is already referenced. You might then consider versioning your TVPs so the release does not happen in one big shot, which increases regression testing time; but this can be managed nicely if you know how to deal with versions. A sketch follows.
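A sketch of the TVP approach; the type, table and procedure names are illustrative, and in practice the two DML statements would sit in one transaction (or a single MERGE, as above):

-- Illustrative table type and stored procedure (Orders.CartDetail is a hypothetical cart table)
CREATE TYPE Orders.CartDetailTVP AS TABLE
(
    CartDetailId INT NOT NULL PRIMARY KEY,
    Quantity     TINYINT NOT NULL        -- 0..255 is plenty for a cart line quantity
)
GO
CREATE PROCEDURE Orders.P_UpdateCartDetails
    @CartDetails Orders.CartDetailTVP READONLY   -- TVP parameters must be READONLY
AS
BEGIN
    SET NOCOUNT ON

    UPDATE D
    SET D.Quantity = C.Quantity                  -- update the lines that stay in the cart
    FROM Orders.CartDetail D
    JOIN @CartDetails C ON C.CartDetailId = D.CartDetailId
    WHERE C.Quantity > 0

    DELETE D                                     -- quantity 0 removes the line
    FROM Orders.CartDetail D
    JOIN @CartDetails C ON C.CartDetailId = D.CartDetailId
    WHERE C.Quantity = 0
END
GO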

4. Things that make the small difference

Now that I have talked about performance (and been more of a DBA), I would like to talk about architecture. I like TVPs for another reason. Sometimes I am a bit too neat on architecture, and TVPs help me be neater still. Just as I like to enforce a filegroup other than PRIMARY on all my tables, and like to define data types such as telephone or email and use those user-defined data types in my tables, I like TVPs because I can enforce integrity on all my parameters, even the array ones.

When integrity conflicts with performance, as foreign keys sometimes do, there can be a debate; but when we are talking about defining a data type or a check constraint, I am in favor of spending the time on it. Once we have spent that time, maybe adding a day or two to cover all grounds on each table, we achieve top-of-the-art storage. Data partitioning is the same kind of thing: you need to spend time to produce the best design for your database structure.

TVPs are the same kind of thing. Do not hesitate to spend extra time and ask extra questions of the project manager/business analyst to get the best storage and best performance from your database.

Then everything will fall into place nicely: integrity, security, scalability.

Do not hesitate to ask me questions and to ask me for the videos related to this training.

Have a good day!

30 December 2012

 

There are many ways to set up your cube, but there are definite best practices to implement if you want it to scale. The first point simply reminds us that a cube is an analytical tool and should not be used as a source of detailed reports. In my opinion, drillthrough should be avoided, as it misleads users about the purpose of a cube.

Your cube is an analytical tool

The main issue we encounter is the misleading use of OLAP cubes. Because end users see potential and look at the data in Excel (which is very friendly for them to build reports), they sometimes believe that details can be reported by writing MDX queries. Let's take a simple, classic example. We have a bunch of orders from an e-commerce site. One can analyse ordered sales and quantities per day, month or year, but we cannot build a report per order or per customer! Well, technically you can do it all, but the cube is not meant for this, and it defeats the purpose of aggregation design! A cube should be an analytical tool! Just by following this principle you will probably solve 50% of performance issues!

Avoid complex measures if you can

By complex measures, I mostly refer to measures that cannot be aggregated well, such as distinct count or last-child measures. Sums and counts are the measures that should be used the most, as they are easily aggregatable. I am not saying never to use the other aggregate types; I am just advising not to use them if they are not necessary. Your processing will then be optimized, because your aggregation designs will be much easier for the engine to build. That is another easy 20-30% increase in performance.

Partition your cube

Microsoft best practice advises partitioning your measure groups when they reach 200 million rows. Microsoft also advises avoiding small partitions. Well, both are right, and I strongly advise partitioning your measure groups for the following reasons. The first one is to ease your processing: by separating your facts, you avoid reprocessing old historical facts that will never change, while those facts still remain in your cube as part of your analysis and mining strategies. I have many clients with multi-million-row facts. There are many ways to optimize performance, and one of them is to aggregate fact tables at granularities that are not useful in detail. For example, we can aggregate order sales by day and product just because we would never analyze by customer. However, this is risky, because your sales and marketing users might later require additional dimensions, such as the age of the customer at the time of the order, and your aggregate would then need to be recalculated. In other words, sure, you can optimize your cube by having fewer fact rows, but in general I lean more towards partitioning your fact tables, generally by date or by incremental ids. By doing so, your daily or hourly processing stays focused on your current partitions. Even with proactive caching, it is nice to have a way to run incremental processing on a specific partition.

Set up your cube partitioning based on life cycle of your facts

This is especially true when it comes to transactions, which are usually the main source of measures in any cube. A transaction always has a life cycle, whether it is a one-day, one-month or one-year cycle. Rarely can a transaction change at any time; if it does, your system has flaws. A transaction should follow a clear, deterministic life cycle, otherwise you might have "zombie" transactions in your system. Let's take again the example of an order in an e-commerce system, in a regular case: the order is placed the same day you pay, then fulfilled within the month, and can be reimbursed/returned within 90 days. After 90 days, the order is closed and cannot be touched; exceptions are made only by manually asking IT to reopen the order status. We then run statistics and notice, for this e-commerce site, that within the 90 days orders are 100% created, 90% paid, 70% fulfilled and 10% reimbursed, and that 0.1% change after 90 days. Of course, we could say that incremental processing is not feasible because orders can change at any time, but that would prevent the sales team from having a fair amount of analysis in near real time. What I would do in this case is run incremental processing once every hour on the current partition, process the current partition fully every day (sizing partitions at one month of data), and also process the previous three monthly partitions fully every day. I would then merge the monthly partitions, if they are small, once a year for year minus 2 (or merge them into separate partitions if a year's worth of data exceeds 200 million rows). Finally, I would reprocess the whole thing once a month if it is not too big; if it is, I would fully process the partitions up to one year old. Updating dimensions depends on how frequently they change, but since we reprocess the current partition daily, I would usually update dimensions once a day, just before the partition processing.
This way, the creation of orders appears in the cube within the hour (without proactive caching). Payments usually appear within the hour as well; if not, they appear the next day. Fulfillment, returns/reimbursements and order closing statuses appear the next day too.
Again, there is no need to be absolutely accurate in real time, because this is an analysis tool. If people want to be accurate about a specific order status, an OLAP cube is not the tool to use; a simple OLTP query on the production database for that specific order is what you need. As for how many orders are still open, one can run an asynchronous query (which would be almost equivalent to our cube) or a synchronous query on the production system (if that system is well indexed / status-partitioned / sharded).

Set up your cube processing schedule

You can automate your processing by setting up a small table that gathers all the necessary information for your measure group partitions (cube name, measure group name, partition id, minimum and maximum values for your incremental processing, query binding source), as sketched below. Once you have done that, you can use SSIS to orchestrate all your processing needs.
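A minimal sketch of such a driver table (all names are illustrative):

-- One row per measure group partition to process
CREATE TABLE dbo.CubePartitionProcessing
(
    CubeName           VARCHAR(128)  NOT NULL,
    MeasureGroupName   VARCHAR(128)  NOT NULL,
    PartitionId        VARCHAR(128)  NOT NULL,
    MinValue           BIGINT        NULL,      -- lower bound for incremental processing
    MaxValue           BIGINT        NULL,      -- upper bound for incremental processing
    QueryBindingSource NVARCHAR(MAX) NULL,      -- the SQL statement bound to the partition
    CONSTRAINT PK_CubePartitionProcessing PRIMARY KEY (CubeName, MeasureGroupName, PartitionId)
)

An SSIS package can then loop over this table and issue the corresponding process command for each partition.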


21 November 2012

1. Understanding the concept

One of the main challenges of a distributed system is how to handle transactions across several servers and technologies.

What if your application connects, within the same transaction, to several web services or database servers, and if any failure occurs, needs to roll back everything that has been done?

How do you handle this during an ETL workflow spanning several servers in several data centers?

I can give you many examples where transactions are not easy to handle, noting that distributed transactions can also be dangerous and block the entire system if a rollback cannot occur.

 

2. Do not mix technologies

One of the nightmares I encountered during my consulting life is the following example. Let's assume that a stored procedure, run from a SQL Server job, pushes a large amount of data through a linked server, then updates several tables in the source database, and finally bcps a bunch of data to flat files and sends an email with an attached error file (errors bcp'ed to flat files too).

The whole process is wrapped in a transaction block with a TRY...CATCH block to roll it all back.

In some cases the bcp portion (run with xp_cmdshell) gets blocked by an OS that does not respond for various reasons (it could be a memory leak). The production supervisor decides to stop the job, triggering a rollback. Unfortunately, he waits for hours and the rollback does not complete, and the main table where data was pushed on the destination server stays frozen by a lot of locks. The supervisor calls the on-call DBA, who eventually fixes the issue by killing the OS process that runs the bcp and was hanging on the SQL Server box.

In this case, the main issue is the distribution of the transaction across several pieces of software: the rollback is initiated by the SQL Server engine (by stopping the job), but it has no authority to roll back the DOS command (the bcp) initiated at the OS level.

What does this mean? Do not mix technologies! If everything works well, the transaction will be committed and nothing will fail (because the DOS command is actually already committed, as the transaction scope within SQL Server has no authority over it).

In this very case, it is much better to consider the tasks as independent and run each step only if the previous step succeeded. In other words, it is better to implement it within an ETL/workflow system. At each step, you can constrain on success and failure and, based on a log system, decide whether to roll back or not using the specific technology of the step that failed.

3. Keep it simple

Obviously, a distributed environment requires a certain level of complexity when dealing with transactions, but if you can make it simple, make it simple!

Try to identify what is really transactional and what is not, for two reasons: one is performance, the other is keeping your transactions simple.

A good architect will always look for new technology and at various ways to set up transactions in a distributed environment, but will mainly use common sense to make the system viable, especially if one of the components fails.

4. Keep it scalable

The “keep it simple” paragraph is probably the hardest one to follow, because by nature a transaction in a distributed environment is complex.

“Keep it scalable” is easier to explain and is plain common sense. Most developers will make it work; most architects will make it work with the right environment setup and the right software layers.

For example, say the task is to purge a multi-million-row table on a SQL Server platform. As the production environment has a lot of concurrency, the architect will make sure to keep the transactions short by implementing/using indexes and by keeping the DML operations as small as possible (usually in batches), as in the sketch below.
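A minimal sketch of such a batched purge, with hypothetical table and column names:

-- Purge in small batches so each transaction (and its locks) stays short
DECLARE @BatchSize INT = 5000, @RowsDeleted INT = 1
WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.BigTable
    WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE())   -- purge rows older than one year
    SET @RowsDeleted = @@ROWCOUNT
END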

5. Keep it manageable

Another thing to focus on is making sure that a complex system has very clear documentation and uses technology that is easy to manage. Another classic mistake of many companies is to set up technology that they cannot manage.

Another one is to set up a system that is hard to maintain when there is a new feature to implement, or hard to restart when a failure has blocked the system for a while. Both things need to be considered when implementing such a system.

6. ETL system

What do I like about ETL? There is one thing I really like: the way you have to follow a strict workflow! Each task is independent of the transaction scope, so if a failure occurs, you need to set up the contingency plan within your ETL workflow. In SSIS, event handlers let you set up generic handling for many different tasks, which is good.

I usually group together the SQL tasks that I consider one transaction block (within the same stored procedure call). This allows me to roll back several DML operations at once.

However, when changing technology (BCP, FTP, email, zip, etc.), those steps are handled outside that transaction scope, as their own independent tasks.

My ETL flow then needs contingency plans, where needed, for any error on any task.

Most of the time, the contingency plan is just to log the error; the system then alerts the production team and guides them to restart the system or fix the error so that the affected transaction can re-enter the ETL process.

For specific tasks, you can also set up a rollback plan.

 

7. Development cycle

The classic mistake IT departments make is believing that the development phase is the longest part of the project. Here are, for me, the main phases of a project:

1. Architectural study: see if the project is feasible and fits well within the current information system

2. Architectural implementation: define how to setup servers and code layers to accommodate the needs

3. Infrastructure installation: install the necessary network and servers

4. Development: develop the different code layers (this task can be done during step 3) based on architectural implementation

5. Q&A phase: tests and validation of the code layers and load/stress tests

6. Production deployment: deploy on production

7. Production supervision: supervise for a period the deployment.

Based on my experience, the step that is too often ignored is step 2, the architectural implementation.

It reminds me of a project where I was in charge of both the architecture and the development (oh well, it happens). Phase 1 was fairly quick, as it involved the current system in place. Phase 2 could have been very fast as well, but it took me about 2 weeks to really grasp all the functional concepts and make them technically feasible. The project manager answered most of the functional needs, but technical limitations left the specs largely under-documented. I wrote the technical documentation, but it required a lot of changes, or at least a lot of assumptions on top of the functional specs, which were not precise enough, especially about what should happen when errors occur (a classic mistake as well).

The project manager gave me 2 months to deploy the system (a master data management system) to production and was worried that I spent 2 weeks on step 2.

Well, thanks to it, the development phase took 1 week (he thought it would take 3). The Q&A phase took 1 week (he thought 2), and the production phase took 1 day to deploy and 1 week to supervise.

Even though the project manager had estimated step 2 at only 1 week instead of 2, he was pleasantly surprised that the project overall took 5 weeks instead of the 8 weeks previously planned.

Since the deployment, there has been no bug fixing to do, only new features.

I wanted to take the time to tell this story because I think this is why I wanted to be an architect. I believe development starts way too quickly, and the main issues come from bad architecture; this is especially true in a distributed environment with distributed transactions!

 

 

9 October 2012

On SQL Server Standard

On SQL Server Standard edition, this is actually easier in a way. You do not manage a partition scheme or function, at least not the way the Enterprise edition does: you set up tables to be your partitions. If you want to spread across several storage units, you can assign different filegroups to the different tables. In our case I did not do it, but it would be easy to set up. As in the previous example, we will keep only three months' worth of log data, using a 4-partition rolling monthly scheme.

First, we create the 4 partitions (4 tables):

USE [DataPartitions]
GO
CREATE SCHEMA Standard AUTHORIZATION dbo
GO
CREATE TABLE Standard.[LogTrace_Partition1](
      [id] [int] IDENTITY(1,1) NOT NULL,
      [insertDate] [datetime] NULL CONSTRAINT [DF_LogTrace_InsertDate_Partition1] DEFAULT (getdate()),
      [sessionID] [varchar](36) NULL,
      [userAgent] [varchar](500) NULL,
      [eventID] [int] NULL,
      [requestID] [int] NULL,
      [frontEndSessionID] [varchar](36) NULL,
      [trace] [varchar](5000) NULL,
      [serverName] [varchar](50) NULL,
      [userIP] [varchar](15) NULL,
      [note] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
      [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE Standard.[LogTrace_Partition2](
      [id] [int] IDENTITY(1,1) NOT NULL,
      [insertDate] [datetime] NULL CONSTRAINT [DF_LogTrace_InsertDate_Partition2] DEFAULT (getdate()),
      [sessionID] [varchar](36) NULL,
      [userAgent] [varchar](500) NULL,
      [eventID] [int] NULL,
      [requestID] [int] NULL,
      [frontEndSessionID] [varchar](36) NULL,
      [trace] [varchar](5000) NULL,
      [serverName] [varchar](50) NULL,
      [userIP] [varchar](15) NULL,
      [note] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
      [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE Standard.[LogTrace_Partition3](
      [id] [int] IDENTITY(1,1) NOT NULL,
      [insertDate] [datetime] NULL CONSTRAINT [DF_LogTrace_InsertDate_Partition3] DEFAULT (getdate()),
      [sessionID] [varchar](36) NULL,
      [userAgent] [varchar](500) NULL,
      [eventID] [int] NULL,
      [requestID] [int] NULL,
      [frontEndSessionID] [varchar](36) NULL,
      [trace] [varchar](5000) NULL,
      [serverName] [varchar](50) NULL,
      [userIP] [varchar](15) NULL,
      [note] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
      [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
CREATE TABLE Standard.[LogTrace_Partition4](
      [id] [int] IDENTITY(1,1) NOT NULL,
      [insertDate] [datetime] NULL CONSTRAINT [DF_LogTrace_InsertDate_Partition4] DEFAULT (getdate()),
      [sessionID] [varchar](36) NULL,
      [userAgent] [varchar](500) NULL,
      [eventID] [int] NULL,
      [requestID] [int] NULL,
      [frontEndSessionID] [varchar](36) NULL,
      [trace] [varchar](5000) NULL,
      [serverName] [varchar](50) NULL,
      [userIP] [varchar](15) NULL,
      [note] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
      [id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO

 

Second, we create the partition scheme (1 table):

 

CREATE TABLE Standard.LogTracePartition
(
PartitionId tinyint,
TableName varchar(256),
IsCurrentPartition bit,
NextPartitionId tinyint,
PurgePartitionId tinyint null,
CONSTRAINT PK_LogTracePartition PRIMARY KEY CLUSTERED (PartitionId)
)
GO
INSERT INTO Standard.LogTracePartition
(PartitionId, TableName, IsCurrentPartition, NextPartitionId, PurgePartitionId)
VALUES (1, 'Standard.LogTrace_Partition1', 1, 2, 2)
GO
INSERT INTO Standard.LogTracePartition
(PartitionId, TableName, IsCurrentPartition, NextPartitionId, PurgePartitionId)
VALUES (2, 'Standard.LogTrace_Partition2', 0, 3, 3)
GO
INSERT INTO Standard.LogTracePartition
(PartitionId, TableName, IsCurrentPartition, NextPartitionId, PurgePartitionId)
VALUES (3, 'Standard.LogTrace_Partition3', 0, 4, 4)
GO
INSERT INTO Standard.LogTracePartition
(PartitionId, TableName, IsCurrentPartition, NextPartitionId, PurgePartitionId)
VALUES (4, 'Standard.LogTrace_Partition4', 0, 1, 1)
GO

 

Third, we create the stored procedure that switches the current partition (using a synonym) and truncates the data we no longer need on production:

 

/*
Author: Chuge
Creation Date: 2012-10-08
Reason: Partition scheme on standard edition to switch current partition
Sample: EXEC Standard.P_SwitchLogTracePartition @SynonymName = 'Standard.LogTraceS', @Debug = 1
select object_id('Standard.LogTraceS')
select * from Standard.LogTraceS
*/
CREATE PROCEDURE Standard.P_SwitchLogTracePartition
@SynonymName VARCHAR(256) = 'Standard.LogTraceS',
@Debug bit = 0
AS

SET NOCOUNT ON
DECLARE @TableNameNewPartition varchar(256), @TableNamePurgePartition varchar(256), @Stmt VARCHAR(1024), @NextPartitionId tinyint, @PurgePartitionId tinyint

BEGIN TRY

SELECT TOP 1 @NextPartitionId = NextPartitionId, @PurgePartitionId = PurgePartitionId
FROM Standard.LogTracePartition
WHERE IsCurrentPartition = 1

SELECT @TableNameNewPartition = TableName
FROM Standard.LogTracePartition
WHERE PartitionId = @NextPartitionId

SELECT @TableNamePurgePartition = TableName
FROM Standard.LogTracePartition
WHERE PartitionId = @PurgePartitionId

IF OBJECT_ID(@TableNameNewPartition) IS NULL
BEGIN
      SELECT 1
      RETURN
END
IF OBJECT_ID(@TableNamePurgePartition) IS NULL
BEGIN
      SELECT 2
      RETURN
END

IF OBJECT_ID(@SynonymName) IS NOT NULL
BEGIN
      SET @Stmt = 'DROP SYNONYM ' + @SynonymName
      IF @Debug = 1
            PRINT @Stmt
      ELSE
            EXEC ( @Stmt )
END

SET @Stmt = 'TRUNCATE TABLE ' + @TableNamePurgePartition
IF @Debug = 1
      PRINT @Stmt
ELSE
      EXEC ( @Stmt )
SET @Stmt = 'CREATE SYNONYM ' + @SynonymName + ' FOR ' + @TableNameNewPartition
IF @Debug = 1
      PRINT @Stmt
ELSE
      EXEC ( @Stmt )

IF @Debug = 1
      RETURN

UPDATE P
SET IsCurrentPartition = CASE WHEN PartitionId = @NextPartitionId THEN 1 ELSE 0 END
FROM Standard.LogTracePartition P

END TRY
BEGIN CATCH
      DECLARE @Error VARCHAR(512)
      SET @Error = ERROR_MESSAGE()
      RAISERROR(@Error,16,1)
      RETURN @@ERROR
END CATCH
SET NOCOUNT OFF
GO

 

 

Fourth, we create the scheduled job that switches the partition at midnight on the 1st of each month:

 

 

USE [msdb]
GO
/****** Object:  Job [WebApper - Data partition Switch out]    Script Date: 02/01/2012 12:08:05 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 02/01/2012 12:08:05 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'WebApper - Data partition Switch out',
            @enabled=1,
            @notify_level_eventlog=0,
            @notify_level_email=0,
            @notify_level_netsend=0,
            @notify_level_page=0,
            @delete_level=0,
            @description=N'No description available.',
            @category_name=N'[Uncategorized (Local)]',
            @owner_login_name=N'sa', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [EXEC Standard.P_SwitchLogTracePartition @SynonymName = 'Standard.LogTrace', @Debug = 0]    Script Date: 02/01/2012 12:08:05 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'EXEC Standard.P_SwitchLogTracePartition @SynonymName = ''Standard.LogTrace'', @Debug = 0',
            @step_id=1,
            @cmdexec_success_code=0,
            @on_success_action=1,
            @on_success_step_id=0,
            @on_fail_action=2,
            @on_fail_step_id=0,
            @retry_attempts=0,
            @retry_interval=0,
            @os_run_priority=0, @subsystem=N'TSQL',
            @command=N'EXEC Standard.P_SwitchLogTracePartition @SynonymName = ''Standard.LogTrace'', @Debug = 0',
            @database_name=N'DataPartitions',
            @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'Webapper - Monthly',
            @enabled=1,
            @freq_type=16,
            @freq_interval=1,
            @freq_subday_type=1,
            @freq_subday_interval=0,
            @freq_relative_interval=0,
            @freq_recurrence_factor=1,
            @active_start_date=20110303,
            @active_end_date=99991231,
            @active_start_time=0,
            @active_end_time=235959
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

 

On SQL Server Express

There are even more limitations in the Express edition, such as no SQL Server Agent (you need a different scheduler) and a limited database size (4 GB, raised to 10 GB from SQL Server 2008 R2 onward).
Your partitioning would then be managed by an application (scheduled services and web services, most likely).
The partitions would be identical to the Standard edition ones, except that you would need to specify the context of the shard.
In the orders example, one could decide to set up 24 databases, one for each monthly partition, assuming that a month's worth of data and indexes stays under the size limit. SQL Server Express data partitioning works best with a service-oriented architecture where all applications go through intelligent web methods that select the proper database to read or update the data.
Obviously you can also use the system shown above for the Standard edition inside those databases; you just need to be aware of the database size limitation.
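As a sketch of that routing layer, the web methods could read a small shard map stored in a central utility database (all names are illustrative) and open their connection to the database returned for the month they need:

-- Hypothetical shard map read by the service layer to route each request
CREATE TABLE dbo.ShardMap
(
    ShardId      TINYINT NOT NULL PRIMARY KEY,
    MonthNumber  TINYINT NOT NULL,              -- 1..24, same rolling idea as the Standard example
    DatabaseName SYSNAME NOT NULL,              -- e.g. 'Orders_Shard01'
    IsCurrent    BIT     NOT NULL DEFAULT (0)   -- the shard receiving current inserts
)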

 

9 October 2012

Before going into the details of how to set up data partitioning on SQL Server, it is important to understand the concept. So first, I will give a simple definition of what it is.

 

Understanding the concept

Data partitioning is a way to spread data that shares the same logical structure across different physical storage, following a specific organization. The main goal of data partitioning is to accelerate IO reads and writes.

Indeed, when you write your data, you can hit several different physical storage units. Usually, when you insert data, you access a single partition, but when you update, you can touch several partitions. For example, if you update orders with a specific status (open orders to be archived, say) and your partition scheme (the way you spread your data across physical storage units) follows ranges of seeded order ids, you access several partitions at once, making your update faster. Conversely, when you insert orders with that same partition scheme, you access one partition, which we usually call the current partition, as orders are inserted with an incremental order id.

Following the same idea, when you read orders by status you access several physical storage units at once, increasing your IO read parallelism, while you read a single partition when you want the most recently inserted orders.

Finally, maintenance on partitions is easier than on one big storage unit. For example, old orders are not accessed often, reducing the risk of real downtime when rebuilding indexes offline. Data compression also makes complete sense on old partitions, as access to them is limited. Switching a partition out to a different storage unit, or purging an entire partition (say we want to keep only one year's worth of orders), is much easier with a partition scheme in place.

One can define a partition scheme as the mapping between the technical segmentation (ranges of order ids in our example) and the physical storage unit (on SQL Server, a filegroup and eventually physical files). Microsoft adds additional layers, such as the partition function and filegroups before files, but it ends up being that mapping between segmentation and physical storage.

 

Implementing data partition

Before going into scripting, one should understand that there are many ways to implement data partitions. Some MSSQL DBAs will say that data partitioning is only available in the Enterprise edition. Those DBAs reduce data partitioning to one implementation instead of a concept. Data partitioning can be implemented on any database engine and any edition, as long as one knows the limitations of that edition. We will review data partitioning methods for the three main editions of SQL Server, three methods that differ because of the limitations of each edition. Obviously, the limitations of the engine are only due to Microsoft's licensing costs: Microsoft considers that data partitioning is only relevant for large corporations and therefore should not matter on the smaller editions. Well, that is a point of view, or rather a sales pitch, I would say.
In my experience, I have set up data partitioning on the three different editions: Express, Standard and Enterprise. However small or big your company is, its data volume is unrelated to the budget you have. I recall a very successful company using the SQL Server Express edition: they were just using commodity-type servers with Express licenses as dedicated database shards. That was in telecommunications. Meanwhile, I remember another setup on the Standard edition, with a table schema representing the partition scheme. Finally, for large tables on Enterprise, Microsoft's partition scheme and function were used.
I will present the three different setups, starting with the one most people know: the Enterprise edition data partitioning.

On SQL Server Enterprise

After SQL Server 2005, Microsoft added several features for VLDBs (very large databases), reserved for the few lucky companies that can afford Enterprise licensing.
Setting up the partitioning starts with knowing how your data will be stored, used and maintained. The classic example is your orders data.
Here is an example: your data is extracted to data warehouses and only one year's worth is relevant to production. Orders are inserted every day and can be updated (mostly the status of the order: open, fulfilled, paid, etc.) within 3 months. You have a billion orders per year.
You decide to ask your DBA to improve access to the orders while making sure that this very large database is easy to maintain and that no downtime has to be endured when index rebuilds occur.

The first thing is to know how many partitions you will need. In this example, let's say we want monthly partitions. As the minimum to keep is 1 year, the minimum is 12 filled partitions. There are two options here.
The first one is to have your DBA create a new partition every month and empty/remove the oldest one. This means your DBA maintains the partitions every month, which may sound like a burden; it is actually fairly simple once it becomes a routine deployment.
The other option is to set up a rolling partition scheme, where one month overlaps another based on the "month number", using 24 months to keep the schema symmetric (you can use fewer months, but your maintenance routine becomes more complex).

I did both. With the first option, you can use the order id as your partition key, since you set up id ranges within your partitions.

With the second option, you do not have this choice because your id ranges would not be consistent, so you use a persisted computed column to indicate the partition number that your data belongs to. The persisted column is a function of the order creation date, and therefore of the month number. You can, for example, decide that every odd year uses months 13 to 24 and every even year uses months 1 to 12.
Do not forget to align your partition scheme with the clustered index for a performance boost, and always put your partition key at the end of each index.

Here is how you would set up your persisted column based on the OrderCreationDate column, and your partition scheme:

CREATE DATABASE DataPartitions
GO
USE DataPartitions
GO
CREATE SCHEMA Enterprise AUTHORIZATION dbo
GO
CREATE PARTITION FUNCTION pfMonthNumber (tinyint)
AS RANGE RIGHT FOR
VALUES (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23);
GO
CREATE PARTITION SCHEME psMonthNumber
AS PARTITION pfMonthNumber
TO ([PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY],[PRIMARY]);
GO
CREATE TABLE Enterprise.TblOrders
(OrderId Int identity(1,1) not null, OrderCreationDate Datetime Not NULL Default (GetDate()),
OrderPartitionNumber AS CONVERT(TINYINT,(DATEPART(M, OrderCreationDate) + CASE WHEN DATEPART(YEAR, OrderCreationDate) % 2 = 0 THEN 0 ELSE 12 END)) PERSISTED NOT NULL
, Primary Key Clustered (OrderId, OrderPartitionNumber))
ON psMonthNumber (OrderPartitionNumber)
GO
INSERT INTO Enterprise.TblOrders
(OrderCreationDate)
OUTPUT INSERTED.OrderId, INSERTED.OrderCreationDate, INSERTED.OrderPartitionNumber
VALUES ('2011-06-25'),('2011-07-18'),('2012-06-25'),('2012-07-18')
GO

 

Obviously, for the sake of performance, your partition scheme would be spread over several filegroups that redirect your data to different storage units. In our case I did not do it, but it would be easy to set up.

One can see above that inserts from different years get a different partition number, which keeps two years' worth of data in the table. By separating the data into 24 months, you can then easily keep 12 months by purging the data from the partitions of the other year.

The purge can be done by "switching out" partitions, but you can also use a DELETE statement plus a rebuild to keep the partition in place. It takes longer and, resource-wise, uses more horsepower from your server, but it keeps the database free of complex "DBA" manipulations (switch out, merge, creation of partitions).
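For reference, the "switch out" alternative looks roughly like this on Enterprise edition (a sketch; Enterprise.TblOrders_Staging is an illustrative empty table with the same structure, clustered key and filegroup as the source partition):

-- Move the old month to a staging table (a metadata-only operation), then truncate it
ALTER TABLE Enterprise.TblOrders
    SWITCH PARTITION $PARTITION.pfMonthNumber(13)   -- the physical partition holding month number 13
    TO Enterprise.TblOrders_Staging
GO
TRUNCATE TABLE Enterprise.TblOrders_Staging
GO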

 

But obviously you can do the operation on partitions regularly instead.

7 October 2012

Data density: method of scoring your database

A database is a container for data. Its aim is mainly to store data in a way that lets us read and write it efficiently: securing your data, reading your data in its most up-to-date state, writing your data in the best organized way, etc.
Back in the day, databases were quite small and concurrency or performance issues were rather rare. Now it is the opposite: those are huge issues, and we often call DBAs to the rescue for that matter. PCI and other audit compliance requirements add data security to the mix, which was not a main concern back then either.

Obviously there are many issues a DBA faces, but I think it comes down to one main issue: density. Data density is like population density. Public services often struggle to keep post offices open in rural areas simply because the need for them is small or scarce; it is like a luxury. Governments would rather manage medium and large cities: it is easier to deliver services, easier to clean streets, easier to deal with things. Of course, public services need to be organized, but they get a nice service for the buck spent.

Well, in databases one can say the same thing. If your data were scattered everywhere on the physical disk, you would have a harder time managing and cleaning it. I am not talking about having thousands of tables or indexes; I am talking about having data spread all over your physical or logical storage space. For example, a table would gather information about orders, but the orders would be all over the place, with the data split everywhere.

The first problem is that you would not know how to index your data if you did not know how the orders were coming in. Fortunately, you would probably index your orders by an identifier, and then mainly by the date and the user who ordered. But let's say, for the sake of the example, that you would not know: the orders would be all over the data map. In other words, unmanageable.

The second problem is that you would store large amounts of empty space because of page splits in all your indexes. Storage would be fragmented and you would need more of it.

The third is that you would use a lot of resources just for index maintenance during DML operations or reindexing.

The fourth is that performance would decrease regularly because of the instability of your indexes and the cost of their maintenance. Concurrency locks would be held longer, transactions would take longer to complete, and so on.

All bad stuff. But what is common to all the issues is DENSITY. Ah!

What do I mean by DENSITY? Simple definition: data density, for me, is how much data I can fit in the same physical block of storage space. That is it. We do not need big words. DBAs and architects would ask: what is the depth of your indexes, how many page splits do they perform, what is the chronic fragmentation, and so on.

We can go on and on with different terms. They are all accurate, and to be a good DBA or architect you need to know all these concepts, because you are usually called in when the damage is done.

BUT if you are called before the damage is done, the answer is plain and simple: try to fit as much data as possible into the smallest storage space.

Obviously, you do not want to buy a cabinet if what you ultimately need to store is a car. The same goes for an index or a table. You have considerations such as fill factors, covering indexes and so on, but all of those are just strategies that, in the long run, serve the same concept: density.

Yes indeed! The goal of the fill factor is to say: I want to leave enough room when I rebuild or create an index so that there are fewer page splits and, overall, less space taken, so my data stays nicely ordered together and not too fragmented.
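
As a minimal sketch (the index and table names are made up), a rebuild with a fill factor could look like this:

-- Minimal sketch, made-up index and table names.
ALTER INDEX IX_OrderHeader_OrderDate ON Sales.OrderHeader
REBUILD WITH (FILLFACTOR = 90)  -- leave 10% free space per leaf page to absorb future inserts
GO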

The goal of covering indexes is to limit the number of indexes created on a table, so that we take less space and both DML operations and reads are overall smarter than if we had created several indexes instead of the one covering index.
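
And a minimal sketch of a covering index (again with made-up names), where the INCLUDE columns cover the query without inflating the index key:

-- Minimal sketch, made-up names: one covering index instead of several narrow ones.
CREATE NONCLUSTERED INDEX IX_OrderHeader_Account_Date
ON Sales.OrderHeader (AccountId, OrderDate)
INCLUDE (OrderTotal, OrderStatusId)
GO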

So my assumption, maybe not shared by many DBAs, is this one: DENSITY is the key!

There are exceptions of course, otherwise the work of a DB architect or DBA would be boring. Front-end e-commerce databases are also not in the mix: there, the main concern is not storage but performance and lock concurrency on a volatile database. So NoSQL is often the better deal, and columnstore indexes in some cases, or denormalization in others, are the best option, especially when you work with large cache stores.

No, I am talking about cases where you need to access the data in real time and in a rich way: for example, a CRM database, a logistics database, payment transactions, and so on.

My first principle for a good production database is to keep only the data you need in the tables or partitions you care about. I have seen so many client databases carrying the entire history of orders or transactions in the production database. This fits the principle: obviously, if you have a density issue in a large table, that issue shrinks as the table shrinks.

My second principle is to make sure your table does not have hundreds of columns or mostly empty, large data-typed columns such as text, varchar(2000) or xml. In other words, when you write or read data on such tables, your key lookups and index updates complete faster. Developers usually think: why care about data types, why not put an int instead of a smallint? This is where the architect/DBA kicks in! Obviously a key-value type table would be pretty dense, as there would not be many columns (assuming the clustered index roughly follows the order in which the data is inserted).

The rest (putting the right indexes in place, not creating too many of them, avoiding subselects inside SELECT statements, listing the columns in your SELECT instead of using "*", and so on) makes perfect sense and usually ends up following the same principle of DENSITY.

I would not say a database is only that, of course. There is set-based programming where the database is very useful, such as rating and billing of transactions or direct marketing. There are transactions to deal with across the various DML operations you set up. There is also security compliance to handle, and your database engine can assist you with that (logins, permissions, databases, backup and data encryption). Good for us.

But overall, I still believe the most important thing is DENSITY!

So I decided to build, for all my audits, a DENSITY scorecard for every database, alongside all the server settings I need to review when I audit a client's production database environment.

If you execute this statement on a specific table:
EXEC sp_spaceused '<Schema>.<Table>', you obtain very useful information such as the number of rows and the total size reserved in KB. There are other ways to get the size, using DMVs, but I like this one because it is pretty simple.

Then you build a stored procedure that runs the statement on all tables and stores the results in one table.

You then calculate the number of data pages from the total allocated, knowing that a data page is 8 kilobytes, the minimum unit SQL Server reads. It is therefore important to know the number of data pages. You then divide the number of rows by this number of pages to get the rows-per-page ratio: the higher the number, the denser your table (this includes all its indexes).
Obviously you can do the same calculation per index (using DMVs again), but I like to keep it simple at the table level.
You can then rank the tables from low density to high density, and suspect issues on the tables with low density.
You can also scorecard the whole database by calculating the same ratio over the summed numbers:

Sum(Rows) / (Sum(Total Allocated Storage in KB) / 8 KB)
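
As a minimal sketch of the scorecard (the temp table name is mine, and sp_MSforeachtable is an undocumented but widely used procedure):

-- Minimal sketch: capture sp_spaceused for every table, then compute rows per 8 KB page.
CREATE TABLE #SpaceUsed
(TableName sysname, NumRows varchar(50), Reserved varchar(50),
 DataSize varchar(50), IndexSize varchar(50), Unused varchar(50))
GO
INSERT INTO #SpaceUsed
EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?'''
GO
SELECT TableName,
       NumRows,
       CONVERT(bigint, REPLACE(Reserved, ' KB', '')) / 8 AS DataPages,
       CONVERT(decimal(18,2), NumRows)
         / NULLIF(CONVERT(bigint, REPLACE(Reserved, ' KB', '')) / 8, 0) AS RowsPerPage
FROM #SpaceUsed
ORDER BY RowsPerPage ASC   -- lowest density (most suspect tables) first
GO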

You can also remove the outliers (the tables that are far too dense or not dense enough, when they are really far off the median value). However, if their weight in the overall average is large, I would not advise it.

In my experience, when table density was higher than 10 rows per page, we were usually in the clear regarding performance and storage. We usually ran into issues when the density was between 1 and 3 rows per page.

This scoring methodology allowed us to pinpoint the tables with storage issues and, overall, the databases with low density.

As stated above, there are many ways to improve density, and the benefits for performance and storage usually show up quickly.

 

 

 

 

2 October 2012, 11:02

During the past ten years, websites have gone through different stages in their need for database tools. In the beginning, websites stored nothing at all in databases, because they primarily offered simple services based on application algorithms or were institutional websites. Then came e-commerce and extranets, where companies needed to store some information in order to secure it and give access to specific data. Finally, the internet became a tool to track every single click (like Xiti or other major advertisers). Information has never been so important. Google, based on its search engine, got so many people using its software technology that it competes directly with Microsoft. And sure, the search engine algorithms are important, but they still need to crunch billions of data points.

But as we raise the stakes for sales and revenue acquisition, we raise the complexity of the services provided on the internet. Users demand performance and sophisticated services. Therefore, application servers need to embed intelligence, usually driven by a humongous amount of data, in data mining technologies for example. E-commerce is becoming the main purchase channel in specific industries such as consumer electronics. Payment transactions have to be extremely secure and perform very well. Since the catastrophic Enron affair, these systems have to respect strict rules that require companies to log every single event that could be a point of failure within the transaction process.

All of that makes databases much more complex to manage, and I have noticed that you now have to manage not only your databases efficiently but also your data. Some might think that data management is just ensuring referential integrity, but it is not. Not at all, actually. In a highly transactional environment, we might even think that referential integrity should be delegated, in favor of performance and maintenance.

And here is why my title is "…data versus database management". A database server gives you the ability to store data within ACID transactions and to query (select, update, delete) data in a more comprehensive way, with a server dedicated to that.

All of that is great and nice, but it is theoretical. Fortunately, there are simple practices to follow. No matter how great the server engine is, the DBA is still the man! There is a shift now in what the DBA is.

Now, the DBA does not have to ensure referential integrity at all costs. He has to balance performance and integrity.

Now the DBA does not have to normalize the data at all costs. As storage gets cheaper, he has to make sure performance is ensured and sometimes accept some denormalization.

Now the DBA does not have to run one big transaction. He can split the process into small transactions and perform the actions asynchronously. Once again, performance makes the difference. Also, in transactional environments, as auditors want to make sure there is no breach, DBAs have to run asynchronous tasks that follow specific workflow rules.

This is a challenging time and it is all good for DBAs. More and more, data is becoming the very core of a lot of online businesses. DBAs are starting to be gurus, the people who carry the essence of the business.

Now DBAs have to elevate their skills. Not only do they have to ensure that the database servers are up and running at full speed, they also have to manage DATA more efficiently and give priority to performance rather than integrity.

Integrity is the enemy of performance. Although it is essential, so that you do not store garbage data, it is no longer the essence of the business. More and more businesses trust the application layer to take care of integrity and, to some extent, there is some truth to that.

Now, I still think we should draw a distinction as to where databases enforce integrity rules. And this distinction is based on one question: is the server storing mostly configuration data or transactional data?

To give you an example, let's say your company is an online payment facilitator. It is very true that you cannot skip enforcing integrity when you set up a new account, a new market or a new payment type. However, you should not be invasive when the application server inserts payment transactions. If you enforce integrity on a billion-row table with several foreign-key, unique and conditional constraints, with applications writing to this table every 10 ms, you might get bottlenecks and heavy reads just to enforce that integrity. It might be good to set up this integrity at first, but if your online payment platform wants to make money, avoid it afterwards. If your application layer knows there are only three types of payment and only ever inserts those three types into the database, that is sufficient integrity. At some point, just as every manager of a team has to delegate, the database has to delegate integrity to the applications. In other words, database servers should TRUST application servers.

Now, I am not going to make a lot of friends by saying that, but it is true. We live in a pragmatic world where a payment should not take more than one second to process. Beware of performance!

And if you manage your data instead of managing your database, you will:

1/ make your life easier.

2/ build a system that is not dependent on a database engine that might just die on you.

Ideally, you want your database engine to be as good as possible, but what you want most is to manage your data the best way. The DBA has a new challenge: his job has to move closer to the business.

Here are some good practices to achieve better performance in a transactional environment:

1/ Federate your transactions: having many servers to write to not only gives you more power and more read/write parallelism, it also gives you a natural way of implementing high availability strategies. If one node is down, you write to the other one.

2/ Keep your database transactions as small as possible and use uncommitted reads as much as possible: be pragmatic, how often do you really need committed reads? Beware of committed reads, they can block your server processes. The smaller your transaction is, the better, as you free memory, connections and tables sooner (see the sketch after this list).

3/ Implement a thorough partition analysis: before going live, try to have an idea of which transactions are needed in real time for reporting and for updates. Set up your partitioning scheme accordingly, so that you can move "old news" transactions aside and do maintenance on them (extracting, purging, reindexing, defragmenting) without disturbing your current activity.

4/ Think data instead of database. When you think about it, the smaller your data is to manage, the easier your database will be to manage. If you have the opportunity to think about better partitioning (see 3) or a better storage solution to keep your production data small, you will make your life easier.

5/ Identify your groups of data: configuration data and transactional data are different, and so should your servers be. When you just want to read a client's configuration, your data repository should emphasize reads and integrity; when you insert transactions, it should emphasize writes and performance.

6/ Delegate to and trust the application layer: for a lot of reasons, it is good to cache data within your application server in certain situations. It gives the application the ability to run without the database, and better performance for highly recurring requests for data that does not change often. It frees up database CPU and RAM. The database engine can always cache execution plans, but it still has to run the query, or at least send the result over the network. DBAs and application engineers have to find the right balance and know where to delegate the query. More importantly, when you insert or update within the database, constraints can slow performance down. The database should trust the application. One way of building that trust is the use of application web services, which enforce security by limiting IP ranges and setting up certificates, and which establish the trust the database needs. Obviously, if the database is accessed directly by 1000 different applications, it will have a hard time trusting all of them; but if it is accessed via a web service, it becomes more comfortable to delegate some of the integrity rules.
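
To illustrate point 2 above, here is a minimal sketch (the Sales.OrderHeader table, its columns and the 5,000-row batch size are all made up): reporting reads run under the uncommitted isolation level, and the update is broken into small batches so locks are held briefly.

-- Minimal sketch of point 2: uncommitted reads and small write transactions (made-up names).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED   -- dirty reads accepted for this kind of report
SELECT COUNT(*) FROM Sales.OrderHeader WHERE OrderStatusId = 1
GO
SET TRANSACTION ISOLATION LEVEL READ COMMITTED     -- back to the default for the writes
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (5000) Sales.OrderHeader            -- small slice at a time
    SET OrderStatusId = 2
    WHERE OrderStatusId = 1
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
END
GO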

 

Now, with the last point (6), some DBAs might think the database will end up being just a storage space. Do not worry about it. The database engine will still do better querying, as it is a set-based engine. It will still do better storage compression. It will still provide better recovery strategies for your data. And let's be honest, isn't it better for our careers to think that we manage data and not databases? Because, think about it, data is moving all the time, not the database. In essence, it is harder to manage volatile systems than conventional, easy-to-follow systems. So the big bucks will go to the people who manage the unconventional systems, not the opposite!

 

 

 

 

 

 

 

25 August 2010, 10:30

Hello guys,

 

Today we will discuss the different high availability (HA) strategies you have on SQL Server.

 

Basically, it all depends on your needs, your SLA towards your clients and the level of satisfaction you would like to offer them.

Also, it depends on your wallet. Some HA strategies are very expensive, some are very cheap.

I am going to rate each solution from 1 to 10 for each criterion, 10 being the best and 1 the worst.

 

Here are the questions you should consider:

 

1. How much money would I lose if my system did not work for 24 hours?

 

It is pretty reasonable to think that you would need one full day to rebuild your database server, given a robust backup regime and a good database administrator. Contrary to a web server, a database server can take more time to rebuild. If your primary server crashed and you did not have an HA strategy in place, you would have to reinstall SQL Server on another machine, restore all databases, rebuild all server objects that are not within msdb or master, and remap all your data sources or switch the DNS lookup towards the new server.

If one day of downtime does not make your company suffer, you do not need an HA strategy! It would be too expensive to maintain, hardware- and software-wise, and it is not necessary for your company. Obviously, as a DBA, it is hard for me to say, but it makes sense.

How much money you can lose should also indicate how much budget you should dedicate to your disaster recovery plan and high availability solution, if any.

 

2. What is your SLA or satisfaction level?

 

If one day is too much, then it all depends on the degree of HA desired. This can be real-time and low maintenance, real-time and high maintenance, not real-time and low maintenance, etc.

For example, log shipping is less resource-intensive than mirroring but does not provide a real-time solution. For many companies it is acceptable to risk losing up to 15 minutes of data and to recover from a crash immediately. In the meantime, neither log shipping nor mirroring (until the next release of Microsoft SQL Server, according to my internal contacts at Microsoft) provides HA for your server objects. This means you would need to take care of SQL Server jobs, DTSX packages, credentials, Database Mail, logins and Service Broker separately: log shipping and mirroring only cover the databases.

Another solution is clustering, which gives you real-time coverage of your databases and also of your server objects. As storage is shared in this solution, you need to make sure your storage area does not become a single point of failure. For performance, you would also need to do everything possible for your SAN to be set up for optimized database file storage.

 

Here are the ratings of each solution for this particular criterion: minimizing downtime and data loss.

Sql server cluster: 9

(little chance of significant downtime or data loss)

Mirroring: 8

(little chance of significant downtime (maybe logins not deployed correctly) or data loss, but possibly some server objects)

Log Shipping: 6

(some downtime possible as failover is manual; also, logins may not be deployed correctly on the secondary server, along with some other server objects)

Replication: 6

(some downtime possible as failover is manual; also, the publication needs to be broken to make your data available, and proper installation is required (to avoid identity reseeds, to mark foreign keys and triggers as NOT FOR REPLICATION, etc.))

Sharding: 5

(with a proper architecture, sharding can also avoid significant downtime, but it is hard to find the right architecture; sharding is almost the opposite of one centralized database)

 

3. How much maintenance is required?

 

Some solutions require very recurrent care, others do not. Upgrading and patching can also be far more difficult for one solution compared to another.

 

Here are the ratings of each solution for this particular criterion: maintenance (a higher score means less maintenance required).

Sql server cluster: 7

(once installed, maintenance is minimal but still requires senior-level management)

Mirroring: 7

(once installed, maintenance is low but still requires specific care)

Log Shipping: 8

(once installed, maintenance is low but still requires proper supervision)

Replication: 7

(once installed, maintenance is low, especially if properly installed (distributor separated from the publisher), but it still requires proper supervision)

Sharding: 5

(versioning and deployment are much more challenging, as are merging or splitting shards while activity is running; more backups to do, more data warehousing work: this is probably its most important drawback)

 

4. How many human resources (DBAs, system engineers) do you want to dedicate?

 

At installation, a SQL Server cluster is intensive in human resources: it requires a network engineer to set up all the proper IPs and the private VLAN, system engineers to set up the shared disk resources and the cluster groups, and a DBA to install the SQL Server services in the cluster (except SSIS, installed as active/active). It is much easier, though, to manage a SQL Server cluster on a daily basis, as all server objects are maintained in the cluster, even server resource allocation such as memory, CPU allocation or degree of parallelism. With a virtual IP, applications simply connect to one virtual IP or server name. When a failover is needed, management is fairly easy: you just move the cluster group, which brings all the resources onto the secondary server, now becoming the active one, in less than 30 seconds. Your SPOF is at the storage level: controller cards or disks that fail. Usually this is where your cost will go: securing your storage tier.

All of this requires senior-level engineers, even though on a daily basis there is nothing to do.

 

Mirroring is not too difficult to install. A network engineer would ensure the DNS suffix and proper private network and firewall settings, and the DBA would set up all the security correctly as well as synchronize the databases. Once all that is done, daily maintenance is not too difficult, but activity needs to be supervised, as mirroring can suffer performance-wise during large operations such as massive reindexing (suspend mirroring during reindexing). Also, server objects have to be created on both sides with their server specificities (keep the login SID, make sure to use localhost as the server name in DTSX packages, etc.). Automatic failover is ensured with a witness server. Manual failover is also fairly easy to deal with. The application data source needs to declare a failover partner so it can switch to the partner in case of a failover.
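
As a minimal sketch (the database name, login name, password and SID value are all made up): re-creating the application login on the mirror with the SID read from the principal, and forcing a manual failover.

-- Minimal sketch, made-up names. On the mirror server: re-create the login with the
-- principal's SID (visible in sys.sql_logins) so database users still map after a failover.
CREATE LOGIN AppLogin
WITH PASSWORD = 'S0me$trongPwd!',
     SID = 0x9C9AEB0C7F6B4A4D9E2B7C1A2B3C4D5E
GO
-- On the principal, while the mirroring session is SYNCHRONIZED: manual failover.
ALTER DATABASE SalesDB SET PARTNER FAILOVER
GO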

Log shipping might be the easiest solution and might be the best one to implement for most companies. It just requires proper networking between the two servers; the DBA can very easily set up log shipping between them, plus proper monitoring.

 

Here are the ratings of each solution for this particular criterion: human resources required.

Sql server cluster: 9

(once installed, DBA only, mid-level)

Mirroring: 9

(once installed, DBA only, mid-level)

Log Shipping: 9

(once installed, DBA only, mid-level)

Replication: 6

(once installed, DBA only, senior level)

Sharding: 6

(once installed, DBA only, senior level)

 

5. How much performance can you afford to lose?

 

This is an obvious equation: adding redundancy usually adds resource usage. For example, high-safety mirroring does not give as much performance as asynchronous mirroring. The first provides real-time mirroring of all your database operations but requires a two-phase commit on each transaction, whereas asynchronous mirroring is not real-time but barely influences performance. Asynchronous mirroring is still a good deal because it is quasi-real-time.
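
As a minimal sketch (SalesDB is a made-up database name), switching a mirroring session between the two modes is a one-line change run on the principal; note that the asynchronous, high-performance mode requires Enterprise edition.

-- Minimal sketch, made-up database name.
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL   -- synchronous, high-safety mirroring (two-phase commit)
GO
ALTER DATABASE SalesDB SET PARTNER SAFETY OFF    -- asynchronous, high-performance mirroring
GO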

SQL Server clustering provides good performance, as it does not require your SQL code to be replayed on two database servers. Redundancy is offered at the storage level, and the cluster administration software enables automatic failover.

That is probably the biggest advantage of sharding and grid solutions: the performance loss is almost nil. You will probably even gain performance by sharding or setting up a grid. Writes on a grid might be slightly impacted, but with asynchronous transactions on the other grid node you will not feel much impact. With sharding, each write goes to only one shard, limiting the resource load to that dedicated shard. Scalability is the key in these two solutions.

Log shipping is also a good solution, as the backups and transfers should not hinder performance much on the primary server. You can also use your secondary servers as read-only databases for near-real-time reporting data sources. Mirroring can give a small advantage on Enterprise edition with database snapshots of your mirrored databases, but it is limited to exposing a single passive node.
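
As a minimal sketch of exposing a log-shipping secondary as read-only (the database name and file paths are made up), each log backup is restored WITH STANDBY so the database stays readable between restores:

-- Minimal sketch, made-up database name and paths, run on the secondary server.
RESTORE LOG SalesDB
FROM DISK = N'\\backupshare\SalesDB\SalesDB_20120825_1030.trn'
WITH STANDBY = N'D:\MSSQL\Undo\SalesDB_undo.dat'   -- leaves the database read-only between restores
GO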

SQL Server clustering does not give you more performance, but it is not impacted either, so it is a pretty good solution overall.

Replication can also help expose data on a different server, avoiding the resource usage of your reporting tools and cubes on the primary source. Replication, like mirroring, can severely hinder performance when the logs are not getting through, because of a network failure, a security disruption (replication and mirroring), or human error (which can quickly happen, especially with replication). Replication can, for example, make your transaction log grow, as transactions are not marked as processed by the distributor. Mirroring can also become a bottleneck if your log does not get through or your mirroring is suspended for maintenance (reindexing for example).

 

Here are the ratings of each solution for this particular criterion: high performance.

Sql server cluster: 7

(once installed, your resources are not much impacted by the cluster; however, it does not help performance either)

Mirroring: 5

(once installed, your resources are somewhat impacted, but you can use snapshots of your mirrored databases on Enterprise edition to expose your data to reporting)

Log Shipping: 8

(once installed, your resources are not impacted much, and it often uses resources your disaster recovery plan would use anyway (I am thinking of your log backups and their transfer); your secondary server can expose the database in standby mode)

Replication: 6

(once installed, replication is resource-intensive unless you dedicate a distribution server; your data can be exposed for reporting)

Sharding: 9

(once installed, the solution can load-balance your reads and your writes, as you can access several shards)

 

Conclusion:

Depending on where your company stands on the criteria above, the best solution for your company can be one of the solutions above or even another one (such as third-party replication, even at the subsystem level). I have seen success with every high availability strategy.

I think log shipping is often dismissed too quickly, but one thing I really like about it is that your disaster recovery plan usually already handles log backups. Log shipping does not involve much more work and is very inexpensive.

HA can also be done geographically. Depending on the network speed between the two datacenters, all of the solutions can work, but that comes at a definite price. Log shipping stays the cheapest solution, mirroring is not far behind, and sharding is not that expensive either. Geo-clustering is definitely expensive, as you need to implement serious subsystem redundancy (like RecoverPoint from EMC).

 

I hope this article can help. Do not hesitate to comment.

 

 
