20 February 2017

I have struggled with one client on queries dealing with date ranges, type ranges or many filters, on very large tables that often had low density. You knew one index would really improve things, but it would have to be based on a filter that was non-deterministic.

Let's take a classic example: you have a website or an API that queries the orders that are not yet completed, per account, for the last day.

On SQL Server, many indexes can help, and data partitioning can as well, but in some cases you have a query with so many filters that only cares about the last day, or even more precisely... the last hour.

You know this index would be huge without a filter, and your DML operations would suffer greatly!

Additionally, your table is not very dense, with tons of columns (well, you did not design the table and you are, sure, frustrated, but you have to deal with the current design).

Finally, the query is one of the most frequently executed on your platform.

Is there a solution?

YES!

 

 

The solution is filtered indexes! It is nice because it is a Standard or Web edition feature, so you can use it freely.

The index stays very small because it only cares about the last rows (for example 100K rows instead of 40 million), so the reindex takes about 2-3 seconds online (Enterprise edition) and 1 second offline.

So even on Standard edition the blocking would not be dramatic.

So let's do an example.

Let's assume I have an order table.

First let's prepare the table:

CREATE DATABASE DynFilteredIndexes
GO
USE DynFilteredIndexes
GO
CREATE Schema Orders authorization dbo
GO
CREATE TABLE Orders.Header 
(OrderId int identity(1,1) not null,
DateOrdered datetime not null,
AccountId smallint not null,
OrderTotal decimal(18,2) not null,
OrderStatusId tinyint not null
)
GO
ALTER TABLE Orders.Header ADD CONSTRAINT PK_OrdersHeader PRIMARY KEY CLUSTERED (OrderId)
GO
SELECT top 20000
CONVERT(SMALLINT,ROW_NUMBER() OVER (Order by A.Object_Id)) AccountId
INTO #AccountId
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO
SELECT top 30000
CONVERT(SMALLINT,ROW_NUMBER() OVER (Order by A.Object_Id)) Day
INTO #DayId
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO
SELECT top 1000
CONVERT(DECIMAL(18,2),ROW_NUMBER() OVER (Order by A.Object_Id))*100.0 OrderTotal 
INTO #OrderTotal
FROM Sys.Objects A CROSS JOIN Sys.Objects B CROSS JOIN Sys.Objects C 
GO

DECLARE @Rand int , @row int, @AccountIdFrom int, @AccountIdto int, @OTFrom int, @OTTo int
DECLARE @Cnt int = 1

-- Insert up to 5000 random batches of orders to populate the table
WHILE @Cnt < 5000
BEGIN
SET @Rand = RAND()*1000
SET @Row = RAND() * 20000
SET @AccountIdFrom = RAND() * @Rand
SET @AccountIdTo = RAND() * @Row
SET @OTFrom = RAND() * @AccountIdTo
SET @OTTo = RAND() * @AccountIdTo
-- Debug output: shows the random boundaries used for this batch
SELECT @Rand, @Row,  @AccountIdFrom, @AccountIdTo,@OTFrom, @OTTo, @Rand % 2 + @Row % 2 + @AccountIdFrom % 2

INSERT INTO Orders.Header 
(AccountId, DateOrdered, OrderTotal, OrderStatusId)
SELECT TOP (@Row) Accountid, DATEADD(DAY, -Day, GETDATE()) DateOrdered, OrderTotal, @Rand % 2 + @Row % 2 + @AccountIdFrom % 2 OrderStatusId FROM #AccountId A CROSS JOIN #DayId B CROSS JOIN #OrderTotal C
WHERE AccountId BETWEEN @AccountIdFrom AND @AccountIdTo
AND OrderTotal BETWEEN @OTFrom AND @OTTo
SET @Cnt = @cnt + 1
END
GO
 

Let's now see how many rows we have per date, account id, order total and status, to check how randomly we inserted into the order table:
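A couple of simple aggregate queries can show the distribution (these are illustrative checks I am adding here, not part of the original demo script):

SELECT CONVERT(DATE, DateOrdered) AS OrderDay, COUNT(*) AS NbRows
FROM Orders.Header
GROUP BY CONVERT(DATE, DateOrdered)
ORDER BY OrderDay DESC
GO
SELECT OrderStatusId, COUNT(*) AS NbRows
FROM Orders.Header
GROUP BY OrderStatusId
ORDER BY OrderStatusId
GO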

 

 

 

Counts: statistical distribution of the randomization

Let's now add indexes:

Classic indexes:

CREATE INDEX IX_OrdersHeaderDateOrdered ON Orders.Header (Dateordered)
CREATE INDEX IX_OrdersHeaderAccountId ON Orders.Header (AccountId)
 

The filtered index:

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > '20170219'

is created almost instantly, thanks to the index on DateOrdered created previously.

Note that this index cannot be created:

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > DATEADD(DAY,-2,GETDATE()) 
 

Msg 10735, Level 15, State 1, Line 4
Incorrect WHERE clause for filtered index 'IX_FilteredLastDay' on table 'Orders.Header'.

 

 

When I calculate the size of the indexes:

SELECT
    @@SERVERNAME                       AS ServerName,
    DIPS.database_id                   AS DatabaseId,
    'DynFilteredIndexes'               AS DatabaseName,
    DIPS.object_id                     AS TableId,
    O.name                             AS TableName,
    DIPS.index_id                      AS IndexId,
    I.name                             AS IndexName,
    S.name                             AS SchemaName,
    DIPS.partition_number              AS PartitionNo,
    DIPS.alloc_unit_type_desc          AS AllocUnitTypeDesc,
    DIPS.page_count                    AS PageCount,
    P.rows                             AS RowsCount,
    DIPS.avg_fragmentation_in_percent  AS Frag,
    GETDATE()                          AS DateChecked,
    DIPS.index_type_desc               AS IndexType
FROM sys.dm_db_index_physical_stats (DB_ID('DynFilteredIndexes'), NULL, NULL, NULL, 'DETAILED') AS DIPS
INNER JOIN [DynFilteredIndexes].sys.partitions AS P ON DIPS.object_id = P.object_id
                                                   AND DIPS.index_id = P.index_id
                                                   AND DIPS.partition_number = P.partition_number
INNER JOIN [DynFilteredIndexes].sys.indexes AS I ON P.object_id = I.object_id
                                                AND P.index_id = I.index_id
INNER JOIN [DynFilteredIndexes].sys.objects AS O ON I.object_id = O.object_id AND O.type = 'U'
INNER JOIN [DynFilteredIndexes].sys.schemas AS S ON O.schema_id = S.schema_id
WHERE DIPS.page_count < 10000

 

The result is clear: 

The filtered index has only 2,161 rows across 8 data pages; no wonder it was created super fast!

 

 

 

Index size (rows and pages)

Then we can look at the performance:

Here are three SELECTs (using "*" to include all columns, though explicitly listing the columns is obviously better practice):

SELECT * FROM Orders.Header WITH (INDEX = IX_FilteredLastDay) WHERE DateOrdered > '20170219' AND AccountId = 18
GO
SELECT * FROM Orders.Header WITH (INDEX= IX_OrdersHeaderAccountId) WHERE DateOrdered > '20170219' AND AccountId = 18
GO
SELECT * FROM Orders.Header WITH (INDEX= IX_OrdersHeaderDateOrdered) WHERE DateOrdered > '20170219' AND AccountId = 18
GO

Here is the profiler result:

Profiler result

On the profiler result, the filtered index is clearly the fastest, followed by the date index and then the account index, which makes sense as that is their order of selectivity.

 

As you can see, the filtered index footprint is very, very tiny: 78 IO reads and 0 ms of CPU!

It is even clearer if I add OrderStatusId to the WHERE filter, which brings it down to 26 IO reads.
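For reference, the query with the extra status filter would look like this (the status value 1 is just an illustrative example, not necessarily the one used in the trace below):

SELECT * FROM Orders.Header WITH (INDEX = IX_FilteredLastDay) WHERE DateOrdered > '20170219' AND AccountId = 18 AND OrderStatusId = 1
GO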

 

 

Profiler result with the additional filter

I could increase the performance even further by adding DateOrdered to the included columns, since the current plan generates a key lookup:

Execution plan with key lookup
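A possible variant that removes the key lookup is simply to add DateOrdered to the included columns (a sketch, not necessarily the exact index used for the screenshot above):

CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal, DateOrdered)
WHERE DateOrdered > '20170219'
WITH (DROP_EXISTING = ON)
GO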

So you can see that the index is really fast. The next question is: how do you maintain the index?

Well, you create a job that runs this script once every morning:

USE DynFilteredIndexes
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- Rebuild the filtered index with yesterday's date as the new filter boundary
DECLARE @Date DATE = DATEADD(DAY, -1, GETDATE())
DECLARE @DateStr VARCHAR(16) = CONVERT(VARCHAR(8), @Date, 112)  -- yyyymmdd
DECLARE @Stmt VARCHAR(MAX) =
'
CREATE INDEX IX_FilteredLastDay ON Orders.Header(AccountId, OrderStatusId)
INCLUDE (OrderTotal)
WHERE DateOrdered > '''+@DateStr+'''
WITH (ONLINE = ON, DROP_EXISTING = ON)  -- ONLINE = ON requires Enterprise edition
'
PRINT @Stmt
EXEC (@Stmt)

It will run in about 1 second and give you a very nice index (no real DML impact and super fast querying)!

 

The full T-SQL script is included above.

 

I hope you enjoyed the read!

Best!

15 January 2013

Hello guys,

Today's topic relates to a series of trainings I gave at one of the top 10 e-commerce companies in Europe, where I worked as a database/ETL architect for two years. The idea of the series was to train junior to senior level developers on how to optimize their application code for database-related operations.

One of the sessions I particularly liked was about sending arrays of values to the database in order to perform a series of DML operations.

I still have the video series, and it has no relation to anything done at this e-commerce company. The copyright is mine, so I can definitely share it with people. However, the videos are way too big to share on a blog.

I have also seen similar training sessions on the net, but I would be happy to discuss your projects and how to implement this in ColdFusion, .NET, PHP... or any other kind of web language.

I will summarize my findings and my advice in this article, regarding a need that is quite often overlooked in the web industry.

1. Ad-hoc versus stored procedure

As a DBA, there is absolutely no controversy here for me: application code should call stored procedures when reaching the database, for any DML operation or even for SELECTs.

Some DBAs say it is because of performance. Well, that reason is shaky. It is true that stored procedures help the SQL Server engine reuse execution plans. However, performance can sometimes be an issue when dealing with searches.

Many searches like "give me the orders for specific dates and/or with specific customer names and/or specific cities, etc." have what we call dynamic portions of (hopefully) SARGable conditions, and a stored procedure with CASE WHEN conditions will not optimize those queries.

There has been a lot of debate here, but the truth and the solution are somewhat simple: you will eventually use parameterized queries.

Parameterized queries will help you gain the performance needed, increase security and avoid SQL injection, with proper parameter checking at the application layer or at the stored procedure layer.

And in that last sentence I gave my preference: stored procedures.

Yes, indeed! Even though some queries might be very dynamic, for security reasons (and only a little for performance reasons), stored procedures are the unequivocal choice for me.

That said, if you want to update several rows of a table (for example, you would like to update the shopping cart in one stored procedure call), you end up passing several values for the same parameter (for example, CartDetailIds) to the database.

2. Repeated stored procedure calls vs one stored procedure call

Keeping the same example, one click of a button can update the shopping cart (additions of quantities, removals of products, etc.). As I said, there are three ways to update your cart:

The first is to build your DML operation dynamically and execute it in one dynamic call: as I stated, this is not very good for security reasons.

Another is to call the same stored procedure n times, updating each cart detail one at a time, effectively fetching through stored procedure calls. This solution is not good either, for two reasons. The first is that you will need to deal with a transaction block at the application layer, and I would rather keep the transaction as short as possible. The second is that performance is significantly degraded. Do the test; I did, and I could see significant differences (I can share the tests upon request).

Finally, you can pass all the details to one stored procedure call and let the stored procedure deal with the transaction and the update operation at once. In this example, the trick is to pass some kind of array of data with CartDetailId and Quantity values, where a quantity of 0 means removing the line. We will discuss later how to pass such arrays, but first, conceptually, I would like to point out that you could definitely improve performance by using a MERGE statement (to deal with inserts, updates and deletes of cart details at the same time) and then the OUTPUT clause to return the results of the changes to the application (notably the inserts, as new CartDetailIds would be created). If you had no other table to update (which is probably not the case, as you would probably update the Cart table for the total amount column or a last-update-date column), you could rely on an implicit transaction and not even need an explicit transaction block.
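To illustrate the idea, here is a minimal sketch of that single MERGE with an OUTPUT clause. The dbo.CartDetail table and the column names are hypothetical, and I assume the array of (CartDetailId, ProductId, Quantity) values has already been loaded into a table variable by whichever technique from section 3 you pick:

-- Hypothetical cart detail table, just for the sake of the example
CREATE TABLE dbo.CartDetail
(CartDetailId INT IDENTITY(1,1) PRIMARY KEY,
 CartId INT NOT NULL,
 ProductId INT NOT NULL,
 Quantity TINYINT NOT NULL)
GO
DECLARE @CartId INT = 1001
DECLARE @CartDetails TABLE (CartDetailId INT NULL, ProductId INT NOT NULL, Quantity TINYINT NOT NULL)

-- One MERGE handles inserts (new lines), updates (new quantities) and deletes (quantity 0),
-- and OUTPUT returns the resulting changes, including the newly created CartDetailIds
MERGE dbo.CartDetail AS T
USING (SELECT CartDetailId, ProductId, Quantity FROM @CartDetails) AS S
    ON T.CartDetailId = S.CartDetailId AND T.CartId = @CartId
WHEN MATCHED AND S.Quantity = 0 THEN
    DELETE
WHEN MATCHED THEN
    UPDATE SET T.Quantity = S.Quantity
WHEN NOT MATCHED BY TARGET AND S.Quantity > 0 THEN
    INSERT (CartId, ProductId, Quantity) VALUES (@CartId, S.ProductId, S.Quantity)
OUTPUT $action AS ChangeType, inserted.CartDetailId, inserted.Quantity;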

3. Solutions for passing arrays of data to stored procedures

There are four solutions for this. I tested them all, and I will comment on them from worst to best performance.

The worst one is calling a stored procedure several times, but that was already discarded for the reasons stated above. On a performance scale, this solution gets 1/10.

The second one is passing arrays as delimited lists (text with a separator). For example, in our case we could pass @CartDetailIds = '541001,541002,541004' with @CartDetailQuantities = '1,0,5'. The stored procedure then parses the texts and allocates to each CartDetailId the corresponding quantity. To do so, you need to create a parsing function. This is pretty classic, and this kind of function can be written in any SQL Server version or any database engine (Oracle, MySQL, etc.), which makes the solution universal and therefore useful if you are not committed to a specific technology. You then use this function to gather the pieces and insert all the data into a single table variable; once that preparation is done, the DML operation becomes possible. In terms of performance, I would rate this solution a good 7/10. It does the job nicely and addresses all the concerns: security, performance, scalability.
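As a sketch of that approach (the split function below is a simple hand-rolled one, assumed rather than taken from the training material):

-- Simple splitter: returns one row per value with its position in the list
CREATE FUNCTION dbo.fn_SplitList (@List VARCHAR(MAX), @Separator CHAR(1))
RETURNS @Result TABLE (Position INT IDENTITY(1,1), Value VARCHAR(100))
AS
BEGIN
    DECLARE @Pos INT = CHARINDEX(@Separator, @List)
    WHILE @Pos > 0
    BEGIN
        INSERT INTO @Result (Value) VALUES (LEFT(@List, @Pos - 1))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(@Separator, @List)
    END
    IF LEN(@List) > 0 INSERT INTO @Result (Value) VALUES (@List)
    RETURN
END
GO
-- Inside the stored procedure: rebuild the (CartDetailId, Quantity) pairs by joining on position
DECLARE @CartDetailIds VARCHAR(MAX) = '541001,541002,541004', @CartDetailQuantities VARCHAR(MAX) = '1,0,5'
SELECT CONVERT(INT, I.Value) AS CartDetailId, CONVERT(TINYINT, Q.Value) AS Quantity
FROM dbo.fn_SplitList(@CartDetailIds, ',') AS I
JOIN dbo.fn_SplitList(@CartDetailQuantities, ',') AS Q ON Q.Position = I.Position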

The third one is passing XML. You can pass an array as an XML fragment like <CartDetail id="541001" Quantity="1"/><CartDetail id="541002" Quantity="0"/><CartDetail id="541004" Quantity="5"/>. Once it is passed to an XML parameter @CartDetails of the stored procedure, you can rebuild a table with XQuery and then do the same as above. The performance is good and I would rate it a bit higher than the previous solution; it actually beats it, but not by much. I would give it 8/10. Note that XML is quite verbose. Its advantage is certainly the universality of the language. However, the text is longer to pass, and if you have to pass thousands of rows, that might be a concern.
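A minimal sketch of the XML shredding inside the stored procedure (the attribute names follow the fragment above):

DECLARE @CartDetails XML = '<CartDetail id="541001" Quantity="1"/><CartDetail id="541002" Quantity="0"/><CartDetail id="541004" Quantity="5"/>'
SELECT  D.Node.value('@id', 'INT')           AS CartDetailId,
        D.Node.value('@Quantity', 'TINYINT') AS Quantity
FROM @CartDetails.nodes('/CartDetail') AS D(Node)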

The fourth solution is passing a table-valued parameter (TVP). This solution is by far the most efficient one. Again, during my tests, performance was very good and much better than the previous solution (if you push the stress to hundreds of thousands of rows at once). Instantiating a table can be done on .NET framework 3.5 and above. Another advantage is the referential integrity of the data passed into the stored procedure, as you decide the exact datatype of each column. In our example, Quantity could be a tinyint, allowing values from 0 to 255, reducing storage and eventually improving performance. Your main concern when you use TVPs is how to change a table type that is already used by several stored procedures: you cannot alter a table type while it is in use. You might then consider versioning your TVPs so the release does not happen in one big shot, at the price of more regression testing time. But this can be managed nicely if you know how to deal with versions.
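Here is a minimal sketch of the TVP approach, reusing the hypothetical dbo.CartDetail table from the MERGE example above (the type and procedure names are mine, not from the original training):

CREATE TYPE dbo.CartDetailTableType AS TABLE
(CartDetailId INT NOT NULL PRIMARY KEY,
 Quantity TINYINT NOT NULL)
GO
CREATE PROCEDURE dbo.Cart_UpdateDetails
    @CartId INT,
    @CartDetails dbo.CartDetailTableType READONLY   -- TVP parameters must be READONLY
AS
BEGIN
    SET NOCOUNT ON

    -- Remove the lines flagged with quantity 0, update the others
    DELETE D
    FROM dbo.CartDetail AS D
    JOIN @CartDetails AS C ON C.CartDetailId = D.CartDetailId
    WHERE D.CartId = @CartId AND C.Quantity = 0

    UPDATE D
    SET D.Quantity = C.Quantity
    FROM dbo.CartDetail AS D
    JOIN @CartDetails AS C ON C.CartDetailId = D.CartDetailId
    WHERE D.CartId = @CartId AND C.Quantity > 0
END
GO

On the .NET side, the application fills a DataTable (or a list of SqlDataRecord) and passes it as a SqlParameter with SqlDbType.Structured; the rows arrive already typed, with no parsing needed.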

4. Things that make the small difference

Now that I have talked about performance (wearing my DBA hat), I would like to talk about architecture. I like TVPs for another reason. Sometimes I am a bit too tidy about architecture, and TVPs help me be tidier still. Just as I like to enforce a filegroup other than PRIMARY on all my tables, and just as I like to define datatypes like telephone or email and use those user-defined datatypes on my tables, I like TVPs because I can enforce integrity on all my parameters, even the array ones.

When integrity competes with performance, as foreign keys might, there can be a debate; but when we are just talking about defining a datatype or a check constraint, I am in favor of spending the time on it. Once we have spent that time, maybe an additional day or two to cover all the ground on each table, we achieve state-of-the-art storage. Data partitioning is the same kind of thing: you need to spend time to get the best shape for your database structure.

So TVPs are the same kind of thing. Do not hesitate to spend extra time and ask extra questions of the project manager/business analyst to get the best storage and best performance from your database.

Then everything will fall into place nicely: integrity, security, scalability.

Do not hesitate to ask me questions or to ask for the videos related to this training.

Have a good day!

21 November 2012

1. Understanding the concept

One of the main challenges to a distributed system is how to handle transactions across several servers and technologies. 

What if your application connects to several web services or database servers within the same transaction, and any failure requires a rollback of everything that has been done?

How do you also handle this during an ETL workflow involving several servers in several data centers?

I can give you many examples where transactions are not easy to handle. Note that distributed transactions can also be dangerous and block the entire system if a rollback cannot occur.

 

2. Do not mix technologies

One of the nightmares I have encountered in my consulting life is the following example. Let's assume that a stored procedure, run from a SQL Server job, pushes a large amount of data through a linked server, then updates several tables in the source database, and finally bcps a bunch of data to flat files and sends an email with the errors attached (the errors are bcp'ed to flat files too).

The whole process is wrapped in a transaction block, with a TRY/CATCH block to roll it all back.

In some cases the bcp portion (run with xp_cmdshell) gets blocked by an OS that does not respond, for various reasons (it could be a memory leak). The production supervisor decides to stop the job, triggering a rollback. Unfortunately he waits for hours, the rollback does not complete, and the main table where data was pushed on the destination server stays frozen by a lot of locks. The supervisor calls the on-call DBA, who eventually fixes the issue by ending the OS process that runs the bcp and was hanging on the SQL Server box.

In this case, the main issue is that the transaction is distributed across several pieces of software: the rollback is initiated by the SQL Server engine (by stopping the job), but it has no authority to roll back the DOS command (the bcp) initiated by a process at the OS level.

What does this mean? Do not mix technologies! If everything works well, the transaction will be committed and nothing will fail, but the DOS command is effectively already committed, since the transaction scope within SQL Server has no authority over it.

In this very case, it is much better to consider the tasks independent and run each step only if the previous step succeeded. It is then better to implement it within an ETL/workflow system: at each step, you can constrain success and failure and, based on a logging system, decide whether or not to mark a rollback for the specific technology of the step that failed.

3. Keep it simple

Obviously, a distributed environment requires a certain level of complexity when dealing with transactions, but if you can make it simple, make it simple!

Try to identify what is really transactional and what is not, for two reasons: performance, and keeping your transactions simple.

A good architect will always look for new technology and at various ways to set up transactions in a distributed environment, but will mainly use common sense to make the system viable, especially if one of the components fails.

4. Keep it scalable

The "keep it simple" paragraph is probably the hardest one to live up to since, by definition, a transaction in a distributed environment is complex.

"Keep it scalable" is easier to explain and is plain common sense. Most developers would make it work! Most architects would make it work with the right environment setup and the right software layers.

For example, suppose the task is to purge a multi-million-row table on a SQL Server platform. As the production environment has a lot of concurrency, a good architect will make sure to keep the transactions short by implementing/using indexes and by keeping the DML operations as small as possible (usually in batches).
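A common shape for that kind of batched purge, as a hedged sketch (the table, column and retention period are hypothetical):

-- Delete in small batches so each transaction stays short and locks are released quickly
-- (an index on CreatedDate keeps each batch seek-based)
DECLARE @BatchSize INT = 5000, @RowsDeleted INT = 1
WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.AuditLog
    WHERE CreatedDate < DATEADD(MONTH, -6, GETDATE())
    SET @RowsDeleted = @@ROWCOUNT
END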

5. Keep it manageable

Another thing to focus on is making sure that a complex system has very clear documentation and uses technology that is easy to manage. A classic mistake of many companies is to set up technology that they cannot manage.

Another is to set up a system that is hard to maintain when there is a new feature to implement, or hard to restart when a failure has blocked the system for a while. Both things need to be considered when implementing such a system.

6. ETL system

What do I like about ETL? One thing I really like is the way it forces you to follow a strict workflow. Each task is independent of the transaction scope, so if a failure occurs, you need to set up the contingency plan within your ETL workflow. In SSIS, event handlers let you set up generic handling for many different tasks, which is good.

I usually group the SQL tasks that I consider one transaction block within the same stored procedure call. This allows me to roll back several DML operations at once.
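As a minimal sketch, such a stored procedure wraps its DML in one TRY/CATCH transaction block (the staging and target tables here are hypothetical):

BEGIN TRY
    BEGIN TRANSACTION

    -- The related DML operations that must succeed or fail together
    INSERT INTO dbo.OrdersTarget (OrderId, Amount)
    SELECT OrderId, Amount FROM dbo.OrdersStaging

    UPDATE dbo.LoadLog SET LoadedAt = GETDATE() WHERE LoadId = 1

    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION

    -- Surface the error so the SSIS task fails and the workflow contingency kicks in
    DECLARE @Msg NVARCHAR(2048) = ERROR_MESSAGE()
    RAISERROR(@Msg, 16, 1)
END CATCH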

However, when changing technology (BCP, FTP, email, zip, etc.), those steps are handled outside that transaction scope, in their own independent tasks.

My ETL flow then needs contingency plans, where needed, for any error on any task.

Most of the time, the contingency plan is just to log the error; the system then alerts the production team and guides them to restart the system or fix the error, so that the affected transaction gets back into the ETL process.

For specific tasks, you can also set up a rollback plan.

 

7. Development cycle

The classic mistake IT departments make is believing that the development phase is the longest part of the project. Here are, for me, the main phases of a project:

1. Architectural studies: see if the project is possible and fits well within the current information system

2. Architectural implementation: define how to set up the servers and code layers to accommodate the needs

3. Infrastructure installation: install the necessary network and servers

4. Development: develop the different code layers (this task can be done during step 3), based on the architectural implementation

5. QA phase: tests and validation of the code layers, plus load/stress tests

6. Production deployment: deploy to production

7. Production supervision: supervise the deployment for a period.

In my experience, the step that is too often ignored is step 2, the architectural implementation.

It reminds me of a project where I was in charge of both the architecture and the development (oh well, it happens). Phase 1 was fairly quick as it involved the current system in place. Phase 2 could have been very fast as well, but it took me about two weeks to really grasp all the functional concepts and make them technically feasible. The project manager covered most of the functional needs, but technical limitations left the specs largely under-documented. I wrote the technical documentation, but it required a lot of changes, or at least a lot of assumptions about the functional specs, which were not precise enough, especially about what should happen when errors occur (a classic mistake as well).

The project manager gave me two months to deploy the system to production (it was a master data management system) and was worried about the fact that I spent two weeks on step 2.

Well, thanks to that, the development phase took one week (he thought it would take three). The QA phase took one week (the project manager thought it would take two), and the production phase took one day to deploy and one week to supervise.

Even though the project manager had estimated step 2 at only one week instead of two, he was pleasantly surprised that the project overall took five weeks instead of the previously planned eight.

Since the deployment, there have been no bug fixes to implement, only new features.

I wanted to spend time on this story because I think it is why I wanted to be an architect. I believe development starts way too quickly, and most issues come from bad architecture; this is especially true in a distributed environment with distributed transactions!

 

 

7 October 2012

Data density: method of scoring your database

A database is a container for data. The aim of a database is mainly to store data in a way that lets us read and write it efficiently: efficient in terms of securing your data, reading your data in its most up-to-date state, writing your data in the best organized way, etc.

Back in the day, databases were quite small and concurrency or performance issues were rather minor. Now it is the opposite: those are huge issues, and we often call DBAs to the rescue for that matter. PCI and other audit compliance requirements add data security to the mix, which was not a main issue back then either.

Obviously there are many issues a DBA faces, but I think it comes down to one main issue: density. Data density is like population density. Public services often have difficulty keeping post offices open in rural areas just because the need for them is small or scarce; it is like a luxury. Governments would rather manage medium and large cities: it is easier to deliver services, easier to clean streets, easier to deal with things. Of course, public services need to be organized, but they get a nice service for the buck spent.

Well, in databases, one can say the same thing. If the data map of your databases went everywhere on the physical disk, you would have a harder time managing and cleaning your data. I am not talking about having thousands of tables or indexes; I am talking about having data spread all over your physical or logical storage space. For example, a table would gather information about orders, but the orders would be all over the place, with data scattered everywhere.

The first thing is that you would not know how to index your data if you did not know how the orders were coming in. Fortunately, you would probably index your orders by an identifier, and then mainly by the date and the user that ordered. But let's say, for the sake of the example, that you did not know: the orders would be all over your data map. In other words, unmanageable.

The second thing is that you would store a lot of empty space because of the page splits in all your indexes. Storage would be fragmented and you would need more of it.

The third thing is that you would use a lot of resources just for index maintenance, whether during DML operations or reindexing.

The fourth thing is that performance would regularly decrease because of the instability of your indexes and the cost of their maintenance. Concurrency locks would be held longer, transactions would take longer to perform, etc.

All bad stuff. But what is common to all the issues is DENSITY. Ah!

What do I mean by DENSITY? A simple definition: data density, for me, is how much data I can fit in the same physical block of storage space. That is it. We do not need big words. DBAs and architects would ask: what is the depth of your indexes, how many page splits do they perform, what is the chronic fragmentation, etc.

And we can go on and on with different terms. They are all accurate, and to be a good DBA/architect you need to know all the concepts, because you are usually called when the damage is done.

BUT if you are called before the damage is done, the answer is plain and simple: try to fit as much data as possible in the smallest storage space.

Obviously, you do not want to buy a cabinet if what you eventually need to store is a car. The same goes for an index or a table. There are considerations such as fill factors, covering indexes, etc., but all of those are just strategies that, in the long run, serve the same concept: density.

Yes indeed! The goal of the fill factor is to say: I want to make sure I leave enough room when I rebuild or create an index so that there are fewer page splits and, overall, less space taken, so my data stays together, nicely ordered and not too fragmented.

The goal of covering indexes is to limit the number of indexes to create on a table, so that we take less space and both DML operations and reads are overall smarter than if we created several indexes instead of the single covering one.

So my assumption, maybe not shared by many DBAs, is this one: DENSITY is the key!

There are exceptions of course, otherwise the work of a DB architect and DBA would be boring. Also, front-end e-commerce databases are not in the mix. Why? Because the main concern there is obviously not storage but performance and lock concurrency on volatile databases. So NoSQL can be the best deal, and columnstore indexes in some cases, or denormalization in others, are the best options, especially when you work with large cache stores.

No, I am talking about cases where you need to access the data in real time and in a rich way: for example, a CRM database, a logistics database, payment transactions, etc.

My first principle for a good production database is to keep only the data you need in the tables or partitions you care about. I have seen so many client databases with the entire history of orders or transactions kept in the production database. This fits the principle: obviously, if you have a density issue in a large table, that issue is smaller if your table is smaller.

My second principle is to make sure your tables do not have hundreds of columns or mostly empty large-datatype columns, such as text, varchar(2000), xml... In other words, when you write or read data on such tables, your key lookups and index updates complete faster. Developers usually think: why care about datatypes, why not use an int instead of a smallint? This is where the architect/DBA kicks in! Obviously a key-value type table would be pretty dense as there would not be many columns (assuming the clustered index roughly follows the order in which the data is inserted).

The rest (choosing the right indexes, not creating too many indexes, avoiding subselects inside SELECT statements, listing the columns in your SELECT instead of "*", etc.) makes perfect sense and usually ends up following the same principle of DENSITY.

I would not say a database is only that, of course. There is set-based programming, where the database is very useful, for rating and billing of transactions or direct marketing, for example. There are transactions to handle across the various DML operations you set up. There is also security compliance to address, and your database engine can assist you with that (logins, permissions, database, backup and data encryption). Good for us.

But overall, I still believe the main important thing is DENSITY!

So, for all my audits, I decided to set up a DENSITY scorecard for all the databases, in addition to all the server settings I need to review when auditing a client's production database environment.

If you execute this statement on a specific table:
EXEC sp_spaceused '<Schema>.<Table>'
you obtain very useful information such as the number of rows and the total size allocated in KB. There are other ways to get the size, using DMVs, but I like this one as it is pretty simple.

Then you build a stored procedure to run the statement on all tables and store the results in one table.

You then calculate the number of data pages based on the total allocation, knowing that a data page is 8 kilobytes, the minimum SQL Server will read; it is therefore important to know the number of data pages. You then divide the number of rows by this number of data pages to get the rows-per-page ratio. The higher the number, the denser your table (this includes all indexes).
Obviously, you can do the same calculation per index (using DMVs again), but I like to keep it simple at the table level.
You can then rank the tables from low density to high density, and suspect issues on the tables with low density.
You can also scorecard the whole database by calculating the same ratio over the summed numbers:

Density = SUM(rows) / ( SUM(total allocated KB) / 8 )
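A minimal sketch of such a scorecard script (it relies on the undocumented but widely used sp_MSforeachtable helper; the temporary table matches the columns sp_spaceused returns):

CREATE TABLE #SpaceUsed
(TableName SYSNAME, RowsCount BIGINT, ReservedKB VARCHAR(50),
 DataKB VARCHAR(50), IndexSizeKB VARCHAR(50), UnusedKB VARCHAR(50))

-- Run sp_spaceused for every user table and keep the results
EXEC sp_MSforeachtable 'INSERT INTO #SpaceUsed EXEC sp_spaceused ''?'''

-- Rows per page for each table: rows / (allocated KB / 8)
SELECT TableName,
       RowsCount,
       CONVERT(BIGINT, REPLACE(ReservedKB, ' KB', '')) / 8 AS DataPages,
       RowsCount * 8.0 / NULLIF(CONVERT(BIGINT, REPLACE(ReservedKB, ' KB', '')), 0) AS RowsPerPage
FROM #SpaceUsed
ORDER BY RowsPerPage

-- Overall database density score
SELECT SUM(RowsCount) * 8.0 / NULLIF(SUM(CONVERT(BIGINT, REPLACE(ReservedKB, ' KB', ''))), 0) AS DatabaseDensity
FROM #SpaceUsed

DROP TABLE #SpaceUsed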

You can also remove the outliers (the tables that are far too dense or far too sparse compared to the median value). However, if their weight in the overall average is big, I would not advise it.

In my experience, when table density was higher than 10 rows per page, we were usually in the clear regarding performance and storage. We usually saw issues when the density was between 1 and 3 rows per page.

This scoring methodology allowed us to pinpoint the tables with storage issues and, overall, the databases with low density.

As stated above, there are many ways to improve density, and the benefits for performance and storage usually come quickly.

 

 

 

 

2 October 2012

During the past ten years, websites have gone through different stages in their need for database tools. In the beginning, websites stored nothing at all in databases because they primarily provided simple services based on application algorithms, or they were institutional websites. Then came e-commerce and extranets, where companies needed to store some information in order to secure it and give access to specific information. Finally, the internet became a tool to track every single click (like Xiti or other major advertisers). Information has never been so important. Google, based on its search engine, gets so many people using its software technology that it competes directly with Microsoft. And sure, the search engine algorithms are important, but they still need to crunch billions of data points.

But as we raise the bar for sales and revenue acquisition, we raise the complexity of the services provided on the internet. Users demand performance and sophisticated services. Therefore, application servers needed to embed intelligence, usually driven by a humongous amount of data, in data mining technologies for example. E-commerce is becoming the main purchase channel in specific industries such as computer electronics. Payment transactions have to be extremely secure and perform very well. These systems have to respect strict rules since the catastrophic Enron affair, which require companies to log every single event that could be a point of failure within the transaction process.

All of that makes databases much more complex to manage, and I have noticed that you now have to manage not only your databases efficiently but also your data. Some might think that data management is just ensuring referential integrity, but it is not. Not at all, actually. In a highly transactional environment, we might even think that referential integrity should be delegated, in favor of performance and maintainability.

And here is why my title is "...data versus database management". A database server gives you the ability to store data in ACID transactions and to query (select, update, delete) data in a more comprehensive way, with a server dedicated to that.

All of that is great and nice, but it is theoretical. Fortunately, there are simple practices to follow. No matter how great the server engine is, the DBA is still the man! There is a shift now in what the DBA is.

Now, the DBA does not have to ensure referential integrity at all cost. He has to balance performance and integrity.

Now the DBA does not have to normalize the data at all cost. As the storage is cheaper, he has to make sure performance is ensured and sometimes accept some denormalization.

Now the DBA does not have to do one big transaction. He can separate the process into small transactions and perform the actions asynchronously. Once again, performance makes the difference. Also, in a transactional environment, as auditors want to make sure there is no breach, DBAs have to run asynchronous tasks that follow specific workflow rules.

This is a challenging time, and it is all good for DBAs. More and more, data is becoming the very core of a lot of online businesses. DBAs are starting to be gurus, the people who carry the essence of the business.

Now DBAs have to elevate their skills. Not only do they have to ensure that the database servers are up and running at full speed, they also have to manage DATA more efficiently and give priority to performance rather than integrity.

Integrity is the enemy of performance. Although it is essential, so that you do not store garbage data, it is no longer the essence of the business. More and more businesses trust the application layer to take care of integrity, and to some extent there is some truth to it.

Now, I still think we should make a distinction about where databases enforce integrity rules. And this distinction is based on one question: is the server storing mostly configuration data or transactional data?

To give you an example, let's say your company is an online payment facilitator. It is very true that you cannot skip enforcing integrity when you set up a new account, a new market or a new type of payment. However, you should not be invasive when the application server inserts payment transactions. If you decide to enforce integrity on, say, a billion-row table with several foreign-key, unique and conditional constraints, with applications writing to this table every 10 ms, you might get bottlenecks and heavy reads just to enforce that integrity. It might be good to set up this integrity at first, but if your online payment platform wants to make money, avoid it afterwards. If your application layer knows that there are only three types of payment and only inserts those three types into the database, that is sufficient integrity. At some point, just as every manager in a team has to delegate, the database has to delegate integrity to the applications. In other words, database servers should TRUST application servers.

Now, I am not going to make a lot of friends by saying that, but it is true. We live in a pragmatic world where a payment should not take more than one second to process. Watch your performance!

And if you manage your data instead of managing your database, you will:

1/ make your life easier.

2/ build a system that is not dependent on a database engine that might just die on you.

Ideally, you want your database engine to be the best possible, but what you want most is to manage your data the best way. The DBA has a new challenge: his job has to be closer to the business.

Here are some good practices to achieve better performance in a transactional environment:

1/ Federate your transactions: having several servers to write to not only gives you more power and more read/write parallelism, it also gives you a natural way of implementing high-availability strategies. If one node is down, you write to the other one.

2/ Keep your database transactions as small as possible and use uncommitted reads as much as possible: be pragmatic, how often do you really need committed reads? Beware of committed reads, they can lock your server processes. The smaller your transaction, the better, as you free memory, connections and tables sooner.

3/ Implement a thorough partition analysis: before starting the business, try to get an idea of which transactions are needed in real time, for reporting and for updates. Set up your partitioning scheme adequately so that you can move "old news" transactions aside and do maintenance on them (extracting, purging, reindexing, defragmenting) without disturbing your current activity (see the sketch after this list).

4/ Think data instead of database. When you think about it, the smaller your data is to manage, the easier your database will be to manage. If you have the opportunity to think about better partitioning (3) or a better storage solution to keep your production data small, you will make your life easier.

5/ Identify your groups of data: configuration data and transactional data are different, and so should your servers be. When you just want to know a client's configuration, your data repository should emphasize reads and integrity; when you insert transactions, it should emphasize writes and performance.

6/ Delegate to and trust the application layer: for a lot of reasons, it is good to cache data within your application server in certain situations. It gives the application the ability to run without the database, and better performance for highly recurring requests for data that does not change that often. It frees up database CPU and RAM. The database engine can always cache execution plans, but it still has to run the query, or at least return the result over the network. DBAs and application engineers have to find the right balance and know where to delegate the query. More importantly, when you insert or update within the database, constraints can slow down performance. The database should trust the application. One way of establishing trust is the use of application web services. Those web services enforce security by limiting IP ranges and setting up certificates, and they enable the trust the database needs. Obviously, if the database is accessed directly by 1000 different applications, it will have a hard time trusting all of them; but if it is accessed via a web service, it will be more comfortable delegating some of the integrity rules.
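Regarding point 3, here is a minimal sketch of what a date-based partitioning setup can look like (the table, boundary values and filegroup choice are purely illustrative):

CREATE PARTITION FUNCTION pf_MonthlyByDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('20120801', '20120901', '20121001')
GO
-- All partitions on PRIMARY to keep the sketch short; dedicated filegroups are better in practice
CREATE PARTITION SCHEME ps_MonthlyByDate
AS PARTITION pf_MonthlyByDate ALL TO ([PRIMARY])
GO
CREATE TABLE dbo.PaymentTransaction
(TransactionId BIGINT IDENTITY(1,1) NOT NULL,
 TransactionDate DATETIME NOT NULL,
 AccountId INT NOT NULL,
 Amount DECIMAL(18,2) NOT NULL,
 CONSTRAINT PK_PaymentTransaction PRIMARY KEY CLUSTERED (TransactionDate, TransactionId)
) ON ps_MonthlyByDate (TransactionDate)
GO

Old partitions can then be switched out (ALTER TABLE ... SWITCH), purged or reindexed without touching the current activity.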

 

Now, with the last point (6), some DBAs might think that the database will end up being just storage space. Do not worry about it. The database engine will still do the querying better, as it is a set-based engine. The database will still do storage compression better. The database will still provide better recovery strategies for your data. And let's be honest, isn't it better for our careers to think that we manage data and not databases? Because, think about it, data moves all the time, not databases. In essence, it is harder to manage volatile systems than conventional, easy-to-follow systems. So the big bucks will go to the people who manage the unconventional systems, not the opposite!

 

 

 

 

 

 

 

25 August 2010

Hello guys,

 

Today we will discuss the different high availability (HA) strategies you have on SQL Server.

 

Basically, it all depends on your needs, your SLA towards your clients and the level of satisfaction you would like to give your clients.

Also, it depends on your wallet. Some HA strategies are very expensive, some are very cheap.

I am going to rate each solution from 1 to 10 for each criterion, 10 being the best and 1 the worst.

 

Here are the questions you should consider:

 

1. How much money would I lose if my system did not work for 24 hours?

 

It is pretty reasonable to think that you would need one full day to rebuild your database server, given a robust backup regime and a good database administrator. Contrary to a web server, a database server can take more time to rebuild. If your primary server crashed and you did not have an HA strategy in place, you would have to reinstall SQL Server on another machine, restore all databases, rebuild all server objects that are not within msdb or master, and remap all your datasources or switch the DNS lookup towards the new server.

If one day of downtime does not make your company suffer, you do not need an HA strategy! It would be too expensive to maintain, hardware- and software-wise, and it is not necessary for your company. Obviously, as a DBA, it is hard for me to say, but it makes sense.

How much money you can lose should also indicate how much budget you should dedicate to your disaster recovery plan and high availability solution, if any.

 

2. What is your SLA or satisfaction level?

 

If one day is too much, then it all depends on the degree of HA desired. This can be real-time and low maintenance, real-time and high maintenance, not real-time and low maintenance, etc.

For example, log shipping is less resource-intensive than mirroring but does not provide a real-time solution. For many companies it would be fine to risk losing up to 15 minutes of data and recover immediately from a crash. At the same time, log shipping as well as mirroring (until the next release of Microsoft SQL Server 2010, according to my internal contacts at Microsoft) do not provide HA for your server objects. This means you would need to take care of SQL Server jobs, DTSX packages, credentials, Database Mail, logins and Service Broker separately. Log shipping and mirroring only cover the databases.

Another solution is clustering, which gives you real-time coverage of your databases but also of your server objects. As storage is shared in this solution, you need to make sure your storage area does not become a single point of failure. For performance, you would also need to do everything possible for your SAN to be set up for optimized database file storage.

 

Here is the rating of each solution for this particular criterion: minimizing downtime and data loss.

Sql server cluster: 9

(little chance of significant downtime or data loss)

Mirroring: 8

(little chance of significant downtime (maybe logins not deployed correctly) or data loss, but possibly some server objects)

Log Shipping: 6

(some downtime possible as this is a manual failover; also logins may not be deployed correctly on the secondary server, along with some other server objects)

Replication: 6

(some downtime possible as this is a manual failover; also the publication needs to be broken to make your data available, and a proper installation is required (to avoid identity reseeds, NOT FOR REPLICATION foreign keys and trigger activation, etc.))

Sharding: 5

(with proper architecture, sharding can also be good, without significant downtime, but it is hard to find the right architecture; sharding is almost the opposite of one centralized database)

 

3. How much maintenance is required?

 

Some solutions require very frequent care, others not. Upgrading/patching can also be far more difficult for one solution compared to another.

 

Here is the rating of each solution for this particular criterion: maintenance burden (a higher score means less maintenance).

Sql server cluster: 7

(once installed, maintenance is minimal but it still requires senior-level management)

Mirroring: 7

(once installed, maintenance is low but it still requires specific care)

Log Shipping: 8

(once installed, maintenance is low but it still requires proper supervision)

Replication: 7

(once installed, maintenance is low, especially if properly installed (separate the distributor from the publisher), but it still requires proper supervision)

Sharding: 5

(versioning and deployment are much more challenging, as are merging or splitting shards while activity is running; more backups to do, more work on data warehousing... this is probably the most important drawback)

 

4. How many human resources (DBAs, system engineers) do you want to dedicate?

 

At install time, a SQL Server cluster is human-resource intensive: it requires a network engineer to set up all the proper IPs and the private VLAN, system engineers to set up the shared disk resources and the cluster groups, and a DBA to install the SQL Server services in the cluster (with SSIS installed active/active). It is much easier, though, to manage a SQL Server cluster on a daily basis, as all server objects are maintained in the cluster, even the server resource settings such as memory, CPU allocation or degree of parallelism. With a virtual IP, applications just connect to one virtual IP or server name. When a failover is needed, management is fairly easy, as you just move the cluster group, which brings all the resources onto the secondary server (which becomes the active one) in less than 30 seconds. Your SPOF is at the storage level: controller cards or disks that fail. Usually this is where your cost will go: securing your storage tier.

All of this requires senior-level engineers, even though on a daily basis there is nothing to do.

 

At install time, mirroring is not too difficult to set up. A network engineer ensures the DNS suffix and proper private network and firewall settings, and the DBA sets up all the security correctly and synchronizes the databases. Once that is done, daily maintenance is not too difficult, but activity needs to be supervised, as mirroring can suffer in performance during large operations such as massive reindexing (suspend mirroring during reindexing). Also, server objects have to be created on both sides with their server specificity (keep login SIDs, make sure to use localhost as the server name in DTSX packages, etc.). Automatic failover is ensured with a witness server. Manual failover is also fairly easy to deal with. The application datasource needs a failover partner setting to switch to the partner in case of a failover.

Log shipping might be the easiest solution and might be the best one to implement for most companies. It just requires proper networking between the two servers; the DBA will very easily set up log shipping between them, plus proper monitoring.

 

Here is the rating of each solution for this particular criterion: human resources.

Sql server cluster: 9

(once installed, DBA only, mid-level)

Mirroring: 9

(once installed, DBA only, mid-level)

Log Shipping: 9

(once installed, DBA only, mid-level)

Replication: 6

(once installed, DBA only, senior level)

Sharding: 6

(once installed, DBA only, senior level)

 

5. How much performance can you afford to lose?

 

This is an obvious equation: adding redundancy usually adds resource usage. For example, high-safety mirroring does not give as much performance as asynchronous mirroring. The first gives real-time mirroring of all your database operations but requires a two-phase commit, whereas asynchronous mirroring is not real-time but does not affect performance much. Asynchronous mirroring is still a good deal because it is quasi-real-time.

SQL Server clustering does provide high performance, as it does not require your SQL code to be executed on two database servers. Redundancy is provided at the storage level, and the cluster administration software enables automatic failover.

That is probably the best advantage of sharding and grid solutions: there is almost no performance loss. You will probably even gain performance by sharding or setting up a grid. Writes on a grid might be impacted a bit, but with asynchronous transactions on the other grid node you will not see much impact. Concerning sharding, each write should be done on only one shard, limiting the resource load to that dedicated shard. Scalability is the key in these two solutions.

Log shipping is also a good solution, as the backup and transfer should not hinder performance much on the principal server. You can also use your secondary servers as read-only databases for near-real-time reporting datasources. Mirroring can give a small advantage on the Enterprise edition with snapshots of your mirrored databases, but it is limited to exposing one passive node.

SQL Server clustering does not give you more performance, but it is not impacted either, so it is a pretty good solution overall.

Replication can also help by exposing data on a different server, keeping the resource usage of your reporting tools and cubes off the primary sources. Replication, like mirroring, can severely hinder performance when the logs do not get through, due to a network failure, a security disruption (replication and mirroring) or human error (especially with replication, it can happen quickly). Replication can, for example, make your transaction log grow when transactions are not marked as processed by the distributor. Mirroring can also become a bottleneck if your log does not get through or your mirroring is suspended for maintenance (reindexing, for example).

 

Here is the rating of each solution for this particular criterion: high performance.

Sql server cluster: 7

(once installed, your resources are not much impacted by the cluster; however, it does not help performance either)

Mirroring: 5

(once installed, your resources are somewhat impacted, but you can use snapshots of your mirrored databases on Enterprise edition to expose your data for reporting)

Log Shipping: 8

(once installed, your resources are not impacted much, and it often uses resources your disaster recovery plan would use anyway (I am thinking of your log backups and transfers); your secondary server can expose your database in standby mode)

Replication: 6

(once installed, replication is resource-intensive unless you dedicate a distribution server; your data can be exposed for reporting)

Sharding: 9

(once installed, the solution can load-balance your reads and also your writes as you can access several shards)

 

Conclusion:

Depending on where your company stands on the criteria above, the best solution for your company can be one of the solutions above, or even another one (like third-party replication, even at the subsystem level). I have seen success with every high availability strategy.

I think log shipping is often dismissed too quickly, but one thing I like a lot about it is that your disaster recovery plan usually already handles log backups. Log shipping does not involve much more work and is very inexpensive.

HA can also be done geographically. Depending on the network speed between the two datacenters, any of the solutions can be used, but that comes at a definite price. Log shipping stays the cheapest solution, mirroring is not far behind, and sharding is not that expensive either. Geo-clustering is definitely expensive, as you need to implement serious subsystem redundancy (like RecoverPoint from EMC).

 

I hope this article helps. Do not hesitate to comment.

 

 

14 April 2010

Hello Guys,

How are you doing? 

Today's topic is how to set up mirroring while minimizing downtime. It is not that difficult, but a few tips help to achieve it.

First, we need to make sure we have all the prerequisites set up.

0. Secondary server identical if possible

Set up the secondary server identically (same edition, same version, same patches, same drive letters, etc.). This is the true high availability you need; otherwise the high availability setup will eventually fail due to a lack of resources on the secondary server. But this is the client's call... and the client's wallet.

1. Get all the logins (with their SIDs) copied to the secondary server. This is usually forgotten by clients who want to do it alone; once they fail over, they realize that their logins cannot connect. Ouch!

This link will help you transfer the logins easily: http://support.microsoft.com/kb/246133/

2. Script out the other server objects you will need for an immediate failover:

such as SQL Server jobs and SSIS (dtsx) packages that need to be turned on quickly after the failover, either by an automatic job that checks whether the database is online, or by activating them by hand after the failover.

3. The database you wish to mirror needs to be in full recovery mode.

4. Set up a test database and mirror it first, whether you will use Active Directory accounts or certificates and a master key for the endpoints.

This is obviously an important step. You need to make sure that the network setup works, and you need to perform manual failovers on this test database to make sure everything is in good shape. Ideally, set up the witness on it and let it run for a day or so to check for network disruptions and tune the timeout accordingly. Please comment if you need more info. If you have an application in .NET, ColdFusion, Java, etc., set up the connection string with a failover partner, run a trivial query on a page, and watch that page while performing a manual failover to see whether the failover partner works at the application tier.
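A minimal sketch of the scripted side of these prerequisites; the login name, password, SID and database name below are hypothetical placeholders (the KB article linked above gives a complete login-transfer script):

-- Check that both servers match (run on each side and compare).
SELECT SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('ProductVersion') AS Version;

-- On the principal: grab the SID of each SQL login used by the application.
SELECT name, sid FROM sys.sql_logins WHERE name = 'app_login';

-- On the secondary: recreate the login with the same SID so the database user
-- still maps to it after a failover.
CREATE LOGIN app_login
WITH PASSWORD = 'StrongPassword!1',
     SID = 0x1234567890ABCDEF1234567890ABCDEF;

-- The mirrored database must use the FULL recovery model.
ALTER DATABASE AppDb SET RECOVERY FULL;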

Once all the prerequisites are in place and the basic mirroring tests are done, you can move on to the real database.

 

Second,

perform a full backup of your principal database (no need, of course, to stop any services) and restore it WITH NORECOVERY on the secondary server. Obviously, a shared path needs to be available for the backup file.

Then script a log backup, script the copy, and restore the log backup (again WITH NORECOVERY). Ideally, do all of it in the same batch.

To do so there are many ways:

1. use sqlcmd to access the remote server.

2. use xp_cmdshell (it needs to be enabled with sp_configure) to transfer the files.

If you do not succeed with sqlcmd, you can always set up a linked server with the credentials of your choice to perform the log restore.

Concerning the copy, if you have trouble getting it working because of security, you can set up a proxy account and run the command via a SQL Server job started from your script (sp_start_job). If you need more details, let me know.

Once you have performed the log backup, the copy and the restore at least once via the same batch, you minimize the time you will need to stop activity on the principal server. Yes, indeed, you need the two databases to be absolutely synchronized.

Within your batch, add the script for the mirroring setup itself (the full batch is sketched below).

Prepare the script: run ALTER DATABASE <YourDb> SET PARTNER = 'TCP://principal.domain.com:Port' on the secondary server via the linked server or sqlcmd. You can test it on your test database first.

Then script the equivalent for the principal server:

ALTER DATABASE <YourDb> SET PARTNER = 'TCP://secondary.domain.com:Port'

ALTER DATABASE <YourDb> SET WITNESS = 'TCP://witness.domain.com:Port'

 

Third, the time to synchronize

Choose the right time to do your mirroring, ideally a period with little activity.

You can actually try to run it directly without even stopping the activity. Technically, reads can continue: reading the principal server with your application logins is not affected. It can work; I have mirrored client databases with absolutely no downtime.

If write activity persists, you will need to kill those sessions and apply ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE. Obviously, do not forget to set the database back to MULTI_USER once your mirroring is set up.
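A minimal sketch of that last resort, assuming the hypothetical database name AppDb:

-- Kick out remaining sessions right before the final tail-log backup.
ALTER DATABASE AppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ... final tail-log backup, copy, restore on the mirror, SET PARTNER statements ...

-- Reopen the database to all logins once the mirroring session is established.
ALTER DATABASE AppDb SET MULTI_USER;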

 

If you follow a specific testing protocol and test thoroughly, there is no reason your mirroring should fail.

I would be happy to assist you with your mirroring needs. I have performed hundreds of them already on SQL Server 2005 and 2008.

Any comments are welcome!

Published by clementhuge.over-blog.com - in Architecture
10 avril 2010 6 10 /04 /avril /2010 15:50

Hello Guys,

 

You might wonder what is new in SQL Server 2008 R2 and whether it is that fantastic.

Well, as usual, I cannot say Microsoft changed the world with this one either, but as good followers, they released some of the obvious features they had to address to avoid being too late.

 

1. In-memory OLAP

One of the main features SQL Server 2008 R2 addresses, with Gemini / Excel 2010, is letting users take relational sources into Excel and develop a real OLAP cube with PowerPivot. It accelerates the prototyping of new cubes and enables users like marketers, managers and analysts to move faster instead of waiting for an entire development life cycle to get the most out of the data.

Now, this is not magic either: you still need the resources, such as RAM and CPU, to get the cube processed. Excel can hold about 100 million rows, which is nice.

 

2. Optimized Unicode storage (Unicode compression)

That might be the most underestimated feature of SQL Server 2008 R2. It accelerates everything concerning reading and writing strings in columns that have to be Unicode because of a handful of rows. Let's say your company primarily deals with non-Unicode countries, France for example, and then has a small amount, say 1% of its business, in Greece or Hungary; it needs to use the same data model everywhere and therefore has to change all its varchar storage to nvarchar storage... great!

This is annoying... very annoying. First you have to migrate your data from varchar to nvarchar, then the stored procedure parameters, all the variables, all the table-valued parameters, and so on and so forth.

Also, before SQL Server 2008 R2, that storage required twice the size... bummer. What if your storage was limited? What if your indexes suddenly jumped in size and reindexing took twice as long? This is very annoying... and all of that for just 1% of the business...

With this new release, Microsoft did improve that. Even though R2 still takes a tiny bit more space for non-Unicode data stored in a Unicode column, it reduces the size needed for the Unicode datatype, roughly by 40% in this kind of scenario.

In other words, imagine a database with a major table whose very large varchar(512) column takes about 5 GB of storage. Ouch. Suddenly you learn you need to migrate it to nvarchar(512) because your company is opening business in Russia, so you would need to find 5 more GB of space. With SQL Server 2008 R2, the existing non-Unicode data will only take roughly 100 MB more: that overhead is what records the fact that the data is non-Unicode, not counting a potential index on the column or a full-text catalog.

Now you get it: finding data will also be much faster, as you perform far fewer I/Os than before.
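In 2008 R2 this optimization surfaces as Unicode compression, which applies to nchar/nvarchar columns once row (or page) compression is enabled on the table or index. A minimal sketch, with a hypothetical table dbo.Comments and column Body:

-- Widen the column to Unicode for the 1% of international data.
ALTER TABLE dbo.Comments ALTER COLUMN Body nvarchar(512) NOT NULL;

-- Rebuild with row compression so the existing single-byte characters
-- keep costing roughly one byte each instead of two.
ALTER TABLE dbo.Comments REBUILD WITH (DATA_COMPRESSION = ROW);

-- Compare the data size before and after the rebuild.
EXEC sp_spaceused 'dbo.Comments';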

 

3. Master data management

It is interesting and Microsoft tries its best, but this feature is not perfect. It is not very user-friendly and does not replace a good customized master data management system.

 

4. Logical CPU core

R2 now allows up to 256 logical cores, versus 64 before. But to be honest, unless you really need that many processors, reaching that point usually means either that your system is not scalable and your data tier is not optimized, or that you have so many concurrent connections that you should be thinking about some kind of data-tier farm.

I guess 256 cores is nice for data mining workloads, so it might be good for BI.

 

5. Reporting services geospatial features

Reporting Services adds new geospatial features, so you can literally add a map of the United States and click on each state to get information such as sales, costs, etc. However, it takes quite a while to set up and is not very user-friendly. It is worth looking at, though.

 

There are other small features like multi-server tools.

 

Overall, SQL Server 2008 R2 is very interesting, especially for the new BI features, but it is probably not worth migrating from SQL Server 2008 to 2008 R2 right away, especially because everybody should wait for the first service pack ;-)

 

 

 

 

Published by clementhuge.over-blog.com - in Architecture
10 avril 2010 6 10 /04 /avril /2010 09:33

Hello Guys,

 

Ten years ago, people did not care too much about making the database fast or available. Databases were small, and e-commerce, online payment and social communities did not exist. Performance and availability were somewhat important for some companies, but a large portion of them did not care much: they could simply tell their employees that the database would be back up and running in a few hours. Performance-wise, obviously nobody could wait forever, but once again, large databases were not that common.

I remember working for a company that waited an entire night, every single night, for a batch to consolidate its accounting. Not that it was that much work; I believe the same job would take about 10-20 minutes today.

 

Now, we can say that machine-to-machine (telecommunications, for example) and people-to-machine (e-commerce, for example) usage requires extreme availability and performance. Additional technologies enrich the range of connection types, such as widgets and service farms. This heterogeneous behaviour makes database availability and performance a science.

 

Historically, the data tier has always been the centralized area where all data that needs to be persisted, backed up and secured resides. There is therefore a direct dependency on reading from and writing to disk. For DBAs, that is the main bottleneck for performance and availability: the storage layer.

 

Other problems can happen, but they are all related to typical single points of failure: the operating system failing, the memory failing, the network failing, etc.

You can have a serious application requiring only a tiny portion of disk to run. Lately, rich websites need serious storage for images, large code files and multimedia files, but technically these are all storage concerns as well.

 

The data tier could work the same way: bring all the data into memory and therefore remove the impact of disk. All database vendors work on that, whether or not they wanted to.

NoSQL and grid technologies like xkoto or Oracle RAC try to bring the persistent data into memory and treat it like a buffer, so that the "operational" database lives in the cache layer.

The SQL Server buffer cache and SAN cache do that as well, making reads much faster and ensuring somewhat better performance.

 

So there are two areas to take care of:

- reading and writing the data, either from the cache or from disk

- the transactional part, to keep the data consistent

 

Reading and writing data can be done fairly easily: you have a standalone server and you are done. If you need to keep the data available, you add replication or mirroring to a different server that becomes your partner. You can simplify it all by clustering the whole thing, although even that solution leaves you with a single point of failure on the storage layer.

The same goes for writes.

 

Now that you have covered availability, you need to ensure that your data is protected. Availability does not protect your data: it just makes sure the data is reachable, but if it is bad data, it stays bad data. Recently a client of mine was hit by SQL injection. Yes, you got it: no form validation on the client side, no validation on the server side, and a bunch of dynamic ad hoc queries. Classic! Another client wanted to update one row out of millions but actually committed an update on all rows. Oops!

In those cases, availability does not help you. Great, you can still connect to your data, but it is all garbage. You need to know your tolerance for a disaster like that, and that is where your backup regime comes in handy. For both of those clients, we were able to recover all the data correctly thanks to a well-supervised backup regime.
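For the accidental-UPDATE case, a point-in-time restore of a copy of the database is usually how the data comes back; a minimal sketch, where the database name, logical file names, backup paths and STOPAT time are all hypothetical:

-- Restore a copy of the database to just before the bad statement,
-- then pull the good rows back into the live database.
RESTORE DATABASE AppDb_Recover FROM DISK = 'D:\Backup\AppDb_full.bak'
    WITH MOVE 'AppDb'     TO 'D:\Data\AppDb_Recover.mdf',
         MOVE 'AppDb_log' TO 'D:\Data\AppDb_Recover.ldf',
         NORECOVERY;

RESTORE LOG AppDb_Recover FROM DISK = 'D:\Backup\AppDb_log.trn'
    WITH STOPAT = '2010-04-10T09:15:00', RECOVERY;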

 

OK, let's say we have covered all the bases: good availability, good disaster recovery strategy, what is next? Well, you still have performance to take care of, and this is where it gets complicated.

Why?

 

Because sometimes performance does not help availability or the disaster recovery plan, and vice versa.

 

Here are some clear examples:

1. you denormalize data to make it easier to query: more storage, more disk, more backup space, more network for the robocopy, more transaction log, etc.

2. you increase the amount of indexing: same result

3. you increase the need for reindexing: same result

 

So how in the world can you improve performance without bringing your databases to their knees?

Solutions 1 and 2 work fine if you accept the consequences. If you add some resources on the memory, CPU or SAN side, you are probably fine, and those may well be components you have to deal with anyway.

 

But there are other areas:

 

1. Federation of servers: get availability automatically by spreading the load onto different servers. This is something I implemented very easily with telecommunication companies, where we talk more about gateways. For high-volume data, we simply get several servers to handle the data. This gives availability, because if one gateway does not work, we go to another one. We protect the data with a serious backup regime and a data warehouse, which keeps the front database tier very small: better availability because fewer resources are needed, and better performance because there is less data to query no matter what the execution plan looks like.

 

2. Keep only operational data on your operational server; separate front-office and back-office data: some clients unfortunately do not separate the data that belongs to the back office from the front office. You should separate them, not only at the database level but ideally at the server level; it is a best practice of database architecture. You do not want CLR or application code on your database server, because the server is not made to host different technologies that compete with each other for resources like CPU and memory. Generally, your back-office data will be rich and focused on running the business (data mining, billing, B2B client information, etc.). The front office should stay as light as possible.

 

3. Application tier: you can have as many layers as you want. In some ways, a multi-tier environment is good architecture because it makes the system scalable: you can switch servers easily and maintain each layer independently. However, it does not change the fact that, in e-commerce for example, web navigation is critical. If you manage to reduce the dependencies of the most-viewed pages, you will gain performance. Say there is a piece of data you need to show the client, but you know it changes all the time and is expensive to produce: try to make it available on a less-viewed page, for instance by making the client click to see it rather than showing it automatically. I am not saying this is what you have to do; the functional requirements of your application might demand that the data be available immediately. I am saying you need to seriously consider the performance of your application's navigation, from the web tier to the application tier to the data tier.

 

4. Virtualizing the data tier: as virtualization and cloud computing become very popular, this is definitely another avenue for performance and availability. NoSQL projects lean in this direction by hitting the data tier only when really needed. Grids go down to the database layer but try to spread the load over identical data servers: they read from the server with the most free resources and write to the most available one, and then peer-to-peer replication (MySQL, for example) or command-based replication (xkoto, for example) takes care of the rest. Virtualization is, in some ways, a form of server federation, so I can get behind that. However, it means relying on an additional layer of abstraction that may be a black box for you.

 

5. Data partitioning: you can obviously store data, even within the same table, across different RAID arrays, spreading the load more evenly whether you write or you read (see the sketch after this list).

 

6. Design your database model for performance. The debate around row store vs. column store and relational vs. non-relational databases matters, especially for performance. Lately, referential integrity has become less important than performance. Think about it: you delete a member of your website, but because you want it to be fast, you only delete the core information (the row in the primary table). Technically the member is gone, even though parasite data remains, and a background cleanup process can purge that now-useless data later. For an inexperienced DBA this is hard to accept, since referential integrity is one of the fundamentals; but believe me, you have to adapt your database to the situation. After all, some would say a database is just a way to store data in a useful order.
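To illustrate item 5 above, here is a minimal partitioning sketch; the table dbo.OrderHistory, the filegroups and the boundary dates are hypothetical placeholders, and table partitioning requires Enterprise edition on these versions:

-- Spread one table's data over filegroups sitting on different RAID arrays
-- (FG_Archive, FG_2009 and FG_Current must already exist in the database).
CREATE PARTITION FUNCTION pfOrdersByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01', '2010-01-01');

CREATE PARTITION SCHEME psOrdersByYear
AS PARTITION pfOrdersByYear TO (FG_Archive, FG_2009, FG_Current);

-- Place the clustered index (and therefore the data) on the partition scheme.
CREATE CLUSTERED INDEX CIX_OrderHistory_DateOrdered
ON dbo.OrderHistory (DateOrdered)
ON psOrdersByYear (DateOrdered);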

 

Of all these solutions, I prefer the federation of servers, and I think you can apply it to most industries. Even if e-commerce may require very large "lookup" data, such as member data, you can still federate data like orders, payments, emails sent, etc. I usually advise clients to segment their databases and servers by website, by country for international websites, by the alphabetical order of the logins, etc.

It is always possible to segment your data this way.
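As a minimal sketch of that segmentation, assuming two hypothetical linked servers SRV_AMERICAS and SRV_EUROPE that each hold the orders of their own region, a distributed partitioned view can present the federated data as one table for reads:

-- Each shard keeps only its own region's orders; the view unions them for reporting.
CREATE VIEW dbo.AllOrders
AS
SELECT OrderId, AccountId, DateOrdered, OrderTotal
FROM SRV_AMERICAS.Sales.dbo.Orders
UNION ALL
SELECT OrderId, AccountId, DateOrdered, OrderTotal
FROM SRV_EUROPE.Sales.dbo.Orders;
GO
-- Writes go directly to the shard that owns the account (routing done by the application).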

 

Any comments would be appreciated :-)

 

Clement

 

 

 

 

 

 

Published by clementhuge.over-blog.com - in Architecture
7 avril 2010 3 07 /04 /avril /2010 13:23

Hello Guys,

It is usually pretty straightforward to read an execution plan, to understand why one query would use an index instead of another one, to understand the needs of a nested loop or a hash join, etc...

It is usually....

But then come... the horrible tables! Usually you get to be the firefighter at a client that developed a "monster" table (that is usually what the client calls it), and you look at all the queries hammering the server with I/Os, CPU and memory swapping, etc. The catastrophe!

Then you look closer and realize the table is millions of rows long and 250 columns wide, with, obviously to make it fun, a large number of text, image or other exotic datatype columns.

There are obvious improvements to make on the query side, such as:

1. make sure the client does not do a "SELECT *" or select useless columns that do not reside in the index you scan or seek, ultimately causing a very expensive key lookup made worse by the poor page density.

2. make sure the drive holding the data and index files is properly formatted (cluster size and disk alignment).

3. make sure the select runs only once or twice instead of thousands of times on your application page, especially if the execution plan is not reused...

 

All of that is fine, but sometimes you realize that SQL Server decides to use one index instead of another. One could say randomly, but it is usually based on the number of I/Os it estimates it will read versus the memory it might use.

Also, on large tables the estimates get worse, and forcing an index can be the solution for better performance, as in the sketch below. You can definitely see the difference by running the query before and after optimization in Profiler. SQL Server Profiler is your empirical friend ;-)
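A minimal sketch of forcing an index; the table dbo.MonsterOrders, its columns and the index name are hypothetical placeholders:

-- Force the narrow covering index when the optimizer's estimate sends the
-- query to a scan plus expensive key lookups on the wide clustered index.
SELECT OrderId, DateOrdered, OrderTotal
FROM dbo.MonsterOrders WITH (INDEX (IX_MonsterOrders_AccountId_DateOrdered))
WHERE AccountId = 42
  AND DateOrdered >= DATEADD(DAY, -1, GETDATE());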

Bottom line is:

- try to avoid, if possible, BLOB / in-row LOB data, or at least push it into a different table (or off-row, see the sketch after this list) to minimize storage within the same clustered index.

- try to increase page density: it keeps your indexes less fragmented during large update and delete operations and makes the DBA's work much more conventional. Execution plans are correctly estimated, fragmentation grows organically and is easy to maintain, etc.
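A minimal sketch of the off-row option, assuming a hypothetical dbo.MonsterOrders table with varchar(max)/varbinary(max) columns:

-- Store large value types off the data pages so the narrow columns stay dense.
EXEC sp_tableoption 'dbo.MonsterOrders', 'large value types out of row', 1;

-- Existing rows only move off-row when their LOB values are next updated,
-- so a one-off UPDATE of those columns may be needed to push old data out.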

The worst, and it happens to too many clients, is when they combine very low page density on a large table with a random-GUID clustered index. This is the worst case. You do not even have a good way of partitioning the table correctly (and by the way, most of those clients cannot afford Enterprise edition anyway)...

Hopefully, and this is a plea to clients, you let the DBA create the data model before your developers go haywire with monster tables!

 

 

 

Published by clementhuge.over-blog.com - in Architecture