14 April 2010

Hello Guys,

How are you doing? 

Today's topic is how to set up database mirroring while minimizing downtime. It is not that difficult, but it takes a few tricks to get right.

First, we need to make sure we have all the prerequisites set up.

0. Make the secondary server identical if possible.

Set up the secondary server identically (same edition, same version, same patches, same drive letters, etc.). This is the real high availability you need; otherwise the high availability setup will eventually fail due to a lack of resources on the secondary server. But that is the client's call, and the client's wallet.

1. Copy all the logins (with their SIDs) to the secondary server. This is something usually forgotten by clients who want to do it on their own. Once they fail over, they realize their logins cannot connect. Ouch! (See the T-SQL sketch after this list.)

This link will help you transfer the logins easily: http://support.microsoft.com/kb/246133/

2. Copy all the other server objects you feel are useful for an immediate failover:

such as SQL Server jobs and SSIS (DTSX) packages that need to be turned back on quickly after the failover, either by an automatic job that checks whether the database is the principal and online (see the sketch after this list) or by activating them by hand after the failover.

3. The database you wish to mirror needs to use the full recovery model (see the sketch after this list).

4. Set up a test database and mirror it first, whether you use Active Directory (Windows authentication) or certificates/master key for the endpoint security.

This is obviously an important step. You need to make sure the network setup works, and you need to perform a manual failover on this test database to make sure everything is in good shape. Ideally, set up the witness on it and let it run for a day or so to check for network disruptions and tune the timeout accordingly. Please comment if you need more info. If you have an application in .NET, ColdFusion, Java, etc., set up the connection string with a failover partner, run a trivial query on a page, and watch that page while performing a manual failover to verify that the failover partner works at the application tier.
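Here is a minimal T-SQL sketch of those prerequisites. All names, the SID, and the password are placeholders; sp_help_revlogin from the KB article above generates the real CREATE LOGIN statements for you, hashed password included.

-- 1. Recreate a SQL login on the secondary with the same SID (placeholder values)
CREATE LOGIN app_login
    WITH PASSWORD = 'StrongP@ssw0rd!',
         SID = 0x1A2B3C4D5E6F708192A3B4C5D6E7F801;

-- 2. Job step that only starts the post-failover job when this server is the principal
IF EXISTS (SELECT 1
           FROM sys.database_mirroring dm
           JOIN sys.databases d ON d.database_id = dm.database_id
           WHERE d.name = 'MyAppDb'
             AND dm.mirroring_role_desc = 'PRINCIPAL')
    EXEC msdb.dbo.sp_start_job @job_name = N'MyAppDb post-failover jobs';

-- 3. Put the database in the full recovery model and verify it
ALTER DATABASE MyAppDb SET RECOVERY FULL;
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyAppDb';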

Once all the prerequisites are in place and the basic mirroring tests are done, you can move on.

 

Second,

perform a full backup of your principal database (no need, of course, to stop any services) and restore it on the secondary server WITH NORECOVERY. Obviously, a shared path needs to be available.
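A minimal sketch, assuming a placeholder database MyAppDb and a placeholder share; the restore on the secondary has to be done WITH NORECOVERY so the database can later join the mirroring session.

-- On the principal
BACKUP DATABASE MyAppDb TO DISK = N'\\FILESERVER\Backups\MyAppDb_full.bak' WITH INIT;

-- On the secondary
RESTORE DATABASE MyAppDb
    FROM DISK = N'\\FILESERVER\Backups\MyAppDb_full.bak'
    WITH NORECOVERY;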

Then script a log backup, script the copy of that backup to the secondary, and script the log restore. Ideally, do all three in the same batch.

There are several ways to do this:

1. Use sqlcmd to access the remote server.

2. Use xp_cmdshell (it needs to be enabled with sp_configure) to transfer the files.

If you do not succeed with sqlcmd, you can always set up a linked server with the credentials of your choice to perform the log restore.

Concerning the copy, if you have trouble getting it to work because of security, you can set up a proxy account and run the command via a SQL Server job started from your script (sp_start_job). If you need more details, let me know.

Once you have performed the log backup, the copy, and the restore at least once via the same batch, you have minimized the time during which you will need to stop activity on the principal server. Yes, indeed, you need absolute synchronization. A sketch of such a batch follows.
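Here is a minimal sketch of such a batch, run from the principal. Server, database, and path names are placeholders; xp_cmdshell has to be enabled with sp_configure, and MIRROR01 is assumed to be a linked server with RPC Out enabled (or replace the last step with a sqlcmd call).

-- 1. Log backup on the principal
BACKUP LOG MyAppDb TO DISK = N'D:\Backups\MyAppDb_tail.trn' WITH INIT;

-- 2. Copy the backup to the secondary (\\MIRROR01\Backups maps to E:\Backups over there)
EXEC master..xp_cmdshell 'copy /Y D:\Backups\MyAppDb_tail.trn \\MIRROR01\Backups\';

-- 3. Restore the log on the secondary, still WITH NORECOVERY
EXEC ('RESTORE LOG MyAppDb
       FROM DISK = N''E:\Backups\MyAppDb_tail.trn''
       WITH NORECOVERY') AT MIRROR01;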

Within your batch, add the script that sets up the mirroring itself.

Prepare the script: run "ALTER DATABASE <database> SET PARTNER = 'TCP://principal.domain.com:Port'" on the secondary server, via the linked server or sqlcmd. You can test it first on your test database.

Then script the equivalent for the principal server:

"ALTER DATABASE <database> SET PARTNER = 'TCP://secondary.domain.com:Port'"

"ALTER DATABASE <database> SET WITNESS = 'TCP://witness.domain.com:Port'"

 

Third, the time to synchronize

Choose the right moment, ideally one with little activity, to set up your mirroring.

You can actually try to run it directly without even stopping activity. Technically, reads can continue: application logins reading from the principal server are not affected. It can work; I have mirrored client databases with absolutely no downtime.

If write activity persists, you will need to kill those sessions and apply "ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE", as sketched below. Obviously, do not forget to switch back to MULTI_USER once your mirroring is set up.
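A minimal sketch of that last step (MyAppDb is a placeholder database name):

ALTER DATABASE MyAppDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... final log backup, copy, restore WITH NORECOVERY, then the SET PARTNER statements ...
ALTER DATABASE MyAppDb SET MULTI_USER;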

 

If you follow a precise testing protocol and run thorough tests, there is very little chance your mirroring setup will fail.

I would be happy to assist with your mirroring needs. I have already performed hundreds of these setups on SQL Server 2005 and 2008.

Any comments are welcome!

14 April 2010

Hello Guys,

 

I cannot tell you how many clients have an extremely weak security layer at the data tier, thinking they are off the hook because security at the application tier is well defined.

It is nice to have SSL/HTTPS, firewall layers, and all that jazz, but if you let your logins do whatever they like once they reach the database server, you risk losing your data, having it stolen, or, even worse, compromising the business overall.

How many clients develop their website with the application login set as db_owner of their most critical database, and keep it that way once they go live in production, for lack of expertise on the topic?

Then, a few months later, they realize that sneaky injection attacks or disgruntled employees have made their way into the data tier and wreaked havoc on the data. As soon as you become aware of it, you block the attacks by making the application tier more secure, and you fire the employees who caused the damage. But unfortunately you realize it too late and your entire website is compromised. You then take the website down for maintenance to clean up the mess, because either you do not have secured backups, or you have no idea how to perform a point-in-time restore, or you realize the backups were never set up correctly in the first place.

Worse: you have to reinstall the entire database server because the attack was damaging even at the server layer.

This nightmare can happen, and a good 10% to 20% of my clients call me to fix this kind of mess. One case was a client who had just fired a developer with full access to the data layer (credit card information included!); another was due to SQL injection. During a previous audit of their systems I had warned them strongly about security, but it was always at the bottom of their list: they argued they had to develop and deploy new features first, and if security was too much of a pain, they would look at it afterwards!

This is frustrating :-( but such is the life of a SQL Server consultant. Warn the client, then secure what can be secured without touching the client's application code (backup regime, a development lifecycle with DBA validation).

What is even worse is when the client needs to be PCI-DSS compliant. Then there is a huge amount of work to do.

Whatever the goal of securing the data, here are the things to make sure you do:

1. Reduce the number of people and SQL service accounts with sysadmin rights: make sure the ones who keep it are RESPONSIBLE for the security of the production environment.

2. Secure the backup folder and wherever the backups are robocopied, taped, etc. Security can be enforced at the file server layer and at the backup layer itself by setting up transparent data encryption (something I can expand on if you like).

3. Secure your sensitive data: some data, such as credit card numbers and passwords, has to be protected. You can use application-layer encryption, a dedicated appliance for encrypting the data, or encryption at the data layer with built-in encryption and symmetric/asymmetric keys. This is something I can also talk about if needed.

4. Your application logins need limited permissions on the databases they connect to. There are many ways to minimize application login access. Here are the main ones:

4a. Restrict access to stored procedure execution only.

4b. Avoid cross-database queries and cross-database ownership chaining. If that is not possible, keep access at the stored procedure execution level, add EXECUTE AS OWNER, grant the owner AUTHENTICATE SERVER, mark the database as TRUSTWORTHY, and set up synonyms to the other database's tables. Make sure the owner of the source database does not have sysadmin permission, and add that database owner to the database role dedicated to this access on the other database.

4c. Forbid dynamic SQL as much as you can, especially if your stored procedures require EXECUTE AS OWNER to access other databases' tables.

4d. Avoid linked servers, not only for security, but also to keep a fully independent architecture and to avoid distributed transactions.

4e. Use Active Directory / LDAP security if possible, so that authentication and encryption follow a common, Microsoft-supported path.

4f. Make sure a DBA regularly audits the whole security policy. Ideally, your DBA is the only person allowed to deploy anything on the server and on the production database.

4g. Use schemas and database roles to set up permissions on a database "dynamically". Granting EXECUTE at the schema level gives your developers secured permissions on every stored procedure within that schema, so permissions are already in place and maintainability stays easy (see the sketch after this list).

4h. Cap data and log file sizes so that an attack on one database cannot make the other databases sharing the same resources unavailable. Obviously this requires monitoring and a DBA who can adjust the file sizes if necessary.

4i. Separate resources: back office development is usually not as tight as front office development, for several reasons, such as third-party software, different developers running all kinds of services, ETL "injections", and so on. Your front office databases can be at risk. It is better to separate instances, or even better, to use separate servers. Publication of back office data and extraction of front office data can be performed with a thorough architecture using replication, ETL tools, and customized SQL publication development (in my experience, publication of data is extremely frequent!). This keeps heterogeneous sources off the front office, and security becomes easier to set up.
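As mentioned in 4g, here is a minimal sketch of schema-scoped permissions through a database role. All names are placeholders:

-- Database user mapped to the application login
CREATE USER app_user FOR LOGIN app_login;

-- Role that may execute everything in the [app] schema, including procedures created later
CREATE ROLE app_exec;
GRANT EXECUTE ON SCHEMA::app TO app_exec;
EXEC sp_addrolemember 'app_exec', 'app_user';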

 

 

 

 

 

 

10 April 2010

Hello Guys,

 

You might wonder what is new in SQL Server 2008 R2 and whether it is really that fantastic.

Well, as usual, I cannot say Microsoft changed the world with this one either, but as good fast followers, they released some of the obvious features they needed to address to avoid being too late.

 

1. In-memory OLAP

One of the main features SQL Server 2008 R2 addresses, with Gemini / Excel 2010, is letting users take relational sources into Excel and build a real OLAP cube with PowerPivot. It accelerates the prototyping of new cubes and enables users like marketers, managers, and analysts to go faster, instead of waiting for an entire development life cycle to get the most out of their data.

Now, this is not magic either: you still need the resources to get the cube going, namely RAM and CPU for processing. PowerPivot in Excel can hold around 100 million rows, which is nice.

 

2. Optimistic unicode storage management

That might be the most underestimated feature of SQL Server 2008 R2. It speeds up everything related to reading and writing strings in columns that had to be switched to Unicode storage because of a few rows. Say your company primarily deals with non-Unicode countries, France for example, and then does a small amount, say 1% of its business, in Greece or Hungary, yet needs to use the same data model everywhere: you therefore need to change all your varchar storage to nvarchar storage... great!

This is annoying... very annoying. First you have to migrate your data from varchar to nvarchar, then the stored procedure parameters, all the variables, all the table-valued parameters, and so on and so forth.

Also, before SQL Server 2008 R2, that storage required twice the space... bummer. What if your storage was limited? What if your indexes suddenly jumped in size and reindexing took twice as long? This is very annoying... and all of that for just 1% of the business.

With this new release, Microsoft did improve that. Even though R2 takes a tiny bit more space for non-Unicode data stored in a Unicode column, it reduces the size needed for the Unicode datatype by roughly 40%.

In other words, imagine a database with a major table containing a very large varchar(512) column that takes about 5 GB of storage. Ouch. Suddenly you learn you need to migrate it to nvarchar(512) because your company is opening up business in Russia. You would need to find 5 more GB of space! Well, with SQL Server 2008 R2, the non-Unicode data will probably take only around 100 MB more, just to record the fact that the data is non-Unicode; so the migration needs roughly 100 MB instead of 5 GB (not counting any index on the column or a full-text catalog).

Now you get it: reading the data will also be much faster, as you perform far fewer I/Os than before.
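Under the hood, this relies on the Unicode compression that comes with row (or page) compression in 2008 R2. A minimal sketch, assuming a placeholder table dbo.Orders with an nvarchar(512) column (data compression requires Enterprise Edition):

-- Estimate the gain first
EXEC sp_estimate_data_compression_savings 'dbo', 'Orders', NULL, NULL, 'ROW';

-- Then rebuild the table with row compression to benefit from Unicode compression
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);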

 

3. Master data management

It is interesting and Microsoft tries its best, but this feature is not perfect. It is not very user-friendly and does not replace a good customized master data management system.

 

4. Logical CPU cores

R2 now supports 256 logical cores, up from 64. But to be honest, unless you really need that many processors, reaching that point means either your system is not scalable and your data tier is not optimized, or you have so many concurrent connections that you should be thinking about some kind of data-tier farm.

I guess 256 cores is nice when you think about data mining processing, so it might be good for BI.

 

5. Reporting services geospatial features

Reporting Services adds new geospatial features, so you can literally display a map of the United States and click on each state to get information such as sales, costs, etc. However, it takes quite a while to set up and is not very user-friendly. It is worth looking at, though.

 

There are other smaller features, such as the multi-server management tools.

 

Overall, SQL Server 2008 R2 is very interesting, especially for the new BI features, but migrating from SQL Server 2008 to 2008 R2 is definitely not worth it yet, if only because everybody should wait for the first service pack ;-)

 

 

 

 

10 April 2010

Hello Guys,

 

Ten years ago, people did not care much about database speed or availability. Databases were small, and e-commerce, online payment, and social communities did not exist. Performance and availability mattered somewhat for some companies, but a large portion of them did not care much: they could simply tell their employees that the database would be back up and running in a few hours. Performance-wise, obviously nobody could wait forever, but once again large databases were not that common.

I remember working for a company that waited all night, every night, for a batch to consolidate its accounting. Not that it was major work; I believe today the same job would take about 10-20 minutes.

 

Now we can say that machine-to-machine (telecommunications, for example) and people-to-machine (e-commerce, for example) workloads require extreme availability and performance. Additional technologies enrich the range of connection types, such as widgets and service farms. This heterogeneous behaviour makes database availability and performance a science.

 

Historically, the data tier has always been the centralized area where all the data that needs to be persisted, backed up, and secured resides. There is therefore a direct dependency on reading from and writing to disk. For DBAs, that is the main bottleneck for performance and availability: the storage area.

 

Other problems can happen, but they are all related to typical single points of failure: the operating system failing, the memory failing, the network failing, etc.

You can have a serious application requiring only a tiny portion of disk to run. Lately, rich websites need serious storage for images, large code files, and multimedia files, but technically these are all storage concerns as well.

 

The data tier could work the same way: bring all data into memory and thereby remove the impact of disk. All database vendors are working on this, whether they wanted to or not.

NoSQL and grid technologies like xkoto or Oracle RAC try to bring the persistent data into memory and treat it like a buffer, so that the "operational" database lives in the cache layer.

The SQL Server buffer cache and the SAN cache already do this to some extent, making reads much faster and ensuring somewhat better performance.

 

So there are two areas to take care of:

- reading or writing the data, either from cache or from disk

- the transactional part that keeps the data consistent

 

Reading and writing data can be done fairly easily: you have a standalone server and you are done. To keep the data available, you add replication or mirroring to a different server that acts as your partner. You can simplify the whole thing by clustering it, although even that solution leaves you with a single point of failure at the storage layer.

Writing the data, same.

 

Now that you have covered availability, you need to ensure your data is secured. Availability does not secure your data: it just makes sure the data will be reachable, but if it is garbage data, it will still be garbage data. Recently a client of mine was hit by SQL injection. You guessed it: no form validation on the client side, no validation on the server side, and a bunch of dynamic ad-hoc queries. Classic! Another client recently wanted to update one row out of millions but actually committed an update on all rows. Oops!

In those cases, availability does not help you. Great, you can still connect to your data, but it is all garbage! You need to know your tolerance for a disaster like that, and that is where your backup regime becomes handy. For both of those clients we were able to recover all the data correctly thanks to a well-supervised backup regime.

 

OK, let's say we have covered all the bases on that front: good availability, good disaster recovery strategy. What is next? Well, you still have performance to take care of, and this is where it gets complicated.

Why?

 

Because sometimes performance does not help availability or the disaster recovery plan, and vice versa.

 

Here are some clear examples:

1. You denormalize data to make it easier to query: more storage, more disk, more backup space, more network for the robocopy, more transaction log, etc.

2. You increase the need for indexing: same result.

3. You increase the need for reindexing: same result.

 

So how in the world do you improve performance without bringing your databases to their knees?

Approaches 1 and 2 work fine if you accept the consequences. If you add some resources on the memory, CPU, or SAN side, you are probably fine; those may simply be components you have to deal with.

 

But there are other areas to explore:

 

1. Federation of servers: get availability automatically by spreading the load over different servers. This is something I implemented very easily with telecommunication companies; in those companies we talk more about gateways. For high-volume data, we simply use several servers to handle the data. This gives availability, because if one gateway does not work we move on to another one. We secure the data by implementing a serious backup regime and a data warehouse, which keeps the front database tier very small: this improves availability, because fewer resources are needed, and performance, because there is less data to query no matter what the execution plan looks like.

 

2. Keep only operational data on your operational server and separate front office and back office data: some clients unfortunately do not separate the data that belongs to the back office from the front office. You should separate them, not only at the database level but ideally at the server level; it is simply best practice for database architecture. You do not want CLR or application code on your database server, because the server is not made to handle different types of technologies that compete for resources like CPU or memory. Generally your back office data will be rich and focused on running the business (data mining, billing, B2B client information, etc.); the front office should stay as light as possible.

 

3. Application tier: you can have as many layers as you want. In some ways it is good architecture to have a multi-tier environment, to make it scalable: it lets you switch servers easily and maintain each layer independently. However, it does not change the fact that, in e-commerce for example, your web navigation is critical. If you manage to reduce the dependencies of the most-viewed pages, you gain performance. Say there is a piece of data you need to show the client, but you know it changes all the time and is expensive to produce: try to make it available on a less-viewed page, i.e. make the client click for it rather than showing it automatically. I am not saying this is what you must do; the functional requirements of your application may demand that the data be available immediately. I am saying you need to seriously consider the performance implications of your application's navigation, from the web tier to the application tier to the data tier.

 

4. Virtualizing the data tier: as virtualization and cloud computing become very popular, this is definitely another performance and availability improvement. NoSQL projects are heading in this direction by hitting the data tier only when really needed. Grids go down to the database layer but try to spread the load across identical data servers: in other words, they read data from the server with the most available resources and write to the most available server, and then peer-to-peer replication (MySQL, for example) or command-based replication (xkoto, for example) takes care of the rest. Virtualization is, in some ways, another way to federate servers, so I can get behind that. However, it means relying on an additional abstraction layer that may be a black box for you.

 

5. Data partitioning: you can obviously store data, even within the same table, across different RAID arrays, spreading the load more evenly whether you write or read (see the sketch after this list).

 

6. Orient your database model towards performance. The debate around row store vs. column store, or relational vs. non-relational databases, matters a lot for performance. Lately, referential integrity has become less important than performance. Think about it: you delete a member of your website, but because you want the operation to be fast you only delete the core information (the row in the primary table). Technically your member is gone, even if orphaned data remains, and a background cleanup process can remove that now-useless data later. For an inexperienced DBA this is hard to accept, since referential integrity is one of the fundamentals, but believe me, you have to adapt your database to the situation. After all, some would say a database is just a way to store data in a useful order.
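As a complement to point 5, here is a minimal table-partitioning sketch (an Enterprise Edition feature); the filegroup, table, and column names are placeholders, and the filegroups are assumed to exist already, each sitting on a different RAID array:

CREATE PARTITION FUNCTION pf_order_date (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-04-01');

CREATE PARTITION SCHEME ps_order_date
    AS PARTITION pf_order_date TO (FG_2009, FG_2010_Q1, FG_2010_Q2);

CREATE TABLE dbo.Orders
(
    order_id   bigint   NOT NULL,
    order_date datetime NOT NULL,
    amount     money    NOT NULL,
    CONSTRAINT pk_orders PRIMARY KEY CLUSTERED (order_date, order_id)
) ON ps_order_date (order_date);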

 

Of all these solutions, I prefer the federation of servers, and I think it can be applied to most industries. Even if e-commerce may require very large "lookup" data, such as member data, you can still federate data like orders, payments, emails sent, etc. I usually advise clients to segment their databases and servers by website, by country for international sites, by alphabetical range of the logins, and so on.

It is always possible to segment your data this way.

 

Any comments would be appreciated :-)

 

Clement

 

 

 

 

 

 

7 April 2010

Hello Guys,

It is usually pretty straightforward to read an execution plan: to understand why a query uses one index instead of another, to understand the need for a nested loop or a hash join, etc.

It is usually....

But then come... the horrible tables! Usually you end up as the firefighter at a client that developed a "monster" table (that is usually what the client calls it), and you look at all the queries hammering the server with I/O, CPU, memory swapping, etc. The catastrophe!

Then you look closer and realize the table is millions of rows long and 250 columns wide, with, just to make it fun, a large number of text, image, or other exotic datatype columns.

There are obvious improvements to make on the query side, such as:

1. Make sure the client does not do a "SELECT *" or select useless columns that do not reside in the index being scanned or sought, ultimately causing a very large key lookup made worse by the poor page density.

2. Make sure the drive holding the data and index files is properly formatted (cluster size and disk alignment).

3. Make sure the select runs only once or twice per application page instead of thousands of times, especially if the execution plan is not reused.

 

All of that is fine, but sometimes you realize that SQL Server decides to use one index instead of another. It can look random, but it is usually based on the number of I/Os it estimates it will read versus the memory it expects to use.

Also, on large tables the estimates get worse, and forcing indexes can be the solution for better performance, as sketched below. You can definitely see the difference by running before-and-after comparisons in the profiler. SQL Server Profiler is your empirical friend ;-)
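For example, a minimal sketch of an index hint; the table, index, and column names are placeholders:

-- Force a specific index when the optimizer's estimates go wrong on a very wide table
SELECT order_id, order_date, status
FROM dbo.MonsterTable WITH (INDEX (ix_monstertable_order_date))
WHERE order_date >= '20100401';

Use this sparingly: a hint that is right today may be wrong once the data distribution changes.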

Bottom line is:

- Try to avoid BLOB/in-row LOB data if possible, or at least put it in a different table to minimize storage within the same clustered index.

- Try to increase page density: it keeps your indexes less fragmented during large update and delete operations and makes the DBA's work much more conventional. Execution plans are estimated correctly, fragmentation grows organically and stays easy to maintain, etc.

The worst, and it happens to too many clients, is when they combine very low page density on a large table with a random-GUID clustered index. That is the worst case. You do not even have a way to partition the table correctly (and by the way, most of those clients cannot afford Enterprise Edition anyway).

Hopefully, and this is a shout-out to clients, you let the DBA create the data model before your developers go haywire with monster tables!

 

 

 

6 April 2010

Hey,

This is my first article. My friends over on Facebook commented on my blog and asked for a first post. The challenge is set: try to write an article frequently (I wish daily, but that is too presumptuous; plus, I am about to have a baby and I am not sure I will be able to focus on the blog).

This blog is not intended to show how to implement things. I can definitely help people with that, no problem, but I want the blog to stay lightweight and focus on architecture rather than implementation.

So the first topic is transactional replication, requested by my good friend John Park, with whom I spent several poker nights and football-and-beer evenings at the fifth bar down the rue Mouffetard... good times!

This is an interesting topic. Too often, companies completely misjudge the use of replication. Replication is seen as a way to achieve high availability; it is not a high availability solution, far from it!

Replication should be used to spread information across SQL Servers. A typical need is between a SQL Server handling the back office / configuration of a website and completely denormalized, ready-to-rock front office servers.

The nice thing about replication is that you do not have to worry about writing any code to transfer the data. Data is transferred by simple techniques anyone can implement: bcp and stored procedures to prepare the snapshot and the transactional data, a log reader at the distributor to control the overall flow, and local procedures at the target so that no distributed transactions are needed. Overall, a good mechanism.

Obviously this comes with drawbacks, and one of the main ones is administration. Replication is super easy to implement, but an inexperienced DBA will have a hard time maintaining it. Large reindexing jobs, transactions piling up at the source server, or any incorrectly sized log file can mean trouble! Such a DBA will end up breaking the replication and rebuilding it, when there are techniques to release the pressure, republish a single article, and so on.

Servers and databases should be sized correctly before implementing it, and if replication is very heavy, the distributor should be moved to a dedicated server.

Another aspect systematically overlooked is the stickiness of replication. Some DBAs at smaller companies implement replication correctly, architecturally speaking, but want very easy administration on the source server, so they put the database in the simple recovery model, thinking it will be easier to maintain since no point-in-time restore or log backup is needed. Probably a fair call. But replication does not allow the log to be truncated while transactions are still marked for replication, and the DBA has to be aware of this particularity: when the application writes data faster than the transactions can be buffered and delivered by the distributor, the log will keep growing until the pressure is released.

Another neglected aspect is that snapshot replication, as opposed to continuous transactional replication, is heavy: the bigger your database, the bigger the snapshot. It is probably safe to say that in most circumstances continuous transactional replication is your best bet.

Another important architectural piece of advice: replicate only your tables, not your stored procedures. You cannot freely change a stored procedure at the source when you do not want that change applied at the target immediately. Just use a simple development lifecycle and take the time to deploy procedures in packages.

Another piece of advice: put your articles (the tables, then) in a separate database on the subscriber and point to those tables with synonyms from the real target database (see the sketch below). This lets you keep your connections to the subscribed articles, and if you need to reinitialize the snapshot you can do it with almost no downtime by creating a new subscription in another database and switching the synonyms at the last moment. Also, a separate database can use the simple recovery model even when your target database has to be in full recovery; replicated data is just redundant data that does not need full recovery at the subscriber.
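A minimal sketch of that synonym switch; the database and table names are placeholders:

-- Replicated articles live in ReplicaDb; the application database only sees synonyms
USE FrontOfficeDb;
CREATE SYNONYM dbo.Customers FOR ReplicaDb.dbo.Customers;

-- Once a new snapshot has been delivered to ReplicaDb_v2, repoint at the last moment
DROP SYNONYM dbo.Customers;
CREATE SYNONYM dbo.Customers FOR ReplicaDb_v2.dbo.Customers;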

Well, I will let you comment a bit on this first article. As you have noticed, I am not the kind of DBA who really enjoys replication. My main client, a very visible one, something of a flagship customer for Microsoft, implemented replication everywhere, even for Informatica and PowerExchange, which rely on replication to extract real-time data (I cannot believe a third-party provider makes Microsoft rely on yet another layer!). The thing is, this client has an open case with Microsoft on a bug concerning the purge of the log reader, and it has been open far too long, around six months. Microsoft comes and goes from the office, making good money out of it, but cannot fix the issue. It has to be said: very large replication is not handled very well by Microsoft.

So here I will say it as well: use replication for configuration data, but for data measured in terabytes, forget it or you will suffer! Better to use simple data warehouse extraction techniques.

 

 

 

 

 

 

5 April 2010

Clement Lyon

Hello to all folks interested in the data tier!

Given the importance of networking and the internet, I decided to create this blog to interact with my fellow DBA friends and friends-to-be.

I am an independent senior SQL Server consultant who advises companies of all sizes on how to develop secure and scalable storage solutions for their most important database-related projects.

I particularly like working with all kinds of industries, as it is always interesting to see how different the storage solution can be, whether you have a very visible front end such as e-commerce or very busy server-to-server traffic such as telecommunications or the payment industry.

Lately I have been focusing on how to store exotic data, like videos, XML, geolocation, bitmask, and hierarchy data, in relational or non-relational data tiers. I am also looking at how to build true geographic active-active redundancy at the data tier. Architects rightly emphasize that high availability does not go well with performance: the more available your data, the more resources you use to spread it out and minimize all single points of failure. And if you want to reduce the performance bottleneck, you will need to invest in development, software, or hardware.

The data tier can no longer stay centralized if we want to ensure both performance and availability. Web and application tiers already store their code in different locations, and the database should follow. Questions about latency will shape your data-tier architecture.

This blog will therefore focus on high data volumes, non-relational key-value / column stores, in-row and LOB data storage, data compression, I/O, etc.: all the effective sources of optimization we need to focus on at the data tier.

Talk to you soon folks!
