
Showing posts from 2014

Rainbow in the Cloud

Cloud technologies burst into our lives without asking our permission. At the beginning, many of us thought the only things they would bring were thunder and rain. Time has shown that, instead, they might even bring some color to our skies. Today, during the first-day keynote of the 16th PASS Summit, Microsoft representatives T.K. "Ranga" Rengarajan (Corporate VP, Data Platform, Cloud & Enterprise), James Phillips (General Manager, Power BI), and Joseph Sirosh (Corporate VP, Information Management & Machine Learning) took the stage one after another to talk about the Microsoft Data Platform and what it can offer in the cloud space. They talked about the explosion of all kinds of data devices. Those devices produce enormous amounts of data, and many also consume enormous amounts of data. This data is changing the way we work, the way we do business, and the way we live. We all depend on data to make decisions. The Microsoft Data Platform allows us to do…

Using Google Charts API to Visualize Schema Changes

Last week I worked on a new email report using Google Charts and liked it so much that I decided to share it here with anyone who finds it useful. I have a Schema Changes Audit table that is maintained by a DDL trigger; a record is added to this table every time anyone changes an object on the server. Here is a report that I generated with Google Charts, easily and absolutely for free. If you are not familiar with Google Charts, you can read my old post about how it works here. It is easy to use, very customizable, and FREE. The visualization above uses a Google Bar Chart. To keep things short, I am using Transact-SQL to build an HTML image tag. The email body above contains this HTML: http://chart.apis.google.com/chart?cht=bvg&chs=660x250&chco=CF9036,90062D,67E13B,82088D,319CBA&chd=t:0,1|1,3|3,2|18,5|4,8&chds=0,19&chxt=x&chxs=0,ff0000,12,0,lt&chxl=0:|Sep%202|Sep%…
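For anyone who wants to reproduce the idea, here is a minimal Transact-SQL sketch of the approach: aggregate daily change counts from the audit table and concatenate them into a Google Image Chart URL wrapped in an <img> tag. The table and column names (dbo.SchemaChangesAudit, EventDate) are placeholders for the illustration, not the exact schema from the post.

```sql
-- Minimal sketch: build a Google Image Chart <img> tag from a schema-changes audit table.
-- Assumed table: dbo.SchemaChangesAudit(EventDate DATETIME, ...) populated by a DDL trigger.
-- Assumes at least one change was recorded in the last 7 days.
DECLARE @data   VARCHAR(MAX) = '',
        @labels VARCHAR(MAX) = '',
        @maxVal INT;

-- Highest daily count, used to scale the chart axis (chds).
SELECT @maxVal = MAX(cnt)
FROM (SELECT COUNT(*) AS cnt
      FROM dbo.SchemaChangesAudit
      WHERE EventDate >= DATEADD(DAY, -7, GETDATE())
      GROUP BY CONVERT(DATE, EventDate)) AS daily;

-- Concatenate per-day counts and axis labels (variable-assignment concatenation).
SELECT @data   = @data + CAST(COUNT(*) AS VARCHAR(10)) + ',',
       @labels = @labels + LEFT(CONVERT(VARCHAR(12), CONVERT(DATE, EventDate), 107), 6) + '|'
FROM dbo.SchemaChangesAudit
WHERE EventDate >= DATEADD(DAY, -7, GETDATE())
GROUP BY CONVERT(DATE, EventDate)
ORDER BY CONVERT(DATE, EventDate);

-- Assemble the chart URL (cht=bvg is a vertical bar chart) inside an <img> tag.
SELECT '<img src="http://chart.apis.google.com/chart?cht=bvg'
     + '&chs=660x250'
     + '&chd=t:'  + LEFT(@data, LEN(@data) - 1)
     + '&chds=0,' + CAST(@maxVal AS VARCHAR(10))
     + '&chxt=x'
     + '&chxl=0:|' + REPLACE(LEFT(@labels, LEN(@labels) - 1), ' ', '%20')
     + '" />' AS ChartImg;
```

The resulting string can then go straight into the HTML body of the report email, for example as the @body of sp_send_dbmail with @body_format = 'HTML'.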

Memory is a new disk

In the database world, disk-based data stores are slowly being replaced by memory-based data stores. Memory prices are becoming more affordable, and operational databases can often fit entirely into memory. According to Gartner (a research company that provides independent technology reports), by the end of 2015 all enterprise DBMSs will use memory optimizations, and most of this transformation will happen this year. I am playing with Hekaton tables these days and thinking about which of my customers they might be relevant to. Most of them prefer the new stuff and are quite eager to put new terms on their CVs. They like to say, “Why SQL Server? Everyone is using Redis as an in-memory database, it’s free and blazingly fast. And the other department is using a Couchbase cluster. We don’t want to stay behind…” In such situations I need to step outside of the wardrobe where I’m hiding and peek around. The DBMS market keeps growing and many great new technologies are being introduced…
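For reference, this is roughly what a memory-optimized (Hekaton) table looks like in SQL Server 2014; the database, filegroup, file path, and table names below are made up for the illustration.

```sql
-- A memory-optimized data filegroup must exist before creating Hekaton tables
-- (database, filegroup, and path names here are illustrative only).
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_InMemory_Data', FILENAME = 'D:\Data\SalesDB_InMemory')
    TO FILEGROUP SalesDB_InMemory;
GO

-- The table lives entirely in memory; DURABILITY = SCHEMA_AND_DATA keeps the rows
-- recoverable after a restart (SCHEMA_ONLY would keep only the structure).
CREATE TABLE dbo.ShoppingCart
(
    CartId     INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    CreatedAt  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```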

The Distributor. Think and Rethink Everything.

The key player in a transactional replication topology is the Distributor. Misconfiguration of the Distributor can increase the load on the production server, interfere with regular application activity, and even cause production databases to become inaccessible. In addition, its configuration greatly impacts replicated data latency. Changing the Distributor’s configuration after replication has been set up can be problematic and can impact application activity. The main aspects to consider when planning a replication design are:
- Number of publications on each Publisher in the future replication topology
- EPS in the replicated tables
- Number of planned Publishers
- Distance between Publisher, Distributor, and Subscribers
- Number of Subscribers for each publication
- Size of the published databases
Most of the answers to the above questions will lead to the decision whether you want to have a dedicated Distributor server or configure the Publisher or the…
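For context, the actual configuration of a Distributor comes down to a handful of system stored procedures. A minimal sketch, in which the server names, folders, share, and password are placeholders, looks roughly like this:

```sql
-- Minimal sketch of configuring a Distributor (run on the Distributor server).
-- Server names, folder paths, the share, and the password are placeholders.
USE master;
GO
EXEC sp_adddistributor
     @distributor = 'DIST-SRV01',
     @password    = N'StrongDistributorPassword!';

EXEC sp_adddistributiondb
     @database      = N'distribution',
     @data_folder   = N'D:\SQLData',
     @log_folder    = N'E:\SQLLogs',
     @security_mode = 1;          -- Windows authentication

-- Register each Publisher that will use this Distributor.
EXEC sp_adddistpublisher
     @publisher         = 'PUB-SRV01',
     @distribution_db   = N'distribution',
     @working_directory = N'\\DIST-SRV01\ReplData',
     @security_mode     = 1;
```

Whether these calls are executed on a dedicated Distributor server or on the Publisher itself is exactly the design decision the planning questions above are meant to settle.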