


Tuesday, October 30, 2012

Special people use Bulk logged recovery mode.


Who uses Bulk Logged?
Special people use the Bulk Logged recovery model, or so I have inferred from word on the street and from Microsoft's Books Online explanation. Bulk Logged is now referred to as a "special-purpose" recovery model for versions 2008 R2 and above. This may be the one simple explanation that could have demystified Bulk Logged years ago for DBAs who chose to overlook it because they didn't know what it was. Now you know: it's for your "special purpose". What special purpose? Steve Martin explains in the movie The Jerk: "Well, when I was a kid my mom told me... there goes my special purpose! And someday I'd find out what my special purpose was!"
Today's the day!

Your Special Purpose
The Bulk Logged recovery model logs fewer transactions (minimal logging) to the log file than the Full or Simple recovery models do. While a database is in Bulk Logged, the MSSQL engine handles bulk operations differently than the other two recovery models do. Bulk Logged exists because big data loads take up space and use the log file in ways the DBA may not want. For instance, you may have finite space for your log, or you may have grown your log in a deliberate fashion for best performance. Whatever the case, you don't want a bulk load to take additional space or fragment the growth of your log file.

When a qualifying bulk operation is detected, MSSQL refrains from logging the individual rows within that operation to the log file. The engine carries the operation out by placing the data directly into the database and marking only minimal metadata about the event in the log file. I will get into the details of the why and how later, so let's get back to your "special purpose". "Special purpose" indicates that Bulk Logged is not a long-term recovery model, although you may find a standing case for it in your environment, especially if it includes a data warehouse. What Microsoft is trying to say is that Bulk Logged is meant to be turned on in special situations and then turned off when the special situation is complete. If you are loading 30GB of data into your database and you have a log file with 10GB of space for growth, you can change your recovery model to Bulk Logged before the operation and save the log file. It can also minimize I/O on the log file in cases where the log file needs to stay responsive while a bulk operation is in progress. In any case, when you know you have a special situation and it is killing your log file, you have the option of switching your recovery model to Bulk Logged.

There are some caveats to Bulk Logged, of course, but you can weigh whether they are acceptable in your environment before deciding to use it. I will detail the risks later.
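
The pattern above can be sketched in T-SQL. This is a minimal sketch; the database name and backup paths are placeholders for illustration:

```sql
USE master;

-- Switch to BULK_LOGGED just before the big load.
ALTER DATABASE SalesDW SET RECOVERY BULK_LOGGED;

-- Take a log backup so the bulk operation starts from a known point.
BACKUP LOG SalesDW TO DISK = N'D:\Backups\SalesDW_prebulk.trn';

-- ... run the 30GB load here (BULK INSERT, SELECT INTO, index rebuild, etc.) ...

-- Switch back once the special situation is over.
ALTER DATABASE SalesDW SET RECOVERY FULL;

-- Back up the log immediately; this backup pulls in the bulk-changed extents
-- and restores point-in-time capability going forward.
BACKUP LOG SalesDW TO DISK = N'D:\Backups\SalesDW_postbulk.trn';
```

Taking a log backup immediately before and after the switch keeps the window in which point-in-time restore is unavailable as small as possible.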

Why Bulk Logged
The Bulk Logged recovery model is a way to get around a core design element of MSSQL, the one that provides recoverability in every case by logging transactional I/O to the log file. Someone at Microsoft must have realized that this recoverability can at times be a burden for bulk operations, but they did not call it out for what it truly is; they have, however, recently upgraded it to a "special-purpose" recovery model. I may get some push back from the community for spelling it out this way, but that is truly what is going on: we are circumventing the "log everything" function of MSSQL. Let's look at those special purposes and spell them out so there is less confusion than there has been over the past decade. These are the bulk operations that will be minimally logged in your log file if the recovery model is set to Bulk Logged:
1) Bulk import operations carried out by the bcp utility. http://msdn.microsoft.com/en-us/library/ms162802(v=sql.105).aspx
2) BULK INSERT and INSERT ... SELECT FROM OPENROWSET(BULK ...) statements.
3) SELECT INTO.
4) WRITETEXT and UPDATETEXT when adding new text, ntext, or image data. (see below)
5) Index rebuilds via ALTER INDEX REBUILD or DBCC DBREINDEX. (see below)
There are some cases where bulk transactions may still be fully logged under Bulk Logged, most notably when transactional replication is working against the table. You can get the details here: http://msdn.microsoft.com/en-us/library/ms190422(v=sql.105).aspx
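
Two of the more common operations from the list look like this in T-SQL. A sketch only; table names and the data file path are illustrative, and minimal logging also depends on prerequisites such as the target not being replicated and, for a table that already holds data, a table lock:

```sql
-- Bulk import from a flat file; TABLOCK is one of the minimal-logging prerequisites.
BULK INSERT dbo.StageOrders
FROM 'D:\Loads\orders.dat'
WITH (
    TABLOCK,
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n'
);

-- SELECT INTO creates and loads a new table and is also minimally logged.
SELECT *
INTO dbo.OrdersCopy
FROM dbo.StageOrders;
```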

How Bulk Logged works
When a bulk operation is detected and it meets all the criteria for minimal logging, the MSSQL engine marks in the log file that a bulk operation has begun. During the bulk operation, data is written to the table's pages but not to the transaction log. Instead, the engine relies on a page called the Bulk Changed Map (BCM); this page contains a bit for each extent that was changed during the bulk operation. When a transaction log backup occurs after the bulk operation, the engine pulls each extent the map flags as having been affected by the bulk operation into the transaction log backup file. So, to an extent (pun intended), it moves the data that would have hogged space in your transaction log into the transaction log backup file. MSSQL then uses these pages in your backup file during a transaction log restore to re-apply the bulk operation. Well, isn't that convenient?
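
One quick way to see this behavior for yourself is to compare log usage around the bulk operation. A rough sketch; run it in the context of the database you are loading:

```sql
-- Report "Log Space Used (%)" for every database before the load.
DBCC SQLPERF(LOGSPACE);

-- ... run the bulk operation here ...

-- Check again: under BULK_LOGGED the log growth should be far smaller
-- than under FULL for the same load.
DBCC SQLPERF(LOGSPACE);
```

It is the next transaction log backup, not the log file itself, that absorbs the bulk-changed extents, so expect that backup file to be large.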

Bulk Logged Notes
Restores:
The thing to remember is that with Bulk Logged, some transactions are not fully logged, so you lose some ability to do a point-in-time restore. In addition, because the trans log backup stores pages, you must restore the entire trans log backup file if it contains a bulk operation. This may not be acceptable if your RPO is one hour and the bulk operation or log backup spans more than one hour, so do your homework before presenting Bulk Logged as an option.
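
The restriction shows up at restore time. A sketch of the restore sequence, with placeholder database and file names; the STOPAT restore succeeds only when the log backup being restored contains no minimally logged changes:

```sql
RESTORE DATABASE SalesDW
    FROM DISK = N'D:\Backups\SalesDW_full.bak'
    WITH NORECOVERY;

-- Point-in-time restore: SQL Server refuses STOPAT into a log backup that
-- holds bulk-logged changes; that backup can only be restored in full.
RESTORE LOG SalesDW
    FROM DISK = N'D:\Backups\SalesDW_postbulk.trn'
    WITH STOPAT = '2012-10-30 14:00', RECOVERY;
```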
Disk Space:
Keep in mind that the space you have allocated for backups will need to accommodate the additional data in your trans log backups. Well, kind of: you may have been storing these transactions under the Full model anyway.
Read only:
If your database is normally read-only and is put back into read-only after your bulk operation, each trans log backup will continue to pull in the changed pages. This is because the backup operation cannot clear the bits that indicate the pages were altered. Truth be told, if you have a read-only database, why are you doing trans log backups? I thought I would mention it anyway.
Replication:
With transactional replication, SELECT INTO and BULK INSERT are fully logged because the changes need to be moved to the subscriber, and there is no way around it.
Writetext and Updatetext:
Statements that alter existing text, ntext, and image data are fully logged; however, inserting new data of the same types is not. The same goes for the .WRITE clause in UPDATE statements.
Indexes:
Using Bulk Logged during index rebuilds via DBCC DBREINDEX or ALTER INDEX REBUILD will minimize logging in your log file. This occurs whether it is an online or offline operation.
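
For example, both of these forms qualify while the database is in Bulk Logged; table and index names are illustrative:

```sql
-- Current syntax (2005 and later):
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;

-- Legacy equivalent mentioned above:
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_OrderDate');
```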

Friday, September 14, 2012

SQLsaturday #160



SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. This event will be held Sep 22 2012 at Kalamazoo Valley Community College, 6767 West O Avenue, Kalamazoo Township, MI 49009.

Admittance to this event is free, but we do charge a lunch fee of $10.00 so that we can provide a lunch - not pizza! Please register soon as seating is limited, and let friends and colleagues know about the event.
My Schedule: Build your own schedule HERE
8:00 AM Gathering Tchotchkes as fast as possible. (Reminder: Get your PASS bag first or wear cargo pants.)
8:30 AM Opening Ceremonies
09:00 AM Brent Ozar
Building Faster SQL Servers
SQL Server speed boils down to how fast we can get data in and out of storage. Sooner or later, we're going to run out of memory to cache. In this session, we take a lesson from Microsoft's blazing-fast Fast Track Reference Architectures, look at why they work so well for the wildest queries, and explain how to build our own SQL Servers for quick queries. Brent Ozar shows you how to test your own servers and compares them to servers he's worked with in the field.
10:15 AM Luke Jian
Anatomy of a Join
A lot of database professionals today do not have a Computer Science degree, so they never attended an “Introduction to Databases” course. In this session we try to fix at least one thing: we’ll look in depth at the three types of join operations, visualize how their algorithms work in order to understand how query plans are computed, and explain why these join operations have very different performance characteristics and why the optimizer chooses a specific join operator for a query plan. We will demonstrate that empirical cost calculations are similar to what the Query Optimizer actually outputs. A good understanding of join algorithms is vital in order to diagnose and remedy issues related to bad query plans.
11:30 AM - 1:30 Sweeping floors
01:30 PM Rob Kerr
BI1 Big Data Analytics - Hadoop and Microsoft BI
This session is a deep dive that will show you how to use the Hadoop platform to extend traditional Microsoft BI solutions. The first half of the session covers big data architectures, the MapReduce process and strategies for integration of Hadoop data with traditional BI tools. The second half of the session demonstrates how to combine Hadoop, SQL Server Analysis Services, Power View and SharePoint 2010 into an end-to-end solution.
02:45 PM Benjamin Nevarez
Dive into the Query Optimizer-Undocumented Insight
This 400 (or maybe 500) level session will focus on using undocumented statements and trace flags to get insight into how the query optimizer works and show you which operations it performs during query optimization. I will use these undocumented features to explain what the query optimizer does from the moment a query is submitted to SQL Server until an execution plan is generated including operations like parsing, binding, simplification, trivial plan, and full optimization. Concepts like transformation rules, the memo structure, how the query optimizer generates possible alternative execution plans, and how the best alternative is chosen based on those costs will be explained as well.
4:00 PM Norman Kelm
PowerShell, the New SQL Hammer
You've seen all the amazing scripts that use PowerShell, but writing your own is raising more questions. This session will help fill in the gaps by explaining all the moving parts of PowerShell 2.0, the integration with SQL Server and answer the following questions as well as others. Why is Invoke-Sqlcmd necessary? What is and why is there a Minishell for SQL Server? What makes the SQLSERVER: PS Drive so powerful? How does a remote SQL Server get added to the SQLSERVER: PS Drive?
5:15 PM Closing and Raffle

How to: MSSQL 2008 (R2) Edition Upgrade

by Paul Olson
To upgrade to a higher edition of SQL Server 2008 (R2)
1. Insert the SQL Server installation media. From the root folder, double-click setup.exe or launch the SQL Server Installation Center from Configuration Tools. To install from a network share, locate the root folder on the share, and then double-click Setup.exe.
2. To upgrade an existing instance of SQL Server 2008 to a different edition, from the SQL Server Installation Center click Maintenance, and then select Edition Upgrade.
3. If Setup support files are required, SQL Server Setup installs them. If you are instructed to restart your computer, restart before you continue.
4. The System Configuration Checker runs a discovery operation on your computer. To continue, click OK.
5. On the Product Key page, select a radio button to indicate whether you are upgrading to a free edition of SQL Server, or whether you have a PID key for a production version of the product. For more information, see Editions and Components of SQL Server 2008 and Version and Edition Upgrades.
6. On the License Terms page, read the license agreement, and then select the check box to accept the licensing terms and conditions. To continue, click Next. To end Setup, click Cancel.
7. On the Select Instance page, specify the instance of SQL Server to upgrade.
8. The Edition Upgrade Rules page validates your computer configuration before the edition upgrade operation begins.
9. The Ready to Upgrade Edition page shows a tree view of installation options that were specified during Setup. To continue, click Upgrade.
10. During the edition upgrade process, the services need to be restarted to pick up the new setting. After the edition upgrade, the Complete page provides a link to the summary log file for the edition upgrade. To close the wizard, click Close.
11. After installation, the Complete page provides a link to the summary log file for the installation and other important notes. To complete the SQL Server installation process, click Close.
12. If you are instructed to restart the computer, do so now. It is important to read the message from the Installation Wizard when you are done with Setup. For information about Setup log files, see How to: View and Read SQL Server Setup Log Files.
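
The same edition upgrade can be run unattended from the command prompt using the documented Setup parameters. A sketch; the instance name and product key are placeholders:

```
setup.exe /q /ACTION=EditionUpgrade /INSTANCENAME=MSSQLSERVER /PID=<your product key> /IACCEPTSQLSERVERLICENSETERMS
```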



SQL Prime Philosophy


by Paul Olson

The SQL Prime philosophy is defined as a concept of high performance supported by an expanding knowledge that embraces the concept of the unity of the SQL structure and function.
These are the four major principles of SQL Prime:
1. The SQL Server is a unit, an integrated unit of Resource, Design, and Function.
2. SQL possesses self-regulatory mechanisms, having the capacity to defend, repair, and optimize itself.
3. Structure and function are reciprocally inter-related.
4. Rational design is based on consideration of the first three principles.
These principles are not held by DBAs to be empirical laws; they serve, rather, as the underpinnings of the philosophy of SQL Server administration and design.

Thursday, September 6, 2012

SSRS: Dynamic Columns


Not knowing how many columns will be returned for any given report at runtime is a problem. At least you would think! The easy approach: make the column names dynamic.
Most of the time we lay out our reports with static columns and populate them with returned values. In this case we are also populating the column names with returned values. The Matrix layout is the way to go here because of its physical column/row setup.
In this case we are using QuestionDisplay as the column header and PrimaryAnswer as the value in the row for that column. The report will create columns based on the number of distinct values returned for QuestionDisplay. We then use SELECT DISTINCT to manage this sprawl.

The DataSet Query:
SELECT DISTINCT
    vw_SurveySummary.SurveyID,
    vw_SurveySummary.SurveyName,
    vw_SurveySummary.SectionOrder,
    vw_SurveySummary.SectionName,
    vw_SurveySummary.QuestionNumber,
    vw_SurveySummary.QuestionSubNumber,
    vw_SurveySummary.QuestionDisplay,
    vw_SurveySummary.QuestionRank,
    vw_SurveySummary.QuestionSummary,
    vw_SurveySummary.QuestionMultiAnswer,
    ISNULL(vw_AnswerDetail.UnitCode, '') AS UnitCode,
    ISNULL(vw_AnswerDetail.PrimaryAnswer, '') AS PrimaryAnswer
FROM ICQsurvey2.dbo.vw_SurveySummary vw_SurveySummary
LEFT OUTER JOIN ICQsurvey2.dbo.vw_AnswerDetail vw_AnswerDetail
    ON vw_SurveySummary.QuestionID = vw_AnswerDetail.QuestionID
WHERE SurveyName = '2011 ICQ' AND QuestionNumber IS NOT NULL

Results Returned:

Creating the Report:

I typically use the report wizard as a starting place for a new report.
The grouping will make all the difference here; follow these guidelines:
  • Group the row by your filter, UnitCode in this example.
  • Group your column by your header value, QuestionDisplay.

Results:
The values in your row under each column will correspond to the line values that were returned.

Change Owner of an Object in SQL Server


So I have this custom app that was built by an unknown developer a few years ago, and it is mission critical! Oh, and great news: it assigns ownership of db objects with CREATE LOGIN! Now a new user needs the account turned over to them, and nobody knows who supports this app. I figured it out, but it took some digging, and it turns out to be as simple as changing object ownership. Now, who has a valid script to do this in MSSQL 2000?
There are many scripts out there to change the object owner for MSSQL. I have found issues with each one that I have tried, so I cleaned up some scripts for MSSQL 2005-2008 and 2000 and posted them here:

CREATE SP - Exchange Object Owner
IF exists (select * from sysobjects where id = object_id(N'[dbo].[sp_ExchangeOwner]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[sp_ExchangeOwner]
GO
SET QUOTED_IDENTIFIER  OFF    SET ANSI_NULLS  ON
GO
CREATE proc sp_ExchangeOwner( @usrName varchar(20), @newUsrName varchar(50))
as
-- @usrName is the current user
-- @NewUsrName is the new user
set nocount on
declare @uid int                   -- UID of the user
declare @objName varchar(50)       -- Object name owned by user
declare @currObjName varchar(50)   -- Checks for existing object owned by new user
declare @outStr varchar(256)       -- SQL command with 'sp_changeobjectowner'
set @uid = user_id(@usrName)
declare chObjOwnerCur cursor static
for
select name from sysobjects where uid = @uid

open chObjOwnerCur
if @@cursor_rows = 0
begin
  print 'Error: No objects owned by ' + @usrName
  close chObjOwnerCur
  deallocate chObjOwnerCur
  return 1
end
fetch next from chObjOwnerCur into @objName
while @@fetch_status = 0
begin
  set @currObjName = @newUsrName + "." + @objName
  if (object_id(@currObjName) > 0)
    print 'WARNING *** ' + @currObjName + ' already exists ***'
  set @outStr = "sp_changeobjectowner '" + @usrName + "." + @objName + "','" + @newUsrName + "'"
  print @outStr
  print 'go'
  fetch next from chObjOwnerCur into @objName
end
close chObjOwnerCur
deallocate chObjOwnerCur
set nocount off
return 0
GO
SET QUOTED_IDENTIFIER  OFF    SET ANSI_NULLS  ON
GO


MSSQL 2000 Version
if exists (select * from sysobjects where id = object_id(N'[dbo].[sp_ExchangeOwner]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[sp_ExchangeOwner]
GO

SET QUOTED_IDENTIFIER  OFF    SET ANSI_NULLS  ON
GO
CREATE proc sp_ExchangeOwner( @usrName varchar(20), @newUsrName varchar(50))
as
-- @usrName is the current user
-- @newUsrName is the new user
set nocount on
declare @uid int                   -- UID of the user
declare @objName varchar(50)       -- Object name owned by user
declare @currObjName varchar(50)   -- Checks for existing object owned by new user
declare @outStr varchar(256)       -- SQL command with 'sp_changeobjectowner'
set @uid = user_id(@usrName)
declare chObjOwnerCur cursor static for
select name from sysobjects where uid = @uid

open chObjOwnerCur
if @@cursor_rows = 0
begin
  print 'Error: No objects owned by ' + @usrName
  close chObjOwnerCur
  deallocate chObjOwnerCur
  return 1
end

fetch next from chObjOwnerCur into @objName
while @@fetch_status = 0
begin
  set @currObjName = @newUsrName + "." + @objName
  if (object_id(@currObjName) > 0)
    print 'WARNING *** ' + @currObjName + ' already exists ***'
  set @outStr = "sp_changeobjectowner '" + @usrName + "." + @objName + "','" + @newUsrName + "'"
  print @outStr
  print 'go'
  fetch next from chObjOwnerCur into @objName
end
close chObjOwnerCur
deallocate chObjOwnerCur
set nocount off
return 0
GO
SET QUOTED_IDENTIFIER  OFF    SET ANSI_NULLS  ON
GO
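
Worth noting: on SQL Server 2005 and later, owners and schemas are separate concepts, and sp_changeobjectowner is deprecated. The supported replacements are ALTER AUTHORIZATION and ALTER SCHEMA TRANSFER. A sketch with illustrative object and user names:

```sql
-- Change the owner of a single object:
ALTER AUTHORIZATION ON OBJECT::dbo.usp_MissionCritical TO NewAppUser;

-- Or move an object out of the old user's schema into dbo instead:
ALTER SCHEMA dbo TRANSFER OldAppUser.usp_MissionCritical;
```

The scripts above still apply to MSSQL 2000, where object owner and schema are effectively the same thing.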
 
