Showing posts with label SQL Server. Show all posts

Sunday, January 31, 2016

Prep notes for my upcoming talk on 'Partially Contained Databases'

Did a prep session this past Friday for my upcoming talk (Feb. 3) on 'Partially Contained Databases' to the Sacramento SQL Server Users Group (@SacSQLUG).  (Some #spoileralerts if any of the group is reading this before the Feb meeting.)  Had a few great, gracious current or former co-workers view the talk and demo over a remote screen-sharing session and provide feedback.

  1. The feedback from a group of people with differing levels of SQL Server experience was hugely valuable.  As a presenter, you are so close to the topic that you can present things as obvious when they actually took you quite a few leaps to discover yourself.
  2. The call to action, which I will add to the slides, is to 'try it out'.  Try using partial containment for a simple example like a reporting service account that should have access to only one database.
  3. This topic is fast becoming a standard practice for high availability/disaster recovery scenarios.  (At this time, I'll leave it to the viewer's great mental ability to make the connection.)

Some of the core feedback (which may only make sense once the final presentation is shared):
  • hard time following the difference between the types of accounts
  • too much time spent on the 'intro', that is, the demonstration of backup/restore in an uncontained database to show the problem being solved
  • appreciated the mention of the connection to certification exams
  • prerequisites of DBA knowledge
My thought process incorporating this feedback:
  • Simplified the login and user creation in the demo; namely, removed the example using Windows authentication
  • Removed a time-consuming section on the various permutations of recovering logins where the password is known/not known and the original server is available/unavailable.  This content is saved for a future blog post.
  • Prerequisites: well, I think it's OK to make an assumption or two, since the talk is aimed at the SQL Server user group.  And in any case, that ship has sailed for this week's talk.  I may review the abstract for future use and make sure it calls out that the viewer should be familiar with database backup/restore.
Hurts my heart a bit to give up the Windows auth and login recovery content, because that content covers good practices.  But the time they eat up distracts from the two key takeaways:

  1. With a partially contained database, logins can live with the schema of a user database rather than the system database (that's a clue for the High Availability/Disaster Recovery connection)
  2. The db_owner and other database roles have additional power within a user database.
Fun stuff.  And I think if I just script out the sample database creation step, all the scripts could be published to let others run the demo themselves.
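For anyone taking up the 'try it out' call to action, a minimal sketch of the reporting service account scenario looks like this (database and user names here are hypothetical, not the ones from my demo):

```sql
-- Instance-level prerequisite (requires sysadmin)
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Flip an existing database to partial containment (hypothetical name)
ALTER DATABASE ReportingDB SET CONTAINMENT = PARTIAL;

-- Create a contained user with a password: no server-level login involved,
-- so the credential travels with the database on backup/restore
USE ReportingDB;
CREATE USER ReportSvcUser WITH PASSWORD = 'ChangeMe-Example-1!';

-- Grant read access within this one database only
ALTER ROLE db_datareader ADD MEMBER ReportSvcUser;
```

The point of the sketch: the user exists only inside ReportingDB, which is exactly the property the takeaways below rely on.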

Friday, June 19, 2015

Generate a Remote Desktop Connection Manager config file from SQL Server Central Management Server

If you are a SQL Server database administrator (DBA) who also has need to administer servers via Remote Desktop, this script will be of tremendous help to you at least once, and may be something you run regularly.

This script will query your SQL Server Central Management Server (not using it yet? - check out Easily Manage your SQL Server with CMS and PBM Webcast) and generate the contents of a config file for Microsoft's Remote Desktop Connection Manager.  All free tools or built into your SQL Servers.

Get the script at:

Prerequisites:
Install Remote Desktop Connection Manager (RDCMan) version 2.7 from https://www.microsoft.com/en-us/download/details.aspx?id=44989
Have populated a SQL Server Central Management Server (CMS) with at least one group and server

To use, run the script against the instance holding your CMS data.
Click on the link to the XML.  Select All and Copy:
Create a new text file with the extension *.rdg (for example, MyCMSServers.rdg) and paste the XML content:
 
In Remote Desktop Connection Manager, File->Open the *.rdg file

Details:
All unique host names are placed under a group called '_All Servers'.  The 'Smart Lists' dynamically filter based on the groups in your SQL CMS.  This is done by placing a comment in the RDCMan node for each server.  

The script doesn't yet generate hierarchical groups in RDCMan.  Keep an eye out for edge cases that I haven't tested yet: special characters in the server or group names, hosts that are found in more than one group, etc.
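If you want to see the raw data the script works from, CMS registered servers live in msdb on the CMS instance. A query along these lines (using the shipping sysmanagement views) returns each group and its servers:

```sql
-- Run against the CMS instance: list each CMS group and its registered servers
SELECT g.name        AS GroupName,
       s.server_name AS ServerName
FROM msdb.dbo.sysmanagement_shared_server_groups g
JOIN msdb.dbo.sysmanagement_shared_registered_servers s
    ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.server_name;
```

This is essentially the input the script transforms into the RDCMan XML.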

Get the script at:

Hope you enjoy!

Wednesday, May 28, 2014

An approach to SQL Server Rough Tuning

Looking for a way to address SQL Server database performance in a production, virtualized environment?  There are many sources of expert advice from very smart people in the SQL Server world.  But often the most well-thought, well-intentioned advice is not easily or quickly implemented in a complex organization.  The reasons could be technical, political or simply availability of time and people. 

This diagram is my current approach to "rough" tuning a SQL Server: the idea is that a server administrator or database administrator (DBA) can turn various knobs and flip switches to assign scarce resources to a database server, but as a practical matter, the inner workings of a given application and database may not be changed… or at least not changed quickly.


Example situations:
  • A third-party vendor application with a proprietary schema: the application might have updates available, but there are dependencies or license costs that take time to resolve.
  • An internal application has received an influx of new activity, but the development team is fully committed to another high-priority project.
  • A legacy application with the original developers long gone, and no known test environment to experiment in.


Method
Took some best practices, including some selections from the guidance on the SQL Server perfmon counters of interest poster available from Dell/Quest, and added some of the basic steps available to Server administrators and Database Administrators.

Document available as a PDF
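As one example of the kind of "basic step" a DBA can take without touching the application, a first-pass look at accumulated wait statistics often points at the scarcest resource. This sketch is illustrative, not the poster's exact guidance, and the excluded wait types are a partial list:

```sql
-- Top accumulated waits since instance start: a rough pointer to the bottleneck
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0        AS wait_time_s,
    signal_wait_time_ms / 1000.0 AS signal_wait_s,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN           -- filter a few benign background waits (partial list)
      ('LAZYWRITER_SLEEP', 'SLEEP_TASK', 'SQLTRACE_BUFFER_FLUSH',
       'REQUEST_FOR_DEADLOCK_SEARCH', 'BROKER_TASK_STOP', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;
```

High waits in a category (e.g. PAGEIOLATCH_* for storage, CXPACKET for parallelism) suggest which knob is worth turning first.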

Monday, March 24, 2014

SQL Server Data Tools - An Installation Adventure

With SQL Server 2012, your old friend "Business Intelligence Development Studio" or "BIDS" has been replaced by a component called "SQL Server Data Tools".  SQL Server Data Tools, or SSDT, is the primary authoring environment for SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS) projects.  In addition, a new way of working with SQL Server databases is available.  The new method treats a SQL Server database as a Visual Studio project, enabling adoption of developer concepts like version control and deployment.

While SSDT is a very innovative product, the steps to arrive at a working installation of SSDT can be a bit of an adventure.  If you have installed SQL Server 2012 from installation media, selecting all features, or at least selected all client tools, your adventure begins here.  

An entry for SQL Server Data Tools will appear in your Start menu, under the folder for "Microsoft SQL Server 2012"


Click on SQL Server Data Tools and you will soon see that the "Microsoft Visual Studio 2010 Shell" is starting up.

You will be prompted to select environment settings.  If you used SQL Server 200X's Business Intelligence Development Studio or plan to focus this machine on SQL Server Business Intelligence projects, select "Business Intelligence Settings".  Click 'Start Visual Studio'.

After a potentially brief pause, this loading screen appears...

… and you are brought into the 'Microsoft Visual Studio 2010 Shell'

Click on New Project…

Now you can actually create SSAS, SSIS, or SSRS projects at this point; you are good to go as far as those features are concerned.  Then check out the new option "SQL Server"…

Note the text "Provides instructions for installing Microsoft SQL Server Data Tools from the web".  Clicking this link presents this message...

Clicking "Install" opens this web page...

If you click on 'Download SQL Server Data Tools', as of the time of this writing, you're taken to a page about downloading Visual Studio 2013 trial editions.  I went this route on another machine with Visual Studio 2013 installed and, so far, haven't quite figured out how to get SSDT enabled, so...

Click back…
And click SSDT Visual Studio 2012

On the next page, scroll about halfway down to step 2 and click 'Download SQL Server Data Tools for Visual Studio 2012'

For a single installation, go ahead and click 'Run'.  (To save the file for use on other workstations, click 'Save')

Then thoroughly read the License Terms, and if they are amenable, click 'I agree…' and 'Install'
The installer may also need to install the Microsoft .NET Framework 4.5; your environment may vary.
A restart is required if the .NET Framework 4.5 is installed

After the restart, Setup will continue

After this, the SQL Server Data project will be available under 'SQL Server Data Tools', right?  No; to access the SQL Server tools, look under 'Microsoft Visual Studio 2012'

The 'SQL' menu, with 'Schema Compare' and 'Data Compare', is now available.

The SQL Server Data Tools look very compelling.  As I help my organization migrate to SQL Server 2012, the new features will replace some work that currently requires manual processes or intricate scripting.  Perhaps the install story will tighten up as SQL Server 2014 is released, or the Visual Studio 2013 version will become a bit more clear to me.  It is a bit of an adventure to install - nothing difficult, just unclear at some steps how far along you are towards a working installation.  Hopefully, this blog post helps you be more confident when you decide to start using SQL Server Data Tools.

Thursday, December 26, 2013

SQL Server 2012 AlwaysOn Availability Group - My experience with a proof-of-concept

Just had my experience building our organization's first SQL Server 2012 AlwaysOn Availability Group.  This is a proof-of-concept environment that will be used to demonstrate whether we should build out this environment across physical locations.  This post summarizes the resources used in this initial build.
 
The white papers from Microsoft were the most comprehensive resource.  Used these when planning and justifying the proof-of-concept.
 
Once it got time to actually build, these step-by-step guides or checklists were used:
 
Here are some of the issues I ran into, and lessons learned.
 
Separation of Duties - Server Administration and Database Administration
In my organization, there is a separate Server team controlling virtualization and the Server OS, while the 'Database' team has full rights to the database software.  Our first build required some back-and-forth collaboration to resolve issues that went above and beyond a standard database server.
Lessons learned:
Setup, up through the Windows Server Failover Cluster step, mostly involves the Server Administration role.  Expect some collaboration between Server Administration and Database Administration in new environments.
 
AlwaysOn Availability Group configuration
The AlwaysOn Availability Group configuration itself was extremely straightforward.  The issues I ran into turned out to be items that, either by habit or due to the environment, were overlooked when creating a test environment.  You might not run into these in production, with its stricter controls on individual SQL Server instances.
 
Note that the database to be made 'AlwaysOn' should be attached or restored to the 'Primary' SQL Server instance in the cluster.  During the AlwaysOn group creation, it can be restored to any secondary instances. 
Lessons learned:
Assess the SQL Server instances to be included in an AlwaysOn group against the standard build for a standalone SQL Server in production.
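To make the note about restoring to the primary concrete, the prepare-and-join sequence for a database looks roughly like this (the availability group, database, and share names are hypothetical, and a log backup/restore may also be needed to bring the secondary close enough to join):

```sql
-- On the primary replica: the database must use the FULL recovery model
-- and have a full backup before it can join an availability group
ALTER DATABASE AppDB SET RECOVERY FULL;
BACKUP DATABASE AppDB TO DISK = N'\\fileserver\backups\AppDB.bak';

-- On each secondary replica: restore WITH NORECOVERY so the database
-- stays in a restoring state, ready to receive log records
RESTORE DATABASE AppDB FROM DISK = N'\\fileserver\backups\AppDB.bak'
    WITH NORECOVERY;

-- Then join the restored database to the group (run on the secondary)
ALTER DATABASE AppDB SET HADR AVAILABILITY GROUP = POC_AG;
```

The Availability Group wizard can perform these restores for you, which is the path described above.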
 

Monday, July 29, 2013

Fixup Script for conflict tables on a Transactional Replication

This script fixes up the conflict tables generated by a transactional replication subscription, and marks them as system tables.

In SQL Server replication, a subscription to a publication of the type 'Transactional publication with updatable subscriptions' allows edits on the subscriber. Conflicts in those edits (e.g. a record modified at both the subscriber and the publisher) have to be resolved. The default setting is 'Keep the publisher change'. If a given subscriber is on the losing side of the change, conflicts are stored on the subscriber and can be viewed through the Conflict Viewer there. They are stored in conflict tables per publication, per table, in the format 'dbo.conflict_{PublicationName}_{TableName}'. These tables are system tables.

In the scenario that led to the need for this script, both of the servers were patched to SQL Server Service Pack 2, and replication was stopped and restarted. At some point during the process, the conflict tables began showing as User Tables. While replication did continue to work successfully, having the tables as User Tables was problematic. For starters, permission grants or scripts that target all user tables risked allowing edits to these replication tables, and the Object Explorer view in SSMS became quite messy.

Before

After

Limited testing for this has occurred - basically confirmed successful insert, update, and delete transactions before and after running this script.

Please note much credit for assisting in finding the underlying symptoms and general brainstorming should go to all the people at @SacSQLUG, and @SQLSoldier


/*
FixupConflictTables

Will examine the conflict tables generated by a transactional replication subscription, and mark them
as system tables if for some reason they are showing up as User Tables
*/

--set this constant to have this script perform the action; otherwise it will output the resulting script only
DECLARE @ExecuteNow bit
SET @ExecuteNow = 0

--find the name of the associated publication
DECLARE @Publication sysname
SELECT @Publication = publication FROM dbo.MSreplication_subscriptions

PRINT '--Fix up conflict tables for the publication ' + @Publication

--buffers to hold SQL statements
DECLARE @SQL nvarchar(4000)
SET @SQL = ''
DECLARE @tablename sysname

--loop through the tables that look like 'conflict_{Publication}_{tablename}' and aren't marked as MS shipped (i.e. system objects)
DECLARE curConflictTables CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT name FROM sys.tables WHERE name LIKE 'conflict_' + @Publication + '_%'
    AND is_ms_shipped = 0

OPEN curConflictTables
FETCH NEXT FROM curConflictTables INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
    --build the sql to convert this to a system table
    SET @SQL = 'sys.sp_MS_marksystemobject ''' + @tablename + ''''

    PRINT @SQL
    IF @ExecuteNow = 1
        EXECUTE sp_executesql @SQL

    FETCH NEXT FROM curConflictTables INTO @tablename
END

--close and deallocate the cursor
CLOSE curConflictTables
DEALLOCATE curConflictTables

 

Tuesday, November 6, 2012

SQL Server Agent Job - Failed Stats due to negative duration

Have some jobs that collect statistics from SQL Server Agent job history, looking for spikes and trends in data integration.  Over the weekend, those stat collection jobs failed.
 
Turns out there were jobs that reported a dramatic ‘negative’ duration.
The cause: these jobs had started at 2:00 AM Sunday morning, and then Daylight Saving Time kicked in.  The job finishes 4 minutes later, but by then it's actually ~56 minutes earlier than the start time.  For some reason this gets reported as ~ -3976 days, 21 hours…  Then an hour later, it's 2:00 AM again, and the job runs normally.
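For anyone writing similar stat-collection queries: run_duration in msdb is stored as an integer in HHMMSS form, so a sketch like this decodes it to seconds while guarding against the negative values described above:

```sql
-- Decode run_duration (integer HHMMSS) to seconds for completed job outcomes,
-- skipping the negative values a DST fall-back can produce
SELECT j.name AS job_name,
       h.run_date,
       h.run_time,
       (h.run_duration / 10000) * 3600
         + (h.run_duration / 100 % 100) * 60
         + (h.run_duration % 100)        AS duration_seconds
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0          -- step 0 = overall job outcome row
  AND h.run_duration >= 0;   -- ignore DST-skewed negative durations
```

Filtering (or flagging) the negative rows keeps a downstream stats job from failing outright.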
 
Fun stuff!

Wednesday, March 30, 2011

Source Code Control - SQL Server/Business Intelligence Stack



    One of the items weighing on my mind is that I have not introduced a true source code control process for my team, which works primarily with the SQL Server business intelligence stack. A source code control process is not needed on this team to 'control' or 'audit' the work, but to provide some basic tools for working together:





  1. The ability to work on source code, confident you are working against the latest and greatest version your team has been able to put together.

  2. The ability to develop source code, review your work for quality before contributing it to the team, and be confident your work will not accidentally override another's, or be overridden.

  3. The ability to review change history on code regardless of author, to investigate behavior at a past point and/or rule out "code changes" as the cause of a problem.

    Current methods

  • All dev staff RDP into the development server to work on core, server-side projects - especially SSAS and core SSRS
  • A few of us tend to work locally on SSIS, SSRS, and SQL scripts, then use SyncToy to move files up to a file server… and use SyncToy on the development server to sync files from the share drive down
  • All staff work on SQL scripts from their own machines, and kind of hope we don't collide with each other

    Based on inquiring in and around my organization, and also bringing it up during a 'group therapy' session at the Sacramento SQL Server User Group, options included Visual SourceSafe, CVS (for which, based on rudimentary research, SVN would actually be preferred), and Git, which I've been reading about mostly from the open source world. Upon researching Visual SourceSafe, I learned this old file-based standby is being deprecated by Microsoft in favor of their true server product, Team Foundation Server.



    Alternatives

    Visual SourceSafe
    Advantages:
      • Integration with Visual Studio (for SSIS, SSRS and SSAS)
      • Uses existing file server
    Disadvantages:
      • License cost / procurement process required
      • Requires a checkout-edit-checkin workflow
      • Is actually in de-support by Microsoft in favor of Team Foundation Server (TFS)

    SVN (client-side tool - TortoiseSVN)
    Advantages:
      • Free/open source
      • Supports an edit-merge-commit workflow
      • Uses existing file server
    Disadvantages:
      • Lack of integration with Visual Studio (for SSIS, SSRS and SSAS) - 3rd party tool available for $49/seat
    http://www.codeproject.com/KB/dotnet/SourceControl_VSNET.aspx

    Git (client-side tool - msysGit)
    Advantages:
      • Free/open source
      • Supports an edit-merge-commit workflow
      • Uses existing file server?
    Disadvantages:
      • Lack of integration with Visual Studio (for SSIS, SSRS and SSAS)
      • Appears to be more geared to storing a project on the open internet (i.e. http:)
      • Focuses a lot more on branching and merging - good for bazaar-type development, but not as much value for a small, focused (relatively) team

    Team Foundation Server
    Advantages:
      • Integration with Visual Studio
      • Other dev lifecycle tools
      • Integration with SharePoint
    Disadvantages:
      • Not currently available in our organization. Another shop is evaluating it - need to assess and leverage their efforts, as well as determine licensing