B14 SQL Server DBA Notes
FUNDAMENTALS
1. INTRODUCTION
2. EDITIONS
3. VERSIONS
4. INSTALLATION PRE-REQUIREMENTS
DIFFERENCE B/W DELETE AND TRUNCATE
5. INSTALLATIONS
SILENT INSTALLATIONS, UNINSTALLATIONS
6. SERVICE PACKS
7. FILE & FILE GROUPS
8. PAGES & EXTENTS
9. DATA FILE & LOG FILE ARCHITECTURE
10. SECURITY
ROLES & PERMISSIONS
1. INSTANCE LEVEL
2. DATABASE LEVEL
3. OBJECT LEVEL
SECURITY HARDENING RULES
11. RECOVERY MODELS
12. BACKUPS
13. RESTORE & RECOVERY PROCESS
14. PIECEMEAL RESTORATION
15. DATABASE REFRESH
16. POINT IN TIME RECOVERY
17. JOBS & MAINTENANCE PLANS
18. ATTACH & DETACH DATABASE
19. COPY DATABASE WIZARD
20. SYSTEM DATABASES OVERVIEW
21. SUSPECT DATABASE
22. SYSTEM DATABASE CORRUPTIONS
23. FILE MOVEMENTS - SYSTEM & USER DATABASES
24. IMPORT/EXPORT
25. DB MAIL CONFIGURATION
26. LITE SPEED
27. SHRINKING OPERATION
28. UPGRADATION & MIGRATION
29. TEMPDB FULL
30. LOG FILE FULL
31. RESOURCE GOVERNOR
32. POLICY BASED MANAGEMENT
HIGH AVAILABILITY
PERFORMANCE TUNING
14. DTA [DATABASE TUNING ADVISOR]
15. ACID PROPERTIES
16. WINDOWS TASK SCHEDULER
17. QUERY TUNING
18. HIGH CPU ISSUE & MEMORY ISSUE
19. UPDATE STATISTICS
20. ACTIVITY MONITOR
21. EXECUTION PLAN
22. RAID LEVELS
23. TEMPDB ARCHITECTURE
24. SQL SERVER AUDITING
25. NEW FEATURES LIST
26. DMVs, SPs & DBCC
27. ALWAYSON HIGH AVAILABILITY
28. REAL TIME CLASSES
INTRODUCTION
> MS SQL Server is a Relational database server used on Web Servers to access information.
Microsoft SQL Server is a database platform for large-scale online transaction processing (OLTP),
data warehousing, and e-commerce applications; it is also a business platform for data
integration, analysis, and reporting solutions.
• RDBMS stands for Relational Database Management System. RDBMS data is structured in
database tables, fields and records. Each RDBMS table consists of database table rows.
Each database table row consists of one or more database table fields.
• A complete application suite – good clubbing and packaging of the Database Engine, Agent
Service, Notification Services, Reporting Services, Analysis Services and Integration Services.
• Uncomplicated installation process.
• BOL – help documentation is easily available and friendly to browse to get the
correct help.
• A perfect match for all levels of organizations – from small firms to big enterprises to data stores.
Having said that, if we check real-world scenarios and implementations we will find these
facts:
Size of database: According to the 2005 survey of Winterport [European bank], the
largest SQL Server DW database is 19.5 terabytes.
Security of data: "Microsoft beats Oracle in security showdown." The website clearly
says "Microsoft patched 59 vulnerabilities in its SQL Server 7, 2000 and 2005 databases
during the period, while Oracle issued 233 patches for software flaws in its Oracle 8, 9
and 10g databases."
Speed and concurrency: A SQL Server 2005 system can handle 5,000 transactions per
second and 100,000 queries a day, and can scale up to 8 million new rows of data per
day. Is more performance still required? No.
Last but not least, search on any job site for "Oracle" and "SQL Server"; you will find
more jobs for SQL Server than Oracle. Why? Because more companies implement SQL
Server than Oracle. For example, I searched the sites for jobs posted in the last 7 days and found 2143
jobs for SQL Server and 1867 jobs for Oracle.
Multiples of bytes:
– Kilobyte (KB)
– Megabyte (MB)
– Gigabyte (GB)
– Terabyte (TB)
– Petabyte (PB)
– Exabyte (EB)
– Zettabyte (ZB)
– Yottabyte (YB)
EDITIONS
Types of editions:
Developer Edition
Enterprise Edition
Standard Edition
Workgroup Edition
Express Edition
Evaluation Edition
-------------------------------------------------------------------------------------------------------------------------------
Embedded Edition
Azure Edition
• OLTP deployments that expect to grow rapidly in the future
• OLTP deployments that are not expected to grow rapidly in the future
• OLTP deployments that are expected to grow only slightly in the future
Developer Edition:
• It is used especially for R&D. It is a fully featured edition, with no limit on processors
and RAM. We cannot use it for business (production) purposes.
2. From SQL Server 2012, licensing is core-based rather than processor (socket) based.
3. SQL Server 2012 Standard Edition supports both the Server + CAL and the per-core licensing models. In the
per-core model an unlimited number of users can connect to the database.
4. For Enterprise Edition, Microsoft offers only the "per core" model, not the "per server + CAL"
model.
VERSIONS
http://sqlserverbuilds.blogspot.in
INSTALLATION PRE-REQUIREMENTS
Installation Pre-Requirements:
Versions
O\S: Windows Server 2000 or higher
Windows Installer: version 2.0 or higher
Processor type (32-bit): Pentium III compatible processor or higher required [speed 600 MHz or faster]
Processor type (64-bit): Itanium processor or faster [1.0 GHz or faster], or
AMD Opteron, AMD Athlon 64, or Intel Xeon with Intel EM64T support
Disk space requirement per feature:
Database Engine and data files, Replication, and Full-Text Search --- 711 MB
Analysis Services and data files --- 345 MB
Reporting Services and Report Manager --- 304 MB
Integration Services --- 591 MB
Client Components (other than Books Online and Integration Services tools) --- 1823 MB
SQL Server Books Online --- 157 MB
DIFFERENCE B/W DELETE AND TRUNCATE
Types of queries:
What are the difference between DDL, DML and DCL commands?
DDL
Data Definition Language (DDL) statements are used to define the database structure or schema. Some
examples:
RENAME - rename an object
DML
Data Manipulation Language (DML) statements are used for managing data within schema objects.
Some examples:
DCL
Data Control Language (DCL) statements. Some examples:
TCL
Transaction Control (TCL) statements are used to manage the changes made by DML statements. It
allows statements to be grouped together into logical transactions.
DELETE                                                 TRUNCATE
Deletes rows one by one                                Removes the entire data from the table in one shot
Can have a WHERE condition                             Cannot apply a WHERE condition
Rollback is possible                                   Rollback is not possible
Slow operation                                         Very fast operation
DML operation                                          DDL operation
Only the data gets deleted; structure remains          Only the data gets deleted; structure remains the same
Does not reset the seed value of an identity column    Resets the seed value of the identity column
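A small T-SQL illustration of the identity-seed difference (a sketch; dbo.demo_identity is a hypothetical throwaway table):
CREATE TABLE dbo.demo_identity (id INT IDENTITY(1,1), val VARCHAR(10));
INSERT INTO dbo.demo_identity (val) VALUES ('a'), ('b');
DELETE FROM dbo.demo_identity;                       -- row-by-row, logged per row
INSERT INTO dbo.demo_identity (val) VALUES ('c');
SELECT id FROM dbo.demo_identity;                    -- id = 3, seed NOT reset
TRUNCATE TABLE dbo.demo_identity;                    -- deallocates pages in one shot
INSERT INTO dbo.demo_identity (val) VALUES ('d');
SELECT id FROM dbo.demo_identity;                    -- id = 1, seed reset
DROP TABLE dbo.demo_identity;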
INSTALLATION
Ex:
.Net Framework:
Note: As the SQL Server version increases, the required .NET Framework, Windows Installer, O\S, memory and hard disk
requirements also differ.
1. In x86 only 32 bits are transferred on the bus at a time, whereas in 64-bit 64 bits are transferred at a
time.
2. x86 has limitations when trying to consume hardware resources like memory.
Ex: On a Windows Server O\S [x86], by default SQL Server can utilize only 2 GB of memory. If we want to use more
than 2 GB then we need to enable switches at the O\S and SQL level.
In x64 there is no need to enable any switches and no such limitation while consuming the resources.
-------------------------------------------------------------------------------------------------------------------------------
Installation of SQL Server 2005 Screen:
Method 1: Just run the setup and you can get this information without installing.
Method 2: Go to the image folder in the software media and you can get the same info.
Windows Installer: This is an operating system component that helps to deploy the SQL
Server services into the O\S.
Firewall: It opens endpoints for communication purposes and validates your connection.
Note: Only a one-time installation of these prerequisites is enough for many SQL Server installations on the same server.
4. Registration information
5. Component selection
3 Types:
1. OLTP: Database Engine [the main SQL Server service]
2. OLAP: SSIS [SQL SERVER INTEGRATION SERVICES], SSRS [SQL SERVER REPORTING SERVICES], SSAS [SQL SERVER
ANALYSIS SERVICES]
3. Common components: Books Online [BOL], Tools, Notification Services...
Note:
When you run the setup file from the SERVERS folder you have the option to install the tools as well... There is no
need to install the tools multiple times on a single server.
1. Default Instance: takes the default (host) name; only one default instance per server.
2. Named Instance: you can specify a customized name as per the requirement.
Note:
1. If the SQL Server edition is Enterprise: the total number of instances that can be installed is 50 per server.
2. If the SQL Server edition is other than EE: the total number of instances that can be installed is 16 per server.
Limitations:
Note: In real time always select "Mixed mode", which requires providing a password for the "SA
account".
8. Service accounts: decide under which service accounts the SQL Server services need to run.
Note: In real time always use domain-level service accounts when installing SQL Server.
Note: Ensure you confirm the collation name with the requestor or application team... The collation
needs to be specified on the basis of your application.
11. Install
-------------------------------------------------------------------------------------------------------------------------------
SQL Server Configuration Manager: This tool shows only SQL Server related services.
Note: Every version of SQL Server has its own Configuration Manager tool. In a higher version's
Configuration Manager you can also see the services of lower versions of SQL Server.
Start
Stop
Pause
> Reporting Services Configuration: This is a tool that helps to configure Reporting Services.
Folder inside the instance:
Folder 80: Created when you install Sql server 2005 version and used for backward
Compatibility
COM [Component management] - SQL Server EXE files and DLL files related to SQL Server
components
SETUP BOOTSTRAP: key folder; contains [1033, bin, log, resource, ARPWrapper, setup, DLL files]
TOOLS: tools-related EXE files and sample databases.
Binn: critical folder for the instance; contains instance-related .DLL and .EXE files.
Repldata: stores replicated information.
Data: contains the database files.
INSTALL: contains some scripts
Logs: very important folder that stores event tracking inside SQL Server; it is the DBA's
troubleshooting area.
-------------------------------------------------------------------------------------------------------------------------------
Build Numbers:
-------------------------------------------------------------------------------------------------------------------------------
Connectivity Procedure:
[Hostname or local]
[HOSTNAME\instance name]
-------------------------------------------------------------------------------------------------------------------------------
Shortcuts:
-------------------------------------------------------------------------------------------------------------------------------
The Resource database was newly introduced in SQL Server 2005 for security purposes.
1. Instance-aware services: Database Engine, SQL Server Agent, Analysis Services, Reporting Services, Full-Text Search
2. Common (shared) services:
Integration Services
Browser
Notification Services
-------------------------------------------------------------------------------------------------------------------------------
Note: whenever Microsoft releases new product or version into market called RTM
2000: 8.0.194
2005: 9.0.1399
2008: 10.0.1600
2012: 11.0.2100
-------------------------------------------------------------------------------------------------------------------------------
32767
Database IDs:
Master - 1
Tempdb - 2
Model - 3
Msdb - 4
Resource - 32767
Note: When you create a user database its ID starts from 5. The database engine always
communicates using the DBID.
If a database in between is deleted, the same ID is assigned to the next new database, in a sequential
manner.
SQL Server error logs track "instance level events", whereas the SQL Server Agent error log contains only "SQL
Server Agent related event tracking".
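Both logs can also be read from T-SQL (a sketch; sp_readerrorlog's second parameter selects the log: 1 = SQL Server, 2 = SQL Server Agent):
EXEC sp_readerrorlog 0, 1;   -- current SQL Server error log
EXEC sp_readerrorlog 0, 2;   -- current SQL Server Agent error log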
-------------------------------------------------------------------------------------------------------------------------------
Note: The SQL Server 2008\R2 Configuration Manager shows all SQL Server services of the same and
lower versions of SQL Server as well,
whereas the SQL Server 2005 Configuration Manager does not show 2008\R2 or higher
version services.
Methods for Uninstallation:
2. Post-uninstallation steps:
> Verify whether the SQL Server services are removed from Configuration Manager for the specific
instance.
> Remove the folders which are related to the instance that needed to be uninstalled.
> Remove the particular instance's entries from the Registry Editor [regedit] if they were not removed at the time of
uninstallation.
Note: ARPWrapper is only available in SQL Server 2005, not in the 2008, 2008 R2 or
2012 versions.
Note: Before and after uninstallation of SQL Server, please send an email notification to the requestor.
-------------------------------------------------------------------------------------------------------------------------------
2. Default\named instance
3. Collation settings
4. Service account
5. Authentication mode.
9. Components selection.
ISSUES AT THE TIME OF SQL SERVER INSTALLATION
> The correct .NET Framework or Windows Installer version is not installed
SILENT INSTALLATION OR UNATTENDED INSTALLATION:
Unattended installation:
http://www.matthanson.ca/2012/04/sql-server-2012-unattended-installation/
Steps:
1. From the command prompt, navigate to the path where the "setup.exe" file is placed:
Drive:
cd <path to setup.exe>
-------------------------------------------------------------------------------------------------------------------------------
2. EULA
4. Service account
5. Collation
6. Authentication mode
File stream:
> FILESTREAM was introduced in SQL Server 2008 for the storage and management of
unstructured data. The FILESTREAM feature allows storing BLOB data (example: word
documents, image files, music and videos etc) in the NT file system and ensures transactional
consistency between the unstructured data stored in the NT file system and the structured data
stored in the table.
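FILESTREAM has to be enabled at the instance level before use; a minimal sketch (the 'filestream access level' sp_configure option exists from SQL Server 2008 onwards; level 2 allows both T-SQL and Win32 streaming access):
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;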
Note: In SQL Server 2005, instance folders are created as MSSQL.<number>, which makes it very difficult to understand
which folder belongs to which instance.
Whereas from SQL Server 2008 onwards the folders are created as MSSQL10.INSTANCENAME (or MSSQL10_50.INSTANCENAME
for 2008 R2), which makes it easy to understand the instance details.
-------------------------------------------------------------------------------------------------------------------------------
> Never keep all system databases, user databases files, backup files, binary files, instance
related files into single disk.
Split the SQL Server files as per below:
Note: EMC2 SAN [Storage Area Network] disks act like network disks which we can
connect to another system like an external hard disk. Data recovery is very easy.
-------------------------------------------------------------------------------------------------------------------------------
Process: Inform the Windows team to restart the server. Once the server is up, the DBA team checks SQL
Server.
-------------------------------------------------------------------------------------------------------------------------------
Points on versions:
Point 1:
Note: What happens if we install SQL Server 2005 and SQL Server 2008 or a higher version on the same server?
> The SQL 2005 Configuration Manager shows only the services of 2005, whereas the SQL Server 2008
or higher version's Configuration Manager shows the services of "SQL Server 2008 or higher version +
lower versions". Vice versa is not possible.
> Each version of SQL Server has its own Configuration Manager tool.
Point 2:
> A SQL 2005 instance can be connected to from SQL Server 2008\higher version SSMS, but a higher
version instance cannot be connected to from a lower version of SSMS.
Point 3:
> For SQL Server 2008 and 2008 R2 the summary.txt file is common and can be accessed from the 100 folder.
SERVICE PACKS
> Microsoft designs service packs, hotfixes and cumulative updates to fix errors, bugs and
code-related issues.
Codename Kilimanjaro (SQL Server 2008 R2): 10.50.2500.0 (SP1), 10.50.4000.0 (SP2), 10.50.6000.34 (SP3)
Duration: In the 2nd week of every month Microsoft releases hotfixes for multiple issues.
> Cumulative update: multiple hotfix bug fixes are bundled together and released as a CU.
Note: SPs, CUs and hotfixes are designed at the version level, not the edition level.
-------------------------------------------------------------------------------------------------------------------------------
1. Pre-installation steps:
Select @@version...
> Admin-level permissions are required.
Note: Take a copy of the BINN folder and of the resource database [MDF and LDF] files.
The SQL Server services automatically go into the offline state, and once the installation is done the services come back online.
When you apply a service pack, the BINN folder and resource database files are overwritten or updated with the
service pack files.
> Run the moksha tool to find whether any .msi or .msp files are missing.
2. Installation steps:
> Verify the summary.txt file to confirm whether the SP, CU or hotfix was applied for all components
or not,
or
xp_msver
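The build can also be confirmed with SERVERPROPERTY (a quick post-patch check; ProductLevel returns RTM/SPn):
SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel')   AS sp_level,
       SERVERPROPERTY('Edition')        AS edition;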
> Verify in the registry whether the build number/path has changed.
> Inform the application team to test the connectivity and perform additional checks.
-------------------------------------------------------------------------------------------------------------------------------
How to bypass the "restart computer" rule check in SQL Server while applying any SP?
SQL 2005:
4. Inform the apps team to check the data.
Method 1: Try to uninstall the SQL Server service pack from Control Panel.
Points:
> If a service pack was applied on RTM and you uninstall it, the build number goes back to RTM. Whereas
if we apply SP2 on RTM+SP1 and uninstall it, the build number goes back to SP1.
2. Incompatibility issue
3. Permission issue.
Solution:
Process:
Extraction process: For example, if the SQL service pack 3 file is of the format 'Sql server sp3-90000-
setup.exe' and is placed in the E:\sqlbits directory, then navigate to this directory in the command prompt and
execute the command as below (for example: <sp exe name>.exe /x):
/x means extract.
A dialog box will open requesting the location to extract the bits. Provide the location and press
OK. The SP3 bits will be extracted in the specified location.
Or
> Download the missing .msi or .msp file and place it into C:\WINDOWS\INSTALLER
Or
FILES
Files:
In SQL Server every database should contain a minimum of 2 files.
Types of files:
.MDF [master data file, main data file, primary data file]
.NDF [next data file, new data file, secondary data file]
.LDF [log data file, transaction log file]
Purpose:
1. .MDF:
> This file is the startup file of the database in SQL Server, i.e. whenever you start a SQL Server
database the first file that comes online is the "MDF".
> The MDF contains the other files' information [.LDF and .NDF].
> Per database there is only 1 MDF file, and it cannot be deleted.
2. .NDF:
> When the primary data file is full, or depending on the requirement, we can add multiple secondary
files per database.
3. .LDF:
> The main purpose of this file is to recover the data whenever a database crashes.
> Every transaction is first written into the LDF file; this concept is called "WAL [WRITE-AHEAD
LOGGING]".
Note: Per database we can create up to 32767 files (.MDF, .NDF, .LDF).
Note:
1. Per database the total maximum number of files that can be created is 32767.
1. How to add a file by using Query Analyzer?
ALTER DATABASE [dbname]
ADD FILE
(NAME = N'logical_name', FILENAME = N'D:\path\filename.ndf', SIZE = 10MB, FILEGROWTH = 5MB)
TO FILEGROUP [FGNAME]
Note:
2. Every database file has a logical name as well as a physical file name with a specific
extension.
3. Every file has a file ID, used to communicate with the database (along with the database ID).
6. The auto-growth option is set depending on the file type; giving a larger file
growth value improves database performance by making transactions faster.
Note:
In SQL Server we can add files [NDF (or) LDF] to a database while it is online, i.e. the files can be added
without downtime.
1. MASTER: We cannot add any data (.NDF) or log (.LDF) files to the master
database.
2. MODEL: We cannot add any data (.NDF) or log (.LDF) files to the model database.
3. MSDB: We can add files (.NDF or .LDF) to the MSDB database.
4. TEMPDB: We can add files (.NDF or .LDF) to the Tempdb database.
5. RESOURCE DB: We cannot add any files because the resource database is in read-only
mode and hidden.
FILE GROUPS
File Group:
Advantages:
Note:
Per file group: up to 32767 files.
Note:
Tables are always created at the filegroup level, not on specific files. If a filegroup contains multiple files
then the table's data is spread across all of the files.
Ex:
CREATE TABLE [tablename] (
col1 INT
) ON [FILEGROUP NAME]
Note: We cannot read data directly from the physical NDF, MDF or LDF files.
Reason: Inside the files the data is in an encrypted format and only the SQL Server engine can
understand it.
Note:
In real time, for MDF or NDF files the file restriction should ALWAYS be "UNRESTRICTED GROWTH"
and the file growth value should be in "MB".
For the LDF file the file restriction should ALWAYS be "RESTRICTED GROWTH" and the file growth value
should be in "MB".
-------------------------------------------------------------------------------------------------------------------------------
Note:
1. The primary filegroup can never be set to read-only, and it is the default filegroup.
2. If you set any filegroup as read-only then no DML\ALTER operations can be performed on the
tables in it.
3. For a specific filegroup it is possible to set both READ_ONLY and DEFAULT, but it always
takes effect as "READ_ONLY".
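A short sketch of the filegroup operations described above (database, filegroup and path names are hypothetical):
ALTER DATABASE [dbname] ADD FILEGROUP [FG2];
ALTER DATABASE [dbname]
ADD FILE (NAME = N'FG2_data1', FILENAME = N'D:\Data\FG2_data1.ndf', SIZE = 10MB)
TO FILEGROUP [FG2];
-- either make the new filegroup the default, or mark it read-only
ALTER DATABASE [dbname] MODIFY FILEGROUP [FG2] DEFAULT;
ALTER DATABASE [dbname] MODIFY FILEGROUP [FG2] READ_ONLY;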
PAGES
Pages:
Page is the fundamental unit of data storage in SQL Server (data blocks in Oracle).
Eight (8) physically contiguous pages => One (1) EXTENT.
Disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into
pages numbered from 0 to n.
Disk I/O operations are performed at the page level. (SQL Server
reads/writes WHOLE pages).
Page size: 8 KB ==> 128 pages = 1 MB.
Page architecture:
> Page header: holds the type of the page, page number, object details, and how much free space is inside the
page.
> Data rows: where the actual data is stored inside the page; the usable size is 8060 bytes.
> Row offset: this is the index of the rows in the page; as the data rows increase, the row offset entries also
increase. The row offset array is the index of the data rows.
Types of pages:
> Data: data of all data types is stored in this page, except text, ntext and image.
Note: All of the 5 page types below are designed for SQL Server engine usage; they are called maintenance
pages.
> IAM [Index Allocation Map]: stores information about which extents are used by a table or index.
> PFS [Page Free Space]: tracks how much free space is in each page.
> BCM [Bulk Changed Map]: tracks the extents changed by bulk-logged operations since the last log backup.
> DCM [Differential Changed Map]: tracks the extents changed since the last full database backup.
GAM vs SGAM:
GAM records which extents have been allocated. It has 1 bit for each extent in the interval it covers;
if the bit is 1, the extent is free (unused).
It talks about general (uniform) extents.
SGAM records which extents are currently being used as mixed extents and have at least one unused page;
if the bit is 1, the extent is a mixed extent and has a free page.
It talks exclusively about mixed extents.
For a database:
Page 0 is header
Page 1 is PFS
Page 2 is GAM
Page 3 is SGAM
Page 6 is DCM
Page 7 is BCM
Objects in a database
Note: If we want to open the log file, we need to use a third-party tool such as Log Explorer, ApexSQL Log or
SQL Log Rescue.
> The result of the last DBCC CHECKDB run is written in the boot page, which contains the most critical information
about the database. In every database the boot page is page 9 of the primary data file.
> When the instance is restarted, the boot page is read and information from it (such as the last known good
DBCC CHECKDB) is written into the error log.
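One way to look at the boot page contents (DBCC DBINFO and DBCC PAGE are undocumented commands, so treat this as a sketch; trace flag 3604 sends the output to the client):
DBCC TRACEON (3604);
DBCC DBINFO ('dbname');          -- look for the dbi_dbccLastKnownGood field
DBCC PAGE ('dbname', 1, 9, 3);   -- file 1, page 9 = the boot page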
Error 5047: raised for a filegroup operation when a table still exists within the filegroup.
> Moving a file from one filegroup to another filegroup is not possible directly; the alternative way is to first
move the data in the FG\file to the other filegroup through scripting/export-import, and once the
data is moved successfully, delete the file from filegroup 1 and create the file in filegroup 2.
EXTENTS
Extents:
> An extent is the basic unit in which space is allocated to tables and indexes; it is made up of 8 contiguous pages.
> 16 extents = 1 MB
Types of extents:
1. Uniform extent: all 8 pages belong to the same object; this is called a uniform extent.
2. Mixed extent: the 8 pages are shared by different objects (up to 8 objects).
Note: By default the SQL Server engine allocates a new object's first pages from mixed extents. Once the
object grows to 8 pages, all further allocations for it are taken from uniform extents.
2. Page allocation: pages are always allocated and numbered sequentially starting from 0, and addressed as
file_id:page_number, for example:
01:0000
01:0001 ...etc.
03:0000
03:0001 ...etc.
XP_fixeddrives
2. To know the compatibility level of all databases
3. To know (dbname, size, owner, date created, status, compatibility level) of a particular
database
sp_helpfile
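Possible queries for the checks above (a sketch; xp_fixeddrives, sys.databases, sp_helpdb and sp_helpfile all ship with SQL Server):
EXEC xp_fixeddrives;                                    -- free space per drive (MB)
SELECT name, compatibility_level FROM sys.databases;   -- compatibility level of all databases
EXEC sp_helpdb 'dbname';                                -- size, owner, created date, status of one database
EXEC sp_helpfile;                                       -- files of the current database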
Note: The system databases [master, model, msdb, tempdb] cannot be taken offline.
9. Rename of database:
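A minimal example of renaming a user database (both forms exist; sp_renamedb is the older procedure):
ALTER DATABASE [olddbname] MODIFY NAME = [newdbname];
-- or
EXEC sp_renamedb 'olddbname', 'newdbname';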
LOG FILE ARCHITECTURE
> The transaction log is used to guarantee the data integrity of the database and for data
recovery.
> The transaction log file contains a series of log records. Physically, the sequence of log
records is stored efficiently in the set of physical files that implement the transaction log.
Explicit Transaction:
> A transaction that is started with BEGIN TRAN and finished with COMMIT (or ROLLBACK) TRANSACTION.
Statement1
Statement2
Statement3
Implicit Transaction:
> SQL Server internally adds a begin and end transaction around each individual statement.
Syntax: Statement1
Statement2
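A tiny explicit-transaction sketch (dbo.accounts is a hypothetical table):
BEGIN TRAN;
    UPDATE dbo.accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE dbo.accounts SET balance = balance + 100 WHERE id = 2;
COMMIT TRAN;   -- without an explicit COMMIT the transaction stays open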
Log Record:
Log Records are entries that are made into the Transaction Log files.
1) LSN Number
2) Transaction ID
3) Timestamp
5) Statement (Query)
7) Previous LSN, Log record type, Abort (Transaction committed (or) not redo LSN and Undo
LSN)
> Whenever any query is processed, the data will be passed to Data file. Below is the process
how a query is processed in SQL Server/ Importance of Log File Architecture:
The query is written into the log file from the log cache.
If any type of failure occurs while writing data to the data file, then the log file is replayed from the last
committed transaction processed (refer to the commit process below),
and the remaining data is written to the database whenever we start the server.
This process of writing data to the database from the log file after a failure is called
Recovery.
Procedure Cache contains the execution plan.
Context Cache contains the data of the stored procedure.
Server Level Data Structure contains the Server level information.
Commit Process:
As soon as commit statement is written to the log file it throws a token to the user that commit
is completed successfully (Ex. 1 row affected), this process is called as Commit Process.
WAL [Write-Ahead Logging]: writing every transaction into the transaction log first is called "WAL".
3. VLF creation depends on the file AUTO_GROWTH option; setting a larger growth value causes fewer VLFs to be
created, which gives better performance.
5. With a maximum of around 50 VLFs you get very good performance; if the count grows much beyond that, performance goes down.
6. The SQL Server Database Engine divides each physical log file internally into a number of
virtual log files. Virtual log files have no fixed size, and there is no fixed number of virtual log
files for a physical log file. The Database Engine chooses the size of the virtual log files
dynamically while it is creating or extending log files. The Database Engine tries to maintain a
small number of virtual files. The size of the virtual files after a log file has been extended is the
sum of the size of the existing log and the size of the new file increment. The size or number of
virtual log files cannot be configured or set by administrators.
How to find the VLFs:
DBCC LOGINFO
Size of VLF
File id
File sequence no
Note:
For both log files and data files Microsoft recommends setting FILEGROWTH to a fixed size in "MBs"
rather than a percentage.
select * from sys.databases ::: verify the column "log_reuse_wait_desc"; if it shows a wait reason the
log file is in use, and if nothing is holding the log it shows "NOTHING".
Definitions:
Dirty page: a page in the buffer pool whose data is currently being modified is called a "dirty page".
How to find whether the log file is in use or not?
Column: LOG_REUSE_WAIT_DESC:
NOTHING: the LDF file is not in use, i.e. no transactions are currently holding the log (WAL still applies).
-------------------------------------------------------------------------------------------------------------------------------
Columns:
Transaction id
-------------------------------------------------------------------------------------------------------------------------------
CHECK POINT
Check Point
1. A checkpoint writes the current in-memory modified pages (known as dirty pages) and
transaction log information from memory to disk
What a checkpoint does:
1. Direct (automatic) checkpoint: the default checkpoint, triggered by the SQL Server engine.
2. Manual checkpoint: issued explicitly by the user.
Syntax:
CHECKPOINT [checkpoint_duration]
3. Indirect checkpoint: this checkpoint is raised by SQL Server activities like
backup operations,
attach\detach operations,
cluster failover.
Note: How do we capture checkpoint and lazy writer information into the SQL Server logs?
Check point:
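For the checkpoint side, the comparison table below mentions trace flag 3502; a sketch of enabling it (3502 and 3605 are undocumented trace flags; 3605 routes the output to the error log):
DBCC TRACEON (3502, 3605, -1);   -- checkpoint details now appear in the SQL Server error log
CHECKPOINT;                       -- run a manual checkpoint to test
DBCC TRACEOFF (3502, 3605, -1);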
Lazy writer:
> This is one of the background processes, and it works only on the BUFFER cache.
> The lazy writer works only when the system is running under memory pressure or running out of
memory.
> A user cannot see, create or manually run a lazy writer operation in SQL Server.
> We cannot capture lazy writer information into the SQL Server logs.
Note: The prime job of the lazy writer is to flush pages from the buffer to disk.
Question: How does the lazy writer know which pages to flush from buffer to disk?
Ans: The lazy writer checks each page header and verifies whether all transactions in the page are
committed; such pages can then be flushed to disk. [The page header contains information about whether the
changes are committed\uncommitted.]
Explicit and implicit commit:
1. By default each individual statement is auto-committed (implicit commit) as soon as it completes.
2. If a user starts a transaction with BEGIN TRAN then the user must fire COMMIT manually,
otherwise the transaction never completes.
Dirty Page:
The pages in the buffer pool that are currently being modified are called "dirty pages".
SP_WHO2 returns:
Spid
Status
Login name
Db name
Command
Blocked by
Host name
CPU Time
Disk I\O
SP_WHO returns:
Spid
Status
Login name
Db name
Command
Blocked by
Host name
CHECKPOINT vs LAZY WRITER:
Checkpoint writes to the transaction log file; the lazy writer operates on the buffer pool.
Checkpoint is a logged operation and writes to the T-log file; the lazy writer is a non-logged operation and does not write to the
T-log file.
Checkpoint can be controlled by the user as well as by the SQL Server engine; the lazy writer is operated only by the SQL Server engine.
Checkpoint is a background process which triggers at regular intervals (based on the recovery interval setting); the lazy writer has
no fixed timeline and only runs when there is memory pressure in the buffer pool.
We can track checkpoint information in the SQL Server logs by enabling trace flag 3502; lazy writer information cannot be tracked in the SQL
Server logs.
Checkpoint does not maintain the in-memory free-page list; the lazy writer maintains the free list by putting freed pages on it.
We can fire a query to see checkpoint information by using
SECURITY
Security:
2. Once an account is added at the domain level, the account automatically replicates to all of the
servers added to the domain.
3. At the O\S level, the user name and password are stored in NTLM or the Kerberos key.
Check 2: The O\S verifies the user name and password entered by you against the Kerberos keys.
Authorization: the connection is authorized after verifying the user name and password.
1. Windows authentication
2. SQL Server authentication (mixed mode)
1. Windows authentication:
> If you want to connect to SQL Server, the account should first be a member at the O\S level; only then
can the same account be used to log in to SQL Server.
> When you connect to SQL Server with Windows authentication, the validation (user name and
password) is checked against the Kerberos key. Once valid, the connection is passed on to SQL Server.
1. If the account is a member of the O\S admin group, by default it gets access to/can connect to
SQL Server because of the "BUILTIN\Administrators" group in SQL Server.
2. If this group is deleted then no admin-group accounts can log in until they are added
manually.
2. SQL Server authentication:
> A user name and password are provided while logging in to the SQL Server instance with SQL
authentication.
> We can connect by installing the tools on a local machine and from there connecting to the server
by providing a user name and password.
> Passwords travel over the network for authentication; this makes SQL authentication less
secure than Windows authentication.
Note: Cases where SQL authentication is used: when any third-party source databases like MS
Access, Oracle, DB2, etc. need to connect.
Note: Windows authentication is always more secure compared to SQL Server authentication,
because with SQL authentication the passwords travel over the network, which makes it less secure than
Windows authentication.
If your application does not support Windows authentication logins
-------------------------------------------------------------------------------------------------------------------------------
Error: 18452
Error reason: the instance is configured with Windows authentication only, and users are trying to
connect with SQL Server authentication.
Note: In real time, when you perform any SQL Server installation, please configure it with mixed
mode to avoid such login failures.
xp_logininfo
ROLE OR PERMISSIONS
INSTANCE LEVEL: #9
B D2 P S4
Bulkadmin
Dbcreator
Diskadmin
Processadmin
Public
Securityadmin
Serveradmin
Setupadmin
Sysadmin
Exec sp_helpsrvrole
1. Bulk admin:
The user can perform bulk insert operations, e.g. when there is any data-loading activity from the application
side.
Ex: http://blog.sqlauthority.com/2008/02/06/sql-server-import-csv-file-into-sql-server-using-
bulk-insert-load-comma-delimited-file-into-sql-server/
2. DB Creator:
Can create, alter and drop databases; this is an instance-level permission.
Note: A login can perform only the specific activities allowed by the roles assigned to the account.
3. Disk admin:
The user can manage disk-level files like the MDF, NDF and LDF files.
Ex: A user who is a member of diskadmin can move the files from one drive to another
drive.
4. Process admin:
1. The user can see the list of processes running in SQL Server, including system and user-
defined processes.
5. Security admin:
Can manage logins and their properties, and grant/revoke instance- and database-level permissions.
6. Server admin:
Can change server-wide configuration options, e.g. processor changes\allocations.
7. Setup admin:
Can add and remove linked servers.
8. Sysadmin:
The user has full control over the SQL Server instance and is an administrator of SQL
Server.
Note: Never grant sysadmin permissions to other users\teams unless there is a requirement.
A B D4 O S
DB_Accessadmin
DB_Backup operator
DB_Datareader
DB_Datawriter
DB_Ddladmin
DB_Denydatareader
DB_Denydatawriter
DB_Owner
DB_Securityadmin
User creation:
CREATE USER [username] FOR LOGIN [loginname]
Note: When we map any login to a database, by default that login name is created as a user in the
specific database.
1. DB_Accessadmin:
Can add or remove access for users in the database.
2. DB_Backupoperator:
Can back up the database.
3. DB_Datareader:
The user can only read data from all the tables within the specific database.
4. DB_Datawriter:
The user can only write data into all the tables within the specific database.
5. DB_DDLadmin:
The user can perform
Create
Alter
Drop
operations on objects in the database.
Note:
1. If we have db_owner and DB_Denydatareader or denydatawriter permissions, then we can
perform all operations on the database except reading\writing... Preference goes to DENY.
2. If we have sysadmin and DB_Denydatareader\writer permissions, then we can
perform all operations on the database... Preference goes to SYSADMIN.
3. If you grant the reader and denydatareader permissions to the same account, the user cannot
read the data.
8. DB_Owner:
The user gets full control of the specific database.
9. DB_Securityadmin:
The user can grant roles/permissions to other users on specific tables\stored
procedures\views\functions... etc.
Note:
For example, if you wanted to grant SELECT, INSERT, UPDATE, and DELETE privileges
on a table called employees to a user named smithj, you would run the following GRANT
statement:
GRANT SELECT, INSERT, UPDATE, DELETE ON employees TO smithj;
If you wanted to grant only SELECT access on the employees table to all users, you could
grant the privilege to the public role. For example:
GRANT SELECT ON employees TO public;
With grant: the user gets the permissions and at the same time can pass the permissions on to other
users.
If you had granted SELECT privileges to the public role (i.e. all users) on the employees table
and you wanted to revoke these privileges, you could run the following REVOKE statement:
REVOKE SELECT ON employees FROM public;
GRANT:
USE master;
GRANT VIEW ANY DATABASE TO username;
GO
WITH GRANT:
USE master;
GRANT VIEW ANY DATABASE TO username WITH GRANT OPTION;
GO
The difference between these options is very simple. In case of only GRANT, the username cannot
grant the same permission to other users. On the other hand, with the option WITH GRANT, the
username will be able to give the permission after receiving requests from other users.
10. Public:
Limitations:
Note:
1. Always keep the database owner as the SA account, which is the inbuilt SQL Server account that we never
delete.
2. If any account is being used as the owner of a specific database, you cannot delete it until you
change the owner of the database.
3. You cannot delete a login while any of its sessions are still active.
Schema:
> If we do not mention any schema for a user, the user gets the default schema (dbo).
sp_setapprole 'approlename', 'password'
sp_unsetapprole @cookie
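A fuller usage sketch (sp_setapprole can return a cookie that sp_unsetapprole later uses to revert the security context; role and password are hypothetical):
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'approlename', 'password',
     @fCreateCookie = true, @cookie = @cookie OUTPUT;  -- session now runs under the application role
-- ... application work ...
EXEC sp_unsetapprole @cookie;                           -- revert to the original security context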
> Insert
> Alter
> Control
> Delete
> Select
> References
> Update
Note:
> If I grant any database-level role, the user gets permissions on all tables inside the database.
> If I grant a permission at the table level, the user gets permissions only on that specific table and cannot
access other tables, due to the object-level restriction.
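For example, an object-level grant (table and user names are hypothetical):
GRANT SELECT, UPDATE ON dbo.employees TO smithj;   -- only this table
DENY  DELETE ON dbo.employees TO smithj;           -- DENY always wins over GRANT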
PRINCIPALS:
> Principals are entities that can request SQL Server resources. Like other components of the SQL
Server authorization model, principals can be arranged in a hierarchy.
> Every principal has a security identifier (SID), which is presented in hexadecimal form.
> Windows Domain Login/Group
> If we want to give permissions to 100 users, we need not grant them manually 100 times; we can
simply put the users in a group and give the permissions to that group in one go.
> We can see the principals on our server through the queries below (the 'type' column uses these codes):
U—Windows user
G—Windows Group
S—SQL User
A—Application role
R—Database role
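A sketch of those catalog-view queries:
SELECT name, type, type_desc, sid FROM sys.server_principals;    -- instance-level principals
SELECT name, type, type_desc, sid FROM sys.database_principals;  -- database-level principals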
SECURABLES:
> Securables are the resources to which the SQL Server Database Engine authorization system
regulates access.
Default logins:
1. BUILTIN\Administrators: this is the O\S administrators group; any account that is a
member of the O\S-level admin group gets direct access to SQL Server, without being added as a login
in SQL Server, with SYSADMIN permissions.
Note: From SQL Server 2008 onwards the BUILTIN\Administrators group is removed; even a
member of the O\S administrators group must be added as a login under the SQL Server instance.
3. WIN-5ROTNADG8A8\SQLServer2005MSFTEUser$WIN-5ROTNADG8A8$B14SQL2K5: ---Login
for full text search service
4. WIN-5ROTNADG8A8\SQLServer2005MSSQLUser$WIN-5ROTNADG8A8$B14SQL2K5: ---Login
for Main service and which have Sysadmin
5. WIN-5ROTNADG8A8\SQLServer2005MSagtUser$WIN-5ROTNADG8A8$B14SQL2K5: ---Login
for Agent service and which have Sysadmin
6. SA: Which is the default Sql authentication account and contains Sysadmin permissions.
3. "Add Current User" is one of the options presented while installing SQL Server.
Hardening is nothing but protecting SQL Server from threats and end users.
> Always try to create Windows authentication logins instead of SQL Server authentication logins.
> Never give sysadmin permissions to any other team except the DBA team unless there is a strong reason.
Password expired: 18456 state 18
-------------------------------------------------------------------------------------------------------------------------------
> For service accounts the "Password never expires" option should be enabled and the "User must change
password" option should be disabled or unchecked.
Note: Changing the account for a service requires a restart of only that specific SQL Server service.
Note: From SQL Server 2012 onwards user-defined server roles can be created.
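A sketch of a user-defined server role (SQL Server 2012+ syntax; role and login names are hypothetical):
USE master;
GO
CREATE SERVER ROLE [junior_dba] AUTHORIZATION [sa];
ALTER SERVER ROLE [junior_dba] ADD MEMBER [DOMAIN\somelogin];
GRANT VIEW SERVER STATE TO [junior_dba];
GO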
Syntax:
Role creation:
USE [dbname]
GO
CREATE ROLE [RSExecRole2] AUTHORIZATION [dbo]
GO
Drop role:
USE [dbname]
GO
DROP ROLE [RSExecRole2]
GO
Note: In real time the apps team defines these roles and requests the DBA team to execute the script.
Steps:
Note: When there is a requirement to provide the highest level of permissions to a member of a group,
add that particular account as an individual login and grant the permissions only to it.
-------------------------------------------------------------------------------------------------------------------------------
Select * from sys.syslogins
> Password
Default Users:
SQL Server ships with ten pre-defined schemas that have the same names as the built-in
database users and roles. These exist mainly for backward compatibility
1. SYS
2. INFORMATION_SCHEMA
3. GUEST
4. DBO
The SYS & INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create
objects as a guest. Permissions granted to the guest user are inherited by users who have
access to the database, but who do not have a user account in the database.
GUEST:
Each database includes a guest. Permissions granted to the guest user are inherited by users
who have access to the database.
DBO:
The dbo schema is the default schema for a newly created database. The dbo schema is owned
by the dbo user account. By default, users created with the CREATE USER Transact-SQL
command have dbo as their default schema.
If a member is granted the sysadmin role then they can access all databases, acting as the
db_owner (dbo) in each of them.
Server principals with names enclosed by double hash marks (##) are for internal system use
only.
RECOVERY MODELS
Recovery Models:
Full
Bulk Logged
Simple.
FULL RM:
In FULL recovery model every transaction is logged into the transaction log file (as per
WAL).
This recovery model is generally used in Production databases (i.e. OLTP based systems)
Advantages:
1) Minimal/No Data Loss
2) Point-in-time Recovery
Disadvantages:
1) Performance overhead, and large transactions can at times take more time.
In the Bulk-logged recovery model every transaction is logged into the transaction log file,
but bulk insert operations are minimally logged.
Bulk insert operations include SELECT INTO, BULK INSERT, TEXT/IMAGE operations, and online INDEXING.
Advantages:
2) Disk space utilization can be reduced compared to the Full recovery model during bulk
insert operations.
Disadvantages:
3) The bulk-logged recovery model is used in special cases where bulk insert
operations have to be performed within a time constraint, and point-in-time recovery of the bulk-logged period
is generally compromised under BULK LOGGED.
In simple recovery model every transaction is logged into the transaction log file, but at
regular intervals the transaction log file is TRUNCATED whenever a CHECKPOINT
operation occurs. Simple recovery model is generally used in Development environment
where Database Priority/Point-in-time priority is less
Advantages:
1) Transaction log file growth can be controlled with regular truncation that occurs
Disadvantage:
Use Master
Default recovery models of the system databases:
Master - Simple
MSDB - Simple
TEMPDB - Simple
MODEL - Full
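Checking and changing a database's recovery model (standard catalog view and ALTER DATABASE syntax):
SELECT name, recovery_model_desc FROM sys.databases;   -- current recovery models
ALTER DATABASE [dbname] SET RECOVERY FULL;              -- or BULK_LOGGED / SIMPLE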
FULL: transactions are fully logged. BULK LOGGED: bulk operations are minimally logged. SIMPLE: transactions are fully logged.
3. Restricted mode (RESTRICTED_USER): only users with sysadmin, dbcreator or db_owner permissions can
access the database.
BACKUPS
Backups
> A backup is a copy of data that is used to restore and recover the data after a system failure.
Use: backups are the safeguard for databases, because data may be lost due to many kinds of failures, such as
media failures, user errors, hardware failures and natural disasters etc... With good backups we
can recover a database from failures.
Backups run as lightweight threads in SQL Server, which means they never consume many resources like
CPU, memory, etc.
Types:
Types of backup:
1. FULL BACKUP, 2. DIFFERENTIAL BACKUP, 3. TRANSACTION LOG BACKUP, 4. FILE AND FILE
GROUP BACKUP, 5. MIRROR BACKUP, 6. SPLIT OR STRIPE BACKUP, 7. COPY ONLY BACKUP, 8.
TAIL LOG BACKUP, 9. PARTIAL BACKUP, 10. PARTIAL DIFF BACKUP
1. Full backup:
> This backs up the whole database. In order to take further differential or transaction log
backups you have to create the full database backup first.
Syntax:
BACKUP DATABASE [dbname] TO DISK = 'path\dbname_full.bak'
Note:
> When we perform any full database backup, the backup file includes both committed
and uncommitted transactions.
> When we perform a full backup while there are active transactions, the active transactions up to that
point are included in the full backup file.
2. Differential backup:
> Differential database backups are cumulative. This means that each differential database
backup backs up all of the changes since the last FULL database backup, NOT since the last
differential backup.
Note: Which pages/extents have been modified? That information is stored in the DCM pages after a full
backup; before triggering a differential backup SQL Server checks the DCM pages to know what to include.
3. T-log [transaction log] backups:
> This backs up the complete log file (.LDF) contents only; it never backs up any .MDF or .NDF files. A full backup
is the base.
Syntax:
BACKUP LOG [dbname] TO DISK = 'path\dbname_log.trn'
Points:
> Log backups are not possible under the simple recovery model, because the log is truncated after each
checkpoint.
> The first log backup depends on the most recent full backup, and after that each subsequent log
backup always depends on the previous log backup.
Note: The BULK CHANGED MAP [BCM] page tracks the extents changed by bulk-logged operations since the last log
backup; the next log backup uses it to include those extents.
DCM - Differential Changed Map: from this page the engine reads what has changed since the last full backup.
4. Copy-only backup:
2. Used to create a full database or transaction log backup without breaking the log chain.
3. Copy-only backups are used on production servers when high availability features are
configured.
Point: A copy-only full backup can't be used as the base for differential backups, nor can you
create a copy-only differential backup.
Note: You can take copy-only full and copy-only t-log backups; a copy-only log backup does not truncate the log
or break the log backup chain.
Syntax:
BACKUP DATABASE [sbidb] TO DISK = 'path\sbidb_full_copyonly.bak' WITH COPY_ONLY
BACKUP LOG [sbidb] TO DISK = 'path\sbidb_tlog_copyonly.trn' WITH COPY_ONLY
5. Mirror backup:
2. Mirrored backups simply write the backup to more than one destination simultaneously.
3. You can write up to four mirrors per media set. This increases the possibility of a successful
restore if one backup media gets corrupted.
Syntax:
BACKUP DATABASE [dbname] TO DISK = 'path1\dbname.bak'
MIRROR TO DISK = 'path2\dbname.bak' WITH FORMAT
Note: Cost and disk space utilization are high in this type of backup.
6. Partial database backup:
> Partial backups were introduced in SQL Server 2005. They are designed for use under the simple
recovery model to improve flexibility when backing up very large databases that contain one or
more READ-ONLY filegroups.
> This backs up only the files\filegroups in read-write mode and skips the files\filegroups in
read-only mode.
Syntax:
BACKUP DATABASE [dbname] READ_WRITE_FILEGROUPS TO DISK = 'path\dbname_partial.bak'
7. Split [stripe] backup: a striped backup will split the backup into parts and can be very useful during
tight space constraints. A striped backup writes the backup onto different locations (i.e. parts of the
backup, not mirrors).
This type of backup is mainly used where there is a disk space issue on the server.
Syntax:
BACKUP DATABASE [dbname] TO DISK = 'path1\dbname_part1.bak', DISK = 'path2\dbname_part2.bak'
Note: Suppose the DB size is 100 GB, disk1 has 50 GB free and disk2 has 50 GB of free space:
even in this case we can take a backup by splitting it across the 2 drives, 50 GB + 50 GB.
Note: To restore a split backup, all of the backup files are mandatory. If you lose one split backup file
then you cannot recover the data by using the other backup file alone.
> In SQL Server 2005, when a backup starts the SQL engine directly starts writing into the backup file without
checking the disk space,
whereas from SQL 2008 onwards the SQL Server engine always checks the disk space first and then
starts writing the data if sufficient space is available; if not, it fails at the beginning.
8. Tail log backup:
Crash cases: MDF file corrupt\missing, LDF file corrupt\missing, page corruption, header
corruption, internal DB errors.
Note:
2. A tail log backup cannot be taken when the database is in the simple recovery model.
File / filegroup backup:
Syntax:
BACKUP DATABASE [dbname] FILE = 'logical_file_name' TO DISK = 'path\dbname_file.bak'
Syntax:
BACKUP DATABASE [dbname] FILEGROUP = 'filegroup_name' TO DISK = 'path\dbname_fg.bak'
Note: By default the LDF file gets backed up when you trigger any file\filegroup backup.
Note: A file or filegroup backup does not require a full backup as its base.
If you want to restore a file\filegroup backup, the MDF (primary) file should be restored first.
>> I have a backup file. How do I find out whether the backup file is valid or not?
RESTORE VERIFYONLY FROM DISK = 'path\file.bak'
Output:
>> I have a backup file. How do I find out how many files are inside the .bak\.trn file?
RESTORE FILELISTONLY FROM DISK = 'path\file.bak'
Output:
Number of files
Path
>> I have a backup file. How do I find the version of SQL Server by using the backup file?
RESTORE HEADERONLY FROM DISK = 'path\file.bak'
Output:
DBNAME
Compatibility
Version
Server name
Login name
Db size
Collation
Backups possible per system database (Full / Diff / Log):
MASTER: YES / NO / NO
TEMPDB: NO / NO / NO
Append:
Adds the new backup set to the existing backup file; additional modifications go into the same file.
Overwrite:
Overwrites the existing backup sets in the file; the entire data is written into the backup file again, and due to this it takes more time.
Backup compression: a new inbuilt feature from SQL Server 2008 onwards. The main purpose is that you
can save disk space and keep more days' worth of backup files.
1---10%
2--20%
3--30%
4--40%
...10
Backup set:
A backup set contains the backup from a single, successful backup operation. It can be a
FULL/Diff/Tlog backup.
Media set:
A media set is an ordered collection of backup media, tapes or disk files, to which one or more
backup operations have written using a fixed type and number of backup devices.
Media family:
When a backup is split into two (or more) files, those backup files are said to belong to the same media family.
> Checksum:
New in SQL Server 2005.
If any errors are reported in the database then backups generally fail. If you still want to take the
backup, you can do so by selecting "Continue backup on error", but then the backup file may be
corrupted.
FOR LARGE DATABASES NEVER USE THE RESTORE VERIFYONLY OPTION CASUALLY; IT TAKES A HUGE AMOUNT OF TIME TO
SCAN EACH PAGE.
Note: Backup compression is a new feature in SQL Server 2008 onwards and is only possible in
ENTERPRISE EDITION.
Up to SQL 2005 and lower versions, projects used third-party tools to compress the backup files.
RESTORE METHODS
Restore methods:
Syntax:
RESTORE DATABASE [dbname] FROM DISK = 'path\dbname_full.bak'
Restoring is the process of copying data from a backup and applying logged transactions
to the data to roll it forward to the target recovery point.
A restore is a multiphase process. The possible phases of a restore include the data copy,
redo (roll forward) and undo (roll back) phases.
The data copy phase involves copying all the data, log, and index pages from the backup media
of a database to the database files.
During the redo phase, both committed and uncommitted transactions from the LDF are applied to the MDF file.
During the undo phase, the committed transactions stay in the MDF and the uncommitted transactions are rolled back
via the LDF file.
When you have multiple backup files you should follow a standard restore sequence (full, then differential, then log backups in order).
RECOVERY METHODS
Recovery Methods:
WITH RECOVERY:
Additional backup files are not allowed; this brings the database online immediately.
WITH NORECOVERY:
Additional backup files are allowed; this leaves the database in the restoring state. In
this state further diff or log backups can be restored.
WITH STANDBY:
Additional backup files are allowed; this brings the database into a standby\read-only state.
In this state further diff or log backups can still be restored.
Note: In this mode users can read the data in the middle of the restoration process, but no writes are
possible.
Example: I have 2 backup files, full and diff. I want to restore them, following the script below.
RESTORE DATABASE [dbname] FROM DISK = 'path\dbname_full.bak' WITH NORECOVERY
RESTORE DATABASE [dbname] FROM DISK = 'path\dbname_diff.bak' WITH RECOVERY
1. Whenever a database crashes, attempt a tail log backup first to recover the active transactions.
Note: The tail log backup must be taken first, before starting the recovery process, but it is restored last in the
restore sequence.
Backup strategy 1:
Recovery process:
1. Whenever the database crashes, always attempt a tail log backup first to recover the active transactions.
Once the tail log backup is taken, start the recovery process using the existing backup files.
> Restore each transaction log backup taken after Wed 11:00 PM with NORECOVERY, up to Thursday
10:00 AM.
> Last, restore the tail log backup WITH RECOVERY to recover the final 3 minutes of data.
Note:
> If we restore the last backup with NORECOVERY instead of RECOVERY, then use the query below to
bring the DB online:
RESTORE DATABASE [dbname] WITH RECOVERY
> If the LDF file is corrupted then the DBA cannot recover the active transactions from the LDF file.
Backup strategy 2:
Recovery process:
1. Whenever the database crashes, attempt a tail log backup to recover the active transactions in
the log file.
2. Take the most recent full backup, which happened the previous Sunday at 10:00 AM, and restore it with
NORECOVERY...
4. Restore each t-log backup file taken after the differential backup of Saturday 10:15 PM, since t-logs are
incremental. Restore them with NORECOVERY [if the tail log backup succeeded]
[if the tail log backup did not work, then restore the last t-log backup with RECOVERY].
5. Last, restore the tail log backup WITH RECOVERY to bring the database online up to the point of failure.
Note: If the LDF file is corrupt it is 100% not possible to attempt a TAIL LOG backup; it fails with a "file
activation failure" error.
Backup strategy 3:
Full backup-10:00 AM
DIFF: 11:00 AM
TLOG: 12:00 PM
Full: 12:30 PM [BUT BACKUP FILE IS CORRUPTED]
DIFF: 1:00 PM
Recovery process:
2. Take the previous full backup (the one before last), because the most recent full backup @ 12:30 PM is corrupted, and restore it
with NORECOVERY.
Note: If a full backup is corrupted, the recent diff or log backups that depend on it cannot be used.
Recovery process:
A copy-only backup never disturbs the backup sequence, so if the 12:30 PM full backup had been taken as copy-only,
diff 2 could still be used on top of the earlier full backup.
Permission issue
Network issue
Wrong syntax
Db is unavailable
LSN mismatch
Trying to take a diff backup without any full backup existing
Permission issue
Network issue
Wrong syntax
LSN mismatch
Trying to restore a diff or log backup without restoring the full backup first
If you need to restore one database's backup file as another database in the same instance, then
use the "WITH MOVE" option.
Syntax:
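A minimal sketch (the logical names and paths here are hypothetical; get the real logical names from RESTORE FILELISTONLY):
RESTORE DATABASE [newdb]
FROM DISK = 'path\olddb_full.bak'
WITH MOVE 'olddb_data' TO 'D:\Data\newdb.mdf',
     MOVE 'olddb_log'  TO 'E:\Log\newdb_log.ldf',
     RECOVERY;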
MSDB Tables for backup information:
1. Dbo.backupfile:
Stores:
Database file,
2. Dbo. Backupfilegroup:
File group id
Backup set
Is default
Is read_only
3. Dbo.backupmediafamily
Media set
Media count
4. Dbo.backupmediaset
media_family_count
Is password _protected...
5. Dbo.backupset
Backup lsn
TYPE OF BACKUP
DB Name
Size
Password protected
Collation setting...
Compatibility
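A typical query against these msdb tables to list recent backups (a sketch; in backupset.type, D = full, I = differential, L = log):
SELECT TOP (20)
       bs.database_name,
       bs.type,
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;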
-------------------------------------------------------------------------------------------------------------------------------
1. Dbo.restorefile
File number
2. Dbo.restorefilegroup
Restore History
3. Dbo.restorehistory
Restore date
Note:
Case 1:
If you need to restore a backup as a different database in the same instance while the original physical files are
still in place, then you need to use the "WITH MOVE" option (pointing to new file locations)....
PIECE MEAL RESTORE
Piecemeal restore, introduced in SQL Server 2005, allows databases that contain
multiple file groups to be restored and recovered in stages.
Piecemeal restore involves a series of restore sequences, starting with the primary file
group and, in some cases, and one or more secondary file groups.
Piecemeal restore maintains checks to ensure that the database will be consistent in the
end.
Piecemeal restore works with all recovery models, but is more flexible for the full and
bulk-logged models.
Every piecemeal restore starts with an initial restore sequence called the partial-restore
sequence. Minimally, the partial-restore sequence restores and recovers the primary file
group.
During the piecemeal-restore sequence, the whole database must go offline. Thereafter,
the database is online and restored file groups are available...
Piecemeal restore:
Purpose: we restore the database piece by piece in SQL Server, by restoring the database
partially.
Note: In real time, if any users want to access a specific table\tables which reside in one filegroup,
you have the option to restore only that specific filegroup of the database instead of the entire database by using
a piecemeal restore.
Advantages: saves time, saves storage cost, allows quick testing, and allows controlling user access at a
specific level.
Note: Piecemeal restore is not possible in the Standard edition of SQL Server.
ABC - Database
A - FG - A1
B - FG - B1
C - FG - C1
Insert into A1 values (2,'A1')
WITH NORECOVERY
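The demo script around these fragments is not fully preserved; a minimal piecemeal-restore sketch for the ABC database above (assuming filegroup backups of PRIMARY and A exist at the hypothetical paths shown) would look roughly like this:
-- partial-restore sequence: PRIMARY first
RESTORE DATABASE [ABC] FILEGROUP = 'PRIMARY'
FROM DISK = 'path\ABC_primary.bak'
WITH PARTIAL, NORECOVERY;
RESTORE DATABASE [ABC] WITH RECOVERY;      -- database is online; only PRIMARY is available
-- later, bring the remaining filegroups online one by one
RESTORE DATABASE [ABC] FILEGROUP = 'A'
FROM DISK = 'path\ABC_A.bak' WITH RECOVERY;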
-------------------------------------------------------------------------------------------------------------------------------
Backup database [dbname] to disk='\\ipaddress\drive$\folder name\filename.bak'
4) Veritas NetBackup
6) Acronis
7) EMC Networker
9) ZAMANDA (AMANDA)
DATABASE REFRESH
Taking a backup of a production SQL Server database and restoring it into a development SQL Server
is called a "DATABASE REFRESH".
Steps:
2. Always use the copy method to copy the backup file from the prod to the test\dev server.
If I don't have space on the dev\test server then what do I have to do?
Step 1: Check the drive for any unnecessary\old backup files; if there are any, delete them to
reclaim some additional space, then try to copy the backup again.
OR
Step 2: If there are no old backup files, then ask the storage team to expand the disk or
add a new disk... Meanwhile, send an email update to the requestor.
3. Once the backup file is copied to dev\test, perform the restore.
4. Move the backup file from the source to the destination server by mapping the drive from either of the
servers.
5. Restore on the destination server, changing the file paths to the destination instance's paths.
7. Issue after a database refresh: whenever we perform a backup and restore between
different instances we usually get the ORPHAN user issue.
What is an orphan user? A database user that does not have a corresponding login is called an "ORPHAN USER".
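A common fix for orphan users after a refresh (both commands ship with SQL Server; ALTER USER ... WITH LOGIN is the newer form):
EXEC sp_change_users_login 'Report';                 -- list orphan users in the restored database
EXEC sp_change_users_login 'Auto_Fix', 'username';   -- remap an orphan user to the login of the same name
-- or, the newer equivalent
ALTER USER [username] WITH LOGIN = [username];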
This method helps you recover the data up to a point in time from the backup files, always by using the
syntax below:
Syntax:
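If this refers to a point-in-time restore, a minimal sketch (the timestamp and file names are hypothetical):
RESTORE DATABASE [dbname] FROM DISK = 'path\dbname_full.bak' WITH NORECOVERY;
RESTORE LOG [dbname] FROM DISK = 'path\dbname_log1.trn'
WITH STOPAT = '2015-01-01 10:30:00', RECOVERY;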
------------------------------------------------------------------------------------------------------------------------------
Note:
1. When we change the recovery model from full or bulk-logged to simple, or from simple back to full\bulk-
logged, we should trigger a fresh full backup to re-establish the LSN chain. Otherwise the LSNs
mismatch and subsequent backups may fail.
2. Higher version backup files are not possible to restore in lower version. But vice versa is
possible.
3. Does a full or differential backup clear the log? No. These backups do trigger a checkpoint indirectly, but the log itself is not truncated by them.
4. Do backups read data through the buffer pool? No. Backups read the pages directly from the data files; pulling every page into the buffer pool would not be practical and would hurt performance.
> If one SPID issues an ADD FILE and another SPID starts a backup, the ADD FILE waits until the backup completes.
JOBS & MAINTENANCE PLANS
JOBS
Using jobs, a DBA can automate maintenance tasks on a defined schedule.
Minimum requirements:
> The SQL Server Agent service should be up and running.
> The job owner should have the required permissions.
> The job steps and schedules must be configured properly.
Disadvantage:
When a job has multiple steps we cannot define a separate schedule for each step; the job schedule applies to all of the steps every time it runs.
-------------------------------------------------------------------------------------------------------------------------------
Job failure reasons:
Permission issues (msdb read/write)
Network failure during the backup
Job disabled
Agent service stopped
Database not online
SQL Server services stopped
T-SQL error in a job step
Disk space issues
Job owner login disabled
Note: The job owner should always be kept as the SQL Server Agent service account [local or domain account].
Note: Multiple jobs can be deleted at once from "OBJECT EXPLORER DETAILS" [F7].
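A minimal sketch of creating such a job through the msdb Agent procedures; the job name, database and schedule are illustrative:
USE msdb;
GO
EXEC sp_add_job         @job_name = N'Nightly Full Backup';
EXEC sp_add_jobstep     @job_name = N'Nightly Full Backup',
                        @step_name = N'Backup SalesDB',
                        @subsystem = N'TSQL',
                        @command   = N'BACKUP DATABASE SalesDB TO DISK = N''D:\Backups\SalesDB.bak''';
EXEC sp_add_jobschedule @job_name = N'Nightly Full Backup',
                        @name = N'Daily 1 AM',
                        @freq_type = 4,              -- daily
                        @freq_interval = 1,
                        @active_start_time = 010000; -- 01:00:00
EXEC sp_add_jobserver   @job_name = N'Nightly Full Backup';   -- target the local server
GO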
MAINTENANCE PLANS
Maintenance Plans:
> Introduced in SQL Server 2005 (initially buggy; stabilized with SQL 2005 SP1).
> Maintenance plans can be configured in 2 ways:
1. Wizard
2. Design surface (flow-chart style)
Note:
> From SQL Server 2008 onwards an "ignore offline databases" option was introduced, which skips any offline database and lets the maintenance plan complete; in SQL 2005 the plan fails if any database is offline.
> When multiple tasks are selected in a SQL 2005 maintenance plan, multiple dependent jobs are created; from SQL 2008 onwards a single job covers multiple tasks or steps.
> Maintenance plans always execute via SQL Server Agent jobs.
-------------------------------------------------------------------------------------------------------------------------------
Maintenance plan types:
There are 11 maintenance plan task types in total, for example:
Daily:
Backup - transaction log
Backup - differential
Check Database Integrity Task
Maintenance Cleanup Task
Replication maintenance job
Weekly maintenance:
Backup - full
Rebuild Index Task
Reorganize Index Task
Update Statistics Task
Note: On the Windows side, automated tasks can also be configured using WINDOWS TASK SCHEDULER.
Note: A DBA can use Windows Task Scheduler to automate SQL-related tasks on SQL Server Express Edition, which does not include the SQL Server Agent service; without the Agent service, jobs and maintenance plans cannot run, so Windows Task Scheduler is used instead.
Note: You cannot delete a job that belongs to a maintenance plan directly; delete the maintenance plan first, which automatically deletes its dependent job.
ATTACH & DETACH DATABASE
Steps:
Method 1 - GUI:
> Go to the database > right-click > Tasks > Detach.
Note: Detach removes the database from the instance (it disappears from SSMS) but leaves the physical files on disk.
> Copy the MDF and LDF files from the source server to the destination server.
Note: Always use the copy-and-paste method, not cut-and-paste.
> Attach on the destination server.
METHOD 2 - QUERY ANALYZER
Attach / Detach:
Process to move a database:
Step 1: Detach the database using the following script
USE [master]
GO
EXEC sp_detach_db @dbname = N'dbname'
GO
Step 2: Move the data files and log files to the new location
Step 3: Attach the database using the following script
USE [master]
GO
CREATE DATABASE [dbname] ON
(FILENAME = N'<path of dbname.mdf>'),
(FILENAME = N'<path of dbname_log.ldf>')
FOR ATTACH
GO
Post-attach verification steps:
> Validate that the database is up and running fine.
> Inform the application team to check connectivity and data.
> Change the compatibility level if you attached the database to a higher version of SQL Server.
Note:
Limitations:
> The attach/detach method does not work from a higher version of SQL Server to a lower version.
> The attach/detach method does not work for system databases.
COPY DATABASE WIZARD
Note:
> The Copy Database Wizard can be used even when the database is in READ_ONLY mode.
> This method uses SSIS (Integration Services) to execute the copy on the destination.
Limitations:
>System databases
>Databases marked for High availability.
>Databases marked Inaccessible, Loading, Offline, Recovering, Suspect, or in Emergency Mode.
SYSTEM DATABASES OVERVIEW
> In SQL Server, system object data is stored logically in the master database but physically in the resource database.
EX: any table or view whose name starts with sys.
> Whenever SQL Server restarts, the engine reads the master MDF and LDF paths from Configuration Manager > Advanced > Startup Parameters (the MDF and LDF file locations).
Recovery model
Collation
Root path....etc
3. MSDB [DBID: 4]
4. TEMPDB [DBID: 2]
Stores temporary and intermediate objects, for example:
Triggers
Functions
Work tables for joins
Local temporary tables (#)
Global temporary tables (##)
Indexes (on temporary objects)
Row versions [row versioning, SQL 2008-era feature]
Table-level information
5. Resource database: [DB ID: 32767]
> New feature in SQL Server 2005 version onwards
> Physically stores sys object information
> Any service pack\hot fix\cu entries or updates at resource database
> Upgrade SQL Server entries happen in Resource database
> Read_only database
> Hidden database
> No entry in master database related to resource database
> Only SELECT queries work against it. You can find out when the resource database was last updated; the current resource version information is also exposed:
SELECT SERVERPROPERTY ('ResourceVersion') ResourceVersion,
SERVERPROPERTY ('ResourceLastUpdateDateTime') ResourceLastUpdateDateTime
GO
-------------------------------------------------------------------------------------------------------------------------------
OPERATIONS ALLOWED\NOT ALLOWED IN SYSTEM DATABASES:
Operations can perform on System db:
> Backups should be taken for master, model and msdb; the BACKUP statement does not work for tempdb or the resource database.
> Adding a data file is possible only for msdb and tempdb; adding filegroups is possible only for msdb.
> Files [.mdf and .ndf] can be moved from one drive to another; this is called "FILE MOVEMENT".
> Shrinking is possible only for tempdb.
Don't do the following on system databases:
> Never create user-defined tables in them.
> Never add files or filegroups to the master and model databases.
SUSPECT DATABASE
[User db corruption]
Suspect is a state where database becomes inaccessible due to different reasons
Reasons:
1) Data and Log files missing or corrupt
2) Corruption of pages in the Data and Log files.
3) Synchronization issues between data and log files
4) Issues that are caused during Recovery/Restoring process
5) A sudden shutdown of the database or instance.
6) Killing a SPID while a transaction is rolling back.
7) The database flags are in an inactive status.
Case 1: LDF File corruption
Steps to Resolve:
1) Identify if database is really in suspect state or not.
Select databasepropertyex ('dbname','status')
2) Attempt to reset the suspect flag using sp_resetstatus
EXEC sp_resetstatus 'test'
3) Set the EMERGENCY mode on for the database for further troubleshooting. Emergency
mode is a READ_ONLY state and gives some base for identifying the cause of the issue.
Alter database dbname set emergency
4) Put database in Single User mode, to avoid connection conflicts.
Alter database <Dbname> set Single_user with rollback immediate
5) Run DBCC CHECKDB on the database to identify if the issue is with Data files or Log files.
Running CHECKDB reports any consistency and allocation errors; if none are found, the data file is considered clean and the issue most likely lies with the log file.
Output should say:
CHECKDB found 0 allocation errors and 0 consistency errors in database 'test'.
6) Detach the database and delete log file from the path.
sp_detach_db @dbname='dbname'
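The sequence typically finishes by re-attaching the clean data file and letting SQL Server build a new log; a sketch, assuming the standard approach and an illustrative path:
CREATE DATABASE [test]
ON (FILENAME = N'D:\Data\test.mdf')
FOR ATTACH_REBUILD_LOG;
GO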
To monitor long-running recovery/restore progress:
Select * from sys.dm_exec_requests
Column: "estimated_completion_time"
------------------------------------------------------------------------------------------------------------------------------
Case 2: If data file corrupt?
Restore from recent backup file
Note: If the .mdf itself is corrupt, the database cannot be brought into emergency mode...
-------------------------------------------------------------------------------------------------------------------------------
Case 3: If page corrupt?
- How to find which page got corrupted?
Run DBCC Checkdb (dbname)
GUI PATH:
Database> Restore> Page> Run Checkdb> Select backup file> ok
Note: Page-level restore has been possible since SQL Server 2005 using T-SQL; from SQL Server 2012 onwards page restore is also available in the GUI.
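A minimal sketch of a page-level restore; the file_id:page_id value and backup paths are illustrative (corrupt pages can be found with DBCC CHECKDB or in msdb.dbo.suspect_pages):
RESTORE DATABASE [dbname] PAGE = '1:57'
FROM DISK = N'D:\Backups\dbname_full.bak'
WITH NORECOVERY;
RESTORE LOG [dbname] FROM DISK = N'D:\Backups\dbname_log.trn' WITH RECOVERY;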
-------------------------------------------------------------------------------------------------------------------------------
Note:
[Late recovery Option: Client approval is required]
If no backup is available and issues are found with the data file or log file, use:
DBCC CHECKDB ('dbname', REPAIR_ALLOW_DATA_LOSS)
This command repairs the database but carries a risk of data loss. Take proper approvals before performing this step.
Note: Your system databases never go to suspect mode.
Background about CHECK DB
If you run Checkdb what are the internal queries will execute?
Checks the logical and physical integrity of all the objects in the specified database by
performing the following operations:
>> Runs DBCC CHECKALLOC on the database: checks the consistency of disk-space allocation structures for the specified database.
>> Runs DBCC CHECKTABLE on every table and view in the database: checks the integrity of all the pages and structures that make up the table or indexed view.
>> Runs DBCC CHECKCATALOG on the database: checks for catalog consistency within the specified database (the database must be online).
MASTER CORRUPT:
> Master is the most crucial database in an instance, if it is corrupt entire instance gets affected.
Master corrupt:
Error number: 3411. Error message: the service could not start "in a timely fashion".
OS error 112 indicates insufficient disk space.
Partially corrupt:
1. If the master database is corrupt, it is either completely or partially corrupt. If it is partially corrupt, the instance will still start with the -m;-T3608 startup parameters; if it is completely corrupt, the instance will not start at all.
2. Put the instance in single-user mode.
3. Restore the master database WITH REPLACE:
Restore database master from disk=N'F:\Master.bak' WITH REPLACE
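A sketch of the whole sequence from an elevated command prompt, assuming the default instance (the -m and -T3608 options can also be set as startup parameters in Configuration Manager); after master is restored the instance shuts itself down:
net start "SQL Server (MSSQLSERVER)" /m /T3608
sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK = N'F:\Master.bak' WITH REPLACE"
net start "SQL Server (MSSQLSERVER)"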
Completely corrupt:
1) If the master database does not start even with -m;-T3608, we need to rebuild the master database.
Use the command prompt, navigate to the path of setup.exe [the SQL Server installation media], and start the rebuild.
2) Rebuild master (SQL Server 2005 syntax):
Start /wait setup.exe /qb INSTANCENAME=sql2005 REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=Admin123
5) Start instance normally by removing trace 3608
Net stop "SQL Server (MSSQLSERVER)"
Net start "SQL Server (MSSQLSERVER)"
OTHER METHOD: Copy and paste the .mdf and .ldf files from another instance of the same version [this requires taking that instance offline].
Note:
From SQL Server 2008 onwards, Microsoft ships a TEMPLATE folder which contains fresh "MASTER, MODEL and MSDB" MDF and LDF files.
If any system database is corrupted, you can use these template files to bring the instance up quickly and then restore from backup to get the up-to-date data.
MSDB CORRUPT:
1) Verify the reason for the failure in the error logs and troubleshoot accordingly. If the database is really corrupt, look for an available valid backup; if one exists, restore MSDB like a normal user database.
2) If no backup is available, stop the instance, restart it with the /m and /T3608 startup parameters, and detach msdb:
SP_detach_db 'MSDB'
4) Execute the script in %Root Directory%\Install\instMSDB.sql file.
ISSUE: after an msdb rebuild, SQL Server Agent is not able to start. How to resolve?
Solution (re-enable the Agent extended stored procedures):
sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
RECONFIGURE;
GO
http://techydave.blogspot.in/2013/02/sql-server-agent-agent-xps-disabled.html
TEMPDB CORRUPTION:
1. If tempdb is corrupt, the instance will not respond and will sit in a hung state, effectively a crash.
2. To resolve, restart the SQL Server instance so that the tempdb files are recreated.
Startup parameters:
-T: enables a trace flag at startup
-d: fully qualified path of the master data file
-e: path of the error log file
-l: path of the master log file
-c: starts the instance faster (not as a Windows service)
-m: single-user mode
-s: specifies the named instance of sqlservr.exe to start
-n: does not log SQL Server events to the Windows event log
-------------------------------------------------------------------------------------------------------------------------------
Trace Flag 3607:
Starts SQL Server without recovering any databases
FILE MOVEMENTS
Alter database dbname modify file (name='logical file name', filename='new path') -----Data file
Alter database dbname modify file (name='logical file name', filename='new path') -----log file
Note: Verify whether logical name is updated with new path by using sp_helpfile again...
3. Take the database offline. [Kill any active sessions running against the database before taking it offline.]
4. Copy the physical files to the new path and bring the user database online.
Note: When moving the physical .mdf, .ldf and .ndf files always use copy & paste, not cut & paste.
5. Run sp_helpfile again to confirm the new paths.
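A minimal sketch of the offline/online steps around the physical copy; the database name is illustrative:
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- copy the .mdf/.ldf/.ndf files to the new path at the OS level, then:
ALTER DATABASE SalesDB SET ONLINE;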
Note: When performing a user-database file movement you only need to take the database offline, not the SQL Server services.
Note: Ask the application team to check connectivity to the database; only after their confirmation should you delete the old files from the physical location.
Note: After completing the file movement, clear the old files from the directory using SHIFT+DELETE rather than a plain delete; a plain delete sends the files to the Recycle Bin, and if the C:\ drive runs out of space the server can end up in a hung state, which is a bigger risk.
-------------------------------------------------------------------------------------------------------------------------------
TEMPDB FILE MOVEMENTS:
1. Collect the logical file names for tempdb (sp_helpfile).
2. Update the file paths for the tempdb database using the ALTER DATABASE command with the new path.
Named Instance:
Net stop "SQL Server (Named instance Name)"
Net Start "SQL Server (Named instance name)"
5. The new tempdb files [.mdf, .ldf] will be created automatically in the new path when the instance starts.
6. NOTE: After verifying that the instance is OK, remove the old tempdb files.
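A minimal sketch of step 2, assuming the default tempdb logical file names; the new paths take effect only after the instance restarts:
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'E:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'E:\TempDB\templog.ldf');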
Note: For system databases you cannot perform offline, attach, detach, drop, delete
[Master database file movement]
2. Go to Configuration Manager --> Advanced tab --> Startup Parameters --> specify the new path for the .mdf and .ldf files.
3. Stop the SQL Server instance.
4. Copy the physical .mdf and .ldf files to the new location.
5. Start the SQL Services.
Note: Instance-level downtime is required.
-------------------------------------------------------------------------------------------------------------------------------
Alter database msdb modify file (name='MSDBLog', filename='D:\DATA\MSDBLog.ldf')
IMPORT / EXPORT
> The main purpose is to load data from SQL Server into third-party targets such as Notepad (flat files), Excel, or another RDBMS (Oracle, etc.) using the export technique, (or)
> to load data that lives in third-party files such as Notepad, Excel, or another RDBMS (Oracle, etc.) into a SQL Server database using the import technique.
http://searchsqlserver.techtarget.com/feature/The-SQL-Server-Import-and-Export-Wizard-how-to-guide
http://sqlage.blogspot.in/2014/03/how-to-use-importexport-wizard-in-sql.html
> The SQL Server Import and Export Wizard is based on SQL Server Integration Services (SSIS). You can use SSIS to build extraction, transformation and load (ETL) packages and to quickly create packages for moving data between Microsoft Excel worksheets and SQL Server databases.
> Launch SQL Server Import and Export Wizard by one of the following methods:
Method 1: On the Start menu, roll the cursor over All Programs, scroll down to Microsoft SQL
Server and then click Import and Export Data.
Method 2: In SQL Server Data Tools (SSDT), right-click the SSIS Packages folder and then click
SSIS Import and Export Wizard.
Method 3: In SQL Server Data Tools, go to the Project menu and click SSIS Import and Export
Wizard.
Method 4: In SQL Server 2014 Management Studio, connect to the Database Engine server
type, expand Databases, right-click a database, point to Tasks and then click Import Data or
Export data.
We have received SourceFile.xlsx file and we have to load that to SQL server Table. We can
either create SSIS Package in BIDS or we can use Import/Export Wizard to load this file in SQL
Server Table. In this post, we will use Import/Export Wizard.
Step 1:
Right Click on Database in which your table exists or you want to create it and load Excel data as
shown below
Fig 2: Import Data by using Import/Export Wizard in SQL Server Table.
Choose the data source which you want to use as source, as we are loading data from Excel,
Choose Excel file as shown below
Fig 3: Choose Excel Data Source in Import Export Wizard
Choose a Destination:
Choose the destination where you want to load the data from source. In our case we are
loading our data to SQL Server Table. Configure as shown below
Fig 4: Choose SQL Server as Destination
You can directly choose the table from which you want to load the data, or you can write a query if you are using a database as your source. As we are using Excel as the source, we will choose a table (sheet).
Fig 5: Choose Copy data from one or more tables or views
In this part of the wizard, we select the tables or views to use from the source and load into the destination. As we are loading data from Excel, the Excel tabs are shown. Choose the sheet (tab) you want to load. Under Destination, the wizard shows the same name as the source; I have changed it to Customer Data, but you can choose any table name you want, and you can select multiple sheets or tables from the source.
Fig 6: Select Source Tables/Views in Import Export Wizard
Column Mappings:
Click on Edit Mappings and then you can map the source columns to destination columns, also
if you need to choose correct Data type, you can change here.
Fig 7: Column Mapping Import Export Wizard
By Default, Run immediately is checked. I have changed the option to Save SSIS Package and
provided the location where I want to save the SSIS Package. Also there is no sensitive
information that I want to save in Package such as Password so I have selected Do not save
sensitive data.
Fig 8: Save SSIS Package to File System
Provide the name of SSIS Package and File Location as shown below
Fig 10: Provide Name for SSIS Package
Summary of all the steps will be shown to you in this step. You can see the source and
destination etc.
Fig 11: Summary of Steps
Once you hit Finish button, The Wizard will execute all below steps and finally save the SSIS
Package.
Fig 12: Save the SSIS Package to given location
Fig 13: SSIS Package created by Import/Export Wizard
To execute this package, double click on it and below window will open. If you need to change
the name of File or SQL Server, you can go to Connection Managers and change it. In my case, I
do not want to make any changes. Press Execute Button
Fig: 14 Execute Package Utility
Once you hit Execute, Package Execute Progress window will appear and you will be able to see
the progress of execution of your SSIS Package.
Fig 15: Package Execution Progress.
Import/Export Wizard is a way to quickly load data between different sources and destinations.
You can create your SSIS Package quickly by using Import/Export Wizard and then add to SSIS
Project and make changes if required.
If we need to export data from SQL Server then we need to Right Click on Database-->Tasks-->
Export Data and Import/Export Wizard will start
DATABASE MAIL CONFIGURATION
INTRODUCTION
> This is an enterprise solution for sending mails from the SQL Server database engine to
SMTP servers. SQL Server database applications can communicate with users through an
email system. It provides features like scalability, security, and reliability.
> It uses an SMTP server to send mail. SQL Server 2000 supports SQL Mail, which supports MAPI
profiles to send email instead of an SMTP server. SQL Mail requires a MAPI-compliant mail
server (Microsoft Exchange Server) and a MAPI client (Microsoft Outlook).
> We can send a text message, query result, file as attachment. The database mail can be used
to notify users or administrators regarding events raised in SQL Server. For example, if an
automation process like replication, database mirroring fails or there are latency related
problems then SQL Server can use this feature to notify the administrators or operators.
Points to Remember
Unlike SQL Mail, Database Mail does not require a MAPI-compliant mail client (such as Outlook Express) or an extended programming interface.
Better performance. Impact of sending mails to SMTP servers by SQL Server is reduced as this
task is implemented by an external process initiated by the DatabaseMail.exe file.
Works fine in a cluster based environment.
64-bit support.
Database mail configuration information is maintained in an MSDB database.
Only members of Sysadmin and DatabaseMailUserRole database role of MSDB can send mails
by default.
Allows sending messages in different formats like text and HTML.
Supports logging and auditing features through different system tables of MSDB.
Sp_send_dbmail
This is a system defined stored procedure which is used by SQL Server to send email using the
database mail feature. This stored procedure is present in the MSDB database.
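A minimal sketch of sending a test mail once a Database Mail profile exists; the profile name and recipient address are illustrative:
EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'DBA_Mail_Profile',
     @recipients   = N'dba-team@example.com',
     @subject      = N'Database Mail test',
     @body         = N'This is a test message from SQL Server Database Mail.';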
MSDB Database
Consists of all stored procedures, system tables, and database roles related to database mail.
Service Broker
To establish communication between the SQL Server engine and the database mail engine we
need a service broker. It submits the messages to the mail engine.
DatabaseMail.exe
This file is present in the Binn folder of the respective instance. It is the database mail engine.
How it works:
When a run-time error occurs in an automated task such as a backup or replication, the database engine raises the error and passes the information to the Database Mail engine; the Database Mail engine then submits the mail to the SMTP server using the email ID and password defined in the mail profile, and finally the SMTP server delivers the mail to the recipients.
Error --> DB Engine --> DB Mail Engine --> SMTP Server --> Recipients
USE [master]
GO
ALTER DATABASE [MSDB] SET ENABLE_BROKER WITH NO_WAIT
GO
MSDB tables related to Database Mail
Steps to configure
Difference between Database Mail and SQL Mail:
1) Database mail is newly introduced concept in SQL Server 2005 and it is replacement of
SQLMail.
2) Database Mail is based on SMTP (Simple Mail Transfer Protocol) and also very fast and
reliable whereas SQLMail is based on MAPI (Messaging Application Programming Interface).
3) SQL Mail needs an email client (like Outlook/Express) to send and receive emails, whereas
Database mail works without any Email client.
4) SQL Mail works with only 32-bit version of SQL Server, whereas Database Mail works with
both 32-bit and 64-bit.
1. blog.sqlauthority.com/2008/08/23/sql-server-2008-configure-database-mail-send-email-from-sql-database/
2. www.codeproject.com/Articles/485124/Configuring-Database-Mail-in-SQL-Server
LITE SPEED
Key Benefits:
• Reduces storage requirements (up to 95% compression)
• Reduces backup times (up to 75% faster than native SQL Server)
• Reduces restore times
• Reduces network load
• Integrates fully into SQL Server
• The ability to create your backups with varying types of industry-standard encryption
• The ability to do object-level restores (i.e., tables, views, and stored procedures; this
feature is available only in the enterprise version)
• Mirroring of your backup files to multiple locations
• An enterprise console allowing you to control the backup and restores of all your MS
SQL Servers in one location
• Integrated log shipping
Lite Speed Advantages:
Lite Speed provides consistent compression and encryption for SQL Server and Oracle,
along with efficiency
Compression:
Lite Speed uses the same compression engine on both the Oracle and SQL Server
platform, so the compression ratio for like data is reliably similar on both databases.
Encryption:
Lite Speed provides encryption for both SQL Server and Oracle.
Efficiency:
Lite Speed performs backup compression on the database server, in memory, before the
backup is written to disk or shipped across the network to a tape system. All other
methods require more storage and/or greater network utilization because the
compression occurs after the backup is created.
• http://www.techrepublic.com/blog/howdoi/how-do-i-install-configure-and-use-litespeed-for-database-backups/167
Convert SQL LiteSpeed backup to native backup:
• http://kotako.wordpress.com/2010/03/02/convert-litespeed-backups-to-sql-server-backups-and-restore-them/
• http://easymssql.blogspot.co.uk/2010/01/need-to-convert-sql-lite-speed-backup.html
• https://support.quest.com/SolutionDetail.aspx?id=SOL22045
SHRINKING
> Shrinking releases unused space held by a database back to the disk.
> The only way to release space from a database file back to the OS in SQL Server is to shrink it.
Note: Shrink only when the database or file actually contains free space; if there is none, shrinking will not release anything to disk.
Cases:
> The log file can be shrunk at any time, based on the free space available inside it.
> Check whether any open transactions are running against the database before shrinking a data file; if there are, do not perform the shrink.
> Request downtime for any application that accesses the SQL Server so that resource utilization on the server is minimal.
> Shrink at most 5 or 10 GB from a database at a time; shrinking more than that in one go can cause a performance impact.
> Do not shrink the data file to its maximum capacity; always leave at least 10-20% free space in the data file.
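A minimal sketch of a controlled shrink; the database, logical file name and target size are illustrative:
USE SalesDB;
GO
EXEC sp_spaceused;                        -- check allocated vs. used space first
DBCC SHRINKFILE (N'Sales_Data', 51200);   -- target size in MB (here 50 GB), leaving free space behind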
How applications generally connect to database?
1. When an application tries to connect, it uses a config file (a configuration file residing on the application server). The config file should contain the SQL Server instance name + user name [the service account, which the DBA team must add to SQL Server under Security] + a strong password.
The application reads these details from the config file and points its connection at the SQL Server database.
> Most businesses maintain two data centers, referred to as the PRODUCTION and DR data centers.
> The distance between the data centers should typically not be more than about 50 km.
UPGRADATION & MIGRATION
UPGRADATION
http://www.esquillace.com/Tech/ScreenshotGallery/SQL/SQL2008/SQL2008UpgradeAdvisorInstall_and_Usage/SQL2008UpgradeAdvisorScreenshots.htm
Upgradation steps:
> Upgradation involves overwriting the existing SQL Server instance and moving it from one version to another.
> It covers applying a patch, upgrading to a new version, and upgrading to a new edition.
Steps:
Pre-upgrade steps
Upgrade steps
== Pre-upgrade steps:
== Upgrade steps:
3. Check that databases, logins, jobs and maintenance plans remain the same in SQL Server after the upgrade.
4. Verify sp_configure to check the instance configuration in SQL Server.
Note: If there are any SSIS/DTS packages or cubes, the BI team will take care of them.
Note: If the instance is configured with high availability, break the HA, install the higher version, and then reconfigure HA again.
Reasons for upgradation (improvements across the components):
Database Engine
Analysis Services
Reporting Services
Integration Services
Upgrade Advisor:
1. It identifies issues that can prevent an upgrade from a lower version to a higher version.
Advisor report:
The Advisor produces a report listing the issues found during analysis, along with the associated tasks to manage for each issue.
The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files.
Note: Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures.
MIGRATION
Migration steps:
> Migration involves moving data or databases from one instance to another, within SQL Server or to/from another DBMS such as Oracle or Sybase.
> Examples: OS upgradation, moving data from one drive or server to another, moving to another DBMS, etc.
> Downtime is minimal, and once tested the new server is released.
https://sqlschoolhouse.wordpress.com/category/sql-server-database-migration/
Pre-Migration steps:
2) Disable all HA options and ensure all jobs are also disabled.
Migration steps:
3) Take a backup at the source (2000/2005) and restore it at the destination (2005/2008), or follow the detach/attach method. ONLY user databases should be backed up.
4) Change the compatibility level from 80 to 90 for all database(s) that have been migrated:
sp_dbcmptlevel
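A minimal sketch of step 4; the database name is illustrative (ALTER DATABASE ... SET COMPATIBILITY_LEVEL is the preferred form from SQL Server 2008 onwards):
EXEC sp_dbcmptlevel N'SalesDB', 90;
-- or, on SQL Server 2008 and later:
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 100;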
sp_help_revlogin
7) DBCC UPDATEUSAGE
> Over time, the table or index row counts and the page counts for data, leaf and reserved pages can become out of sync in the database system tables. The DBCC UPDATEUSAGE command corrects these inaccuracies and ensures the counts are up to date.
8) DBCC CHECKDB
> DBCC CHECKDB is a commonly used command that can check the allocation, structural and
logical integrity of a database and its objects.
9) Update Statistics
> Statistics carried over from SQL Server 2005 can be out of date, so it is very important to update statistics after moving the database to 2008.
> Updating statistics refreshes the statistics header (page) information, row counts, histograms and associated metadata:
sp_updatestats
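A minimal sketch of steps 7-9 run together after the migration; the database name is illustrative:
DBCC UPDATEUSAGE ('SalesDB');              -- correct row and page count inaccuracies
DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS; -- verify allocation, structural and logical integrity
USE SalesDB;
GO
EXEC sp_updatestats;                       -- refresh out-of-date statistics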
Torn Page Detection: Torn Page Detection grabs the first 2 bits of every 512 byte sector on each
page and stores those bits in the page header. When the page is later read back in from disk, SQL
Server compares those header bits with the bits from the sector, to make sure they’re still the
same. Now, that check is better than nothing, but you can see where it might potentially miss
corruption in the rest of the sector, right?
Checksum: the SQL Server Database Engine calculates a checksum over the contents of the whole page,
and stores the value in the page header when a page is written to disk. When the page is read from disk,
the checksum is recomputed and compared to the checksum value that is stored in the page header. This
helps provide a high level of data-file integrity.
11) Transfer of Jobs (/Logins) can also be done using DTS Packages by DTS/SSIS team. If there
are any DTS Packages in SQL Server 2000 they can be migrated to SSIS Packages in 2005.
If jobs have to be transferred by the DBA, he/she has to script out all the jobs manually and execute those scripts at the destination.
12) Finally after migration is completed ask Application team to perform App checks and
validate if SQL Server 2005 is compatible with the respective application or not.
Once the final go-ahead is given by the application team, the migration is confirmed as a success.
TEMPDB FULL
Tempdb full:
1) Increase the file size (if storage is available).
2) Add an NDF file on another drive (or the same drive).
3) Find the transaction occupying the most space in tempdb and troubleshoot or kill it, based on approval.
4) Shrink the tempdb data file (only if no OPEN transactions are in progress).
DBCC OPENTRAN / sys.dm_db_session_space_usage
5) DBCC FREEPROCCACHE
Temporary tables and table variables may be cached; caching allows operations that drop and create temporary objects to execute very quickly and reduces page-allocation contention.
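A minimal sketch of step 3 using the DMV mentioned above, to find the sessions allocating the most tempdb pages:
SELECT session_id,
       SUM(user_objects_alloc_page_count
         + internal_objects_alloc_page_count) AS pages_allocated
FROM sys.dm_db_session_space_usage
GROUP BY session_id
ORDER BY pages_allocated DESC;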
LOGFILE FULL
Log file full:
Error: 9002
Steps:
1. Query sys.databases and check the "log_reuse_wait_desc" column to find out what is preventing log reuse (for example, an active transaction):
Select * from sys.databases
DBCC SQLPERF (LOGSPACE)
2. Check whether the log file growth is set to restricted or unrestricted; if it is restricted, increase the maximum size or growth increment.
3. Verify whether log backups are running; if not, manually run one T-log backup.
4. Try to perform a SHRINK operation on the log file, which can release free space.
5. Check the disk space available on the drive holding the log file.
6. If disk space is not available, add an overflow log file on a disk that has space (see the sketch after this list).
7. Perform a file movement, after approval, to a disk with enough free space.
8. Verify again whether log backups are running; if not, manually run one T-log backup.
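A minimal sketch for step 6; the database, file name, path and sizes are illustrative:
ALTER DATABASE SalesDB
ADD LOG FILE (NAME = N'Sales_Log2',
              FILENAME = N'F:\Log\Sales_Log2.ldf',
              SIZE = 4GB, FILEGROWTH = 512MB);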
Last option:
TRUNCATE ONLY (not recommended on a production server):
Backup log dbname with truncate_only
Note: The inactive VLFs are discarded from the log file by the truncate operation; you may get free space in the log file, but the log chain is broken and there is a chance of data loss.
SQL SERVER SECURITY AND POLICY BASED MANAGEMENT
The Policy Based Management feature was introduced in SQL Server 2008. Its purpose is to assist SQL Server administrators in creating and enforcing policies tied to SQL Server instances and their objects. The policies can be configured on one SQL Server and re-used on other SQL Server instances, for example to provide a consistent SQL Server security model.
Policy Based Management allows DBAs to define the preferred state of the SQL Server
system components (e.g. instances and objects) and compare the compliance status
with the preferred state. Properly declared policies ensure enforcing company rules in
the SQL Server environment, and are commonly a part of the SQL Server security model.
The Policy Based Management feature is built on top of the SQL Server Management
Objects collection (objects that are designed for programming all aspects of managing
Microsoft SQL Server) which supports SQL Server 2000 and later versions. Therefore
Policy Based Management can be utilized on versions prior to SQL Server 2008, for
instance via the PowerShell subsystem and SQL Server Agent.
Policy Management allows creating policies for various facets with a specified condition.
Facets: a facet is a property of SQL Server that a policy can manage. There are many facets against which policies can be implemented. For example, we will use the "Database Options" facet to implement a policy which ensures that the Auto Shrink option is TRUE for all databases hosted on the server; similar policies can be created on the Stored Procedure facet.
Conditions: It is the criteria upon which the facet is evaluated. While designing a policy for the
server, the first step is to create a condition which is to be evaluated by the policy for the facet.
Policies: a SQL Server policy is a set of basic principles and associated guidelines, formulated and enforced by the server's policy manager, with which the targeted server facets must conform; over the long run this keeps the server consistent and helps the DBA meet organization-level IT norms.
Example 1
Scenario: We will create an on demand policy to ensure that all the databases have the Auto
Shrink option set to True. By default, a database that is created has Auto Shrink set to False, as
shown in the figure below.
Step 1: Creating a Condition
Next, provide a name to the Condition: “Check Auto Shrink”, and select the facet from the
Facets drop down as “database option”. In the Expression Editor, choose the field from the
drop down “@AutoShrink”, select operator as “=”, and value as “True”.
The condition will check all databases for their auto shrink properties to be true.
Click OK.
Step 2: Create a Policy
Provide a name as “AutoShrinkPolicy”; from the Check condition drop down, select the
Condition we just created. And from Targets, check every database as we want every database
to conform to this policy.
Next is the evaluation mode. Let’s keep it “On demand” for this example. On demand means
we will evaluate the policy at our will instead of at a predefined schedule.
Click OK.
Step 3: Evaluation
We have been able to create the policy; now we will let the Policy Manager evaluate the policy.
To evaluate, right click the Policy “AutoShrinkPolicy” and click Evaluate. SQL Server evaluates
and lists the result as shown in the screenshot below. Since for none of my databases Auto
Shrink is True, there are non-conformations for each one of the hosted databases on my server.
To bring the non-conforming databases in line with the policy, tick the checkbox against each database and click the Apply button.
This will set the Auto Shrink property for TestDB to True and a green sign will denote its
conformance.
HIGH AVAILABILITY
High Availability: SQL Server provides several options for creating high availability for a server or a database.
HA means operations continue when a component fails; this is usually a hardware component such as a CPU, power supply, disk, memory, or the complete server.
With HA there is usually no loss of service when a component fails.
LOG SHIPPING
• Log shipping is used to keep distributed databases synchronized by backing up, copying and restoring transaction logs. SQL Server uses SQL Server Agent jobs to automate these processes.
Or
• It automatically sends transaction log backups from one database (Known as the
primary database) to a database (Known as the Secondary database) on another server.
An optional third server, known as the monitor server, records the history and status of
backup and restore operations. The monitor server can raise alerts if these operations
fail to occur as scheduled.
Or
• Shipping the transaction log backups from one server to another server called "Log
shipping"
The main functions of Log Shipping are as follows:
6. Create a copy share on the secondary server and grant at least read permissions on it.
7. The SQL Server service account should have permissions on the backup and copy shares.
9. Any edition can be used for log shipping except Express edition [it has no Agent service].
For implementing Log Shipping, we need the following components - Primary Database
Server, Secondary Database Server, and Monitor Server.
• Primary Database Server: Primary Sever is the Main Database Server or SQL Server
Database Engine, which is being accessed by the application. Primary Server contains
the Primary Database
• Monitor Server: the monitor server is a SQL Server Database Engine instance that tracks the log shipping process.
Note: If a monitor server is included in the log shipping configuration, the alert job is created on the monitor server itself; if no monitor server is included, the alert job is created on both the primary and the secondary server.
Backup job:
Copy job:
• A SQL Server Agent job that copies the backup files from the primary server to the
secondary server
Restore job:
• A SQL Server Agent job that restores the copied backup files to the secondary database
Alert job:
• A SQL Server Agent job that raises alerts for primary and secondary databases when a
backup or restore operation does not complete successfully within a specified
threshold.
Architecture points:
1. This is one of the database-level HA options in SQL Server.
3. The backup job takes a transaction log backup of the primary database into the backup share.
4. The copy job picks up the t-log backups from the backup share and copies them to the copy folder on the secondary server.
5. The restore job picks up the backup files from the copy folder and restores them to the secondary database.
3. Connect to the secondary instance, then provide the copy share path and the restore mode for the secondary database:
1. No recovery: users cannot access the database at all.
2. Standby: the database is in read-only mode and users can read the data.
Note: In log shipping, the secondary database can therefore be used for reporting purposes.
4. Add a monitor server if needed and click OK.
5. Verify the log shipping status.
Advantages:
• Data transfer: T-logs are backed up and transferred to the secondary server.
• Transactional consistency: all committed and uncommitted transactions are transferred.
• Server limitation: can be applied to multiple standby servers.
• Secondary database mode: standby mode [read-only], useful for reporting purposes.
• Recovery model: supports full and bulk-logged; switching the database to simple recovery will cause log shipping to stop functioning.
• The edition does not have to be the same on the primary and secondary servers.
Disadvantages:
• Failover: manual.
• Failover duration: can take more than 30 minutes.
• Role change: the role change is manual.
• Client re-direction: manual changes are required.
7) Changing the Backup/Copy/Restore job owner can cause permission issues and break LS.
9) A WITH RECOVERY statement fired on the standby server brings the secondary database ONLINE and breaks LS.
12) Changing the backup schedule can cause a large delay, which may raise an alert.
1. Monitoring in Log-shipping
2. What are the log shipping jobs?
3. Failover
4. Switchover
5. What are the reasons log shipping fails? If fail what happened?
6. What are the common errors numbers will get in log shipping?
7. What is .tuf [Transaction undo file] purpose? Overview
8. If we delete .tuf file what happen to LS?
9. If we do insert, update, delete in primary server database, changes replicate to
secondary?
10. If I shrink primary database what happen to log shipping in secondary db?
11. If I delete some records in primary database what happen to log shipping in secondary
db?
12. If I do truncate in primary database what happen to log shipping in secondary db?
13. If I take a manual log backup for LS configured database will it impact secondary?
14. Adding multiple secondary server to the existing configuration?
15. Adding file to log shipping database?
16. Patch management process in LS instance?
17. Reasons a backup job, copy and restore job fails?
18. How do we change recovery models in log shipping (anything except simple)? Yes, we can change between full and bulk-logged; the simple recovery model is not supported for LS.
19. If the primary database log file is full, what happens to the secondary? How do we resolve it?
1. Go to each job, view its history and check the status to monitor log shipping. If the backup, copy and restore jobs are all running successfully, log shipping is in sync and working fine.
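A quick sketch of checking the same status from T-SQL, using the log shipping procedures and the msdb tables described below:
EXEC master.dbo.sp_help_log_shipping_monitor;            -- summary of primary/secondary status
SELECT * FROM msdb.dbo.log_shipping_monitor_primary;     -- last backup file and time
SELECT * FROM msdb.dbo.log_shipping_monitor_secondary;   -- last copied/restored file and time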
MSDB Tables:
1.dbo.log_shipping_primary_databases:
Backup share local path
Monitor server ID
2.dbo.log_shipping_primaries:
primary_id
primary_server_name
primary_database_name
maintenance_plan_id
backup_threshold
threshold_alert
threshold_alert_enabled
last_backup_filename
last_updated
planned_outage_start_time
planned_outage_end_time
planned_outage_weekday_mask
source_directory
3.dbo.log_shipping_primary_secondaries:
SECONDARY SERVER TABLE:
1.dbo.log_shipping_secondary_databases
Secondary ls databases
Restore mode
2.dbo.log_shipping_secondary:
Monitor server
3.dbo.log_shipping_secondaries:
primary_id
secondary_server_name
secondary_database_name
last_copied_filename
last_loaded_filename
last_copied_last_updated
last_loaded_last_updated
secondary_plan_id
copy_enabled
load_enabled
out_of_sync_threshold
threshold_alert
threshold_alert_enabled
planned_outage_start_time
planned_outage_end_time
planned_outage_weekday_mask
allow_role_change
1.dbo.log_shipping_monitor_primary:
Primary server
Primary database
Backup threshold
2.dbo.log_shipping_monitor_Secondary:
Primary server
Secondary server
Threshold
3.dbo.log_shipping_monitor_history_detail:
4.dbo.log_shipping_monitor_error_detail:
5.dbo.log_shipping_monitor_alert:
FAILOVER:
> This is done when the primary database is no longer available; it is not pre-planned.
> When the primary database goes down, the application can no longer connect and log shipping effectively no longer exists. Bringing the secondary database online manually is called a "failover".
Steps:
1. Disable the backup, copy, restore and alert jobs. Inform the application team to stop the application to avoid any user connections.
3. Compare the BACKUP SHARE and COPY SHARE and move any missing .trn files from the primary to the secondary copy folder.
== Manually copy and paste from the backup share to the copy share, checking the time stamps and LSN numbers,
== or simply run the copy job, which automatically copies the files to the copy share. Copy manually only the TAIL-LOG BACKUP.
Note: Find out which backup file was last restored on the secondary.
From the log shipping secondary tables in the msdb database we can see which backup file was restored last.
4. Restore the pending .trn backup files on the secondary database WITH NORECOVERY to recover the transactions (including the tail-log backup, if there is one).
Note: Restore or copy by running the commands manually, or by running the copy/restore jobs.
5. Restore the last .trn backup file WITH RECOVERY to bring the secondary database up and running. If a tail-log backup exists, restore that one last, WITH RECOVERY.
7. Inform the application team of the new server/instance name and database so they can resume transactions.
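A minimal sketch of steps 4 and 5 on the secondary; the database and file names are illustrative:
RESTORE LOG SalesDB FROM DISK = N'D:\LSCopy\SalesDB_20240101.trn' WITH NORECOVERY;
RESTORE LOG SalesDB FROM DISK = N'D:\LSCopy\SalesDB_tail.trn' WITH RECOVERY;   -- last (tail-log) backup
-- or, if no further log backups remain to restore:
RESTORE DATABASE SalesDB WITH RECOVERY;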
Note: Log shipping supports manual failover method and does not support automatic failover.
1. Whenever you create a login on the primary server, script the CREATE LOGIN on the primary and execute it on the secondary server.
3. The corresponding database user cannot be created manually on the secondary, because the secondary database is in a read-only/restoring state.
4. Wait for (or manually run) the backup/copy/restore jobs after the login creation.
5. Once they have run, the user appears under the secondary database automatically.
4. SWITCH-OVER:
> This is done when both the primary and the standby databases are available; it is pre-planned.
> Switch-over means swapping the roles: the primary becomes the secondary [it goes from ONLINE to restoring or standby mode].
DISASTER RECOVERY [DR process]: this concept is used with high availability solutions to test that both the primary and the secondary are working fine, instead of waiting for a real disaster; it can be called a proactive check.
A DR test typically happens every 6 months or 1 year. In real time, downtime is required for it.
Pre-step:
After the switch-over the primary becomes the secondary server; its role changes from ONLINE to RESTORING mode.
3. As in the failover procedure, compare the backup and copy folders and move any missing files.
4. Restore the secondary database using the t-log backups WITH RECOVERY to change its role; the secondary database is now in the online state (roles swapped).
5. Reconfigure log shipping in the reverse direction, from the secondary to the primary server [switchover].
Post Activities:
> Inform the application team to point connections to the current primary (the former secondary) for testing.
> If apps and users are able to work, the DR test is successful.
Note: If a monitor server is not included, the alert job is created on both the primary and secondary.
SCENARIO: 6 WHAT ARE THE REASONS LOG SHIPPING FAILS? IF FAIL WHAT HAPPENED ?
Reasons:
12. Database not available
13. Instance not available
14. Recovery model change
SCENARIO: 7 WHAT ARE THE COMMON ERRORS NUMBERS WILL GET IN LOG SHIPPING?
Error:
[.TUF file] The .tuf file contains only uncommitted transactions; it is created on the secondary server (not the primary) and only when the secondary database is in STANDBY mode.
The TUF file basically contains the information about any modifications that were part of incomplete transactions at the time the log backup was taken.
> This file always resides on the secondary server, in the copy folder.
Is there any way to re-create the .tuf file? I configured log shipping but the .tuf file has been removed; log shipping has stopped and we are unable to bring it back up.
Ans: The impact is only that the restore job fails on the secondary. We have to reconfigure log shipping one more time to re-create the .TUF file.
Note: Deleting the .TUF file impacts the restore job, but not the copy and backup jobs.
The .tuf file is updated dynamically whenever a log backup file is restored.
The .TUF file is created only on the secondary server, in the copy folder.
No .tuf file is created when the secondary is in NORECOVERY mode.
> .WRK [work file]: the work file is created on the secondary server; its main purpose is to track the copy job's progress. This file is used only by the log shipping copy job.
WORK FILE [.WRK]: to manage the file-copy process from the primary server to the secondary server, .WRK files are generated temporarily.
That is, a .wrk file is generated while a transaction log backup file is being copied from the backup location (commonly on the primary server) to the secondary server by the LS-Copy agent job on the secondary; when the file has been copied completely, it is renamed to the .trn extension.
The temporary .wrk extension ensures that the restore job will not pick up a file until it has been copied successfully.
SCENARIO 12: IF I TAKE A MANUAL FULL BACKUP OF AN LS-CONFIGURED DATABASE, WILL IT IMPACT THE SECONDARY DB?
Due to the LSN mismatch it introduces, log shipping will not work (the restore job fails).
Note: For this reason, whenever an ad-hoc full backup of a log-shipped database is required, always use a "COPY ONLY" full backup so that no LSN chain is disturbed and log shipping keeps working as usual.
3. For a second secondary server, create an additional copy job and restore job.
Note: With multiple secondary servers, each secondary server must have at least one copy job and one restore job.
Impact: After adding a file to a log-shipped database there is no impact to the backup job or copy job, but the restore job fails.
3. Identify which log backup was taken right after the file was added, and confirm it.
Note: All backup files up to that point are moved to the copy share and restored, except the backup taken after the file was added.
4. Go to the secondary and restore that log backup file manually with the MOVE option:
restore log dbname from disk='[path of the log backup file in the copy folder]'
with move '<new logical file name>' to '<new physical path on the secondary>', norecovery
5. Run the restore job to verify whether log shipping is working; if it is, log shipping is back in sync.
Note: Always keep the same version of SQL Server on the primary and secondary.
If the versions differ you can still perform all operations, except the SWITCHOVER scenario in log shipping.
Note:
1. In SQL Server 2005, if changes performed at the primary server fail and you need to roll back the data, you have to restore a fresh full backup on the secondary; after restoring the full backup, if you re-enable the log shipping jobs, the restore job fails due to an LSN mismatch.
2. In SQL Server 2008, you can simply take a fresh log backup from the primary and restore it on the secondary, and log shipping starts again automatically.
SCENARIO 16: REMOVING A FILE FROM LOG-SHIPPED DATABASES:
If we remove a file on the primary, it is removed from the secondary automatically after the next log backup is restored.
When an instance of SQL Server is configured as part of log shipping, it is important to install service packs in a correct sequence; otherwise we may get unexpected issues.
There is no strictly required sequence to apply a service pack to the primary, secondary and monitor servers in a log shipping environment. The following are the typical steps to apply a service pack:
Agent failure
Incorrect path
Network failure
Domain issue
Agent down
> In log shipping we can change the recovery model from full to bulk-logged or from bulk-logged to full; there is no impact on LS.
> But if we change from full to simple (or simple back to full), we must trigger a full backup as a mandatory step.
Note: System databases [master, model, msdb, Tempdb] log shipping configuration is not
possible.
Can log shipping be scripted? No; currently the only supported means of setting up log shipping is through the wizard.
Can log shipping be set up between servers in separate domains? Yes, it is possible.
In SQL 2005: it is not possible to configure log shipping between 2 different domains.
From SQL 2008 onwards: it is possible to configure log shipping between 2 different domains.
Use pass-through security. Configure Windows NT accounts with the same name and
passwords on the primary, secondary and monitor servers. Configure SQL Server related
services to start under these accounts on all servers and use SQL authentication while
setting up log shipping to connect to the monitor server. Or
Use conventional Windows NT security. You must configure the domains with two-way
trusts. SQL Server related services can be started under domain accounts. Either SQL
authentication or Windows authentication can be used by jobs on the primary and
secondary servers to connect to the monitor server
Note: In SQL Server 2000, only the Enterprise and Developer editions supported log shipping; from SQL Server 2005 onwards it is supported in all editions except Express.
• Re-establish log shipping only on standby server (i.e. remove the Copy/Restore/Alert
jobs and delete the standby database) and add the Standby instance as a new standby
database.
• You can set up log shipping between versions, however things aren't that simple. If you
use it in this way and you need to fail over to the standby server, you now have no way
to swap the log shipping roles, as you can't then log ship back from the 2008 server to
the 2005 server.
http://www.databk.com/walkthrough5.htm
http://support.microsoft.com/kb/314515
5. Failover Process
7. Primary database file is full?
13. If we change the recovery model, is there any impact to the log-shipped database?
> If you change the recovery mode of an existing secondary database, for example, from
No recovery mode to Standby mode, the change takes effect only after the next log
backup is restored to the database.
14. Scenarios:
DB MIRRORING
What is Database Mirroring?
> Database mirroring is primarily a software solution for increasing database availability.
> It maintains two copies of a single database, which must reside on different server instances of the SQL Server Database Engine.
(Or)
> Database mirroring transfers transaction log records directly from one server to another and
can quickly fail over to the standby server.
Mirroring pre-requisites:
> Make sure that the two partners, the principal server and the mirror server, are running the same version of SQL Server (the same edition is also recommended).
> At minimum SQL Server 2005 SP1 is required to configure mirroring; if you are on the 2005 RTM build you have to enable a trace flag:
DBCC TRACEON (1400) -- only for SQL 2005 RTM; from SQL 2008 onwards no service pack is needed and mirroring can be configured even on RTM.
• Principal – the server that holds the copy of the database that is accessible to client applications at any given time.
• Mirror – the server that holds the copy of the database that is always in a restoring state and is not accessible to applications.
• Witness – an optional server that provides the automatic failover mechanism in case of a failure on the principal server.
• Endpoint – a SQL Server object that enables the principal, mirror and witness servers to communicate over the network.
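A minimal sketch of creating such an endpoint on each partner; the port number follows the convention used in these notes:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);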
How to create the endpoint (port): see the sketch above.
A database snapshot on the mirror is possible for read-only access, which is ideal for reporting requirements.
Transaction safety level: determines whether changes on the principal database are applied to the mirror database synchronously or asynchronously. There are two safety levels, OFF and FULL.
1. Synchronous:
> In this mode, every log record sent from the principal to the mirror must be acknowledged by the mirror; the transaction is hardened (committed) on the mirror before the commit completes on the principal.
2. Asynchronous:
> In this mode, log records are sent from the principal to the mirror but no acknowledgement is required; the transaction commits on the principal itself without waiting.
> Another possible configuration is asynchronous mode with no witness server; although it can be set up, it is not recommended because it combines the risk of data loss with the split-brain scenario.
Mirroring architecture:
Architecture points:
1. In mirroring, the transaction log record is first written to the log file (log buffer) on the principal.
2. The same log records are sent from the principal to the mirror server database.
3. All data transfer from principal to mirror happens over ENDPOINTS.
4. Default endpoints used for mirroring in these notes: principal 5022, mirror 5023, witness 5024.
5. An exact copy of the database is maintained on the mirror server.
2. Restore the database on the mirror server with the same database name, WITH NORECOVERY.
Endpoint: the endpoint is what allows communication and data transfer from the principal to the mirror server database (see the sketch below).
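A minimal sketch of establishing the mirroring session over those endpoints once the restore is done (run on the mirror first, then on the principal); server names are illustrative:
-- On the mirror server:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principalserver.domain.com:5022';
-- On the principal server:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirrorserver.domain.com:5023';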
• Database mirroring architecture is more robust and efficient than Database Log
Shipping.
• It can be configured to replicate the changes synchronously to minimized data loss.
• It has automatic server failover and client failover mechanism.
• Configuration is simpler than log shipping and replication, and has built-in network
encryption support (AES algorithm).
• Does not require special hardware (such as shared storage, heart-beat connection) and
cluster ware, thus potentially has lower infrastructure cost
> If a transaction is not committed on the mirror for any reason, the same transaction is sent again from the principal to the mirror for further processing.
> This mode supports both automatic and manual failover methods.
1. MONITORING IN MIRRORING:
Note: SQL Server ships an in-built tool for monitoring database mirroring, the "LAUNCH DATABASE MIRRORING MONITOR".
> Database > right-click > Tasks > Launch Database Mirroring Monitor > check the columns:
CURRENT ROLE
MIRRORING STATE
> Unsent log: when the mirror database/server is not reachable, the transactions running on the principal accumulate in the "UNSENT LOG" (send queue).
> The unsent log grows larger the longer the mirror database is unavailable.
> Unrestored log: once the mirror server/database is available again, the data pending in the principal's unsent log is sent across and sits in the mirror's unrestored log (redo queue) until it is applied.
> Mirror commit overhead: how much time it takes to commit a transaction on the mirror server.
T-SQL Method:
Use msdb
2. To list all the endpoints
3. Manually fail over a database mirroring session at the mirror in High Availability Mode by
issuing the following command at the principal
4. Select * from sys.database_mirroring ---> provides information about principal and mirror
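The commands for these steps are not reproduced above; hedged sketches of what they typically look like (DBName is a placeholder):
-- 1. Monitoring through the msdb monitoring procedure
EXEC msdb.dbo.sp_dbmmonitorresults N'DBName', 0, 0;
-- 2. List all endpoints
SELECT name, type_desc, state_desc FROM sys.endpoints;
-- 3. Manual failover, issued at the principal (high-safety/synchronous mode)
ALTER DATABASE DBName SET PARTNER FAILOVER;
-- 4. Mirroring state of every mirrored database
SELECT DB_NAME(database_id) AS db, mirroring_role_desc, mirroring_state_desc, mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;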
Whenever the principal database/server goes down, the witness comes into the picture to perform
the failover to the mirror server and bring the mirror database online, i.e. accessible to end users/applications.
Whenever the principal goes down and there is no witness, the DBA has to perform a manual failover.
Note: This type of failover is supported in the HIGH AVAILABILITY WITH AUTOMATIC FAILOVER and HIGH
PROTECTION modes.
194
Post failover:
> After failover, applications/end users are automatically redirected to the mirror server; the
application team does not need to be informed specifically to repoint connections.
> Any login created on the principal instance must also be created on the mirror server.
3. SWITCHOVER PROCESS:
> In mirroring there is no separate switchover process, because when we perform a failover the
roles change automatically, so no separate switchover concept is required.
Note: In high-performance mode we cannot perform a manual failover. If a failover is needed,
first change the data transfer type from asynchronous to synchronous, i.e. change the mirroring
mode from high performance to high protection/high safety.
> High-performance mode only supports FORCED failover (force service, with possible data loss).
195
1. Break the mirroring between principle and mirror server.
3. Take a transaction log backup of the principal database and copy the log backup file to the
mirror server.
4. Restore that (tail-)log backup on the mirror database WITH NORECOVERY, using WITH MOVE where
the file paths differ.
5. Reconfigure mirroring for the database and check the status in the mirroring monitor.
Method 2: Instead of breaking the mirroring we can perform "PAUSE Mirroring” from database
properties.
After 4th step we should resume the mirroring and check whether mirroring is working or not.
http://www.mssqltips.com/sqlservertip/2834/how-to-add-a-database-file-to-a-mirrored-sql-
server-database/
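A hedged sketch of steps 3–4 above (file paths and the logical file name are placeholders):
-- On the principal: back up the log after the change
BACKUP LOG MirrorDB TO DISK = 'D:\Backup\MirrorDB_log.trn';
-- On the mirror: restore it WITH NORECOVERY, using MOVE because the file path differs on the mirror
RESTORE LOG MirrorDB FROM DISK = 'D:\Backup\MirrorDB_log.trn'
WITH NORECOVERY,
     MOVE 'MirrorDB_NewFile' TO 'E:\Data\MirrorDB_NewFile.ndf';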
196
FILE MOVEMENT IN MIRRORING:
Start activity @ Principal server:
1. Collect logical file names for your databases [User and system]
sp_helpfile
alter database dbname modify file (name='logical file name', filename='newpath') -----Data file
alter database dbname modify file (name='logical file name', filename='newpath') -----log file
Note: Verify whether logical name is updated with new path by using sp_helpfile again...
> If a witness is included, an automatic failover occurs and the mirror becomes the principal.
4. Move the physical files to the new path and bring the instance online.
Note: While moving the physical .mdf, .ldf and .ndf files, always use copy & paste, never cut
and paste.
Note: When you perform the same steps at the mirror server, the principal instance remains online
and keeps serving the applications.
Note: If there is no witness, the DBA needs to perform a manual failover before taking the
instance offline, on both the principal and mirror sides.
197
6. INSERT \UPDATE\DELETE \SHRINK\TRUNCATE IN PRINCIPLE WHAT HAPPEN TO MIRROR
DATABASE?
If we delete, insert, update, truncate or shrink records on the principal server, the same
delete, insert, update, truncate or shrink is automatically applied to the mirror database,
without affecting the mirroring configuration.
7. JOBS IN MIRRORING?
Only one job, the "Database Mirroring Monitor Job", gets created on both the principal and mirror
servers; the witness does not have this job.
• Mirror db offline
> In mirroring, database users inside the mirrored database are carried over automatically from
the principal to the mirror as part of synchronization, but instance-level logins are not.
> Whenever you create a login on the principal, create the same login on the mirror server as well.
198
> If you map a login on the principal to a mirror-configured database, the corresponding user is
created inside the database and is therefore replicated automatically from the principal database
to the mirror database. After a failover such users can end up orphaned (no matching login on the mirror instance).
How to find:
sp_change_users_login @action='report'
How to fix:
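The fix command itself is not listed here; common sketches (UserName is a placeholder) are:
-- Classic approach: map the orphaned user to a login with the same name
EXEC sp_change_users_login @Action = 'Auto_Fix', @UserNamePattern = 'UserName';
-- Newer approach: remap the user to the login explicitly
ALTER USER UserName WITH LOGIN = UserName;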
Once fixed, the mapping between the login and the user (along with the permissions) happens
automatically.
Note: After creating a login on the mirror server, the login may be in a DISABLED state; ensure
you enable the login.
Note: In real time, whenever you create a login on the principal, make sure you create the same
login on the mirror immediately.
> If we take a manual full or t-log backup of the principal database, there is no impact on the
mirror server database and mirroring still works.
199
11. PATCH MANAGEMENT PROCESS IN MIRRORING?
Process steps:
1. Apply the patch on the witness server instance first; there is no impact to mirroring (the only
impact is that failover becomes manual while the witness is down).
2. Once done, patch the mirror server instance, but only after the witness is back online.
3. Once the mirror is patched, perform a manual FAILOVER to the mirror instance before starting
the patch on the principal.
4. Apply the patch on the (old) principal instance and fail back if required after the service pack.
http://blog.sqlauthority.com/2010/04/05/sql-server-2008-introduction-to-snapshot-
database-restore-from-snapshot/
Main purpose:
1. We generate a snapshot on the mirror database (which is in restoring status) and can then read
data from the snapshot on the mirror server.
i.e. we cannot update/delete/insert into the snapshot database tables because it is read-only.
3. If we want to read recent/concurrent changes through a snapshot, a fresh snapshot has to be
generated each time to see the latest updates.
200
CREATE DATABASE dbname_snapshot ON (NAME = 'dbname', FILENAME = 'path.ss')
AS SNAPSHOT OF dbname
No. You cannot delete the database until you first delete the snapshot database.
Yes possible.
3. Network issue
201
12. Mirror instance down
Advantages:
Automatic failover
Auto client redirect
Maintenance activities can be performed with less downtime
Data safety is highest with the synchronous data transfer type (no data loss).
Disadvantages:
> There is no direct reporting support from the mirror server because it is in a restoring state,
unless you use a database SNAPSHOT.
> Multiple mirror servers are not supported; only one mirror server is possible.
> It does not support the bulk-logged or simple recovery models.
For a mirrored database you cannot change the recovery model from full to bulk-logged or simple.
Note: The reason is that mirroring supports only the full recovery model; if you change the
recovery model, mirroring will fail.
If you need to change it, break the mirroring first and then change the recovery model, but note
that mirroring cannot be configured again until the database is back in full recovery.
202
> In SQL Server 2005, if a page is corrupted in the mirror server database, mirroring goes out of
sync and the DBA has to fix the issue manually (for example by restoring the page).
> From SQL Server 2008 onwards, AUTOMATIC PAGE REPAIR is available: if a page is corrupted on the
mirror server (MS), a good copy of the page is fetched from the principal server (PS).
> Data is encrypted before being sent from the principal server to the mirror server.
> The log stream is compressed before being sent from the principal server to the mirror server.
> I/O errors on the principal server may be fixed during the mirroring session (via automatic page repair).
> I/O errors on the mirror server require the mirroring session to be suspended while the page is repaired.
QUORUM:
1. The quorum holds the information about which instance is currently acting as the principal
(online) and which instance is acting as the mirror server.
Note: When a witness is included in the mirroring configuration, before performing an automatic
failover the witness always reads the quorum to know which instance to fail over from.
DYNAMIC MANAGEMENT VIEWS: this concept was introduced in SQL Server 2005.
Their main purpose is to let us monitor SQL Server without consuming as many hardware resources
as tools such as DBCC commands.
SELECT name, type, type_desc FROM sys.system_objects WHERE name LIKE 'dm_%' ORDER BY
name
203
Output:
List of mirroring DMVs
1. sys.dm_db_mirroring_connections: returns a row for each connection established for database
mirroring. Its columns include:
connection_id
transport_stream_id
state
state_desc
connect_time
login_time
authentication_method
principal_name
remote_user_name
last_activity_time
is_accept
login_state
login_state_desc
peer_certificate_id
encryption_algorithm
204
encryption_algorithm_desc
receives_posted
is_receive_flow_controlled
sends_posted
is_send_flow_controlled
total_bytes_sent
total_bytes_received
total_fragments_sent
total_fragments_received
total_sends
total_receives
peer_arbitration_id
2. sys.dm_db_mirroring_auto_page_repair: returns a row for every automatic page-repair attempt on
any mirrored database on the server instance. This view contains rows for the latest automatic
page-repair attempts on a given mirrored database, with a maximum of 100 rows per database. Its columns include:
database_id
file_id
page_id
error_type
page_status
modification_time
205
DIFFERENCE BETWEEN LOGSHIPPING AND DB MIRRORING
206
-- (the endpoint name below is illustrative; the CREATE ENDPOINT head was missing from the notes)
CREATE ENDPOINT Mirroring_Endpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = PARTNER,
AUTHENTICATION = WINDOWS NEGOTIATE,
ENCRYPTION = REQUIRED ALGORITHM RC4)
207
REPLICATION
What is Replication?
> It is a set of technologies for copying and distributing data and database objects from one
database to another and then synchronizing between databases to maintain consistency. Using
replication, you can distribute data to different locations and to remote or mobile users over
local and wide area networks, dial-up connections, wireless connections, and the Internet.
> E.g. it can map data from a data-type in DB2 to an equivalent in Sybase.
> Merge data from several source databases into one destination database
Replication Terminologies:
1. ARTICLE: a table, schema, columns, rows, indexes, views, stored procedures, triggers, etc.
published for replication is called an "ARTICLE".
2. PUBLISHER: the server/database where the data comes in from the application and where the
published article is online.
208
5. LOCAL PUBLICATION: the set of articles published from the publisher server is called the
"local publication".
6. LOCAL SUBSCRIPTION: the set of articles received at the subscriber server is called the
"local subscription".
> Data transfer in replication: replication always uses "replication agents" to transfer data.
Replication pre-requisites:
> Verify that there are no differences in system collation settings between the servers.
> Verify that the local windows groups and SQL Server Login definitions are the same on both
servers.
> Verify that external software components are installed on both servers.
> Verify that CLR assemblies deployed on the publisher are also deployed on the subscriber.
> Verify that SQL agent jobs and alerts are present on the subscriber server, if these are
required.
> Verify that for the certificates and keys used to access external resources, authentication and
encryption match on the publisher and subscriber server.
TYPES OF REPLICATIONS:
3. TRANSACTIONAL WITH UPDATABLE SUBSCRIPTION [BI DIRECTIONAL] --- Removed from SQL
SERVER 2012
209
4. MERGE REPLICATION [BI DIRECTIONAL]
> Snapshot replication refers to a replication method between databases. During this process,
data is infrequently updated at specified times by copying data changes from the original
database (publisher) to a receiving database (subscriber).
MINIMUM REQUIREMENTS:
210
ARCHITECTURE:
> The SNAPSHOT AGENT takes a snapshot of the published article at the publisher and stores it in
the snapshot folder on the distributor server.
> The DISTRIBUTION AGENT collects the snapshot from the snapshot folder, distributes the data to
the subscriber and stores it in the subscriber server.
211
CONFIGURATION STEPS:
> Provide the password after adding publisher instance and specify the snapshot folder path to
store snapshots.
2. Configure publisher instance and select the article table to publish as a part of replication
3. Configure subscriber.
Key points:
SNAPSHOT background: when you generate a snapshot, the initial schema plus data (if any) is taken
from the publisher and stored in the snapshot folder.
> The same snapshot is then replicated to the subscriber and applied to the subscription article.
1. Snapshot agent: prepares the schema and initial data files of the published tables and other
objects, and stores the snapshot files in the snapshot folder.
2. Distribution agent: takes the snapshot files from the snapshot folder and applies them to the subscriber.
212
2. Depending on whether the subscription is push or pull, the distribution agent runs either at
the distributor (push) or at the subscriber (pull).
Agent executables: Snapshot agent – snapshot.exe; Distribution agent – distrib.exe
1. SCH [Schema Script]: This contains the script of the published tables (schema)
2. BCP [Bulk Copy Program]: Contains data which need to move to subscriber
4. IDX [Index File]: Contains index of publisher tables which need to apply in subscriber after
creating.
PULL SUBSCRIPTION: the distribution agent runs at the subscriber and pulls the data from the
distributor's snapshot/distribution database.
PUSH SUBSCRIPTION: the distribution agent runs at the distributor and pushes the data to the
subscriber article.
> Pull subscriptions generally give better performance than push, because the agent work is
offloaded from the distributor to the subscribers.
213
REPLICATION ADVANTAGES AND DISADVANTAGES:
Advantages:
> Publisher and subscriber database names can be different; replication is still supported.
Disadvantages:
> Concurrent changes at the publisher are not replicated to the subscriber immediately, only
after a new snapshot is generated and applied.
Note: When you configure snapshot replication with multiple subscribers, multiple distribution
agents are created.
214
2. TRANSACTION REPLICATION [ONE DIRECTION]
Transactional replication typically starts with a snapshot of the publication database objects
and data. As soon as the initial snapshot is taken, subsequent data changes and schema
modifications made at the Publisher are usually delivered to the Subscriber as they occur (in
near real time)
ARCHITECUTRE:
> The initial schema and data are sent from the publisher to the subscriber using the snapshot agent method.
> Subsequent (concurrent) changes are replicated from the publisher to the subscriber by an
additional agent created for transactional replication, the log reader agent [logread.exe].
1. Snapshot agent: takes the initial schema and data at the publisher and stores them in the snapshot folder.
2. Distribution agent: delivers the snapshot files and the concurrent data changes to the subscriber.
3. Log reader agent: sends the concurrent data changes made at the publisher to the distributor, and from there to the subscriber.
CONFIGURATION STEPS:
1. Configure distributor
2. Configure publisher
3. Configure subscriber
215
DATA FLOW POINTS:
2. From there the distribution agent takes the same snapshot and applies it to the subscriber.
3. Any concurrent changes (DML operations, etc.) on the published table at the publisher are then
replicated to the subscriber almost immediately by the LOG READER AGENT via the distribution agent.
4. The log reader agent collects data from the transaction log file at the publisher and stores it
in the distribution database on the distributor; from there the distribution agent picks it up and sends it to the subscriber.
5. Transactional replication is always suitable for critical workloads where transactions happen
every second.
216
> But if you insert data at the subscriber, the data will not flow back to the publisher.
> Generally, in the simple recovery model the transactions in the VLFs are truncated at checkpoint.
> In replication, however, every transaction is first written to the transaction log (as per WAL)
and then marked as a replicated transaction; it will not be removed from the publisher's log file
until it has been delivered to the distributor, even if a truncate/checkpoint occurs.
Replication Monitor:
Transactional Replication:
Latency: how long it takes to send a transaction from the publisher to the subscriber.
TRACER TOKEN: measures the latency from publisher to distributor, from distributor to
subscriber, and the overall total latency.
> Check column "PERFORMANCE" to know whether replication working fine or not.
217
REPLICATION JOBS:
1. Agent history clean up: distribution: Removes replication agent history from distribution
database
3. Expired subscription clean up: detects and removes expired subscriptions from the publication database.
4. Reinitialize subscriptions having data validation failures: detects data validation errors and,
if any are found, reinitializes the subscriber with a fresh snapshot from the publisher.
Schedule: No schedule
Note: This job is not recommended to run during business hours because it can cause a performance impact.
5. Replication agents checkup: Checks whether replication agents are running or not
218
3. TRANSACTIONAL WITH UPDATABLE SUBSCRIPTION [BI DIRECTIONAL]
> Updatable subscriptions for transactional replication allow Subscribers to replicate changes to
the Publisher. Triggers are added to the published tables in the subscription database, and
when a change is made at the Subscriber, the trigger fires:
1. When a DML operation is performed at the publisher, the same data is sent to the subscriber
using the SNAPSHOT and Log Reader agents.
2. If you perform any DML operations at the subscriber, the data is sent to the publisher using 2
possible methods.
• For queued updating subscriptions, the change is first propagated to a queue and then
applied to the Publisher by the Queue Reader Agent.
1. SNAPSHOT AGENT
2. LOG READER AGENT
3. QUEUE READER AGENT
4. DISTRIBUTOR AGENT
CONFIGURATION STEPS
1. Configure distribution
2. Configure publisher.
Changes: - When you configure, the article gets added with new extra column
"MSREPL_TRAN_VERSION"
Note: - "MSrepl_tran_version" >this column is used for change tracking and conflict detection.
219
> Concurrent changes from pub to sub by using log reader agent
3. Configure Subscriber:
Changes:-
1. At configuration time a "LINKED SERVER" is required for connectivity from the subscription back
to the publisher articles.
2. You have the option to select how data is sent from the subscriber to the publisher: either
"MSDTC [IMMEDIATE UPDATING]" or "QUEUED updating (queue reader) changes".
Changes @subscriber:
Trigger @ subscriber:-
1. trg_MSsync_del_tab
2. trg_MSsync_ins_tab
3. trg_MSsync_upd_tab
1. Dbo.sp_MSsync_del_tab_1
2. Dbo.sp_MSsync_ins_tab_1
3. Dbo.sp_MSsync_upd_tab_1
220
Case 1: IF MSDTC
2. The trigger calls through MSDTC to the appropriate stored procedure at the Publisher.
3. The stored procedure performs the insert, update, or delete unless there is a conflict. If there
is a conflict, the change is rolled back at the Publisher and the Subscriber.
221
Case: 2 IF QUEUE READER AGENT
1. Updates made at the Subscriber are captured by triggers on the subscribing tables. The
triggers store these updates in MSreplication_queue.
2. The Queue Reader Agent reads from MSreplication_queue, and then applies queued
transactions to the appropriate publication using replication stored procedures.
3. While applying the queued transactions, conflicts (if any) are detected and resolved
according to a conflict resolution policy that is set when the publication is created.
4. Changes made at the Publisher as a result of changes replicated from a Subscriber are
propagated to all other Subscribers according to the Distribution Agent schedule.
Note: With the MSDTC (immediate updating) type of data transfer, the change is applied and
committed at the publisher immediately, so it is always faster.
With queued updating, the data is not sent to the publisher straight away: the transaction is
first placed in the "MSreplication_queue" table and stored there, and from there it is delivered
to the publisher by the "QUEUE reader agent".
222
4. MERGE REPLICATION [BI DIRECTIONAL]
1. In merge replication, data can be read or written at any site, i.e. at the publisher or at any
subscriber.
2. In merge replication, data transfer between the multiple servers or sites is done by an agent
called the "MERGE AGENT", which plays the major role.
Architecture Points:
2. The rowguid column (a uniqueidentifier column) uniquely identifies every row and is used for
change tracking and conflict detection across the publication and subscriptions.
223
4. A primary key is not required. The merge agent collects the changes made at the publisher and
at the subscribers, processes (merges) all changes from all locations, and then distributes the
result so that the same data is maintained at every location.
Merge replication, like transactional replication, typically starts with a snapshot of the
publication database objects and data. Subsequent data changes and schema modifications made at
the Publisher and Subscribers are tracked with triggers.
The Subscriber synchronizes with the Publisher when connected to the network and exchanges
all rows that have changed at the Publisher and Subscriber since the last time synchronization occurred.
1. Multiple Subscribers might update the same data at various times and propagate those
changes to the Publisher and to other Subscribers.
2. Subscribers need to receive data, make changes offline, and later synchronize changes
with the Publisher and other Subscribers.
3. Each Subscriber requires a different partition of data.
4. Conflicts might occur and, when they do, you need the ability to detect and resolve
them.
5. The application requires net data change rather than access to intermediate data states.
For example, if a row changes five times at a Subscriber before it synchronizes, the row will
change only once at the Publisher to reflect the net data change (that is, the fifth value).
224
CONFIGURATION STEPS:
Note:
1. In merge replication, the distribution agent/database plays almost no role except storing
snapshot history in the distribution database.
2. The merge agent collects data from the different sites, stores it in the distribution database for
processing and, once done, removes it from the distribution database and resends the merged result to the other servers.
2. Configure publisher:
Note: - ROW GUID COLUMN gets added to publisher tables and same replicate to multiple subscribers.
Changes:-
Note: - Below triggers and Store procedures gets created in all servers included as a part of merge
replication.
Store Procedures:
dbo.MSmerge_del_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_ins_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_upd_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_sel_sp_80FDDC450D814A6F4A371E97BB9A4ECC
Triggers:
MSmerge_del_EDE22551187F43A2A520C328D819CF51
MSmerge_ins_EDE22551187F43A2A520C328D819CF51
MSmerge_upd_EDE22551187F43A2A520C328D819CF51
Tables:-From these tables merge agent read and gets\collect the data changes.
225
Merge Tables at both publisher and subscriber:
The Merge Agent detects conflicts by using the lineage column of the MSmerge_contents
system table.
> If column-level tracking is enabled for an article, the COLV1 column is also used. These
columns contain metadata about when a row or column is inserted or updated, and about
which nodes in a merge replication topology made changes to the row or column.
sp_showrowreplicainfo (Transact-SQL):
Displays information about a row in a table that is being used as an article in merge replication
> As the Merge Agent enumerates changes to be applied during synchronization, it compares
the metadata for each row at the Publisher and Subscriber.
226
5. PEER-TO-PEER REPLICATION [BI DIRECTIONAL]
> This enables applications that require scale-out of read operations to distribute the reads
from clients across multiple nodes. Because data is maintained across the nodes in near real-
time, peer-to-peer replication provides data redundancy, which increases the availability of
data.
MINIMUM REQUIREMENTS:
> The Enterprise edition of SQL Server is required, minimum version 2005.
> Table schema and table name should be identical between all the nodes.
227
> Every node should perform 3 roles
228
4 pub+4 dist+ 12 subscribers
CONFIGURATION STEPS:
2. Take database full backup and restore into all other nodes with recovery
For 2005\8\8R2:
When configuring the local publication, select "TRANSACTIONAL PUBLICATION", because a dedicated
peer-to-peer option was only added to the GUI in SQL Server 2012.
4. Right click > properties on local publication> enable "PEER TO PEER REPLICATION TO TRUE"
6. Once done, automatically publication and subscriptions gets created in all the nodes.
Feature Restrictions
229
Peer-to-Peer Replication Advantages & Disadvantages:
Advantages:
Disadvantages:
1. Troubleshooting is difficult
5. End users also should point their connection to sub instance manually...i.e. no auto client
redirect method support like mirroring
230
REPLICATION SCENARIOS:
Transaction with updatable subscription: Snapshot agent, distributor agent, log reader, queue reader
agent.
Peer to Peer Replication: Snapshot agent, distributor agent, and log reader agent [Per each peer or
node]
2. HOW TO ADD AND DELETE ARTICLE IN EXISTING REPLICATION? WHAT IS DIFF BETWEEN
2000, 2005 AND 2008 ADDING ARTICLE PROCESS?
Adding article
2. Take the scripts in publisher and subscriber for create and drop scripts.[Purpose: like a backup of
existing replication configuration]
4. Reinitialize the snapshot again from the replication monitor, so the newly added article is
replicated from the publisher to the subscriber.
Through query:
231
Example: EXEC sp_addarticle
@publication = N'Pub_dbAmericasCitrixFarm',
@article = N'Table_2',
@source_object = N'Table_2'
Removal of article:
2. Take the scripts in publisher and subscriber for create and drop scripts.
Example:
EXEC sp_dropsubscription
@publication = Pub_dbAmericasCitrixFarm,
@article = N'Table_2',
@subscriber = 'FTDCCWPCTRXSQL';
If we perform any schema changes to an existing replicated article, then we need to reinitialize
the snapshot from the publisher one more time.
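As a sketch (the publication, article and subscriber names reuse the earlier example and are placeholders), reinitialization can also be done with T-SQL at the publisher:
EXEC sp_reinitsubscription
    @publication = N'Pub_dbAmericasCitrixFarm',
    @article = N'all',
    @subscriber = N'FTDCCWPCTRXSQL';
-- Then run the Snapshot Agent so a fresh snapshot is generated and applied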
232
4. REMOVAL OF REPLICATION?
Method 1:
Go to Replication > right-click > "Disable Publishing and Distribution" > click Next and Finish.
This automatically deletes the distribution database. Afterwards, go to Local Publications and
delete the publication, and delete the subscription from the local subscriber.
Method 2:
233
7. HOW TO PERFORM MONITORING IN REPLICATION?
Undistributed commands: displays how many commands (queries) are waiting to move from the
distribution database to the subscriber. It also displays how long it will take to replicate
those commands from the distributor to the subscriber.
Note: Verify/check the agent history to see whether data is moving from "Publisher to distributor"
and "Distributor to subscriber", and check the "undistributed commands" count.
Note: Perform regular index maintenance (update stats, reindex) on the replication system tables
just like you do for user tables.
MSmerge_contents
MSmerge_genhistory
MSmerge_tombstone
MSmerge_current_partition_mappings
MSmerge_past_partition_mappings
Merge Replication:
Store Procedures:
dbo.MSmerge_del_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_ins_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_upd_sp_80FDDC450D814A6F4A371E97BB9A4ECC
dbo.MSmerge_sel_sp_80FDDC450D814A6F4A371E97BB9A4ECC
234
Triggers:
MSmerge_del_EDE22551187F43A2A520C328D819CF51
MSmerge_ins_EDE22551187F43A2A520C328D819CF51
MSmerge_upd_EDE22551187F43A2A520C328D819CF51
Trigger @ subscriber:-
1. Trg_MSsync_del_tab
2. Trg_MSsync_ins_tab
3. Trg_MSsync_upd_tab
1. Dbo.sp_MSsync_del_tab_1
2. Dbo.sp_MSsync_ins_tab_1
3. Dbo.sp_MSsync_upd_tab_1
> There will be no impact if we change recovery models due to replication support all type of
recovery models.
235
11. CAN WE MODIFY DISTRIBUTION DATABASE DATA?
Yes, we can modify the distribution database, but it affects replication and is not recommended.
> TRUNCATE is a minimally logged operation and it will not be replicated automatically from the
publisher to the subscriber.
> At the subscriber, the truncate operation has to be performed again to keep publisher and subscriber in sync.
Error messages:
Violation of PRIMARY KEY constraint 'PK_tablename'. Cannot insert duplicate key in object
'dbo.tablename'. (Source: MSSQLServer, Error number: 2627)
Replication synchronization fails when this error is reported. The data/transaction is sent from
the publisher to the distributor, but delivery from the distributor to the subscriber fails, and
the error is reported in the replication monitor for the transactional publication.
Where to find:
> Go to replication monitor> All subscriptions column > Status column> double click> check the
information from publisher to distributor and dist> sub
> You can see error of primary key violation in dist to sub history.
236
From the error:
Transaction sequence no
Command ID:
Solution:
1. First find which transaction being applied at the subscriber is causing this error, by running
the following against the distribution database only (the procedure is typically sp_browsereplcmds,
using the sequence number and command id from the error):
EXEC sp_browsereplcmds
@xact_seqno_start = '0x00000018000000A1000300000000',
@xact_seqno_end = '0x00000018000000A1000300000000'
2. COMMAND with transaction: {CALL [sp_MSins_dboetab] (N'3 ', N’c ', N’ap ')}
Yes. Scripting the replication configuration is a key part of any disaster recovery plan for a
replication topology
16. WHY DOES REPLICATION ADD A COLUMN TO REPLICATED TABLES; WILL IT BE REMOVED IF
THE TABLE ISN'T PUBLISHED?
> Merge replication adds the column rowguid to every table, unless the table already has a
column of data type unique identifier with the ROWGUIDCOL property set (in which case this
column is used). If the table is dropped from the publication, the rowguid column is removed; if
an existing column was used for tracking, the column is not removed.
237
Rowguid column: added in merge
MSrepl_tran_version: transactional updatable subscription
> Yes. There are no restrictions on the number or types of publications that can use the same
distribution database. All publications from a given Publisher must use the same Distributor and
distribution database.
> No. Replication does not encrypt data that is stored in the database or transferred over the
network
> TRUNCATE TABLE is a non-logged operation that does not fire trigger
No
> In SQL Server replication, data transfer happens using "BCP [BULK COPY PROGRAM]" files for the
data, .sch files for the schema and .idx files for the indexes.
> Yes, we can modify the distribution database tables, but it is not recommended because it will
create an impact on replication.
Steps:
> After schema changes to a published article, we need to rerun (reinitialize) the snapshot agent
to generate a new snapshot.
238
> Go to the subscriber and reinitialize the subscription so that the new snapshot is applied to the subscriber article.
Note 2: When the snapshot is rerun, only the modified data changes are replicated to the subscriber.
Comparison of replication types:
Snapshot replication: one-way; only one publisher allowed; conflict detection not required.
Transactional replication: one-way; only one publisher allowed; conflict detection not required.
Transactional with updatable subscription: bi-directional (built on transactional replication);
only one publisher allowed; has rich conflict detection and handling capabilities.
Merge replication: bi-directional; only one publisher allowed; has rich conflict detection and
handling capabilities.
Peer-to-peer replication: bi-directional; multiple publishers are allowed; has conflict detection
and handling capabilities from SQL Server 2008 onwards.
239
Note: From SQL Server 2005 onwards, all schema changes replicate automatically without
reinitializing the snapshot agent.
Basically, with merge replication, when a synchronization occurs the final state of the rows is
what is merged with the other side. So if I have a stock tracking table in which each stock is
updated thousands of times between synchronizations, only the last value of the stock will be replicated.
240
Peer-to-Peer was simply an extension of bi-directional transactional replication designed with
low latency in mind. It is useful in redundant server-to-server data propagation.
Merge was designed for loosely connected systems that occasionally synchronize their data.
Transactional with updatable subscriptions: if we update a row 1000 times, all 1000 modifications
are replicated to the subscriber. In merge replication, if we update a row 1000 times, only the
last modification is replicated to the subscriber.
Advantages of Replication
241
CLUSTERING
Clustering:
> Windows clustering: 2 or more independent systems (referred as nodes) working for same
purpose which is to provide continuous high availability to the instance
Note:
2. Cluster feature is enabled for most of the O\S like win servers 2000, 2003, 2008, 2008 R2,
2012, 2014
4. GEO CLUSTER
5. VERITAS CLUSTER
1. Network Load Balancing acts as a front-end cluster, distributing incoming IP traffic across a
cluster of servers.
2. Up to 32 computers running a member of the Windows Server 2003 family can be connected
to share a single virtual IP address.
3. NLB enhances scalability by distributing its client requests across multiple servers within the
cluster.
4. As traffic increases, additional servers can be added to the cluster; up to 32 servers are
possible in any one cluster.
242
5. NLB also provides high availability by automatically detecting the failure of a server and
repartitioning client traffic among the remaining servers within 10 seconds, while it provides
users with continuous service.
1. Component Load Balancing distributes workload across multiple servers running a site's
business logic.
Failover Clustering:
Cluster Service acts as a back-end cluster; it provides high availability for applications such as
databases, messaging and file and print services.
MSCS attempts to minimize the effect of failure on the system as any node (a server in the
cluster) fails or is taken offline.
MSCS failover capability is achieved through redundancy across the multiple connected
machines in the cluster, each with independent failure states.
Hardware Information:
243
Operating System: Windows 7 Home Premium
Required Software:
2) Windows Server 2012 R2 (with trial license free for 180 days)
4) Three virtual machines: one setup as Domain Controller and DNS Server (DC), the other two
as clustered nodes (WIN1 and WIN2).
244
Cluster Configuration:
DC
Win1
Win2
DC Server configuration:
Dism /online /enable-feature /featurename:NetFX3 /all /Source:d:\sources\sxs /LimitAccess
testcluster.com
245
DNS IP: 192.168.102.128
5. Add Extra nic card for Internal node1 and node2 communication.
1. Cluster configuration
2. Inventory
3. Storage
4. Network
246
5. System configuration
1. Check all ip's are working and pinging between all nodes.
OVERALL WINDOWS CLUSTER IPADDRESS FOR 2 NODE CLUSTER: Total 7 for only windows
Windows team handover to DBA team after windows cluster installation for Sql server
installation.
247
SCSI (Small Computer System Interface):
• Aside from speed, another great advantage is that a SCSI card can connect 15 or more
devices in a daisy chain.
QUORUM Overview:
The database resides in a file named \MSCS\quolog.log. The quorum is sometimes also referred
to as the quorum log.
Types of QUORUM:
1. Standard Quorum
1. Standard Quorum:
> This type is generally used on the WINDOWS SERVER 2003 OS to configure a windows cluster.
> If the standard quorum is down, the entire windows cluster stops working until you reconfigure
the quorum, because the cluster service always reads node information from the quorum file
\MSCS\quolog.log only.
> If the quorum is not available, failover cannot happen.
248
2. MNS [Majority node set] quorum:
Note: If the quorum disk goes down in Windows 2008\R2\2012, the cluster keeps working because each
node maintains a local copy of "quolog.log"; the cluster service can read from the local copy on
each MNS quorum node. This keeps the cluster running even when the quorum is down, from Windows
2008 onwards.
Heartbeat: a very important component of the cluster; it sends UDP packets between the nodes
every 1.2 seconds.
If the packets fail 5 consecutive times, the cluster service initiates a failover of your applications.
Cluster groups:
ACTIVE-PASSIVE
ACTIVE-ACTIVE
SINGLE NODE CLUSTER [Feature may add another node in same cluster]
249
Windows 2008 Os onwards:
> Single instance is a new terminology that started from SQL 2008 onwards. A single instance
cluster has only one instance of SQL Server, which is owned by one node and all other nodes
are in wait state till the failover occurs.
> This is equivalent to a multiple instance cluster where, for example, if we have 4 nodes, each
node hosts one instance (i.e. 3 instances hosted on three nodes) and the 4th node is
maintained as a standby waiting for failover. N+1 means N nodes are hosted with SQL Server
instances and 1 node is in a waiting state.
> N+M is also a multiple instance cluster, where N nodes are hosted with SQL Server instances
and M nodes are in a waiting state.
Cluster Service:
• The cluster service manages all the activity that is specific to the cluster. One instance of
the cluster service runs on each node in the cluster. The cluster service does the
following
250
• Handles event notification
Resource:
RESOURCES IN CLUSTER:
Disks
QUORUM
MSDTC
PUBLIC IP
PRIVATE IP
MSDTC: is used by SQL Server and other applications when they want to perform a distributed
transaction across more than one machine. A distributed transaction is simply a transaction which
spans two or more machines. The basic concept is that machine 1 starts a transaction and does
some work; it then connects to machine 2 and does some work there. If the work on machine 2 fails
and is cancelled, the work on machine 1 must then be rolled back.
251
RESOURCE STATE:
Offline
Offline_Pending
Online
Online_Pending
Failed
CLUSTERING TERMS:
Cluster Nodes:
• A cluster node is a server within a cluster group. A cluster node can be Active or it can
be Passive as per SQL Server Instance installation.
Heartbeat:
• Heartbeats are single User Datagram Protocol (UDP) packets exchanged between nodes
once every 1.2 seconds to confirm that each node is still available. If a node is absent for
five consecutive heartbeats, the node that detected the absence initiates a regroup
event to make sure that all nodes reach agreement on the list of nodes that remain
available.
Private Network:
• The Private Network is available among cluster nodes only. Every node will have a
Private Network IP address, which can be ping from one node to another. This is to
check the heartbeat between two nodes.
Public Network:
• The Public Network is available for external connections. Every node will have a Public
Network IP address, which can be connected from any client within the network.
252
Shared Cluster Disk Array:
• A shared disk array is a collection of storage disks that is being accessed by the cluster.
This could be SAN or SCSI RAIDs.
• Windows Clustering supports shared nothing disk arrays. Any one node can own a disk
resource at any given time. All other nodes will not be allowed to access it until they
own the resource (Ownership change occurs during failover).
Quorum Drive
• This is a physical drive assigned on the shared disk array specifically for Windows
Clustering. Clustering services write constantly on this drive about the state of the
cluster. Corruption or failure of this drive can fail the entire cluster setup.
Cluster Name
• This name refers to Virtual Cluster Name, not the physical node names or the Virtual
SQL Server names. It is assigned to the cluster as a whole.
Cluster IP Address
• This IP address refers to the address which all external connections use to reach to the
active cluster node.
• This account must be configured at the domain level, with administrator privileges on all
nodes within the cluster group. This account is used to administer the failover cluster.
• This includes any services, software, or hardware that can be configured within a
cluster. Ex: Generic Application, Service, Internet Protocol, Network Name, Physical
Disk.
Cluster Group
• This is the SQL Server Instance name that all client applications will use to connect to
the SQL Server.
253
SQL Server IP Address (Virtual IP Address)
• This refers to the TCP/IP address that all client applications will use to connect to SQL
Server; the Virtual Server IP address.
• Certain SQL Server Components require MS DTC to be up and running. MS DTC is shared
for all named / default instances in cluster group.
• This is the SQL Server service account, and it must follow all the rules that apply to SQL
Service user accounts in a non-clustered environment.
4. Component selection
5. VIRTUAL SERVER NAME [extra screen in cluster setup] – used by applications and end users to
connect to SQL Server
6. VIRTUAL SERVER IP address [extra screen in cluster setup] – used by applications and end users
to connect to SQL Server
9. Domain user and group account should require for SQL Server services.
254
SQL SERVER CLUSTER INSTALLATION [2008\R2\2012]
2. Virtual name
3. Virtual ip
8. 29 configuration checks
> Network ip
255
DIFFERENCES BETWEEN SQL 2005 CLUSTER AND SQL 2008\8 R2\12\14 CLUSTER:
SQL 2005:
2. Start the servers+tools installation on the active node; only the SERVER components replicate
automatically to the passive node [Database Engine service + Analysis Services + SQL Agent + Full-Text]
– the cluster-aware services.
3. The SQL installation of the cluster-aware services/components completes first on the passive
node and then on the active node.
Note: The installation replicates automatically by using the domain-level admin account and password.
Extra screens:
Virtual name
Virtual ip
2. Start the server and tools installation on the active node; no changes are replicated to the passive node.
3. Once the active node installation is completed, manually go to the passive node, re-run the
setup, choose the "ADD NODE" option and install servers+tools again.
256
Again one more set of binary files gets create under node 2 as well.
Extra screens:
Network name
Network ip
Note:
Tools:
Reporting services
Integration services
Notification services.
Note: Only we can see cluster aware services from cluadmin but not unaware services...
257
CLUSTER SCENARIOS
1. FAILOVER AND FAILBACK IN CLUSTER:
Moving SQL Server resource from active node to passive node called "FAILOVER".
2 Ways we have
Method 1:
Go to cluadmin.msc> go to services and applications>go to Sql server > right click> move
application or services from NODE AAA TO NODE BBB... or best possible node or select node.
> Then automatically all the SQL Server resources will failover to another node.
The cluster then brings the resources online on the other node, following the process below.
2. Automatic failover: if a node goes down, the SQL and disk resources automatically move to
another node.
Note: Any kind of failover needs instance-level downtime, i.e. the SQL Server services are taken
offline on the current node and then brought online on the other node.
Active node:
258
Passive node:
3. CLUSTER MONITORING?
Ex: Checking Sql server SERVICES, FAILOVER, FAILBACK, OFFLINE, ONLINE OF SQL SERVER
RESOURCE... etc.
Cluster Monitoring:
LooksAlive check (basic resource health check): runs frequently (every 5 seconds by default) and
verifies that the SQL Server service is running on the current node.
IsAlive check (thorough resource health check): runs every 60 seconds and verifies that the
instance is up and responding by running SELECT @@SERVERNAME through the resource DLL.
259
Resources: Shared disks,
Quorum
MSDTC
IsAlive\LooksAlive checks run under the cluster administrator account, write their results into the
quorum, the cluster service reads from the quorum and, if required, triggers an automatic failover of the applications.
3. Once the passive node is up, go to the active node and start applying the service pack for
server + tools; the same service pack automatically replicates to node 2 (the passive node).
Note: Until the service pack completes, the SQL Server databases are not accessible to the
application, so the downtime is longer.
> If the service pack fails on the active node, it automatically fails on the passive node as well.
> The SP always applies to the node 1 binary files and the node 2 binary files automatically.
260
SQL SERVER 2008 \8 R2\12\14:
1. Always apply the service pack on the passive node first, for the server + tools binary files.
Meanwhile the application can still access the databases on the active node and run the business.
3. Once that node is up, go to the active node and perform a manual failover of the SQL resource
to the patched (passive) node.
4. Then apply the service pack on the previous active node for the servers + tools binary files.
Note:
> Very little downtime is required for maintenance activities.
> The risk is much lower: if the SP fails on the passive node there is no impact to the other
node, and the application can still run the business.
Note: From the active node, perform the failover of the SQL Server resources to the passive node.
Note: From SQL Server 2008 onwards the patch can be applied with minimal downtime (only for the restart).
ACTIVE-ACTIVE:
> If it is an active-active cluster, perform a manual failover so that one node becomes passive,
apply the patch on that passive node, and then fail back to make it active again.
1. Fail over both instances to node 1, so node 2 becomes the passive node.
2. Apply the patch on the passive node, for both instances' binary files.
261
Note: Restart node by node when it is required.
Go to cluadmin> Services and Applications> Right click > check nodes> apply
> Preferred nodes are an attribute of a resource group which is outside of SQL Server and is a
Windows Resource.
> You can always find out which node owns the resource by using
SERVERPROPERTY ('ComputerNamePhysicalNetBIOS')
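For example, a small sketch run on the clustered instance:
SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentOwnerNode,
       SERVERPROPERTY('MachineName') AS NetworkName;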
Example:
Then my node2 need to bare the load of both NODE 1 SBI+NODE2 ICICI instances.
End users may face the performance impact>How to avoid this case?
Your application gives good performance even 2 instances are running in single node.
Go to the registry > HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > Session Manager >
"PendingFileRenameOperations" > open the value and clear the data.
262
Note: Registry can be backup and restore by using IMPORT \EXPORT methods at windows level
C:\windows\cluster\clusterlog.txt
> After adding windows team or san team disk into disk management. DBA team check whether
disk is showing in my computer or not.
> Now DBA team need to make the added disk as a dependency to SQL Server service.
> Add storage to the SQL Server group and then perform below step.
Note: In SQL Server 2005 clusters, adding a dependency disk is an offline operation, i.e. the SQL
Server services must be taken offline, so downtime is required.
Note: From SQL Server 2008 onwards, adding a dependency disk is an online operation, i.e. there is
no need to take the SQL Server service offline and no downtime is required.
263
10. DIFFERENCE BETWEEN WINDOWS SERVER 2003\2005, WINDOWS SERVER 2008\R2\SQL
2008\R2, WINDOWS SERVER 2012\SQL SERVER 2012
264
Validation report: Windows 2003 / SQL 2005 – manual checks need to be performed before SQL Server
installation on the cluster; Windows 2008 and Windows 2012 – run the validation report to validate
the windows cluster setup.
Number of nodes: 8 (Windows 2003) vs 16 (Windows 2008) vs 16 (Windows 2012).
Dependency disk: Windows 2003 / SQL 2005 – downtime required, need to restart the SQL Server
services; Windows 2008 and 2012 – online operation, no downtime required.
Policies: In a SQL Server 2005 cluster, if any resource went offline we had to bring it online
manually. From SQL Server 2008 onwards, if any resource goes offline the cluster tries to bring
it back online within the default 15-minute interval.
265
PERFORMANCE TUNING
266
LOCKS
Locks:
> To hold a specific object (tables, database, pages, rows, instance, extent, key……etc) in SQL
Server by using this locking concept.
> To provide consistence data or right data or correct data to the end user
Note: Locks are managed internally by the lock manager, which decides what lock to apply
depending on the transaction.
LOCK RESOURCES:
ROW LEVEL: a row identifier (RID) is used to lock a single row within a table.
KEY LEVEL: a row lock within an index, used to protect key ranges in serializable transactions.
SP_LOCK
OR
267
Output:
Request type
Request Session id
SP_WHO2
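sp_lock and sp_who2 can also be replaced by the locking DMV; a hedged sketch:
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_type, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;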
TYPES OF LOCKS:
1. Shared lock[S]: Multiple users can able to read the data on specific resource. No transaction
or query need to wait.
> When transaction starts internally lock manager applies shared lock and once reading
completed lock revoked automatically.
2. Exclusive Lock[X]: When we perform any insert, update and delete operations then exclusive
lock (X) placed on resource.
> Always lock manager gives the priority to DML operations compare to any select queries.
3. Update Lock [U]: an update lock is placed while SQL Server locates the rows to be modified
during an update operation.
> The update lock is converted to an exclusive lock (X) by the lock manager when the actual modification happens.
268
4. Schema Lock (Sch): taken on an object's schema; a schema modification lock (Sch-M) is taken
during DDL operations, and a schema stability lock (Sch-S) while queries that use the object are compiled.
5. Bulk Update [BU]: Bulk update lock generally placed by lock manager when there are any
bulk transactions.
2 Types:
> Instead of many row-level locks, a single table-level lock is taken; this reduces the number of
locks held and improves performance by escalating the lock.
> Likewise, instead of many page-level locks, a single table-level lock is taken.
> SQL Server supports escalating locks to the table level only: locks can be escalated from rows
to the table or from pages to the table.
Note: Lock escalation is managed internally by the lock manager; users and DBAs do not directly
control the locking system.
269
BLOCKINGS
Blockings:
> Blocking occurs when one SPID holds a lock on a specific resource and a second SPID attempts
to acquire a conflicting lock type on the same resource.
Method 1: SP_WHO2
Method 4: Go to database > right click> reports> standard reports> all blocking sessions
Solution:
1. Check which SPID is causing the blocking in the "BlkBy" column and find the query it is
running, for example with DBCC INPUTBUFFER(spid).
2. Share these spid and query information to apps guys and ask confirmation from their side to
kill one of the SPID to resolve the blockings.
3. Once apps team confirms then we will proceed for killing the spid. Ensure apps team send
email but not verbally.
KILL SPID
270
5. Ask apps team to check the query status and keep monitoring from DBA end.
- The application runs slowly and queries from the application side take longer.
- If blocking runs for a long time it can bring SQL Server down; after monitoring, inform the apps
team ASAP and kill the blocking session only after their confirmation.
Long-running queries.
Note: BLOCKING information never stores in error log or event viewer. We have to capture
when blockings are running.
Blocking:
Create table A
Open--54:
Open--56:
Open --58:
271
Select * from sys.sysprocesses where spid>50 and blocked<>0
Dbcc inputbuffer(54)
sp_lock
Inform the app team which query is more important and, based on their confirmation, kill the other session.
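A hedged sketch that combines the checks above into one query listing blocked sessions, their blockers and the blocked statement:
SELECT r.session_id          AS blocked_spid,
       r.blocking_session_id AS blocking_spid,
       r.wait_type,
       r.wait_time,
       t.text                AS blocked_sql
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;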
272
DEAD LOCKS
Deadlock: A deadlock occurs when two or more tasks permanently block each other by each
task having a lock on a resource which the other tasks are trying to lock.
Error message: Transaction (Process ID 55) was deadlocked on lock resources with another
process and has been chosen as the deadlock victim. Rerun the transaction.
How to find?
Note: By default deadlock information never capture in SQL Server error log or event viewer or
you cannot find by using any query or DMV's.
We have to enable trace or run the profiler to capture the victim transactions for deadlocks
Method: 1
Enable DBCC TRACEON (1222, 1204, -1); after that SQL Server automatically captures deadlock
information in the SQL Server error logs.
-1: enables the trace flags globally, for all databases.
DBCC TRACESTATUS
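Put together, the trace-flag method described above looks roughly like this:
-- Write deadlock details to the SQL Server error log (-1 = enable globally)
DBCC TRACEON (1222, 1204, -1);
-- Confirm which trace flags are active
DBCC TRACESTATUS (-1);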
Method 2:
273
Sysadmin
Once the trace is running, deadlock events are automatically captured by the Profiler tool as a
deadlock graph.
The deadlock monitor then chooses a victim transaction and kills it to resolve the deadlock.
3. Avoid cursors.
274
INDEXES
Index
> An index is a collection of pages associated with a table (or view) used to speed retrieval of
rows from the table or enforce uniqueness
Types of indexes:
1. HEAP TABLE
> An index can be an unnecessary storage and maintenance overhead. (Eg: Audit Log)
> Using a table scan to find data can be quicker than maintaining and using an index. (Eg: less
rows)
> Frequently rows are added, deleted, and updated. The overhead of index maintenance can be
more costly than the benefits.
> Contain predominantly duplicate data rows. A table scan can be quicker than an index lookup.
275
2. CLUSTER INDEX:
> A poorly-chosen clustered index leads to high execution times and wasted storage space.
> Only one clustered index per table, because there is only one physical way to store the data.
Note:
> In a clustered index, the data rows themselves are stored at the leaf level.
> Index pages are stored at the root and intermediate levels; these levels contain only index pages.
Note:
> A primary key by default creates a clustered index, but creating a clustered index does not
create a primary key on the table.
Columns:
Ind id
Index name
[COLUMN NAME]ASC
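The full syntax is not shown above; a minimal sketch (table and column names are placeholders):
CREATE CLUSTERED INDEX IX_Employee_EmpID
ON dbo.Employee (EmpID ASC);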
276
Note: Only one clustered index can be created per table, because the data can be physically
arranged in only one order; multiple physical orders are not possible.
2.6 byte page pointer [Address reference at bottom level in page for next page]
Limitations:
> To return a range of values by using operators (BETWEEN, >, >=, <, and <=). Once first row is
found subsequent indexed values will be physically adjacent.
> The data values in the rows are already sorted. This improves query performance.
> Data retrieved from a table already sorted. (Saves sort cost)
277
Which columns can’t be Clustered key?
> Changes to a clustered index mean that the entire row of data must be moved to keep the
data values of a row in physical order.
> In a non-clustered index, the data pages stay in their own storage location (heap or clustered
index); they are not the leaf nodes of this index.
> The leaf level holds only pointers (row locators) to the data, and index pages are created at
the root, intermediate and leaf levels.
> This index type is not as fast as a clustered index for lookups, because of the extra levels of
search (indirection).
Per table:
Syntax:
[COLUMN NAME]ASC
1. Only one clustered index per table, whereas you can have more than one non-clustered index.
2. A clustered index is generally faster than a non-clustered index, because a non-clustered index
has to refer back to the table (a lookup) if the selected columns are not present in the index.
278
3. Clustered index determines the storage order of rows in the table, and hence doesn't require
additional disk space, but whereas a Non Clustered index is stored separately from the table,
additional storage space is required.
3. UNIQUE INDEX
> This type of index does not allow duplicate values in the indexed column.
> It allows a single NULL value but not multiple NULL values.
Syntax:
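A sketch of the syntax (names are placeholders):
CREATE UNIQUE NONCLUSTERED INDEX UQ_Employee_Email
ON dbo.Employee (Email ASC);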
Note: A unique index can also be created as a clustered index, although the clustered index
properties still apply to the table as a whole.
4. COMPOSITE INDEX
When you include more than one column at the time of index creation, it is called a COMPOSITE index,
i.e. a single index configured on multiple columns.
Note: This reduces the number of indexes to create and the storage space needed, and improves
performance as well.
279
5. COVERING INDEX
> If an index includes all the columns that a query references, it is called a "COVERING index"
for that query.
> All the referenced columns should be included in the index; if one column is missing, the index
no longer covers the query (it is just a composite index).
> It reduces disk I/O contention and improves query performance.
> Partitioning can improve performance by distributing data across files, reduce contention,
and increase parallel IO operation.
> XML INDEX: this type of index is used when a column's data type is XML.
> It can improve query performance substantially (these notes claim up to 50 times faster) when
retrieving XML data from the tables.
> FILTERED INDEX [NEW 2008]: an index created on a subset (portion) of the rows in a table,
defined by a filter condition.
> It reduces storage cost, improves query performance and reduces maintenance cost; maintenance is
not required until the filtered data is altered.
> NULL values are allowed in the filter, and only a NON-CLUSTERED FILTERED INDEX can be created.
280
9. SPATIAL INDEX: [NEW 2008]
> Can create 249 spatial index on a spatial column supported table.
> Location aware devices and services like GPS/ online mapping needs spatial data support to
manage location aware data
2. Vertipaq – a New Microsoft tech – different way to store columns and compress data in
index.
3. In a regular (row store) index, all the indexed data from each row is kept together in a single
page, and the columns of different rows are spread across pages.
4. In a columnstore index, the data from each column is kept together in the same pages.
> The SQL Server in-memory columnstore index stores and manages data by using column-based data
storage and column-based query processing.
> Column store indexes work well for data warehousing workloads that primarily perform bulk
loads and read-only queries.
> Use the column store index to achieve up to 10x query performance gains over traditional
row-oriented storage, and up to 7x data compression over the uncompressed data size.
281
Reason For efficiency:
Index Pub:
FILL FACTOR:
> The fill factor tells what percentage of each leaf-level index page should be filled with data
when the index is created or rebuilt.
> The remaining free space on each leaf-level page is reserved for future growth of the data or
index in the table and reduces page splits.
Assign fill factor: right-click the server > Properties > Database Settings > Default index fill
factor > provide the value.
Note: These notes recommend setting the fill factor to 80%, with the remaining 20% left free for
index growth and maintenance.
PAD INDEX:
> PAD_INDEX depends on the fill factor percentage value: when enabled, it applies the same fill
percentage to the intermediate-level (non-leaf) index pages.
> By default it is not set, and it always depends on the fill factor parameter.
Note: Without enabling/setting a fill factor, PAD_INDEX cannot take effect.
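A sketch showing both options at index creation time (80 follows the recommendation above; names are placeholders):
CREATE NONCLUSTERED INDEX IX_Employee_LastName
ON dbo.Employee (LastName ASC)
WITH (FILLFACTOR = 80, PAD_INDEX = ON);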
282
WHAT IS INDEX SEEK AND INDEX SCAN?
INDEX SCAN: It scans all the rows in table and can we consider as a table scan. Which takes long
time generally to retrieve data.
INDEX SEEK: Only search for qualified rows in SQL Server and it is always faster than index scan.
Pages get affected by fragmentation due to heavy insert/update/delete activity.
283
FRAGMENTATION
avg_fragmentation_in_percent:
0 to <5: indexes are good, no meaningful fragmentation; no action required.
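A hedged sketch of reading this value for every index in the current database:
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;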
284
Fragmented Index:
FRAGEMENTATION TYPES
1. Internal fragmentation: the index takes more pages than needed because the pages contain unused
free space; data is stored inefficiently inside the pages.
REORGANIZE: the data is rearranged inside the existing data pages (leaf level); the existing
indexes are not dropped.
REBUILD: the existing index is dropped and recreated with the same name on the same columns; the
data is rearranged as part of recreating the index.
285
ALTER INDEX IndexName ON TableName REBUILD
Note: When creating, rebuilding or reorganizing an index, the intermediate sort work can be placed
in the tempdb system database (SORT_IN_TEMPDB); if the tempdb drive does not have sufficient disk
space, the index operation may fail.
Note: Up to SQL Server 2000, rebuilding an index required downtime because the table could not be
accessed (offline indexing). From SQL Server 2005 onwards an index can be rebuilt ONLINE, so the
table remains accessible to end users while the rebuild runs.
Note: When the sort is done in tempdb, roughly 120% of the table/index size should be free there,
depending on the table size.
UNUSED INDEXES: If indexes are not used they should be dropped, as indexes reduce the
performance of INSERT/UPDATE statements. Indexes are only useful when they are used by SELECT
statements.
Whether there are any unused indexes in SQL Server can be found by querying a DMV (see the sketch below).
Note: If unused indexes exist in SQL Server we cannot get good performance; drop the unused
indexes and, if they are ever needed again, recreate them.
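A minimal sketch of listing candidate unused indexes from sys.dm_db_index_usage_stats; treating "no seeks, scans or lookups since the last restart" as unused is an assumption:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks = 0 AND s.user_scans = 0 AND s.user_lookups = 0;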
286
ISOLATION LEVELS
> Isolation levels in SQL Server control the way locking works between transactions.
1. READ UNCOMMITTED
2. READ COMMITTED [THE DEFAULT]
3. REPEATABLE READ
4. SERIALIZABLE
5. SNAPSHOT [SQL 2005 NEW FEATURE]
> Before I run through each of these in detail you may want to create a new database to run the
examples, run the following script on the new database to create the sample data.
Note: You’ll also want to drop the IsolationTests table and re-run this script before each
example to reset the data.
CREATE TABLE IsolationTests
(
    Id INT IDENTITY,
    Col1 INT,
    Col2 INT,
    Col3 INT
)
INSERT INTO IsolationTests (Col1, Col2, Col3)
SELECT 1,2,3
UNION ALL SELECT 1,2,3
1. Dirty Reads – This is when you read uncommitted data; when doing this there is no guarantee
that the data read will ever be committed, meaning the data could well be bad.
2. Non-Repeatable Reads – This is when data that you are working with has been changed by another
transaction since you first read it in. This means subsequent reads of this data in the same
transaction could well be different. (Phantom Reads, covered later, are when new rows appear between two reads in the same transaction.)
1. READ UNCOMMITTED
> This is the lowest isolation level there is. Read uncommitted causes no shared locks to be
requested which allows you to read data that is currently being modified in other transactions.
It also allows other transactions to modify data that you are reading.
> As you can probably imagine this can cause some unexpected results in a variety of different
ways. For example data returned by the select could be in a half way state if an update was
running in another transaction causing some of your rows to come back with the updated
values and some not to.
> To see read uncommitted in action let’s run Query1 in one tab of Management Studio and
then quickly run Query2 in another tab before Query1 completes.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
-- simulate a long-running transaction so Query2 can run before this completes
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM IsolationTests
> Notice that Query2 will not wait for Query1 to finish, also more importantly Query2 returns
dirty data. Remember Query1 rolls back all its changes however Query2 has returned the data
anyway, this is because it didn’t wait for all the other transactions with exclusive locks on this
data it just returned what was there at the time.
> There is a syntactic shortcut for querying data using the read uncommitted isolation level: the
NOLOCK table hint. You could change the above Query2 to look like this and it would
do the exact same thing.
SELECT * FROM IsolationTests WITH (NOLOCK)
2. READ COMMITTED
> This is the default isolation level and means selects will only return committed data. Select
statements issue shared lock requests against the data you’re querying; this causes you to wait
if another transaction already has an exclusive lock on that data. Once you have your shared
lock, any other transactions trying to modify that data will request an exclusive lock and be
made to wait until your Read Committed transaction finishes.
> You can see an example of a read transaction waiting for a modify transaction to complete
before returning the data by running the following Queries in separate tabs as you did with
Read Uncommitted.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
-- hold the exclusive lock for a while
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SELECT * FROM IsolationTests
> Notice how Query2 waited for the first transaction to complete before returning and also how
the data returned is the data we started off with as Query1 did a rollback. The reason no
isolation level was specified is because Read Committed is the default isolation level for SQL
Server. If you want to check what isolation level you are running under you can run “DBCC
useroptions”. Remember isolation levels are Connection/Transaction specific so different
queries on the same database are often run under different isolation levels.
290
3. REPEATABLE READ
> This is similar to Read Committed but with the additional guarantee that if you issue the same
select twice in a transaction you will get the same results both times. It does this by holding on
to the shared locks it obtains on the records it reads until the end of the transaction; this means
any transactions that try to modify these records are forced to wait for the read transaction to
complete.
> As before, run Query1 and then, while it is running, run Query2.
Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
UPDATE IsolationTests SET Col1 = -1
> Notice that Query1 returns the same data for both selects even though you ran a query to
modify the data before the second select ran. This is because the update query was forced to
wait for Query1 to finish, due to the shared locks that were held until the end of the transaction because you specified
Repeatable Read.
291
> If you rerun the above Queries but change Query1 to Read Committed you will notice the two
selects return different data and that Query2 does not wait for Query1 to finish.
Note:
> One last thing to know about Repeatable Read is that the data can change between 2 queries
if more records are added. Repeatable Read guarantees records queried by a previous select
will not be changed or deleted, it does not stop new records being inserted so it is still very
possible to get Phantom Reads at this isolation level.
4. SERIALIZABLE
> This isolation level takes Repeatable Read and adds the guarantee that no new data will be
added, eliminating the chance of getting Phantom Reads. It does this by placing range locks on the queried
data. This causes any other transactions trying to modify or insert data touched by this
transaction to wait until it has finished.
> You know the drill by now run these queries side by side…
Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
INSERT INTO IsolationTests (Col1, Col2, Col3)
VALUES (100,100,100)
> You’ll see that the insert in Query2 waits for Query1 to complete before it runs, removing the chance of
a phantom read. If you change the isolation level in Query1 to repeatable read, you’ll see the
insert no longer gets blocked and the two select statements in Query1 return a different
number of rows.
Serializable: phantom reads are not possible because transactions effectively execute serially.
5. SNAPSHOT
> Snapshot differs more in the way it works: using snapshot doesn’t block other queries from inserting
or updating the data touched by the snapshot transaction. Instead it creates its own little
snapshot of the data being read at that time, if you then read that data again in the same
transaction it reads it from its snapshot, This means that even if another transaction has made
293
changes you will always get the same results as you did the first time you read the data. To use
the snapshot isolation level you need to enable it on the database by running the following
command
ALTER DATABASE <DatabaseName> SET ALLOW_SNAPSHOT_ISOLATION ON
Note:
Advantages of Snapshot:
No blockings
No deadlocks
> After the isolation level is set to SNAPSHOT, row versions of the table data are kept in the version store
in TEMPDB; SELECT queries read from these versions in tempdb, while inserts, updates and deletes
are performed against the actual table in the user database.
294
SWITCHES
/3GB SWITCH:
> By default SQL Server can use a maximum of 2 GB of memory on a 32-bit O/S.
> If SQL Server needs to use more than 2 GB [max 3 GB], it is recommended to enable the /3GB
switch in the BOOT.INI file.
C:\BOOT.INI
> Notepad will open the file; append /3GB at the end of the boot entry and save it. After a reboot, SQL
Server can consume up to 3 GB.
If RAM is up to 4 GB: /3GB
If RAM is between 4 GB and 16 GB: /3GB /PAE
If RAM is above 16 GB: /PAE only
Note: After enabling the PAE switch at the O/S level, SQL Server can utilize the entire memory from the
O/S. Because of this there is a chance of running the server out of memory, which can lead to a server hang.
It is therefore required to control, restrict or cap memory at the SQL Server level by using the SQL Server option
"AWE".
295
AWE [Address Windowing Extensions]:
Note: As there is a risk involved with memory consumption from the OS side, it is always
recommended to set the "MAX SERVER MEMORY" restriction.
Note: On a 64-bit O/S there is no need to enable any switches for SQL Server to use memory.
Note:
> On 32-bit Windows and SQL Server, the AWE and PAE switches need to be enabled for SQL Server
to use more memory.
> On a 64-bit O/S there is no need to enable the AWE or PAE switch; the MIN and MAX
memory values can be set directly at the SQL Server level.
SQL 2005 or LOWER VERSIONS: after setting AWE with min and max memory, only the SQL Server
service needs to be restarted.
SQL 2008 ONWARDS: no SQL Server service restart is needed after changing min and max
memory.
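A minimal sketch of enabling AWE and capping memory with sp_configure; the memory values are illustrative placeholders:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;            -- 32-bit instances only
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;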
32-bit vs 64-bit:
• Data transfer: 32-bit moves data 32 bits at a time; 64-bit is faster (roughly double), moving data 64 bits at a time.
• Maximum RAM supported: 4 GB on 32-bit; up to 7–8 TB on 64-bit.
• Memory registers: 32-bit has fewer memory registers allocated; 64-bit has more.
• OS reserved space: 32-bit reserves less space for OS reserved portions; 64-bit reserves more.
• Memory-mapped files: more difficult to implement on 32-bit; easy to map on 64-bit.
• Cost: 32-bit is less expensive; 64-bit is more expensive.
• Security: 64-bit offers special enhanced security features such as Kernel Patch Protection and support for hardware-backed data execution protection.
• Device drivers: 32-bit has more compatible device drivers; 64-bit may have incompatible device drivers.
296
LOCK PAGES IN MEMORY
> It is a Windows policy that determines which accounts can use a process to keep data in physical
memory, preventing the system from paging the data to virtual memory on disk.
Note: SQL Server always gives better performance when reading or writing from memory instead of
disk. To reduce paging to disk, add the SQL Server service account under the following Group
Policy setting.
GPEDIT.MSC -> Computer Configuration -> Windows Settings -> Security Settings -> Local
Policies -> User Rights Assignment -> Lock Pages in Memory -> Add Service Account to use the
Physical Memory.
> In 32bit it is used to extend memory access beyond 4 GB or to enable the AWE feature. For
64bit systems, it is to possibly gain performance and to “lock pages” for the buffer pool.
297
DAC (Dedicated Administration Connection)
• The DAC allows an administrator, in an emergency, to access a running instance of the
SQL Server Database Engine to troubleshoot problems on the server, even when the
server is unresponsive to other client connections.
• The DAC is available through the sqlcmd utility and SQL Server Management Studio
[query window].
• By default DAC is enabled to connect through local connection. That means you can
connect to the local instance of SQL server using DAC without making any changes.
• To connect the SQL server using DAC from remote machine using TCP/IP, we have to
enable the 'remote admin connections’ using sp_configure. We can enable the remote
DAC connection using the below query.
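For example, the standard sp_configure command for this option:
EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;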
Sp_configure: This procedure allows you to enable internal configurations of SQL Server .Eg,
memory, DAC, Query timeout. Etc.
Note1:
• By default SQL server listen to DAC connection on port number 1434. If the port number
1434 is not available, SQL server dynamically assign a port number during the start up
and this can be found in the SQL server error log as given below.
• DAC only allows you to connect through a query window (query analyzer), not through the SSMS Object Explorer.
298
>> By SSMS: in the Server name box use admin:<host name> --- default instance (admin:<host name>\<instance name> for a named instance).
>> By SQLCMD: use the -A switch to request a dedicated administrator connection, together with
-S: server name, -U: login and -P: password.
Note: If the DAC is already in use, the connection will fail with an error indicating it cannot
connect.
DAC Limitations:
• Only one DAC connection is allowed per instance. If a DAC connection is already open,
new connection request will be denied with error 10053.
• You can’t connect SSMS object explorer using the DAC connection, but you can connect
a query analyser window.
• SQL server prohibits running parallel queries or commands on DAC connection. Error
3637 is generated if you try to perform a backup or restore operation.
• Only logins with sysadmin rights can establish a DAC connection.
• Parallel queries cannot be run over the DAC, as only one DAC connection is allowed per instance.
• The SQL Server services cannot be restarted from a DAC query window.
299
By default the DAC uses port number 1434, but if 1434 is not available on the server then SQL
Server assigns a dynamic port at start-up.
• mstsc /admin
By default a server allows a maximum of 2 RDP connections. If you need to connect for an emergency
without disconnecting any existing users, use the above method.
300
CDC (CHANGE DATA CAPTURE)
1. Microsoft SQL Server 2008 has introduced a very exciting feature for logging DML changes.
2. Change data capture provides information about DML changes on a table and a database.
3. Change data capture records insert, update, and delete activity applied to a SQL
Server table and makes a record available of what changed, where, and when, in simple
relational 'change tables’.
4. The CDC feature also stores historical data and column-level changes in SQL Server.
5. Change data capture is available only on the Enterprise, Developer, and Evaluation editions
of SQL Server
>>> How it works:
1. The source of change data for change data capture is the SQL Server transaction log.
2. As inserts, updates, and deletes are applied to tracked source tables, entries that
describe those changes are added to the log.
3. The log serves as input to the change data capture process. This reads the log and
adds information about changes to the tracked table’s associated change table.
EXEC sys.sp_cdc_enable_db
301
[cdc].[captured_columns]
[cdc].[change_tables]
[cdc].[ddl_history]
[cdc].[index_columns]
[cdc].[lsn_time_mapping]
[dbo].[systranschemas]
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'MyTable',
@role_name = NULL
Note: A few CDC system tables and 2 CDC jobs are created automatically inside the SQL Server
database.
cdc.change_tables: this table returns the list of all the tables which are enabled for capture.
cdc.ddl_history: this table contains the history of all the DDL changes since change data capture was enabled.
cdc.lsn_time_mapping: this table maps LSN numbers to time.
dbo.systranschemas: tracks schema changes on tables enabled for capture.
302
After enabling CDC on a table, one additional change-tracking table is created, e.g. cdc.dbo_STAB_CT.
Two jobs are also created: cdc.DBNAME_capture and cdc.DBNAME_cleanup.
Findings (__$operation values in the change table):
1: Delete operation
2: Insert operation
3: Value before update
4: Value after update
Along with this, the changed data gets captured into the CDC change table.
Note: Enable CDC only with confirmation from apps team or client... If you enable it consumes
more hardware resource and additional storage is required.
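A minimal sketch of reading the captured changes for the table enabled above; the capture instance name dbo_MyTable is an assumption based on the default naming convention:
DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_MyTable');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, N'all');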
303
WAITING TYPES
> When a query is waiting for a resource, the relevant wait type comes into the picture; excessive waits
cause a high performance impact.
1. LCK_M_S: Occurs when a task is waiting to acquire a shared lock. [Occurs mostly in
blockings]
2. ASYNC_IO_COMPLETION: Occurs when a task is waiting for I/Os to finish.
3. ASYNC_NETWORK_IO: Occurs on network writes when the task is blocked behind the
network. Verify that the client is processing data from the server.
http://logicalread.solarwinds.com/common-sql-server-wait-types/#.VfRorhGqqko
304
SQL SERVER ARCHITECTURE
> SQL Server architecture is mainly divided into different components i.e. SNI Protocol Layer,
Relational Engine, Storage Engine, Buffer Pool. Majorly classified as two main engines:
Relational Engine and the Storage engine.
> The SQL Server Network Interface (SNI) is a protocol layer that establishes the network
connection between the client and the server. It consists of a set of APIs that are used by both
the database engine and the SQL Server Native Client (SNAC).
305
The SNI protocol layer is common to both the Database Engine and SQL Server Native Client. The SNI
protocol layer is not directly configured. Instead, the server and SQL Server Native Client are configured to
use a network protocol. Then, the Database Engine and SQL Server Native Client automatically use the
appropriate protocol settings. The server creates a SQL Server object called a TDS endpoint for each
network protocol. On the server, the TDS endpoints are installed by SQL Server during SQL Server
installation.
SQL Server Native Client (SNAC) is a data access technology that first shipped with SQL Server 2005.
➤Shared memory: Simple and fast, shared memory is the default protocol used to connect
from a client running on the same computer as SQL Server. It can only be used locally, has no
configurable properties, and is always tried first when connecting from the local machine.
➤TCP/IP: TCP/IP is the most commonly used access protocol for SQL Server. It enables you to
connect to SQL Server by specifying an IP address and a port number. Typically, this happens
automatically when you specify an instance to connect to. Your internal name resolution
system resolves the server name part of the instance name to an IP address, and either you connect to the
default TCP port number 1433 for default instances or the SQL Browser service will find the
right port for a named instance using UDP port 1434.
➤Named Pipes: TCP/IP and Named Pipes are comparable protocols in the architectures in
which they can be used. Named Pipes was developed for local area networks (LANs).
306
TDS is a Microsoft-proprietary protocol originally designed by Sybase that is used to interact
with a database server. Once a connection has been made using a network protocol such as
TCP/IP, a link is established to the relevant TDS endpoint that then acts as the communication
point between the client and the server.
The Relational Engine is also sometimes called the query processor because its primary function
is query optimization and execution.
1. Command Parser:
The Command Parser’s role is to handle T-SQL language events. It first checks the syntax and
returns any errors back to the protocol layer to send to the client. If the syntax is valid, then the
next step is to generate a query plan or find an existing plan. A Query plan contains the details
about how SQL Server is going to execute a piece of code. It is commonly referred to as an
execution plan.
Plan Cache: Creating execution plans can be time consuming and resource intensive, so The
Plan Cache, part of SQL Server’s buffer pool, is used to store execution plans in case they are
needed later.
2. Query Optimizer:
The Query Optimizer is one of the most complex and secretive parts of the product. It is
what’s known as a “cost-based” optimizer, which means that it evaluates multiple ways to
execute a query and then picks the method that it deems will have the lowest cost to execute.
This “method” of executing is implemented as a query plan and is the output from the
optimizer.
3. Query Executor:
The Query Executor’s job is self-explanatory; it executes the query. To be more specific, it
executes the query plan by working through each step it contains and interacting with the
Storage Engine to retrieve or modify data.
307
The Storage Engine:
> The Storage engine is responsible for managing all I/O to the data, and contains the Access
Methods code, which handles I/O requests for rows, indexes, pages, allocations and a Buffer
Manager, which deals with SQL Server’s main memory consumer, the buffer pool. It also
contains a Transaction Manager, which handles the locking of data to maintain Isolation (ACID
properties) and manages the transaction log.
1. Access Methods:
> Access Methods is a collection of code that provides the storage structures for data and
indexes as well as the interface through which data is retrieved and modified. It contains all the
code to retrieve data but it doesn’t actually perform the operation itself; it passes the request
to the Buffer Manager.
2. Buffer Manager:
> The Buffer Manager manages the buffer pool, which represents the majority of SQL
Server’s memory usage. If you need to read some rows from a page the Buffer Manager will
check the data cache in the buffer pool to see if it already has the page cached in memory. If
the page is already cached, then the results are passed back to the Access Methods. If the page
isn’t already in cache, then the Buffer Manager will get the page from the database on disk, put
it in the data cache, and pass the results to the Access Methods.
Data Cache: The data cache is usually the largest part of the buffer pool; therefore, it’s the
largest memory consumer within SQL Server. It is here that every data page that is read from
disk is written to before being used.
3. Transaction Manager:
> The Transaction Manager has two components that are of interest here: a Lock Manager
and a Log Manager.
308
> The Lock Manager is responsible for providing concurrency to the data. The Access
Methods code requests that the changes it wants to make are logged, and the Log Manager
writes the changes to the transaction log. This is called Write-Ahead Logging.
Checkpoint Process:
> A checkpoint is a point in time created by the checkpoint process at which SQL Server can
be sure that any committed transactions have had all their changes written to disk. This
checkpoint then becomes the marker from which database recovery can start.
> The checkpoint process ensures that any dirty pages associated with a committed
transaction will be flushed to disk. Unlike the lazy writer, however, a checkpoint does not
remove the page from cache; it makes sure the dirty page is written to disk and then marks the
cached page as clean in the page header.
Lazy writer:
> The lazy writer is a thread that periodically checks the size of the free buffer list. When it’s
low, it scans the whole data cache to age-out any pages that haven’t been used for a while. If it
finds any Dirty pages that haven’t been used for a while, they are flushed to disk before being
marked as free in memory.
A checkpoint occurs when:
• A cluster failover happens
• A database snapshot is created
• A CHECKPOINT command is issued
309
https://technet.microsoft.com/en-us/library/ms173789(v=sql.105).aspx
Performance:
• Disk Issues
• DB Design
310
Activity against SQL Server (including anything run from SQL Server Management Studio) can be traced by SQL Profiler, so it can
basically be used for database performance checks. We also have "SQL Server Performance Monitor"
to monitor system and server performance as well.
SQL Profiler captures SQL Server events from a server. The events are saved in a trace file that
can later be used to analyze and diagnose problems.
• It is used to find the cause of the problem by stepping through problem queries.
• SQL Profiler also supports auditing the actions performed on instances of SQL Server.
• Monitor the performance of an instance of the SQL Server Database Engine, Analysis
Server, or Integration Services (after they have occurred).
311
• Analyze performance by identifying slowly executing queries.
• Collect a sample of events for tuning the physical database design by using database
engine tuning advisor
• Aggregate trace results to allow similar event classes to be grouped and analyzed. These
results provide counts based on a single column grouping.
Profiler is a tool that monitors the events and activity running on a SQL Server. Using Profiler, this
monitoring can be viewed, saved, and replayed.
312
Mostly these events are used in real time:
ErrorLog, EventLog, Exception
2. All Errors and Warnings: Hash Warning, Execution Warnings, Sort Warnings, Missing Column Statistics, Bitmap Warning
3. Lock and Timeout Events: Lock:Acquired, Lock:Cancel, Lock:Deadlock, Lock:Deadlock Chain, Lock:Timeout
5. Security Audit: Audit Login, Audit Logout
7. TSQL events: SQL:BatchStarting, SQL:StmtStarting
8. High CPU usage/Long running queries: SP:StmtStarting, SP:Starting, RPC:Starting
To start, stop and delete a trace you use the following commands.
To find the trace id:
SELECT * FROM sys.traces
This will give you a list of all of the traces that are running on the server.
To start a trace:
EXEC sp_trace_setstatus traceid, 1
To stop a trace:
EXEC sp_trace_setstatus traceid, 0
To close and delete a trace:
EXEC sp_trace_setstatus traceid, 2
315
To delete you need to stop the trace first and then you can delete the trace. This will close out
the trace file that is written.
1. DBCC OPENTRAN -- to check which open transactions are currently running
2. Check blockings
3. Check deadlocks
4. Check indexes
Joins
Number of columns (e.g. only 6 columns are needed, but SELECT * is used, so all columns are returned)
316
4. Error & Warnings( syntax errors)
5. Performance( show plan)
6. Table scans
7. Security audits(failed logins, password changes)
8. Monitor server control, memory changes(cpu, reads, writes)
9. Sessions, transactions, tuning
PERFORMANCE TOOL
Performance monitor:
This is a built-in tool whose main purpose is to capture resource utilization for SQL Server in terms of CPU,
memory, disk and network.
1. MEMORY COUNTERS:
1. Page Life Expectancy: this counter tells how long (in seconds) a page resides in the buffer pool.
2. Lazy Writes/Sec:
This counter tracks how many times a second that the Lazy Writer process is moving dirty pages
from the buffer to disk in order to free up buffer space.
Note: Generally speaking, this should not be a high value, say more than 20 per second or so
317
3. Checkpoint Pages/Sec: When a checkpoint occurs, all dirty pages are written to disk. This is a
normal procedure and will cause this counter to rise during the checkpoint process.
4. Page reads/sec: number of physical database page reads issued. 80–90 per second is
normal; anything consistently higher may indicate memory pressure.
5. Page writes/sec: number of physical database page writes issued. 80–90 per second is
normal; anything consistently higher may indicate memory pressure.
6. Stolen pages:
https://www.simple-talk.com/blogs/2009/08/03/stolen-pages-ad-hoc-queries-and-the-
sins-of-dynamic-sql-in-the-application/
318
The word ‘stolen’ is a bit misleading, as this is a perfectly legitimate exercise. Stolen pages are buffer cache
pages that are ‘stolen’ for other server memory requests. They are used for several miscellaneous server
purposes such as the procedure cache, sorting, or hashing operations (query workspace memory).
The counter shows the number of pages used for miscellaneous server purposes (including the procedure cache).
8. Buffer Cache hit ratio: Percentage of pages that were found in the buffer pool without
having to incur a read from disk.
9. Target Server Memory (KB): Total amount of dynamic memory the server can consume.
10. Total Server Memory (KB): Total amount of dynamic memory (in kilobytes) that the server
is using currently
2. DISK COUNTERS:
----Logical disk
1. Avg. Disk Sec/Read: Measure of disk latency. Avg. Disk sec/Read is the average time to read
data from disk
319
Good < 12 Msec (.012 seconds)
2. Avg. Disk sec/Write: Measure of disk latency. Avg. Disk sec/Write is the average time, in
seconds, of a write of data to the disk.
3. Avg. Disk Read Queue Length: Avg. Disk Read Queue Length is the average number of read
requests that were queued for the selected disk during the sample interval
4. Avg. Disk Write Queue Length: the average number of write
requests that were queued for the selected disk during the sample interval
5. Disk Reads/sec: how many read operations are performed on the disk per second
6. Disk Writes/sec: how many write operations are performed on the disk per second
3. CPU COUNTERS:
%Processor Time
%Privileged Time
%User Time
%Interrupt Time
4. NETWORK COUNTERS:
TCP: Segments/sec.
https://msdn.microsoft.com/en-us/library/ms167313.aspx
321
DTA [DATABASE TUNING ADVISOR]
> The main purpose of this tool is to provide recommendations to improve query
performance by analyzing a query or workload file against the tables in the database.
> Database Tuning Advisor can analyze the performance effects of workloads run against one or
more databases or a SQL Profiler trace (they may contain T-SQL batches or remote procedure
calls). After analyzing, it recommends adding, removing or modifying physical design structures such as
clustered and non-clustered indexes, indexed views and partitioning.
Workload: a workload is a set of Transact-SQL statements that executes against the databases you
want to tune.
Steps:
Note: After the analysis DTA provides recommendations; always implement them on a test server
first, check the performance, and only then implement them on the production system.
From DTA reports > Statement cost report, check the value to know by what percentage the
query became faster.
Example:
CREATE TABLE [Table] (userID varchar(55), FirstName nvarchar(55), LastName nvarchar(55))
;WITH sequence AS (
    -- numbers source assumed; any series of integers N will do
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS N
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO [Table] (userID, FirstName, LastName)
SELECT N, 'John' + RIGHT('000000' + CAST(N AS varchar(10)), 6), 'Doe' + RIGHT('000000' + CAST(N AS varchar(10)), 6)
FROM sequence
DBCC FREEPROCCACHE
GO
DBCC DROPCLEANBUFFERS
GO
DBCC FREESESSIONCACHE
GO
Note:
Statistics: mainly contain information about the distribution of data in the table (histogram) and density values.
ACID PROPERTIES
What is Transaction?
323
A transaction is a logical unit of work that contains one or more database operations. A
valid transaction must meet the (ACID) Atomicity, Consistency, Isolation and Durability properties.
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications
are performed or none of them is performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a
relational database, all rules must be applied to the transaction's modifications to maintain all data
integrity. All internal data structures, such as B-tree indexes or doubly linked lists, must be
correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from each other; a transaction
sees data either as it was before another concurrent transaction modified it or after that transaction
completed, never in an intermediate state.
Durability: After a transaction has completed, its effects are permanently in place in the
system. The modifications persist even in the event of a system failure.
324
WINDOWS TASK SCHEDULER
Note: In SQL Server Express edition there is no SQL Server Agent service, so we cannot configure jobs
or maintenance plans for automation; the Windows Task Scheduler can be used instead.
Steps:
Instance name
Path
Type of backup
Retention period
Path
Scheduler name
Schedule
4. The Task Scheduler invokes the batch file, which triggers the backups via a stored procedure (see the sketch below).
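A minimal sketch of such a batch file; the instance name, path and database name are placeholders, and a plain BACKUP DATABASE command is used here instead of a stored procedure:
REM backup_mydb.bat - scheduled via Windows Task Scheduler
sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB.bak' WITH INIT"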
QUERY TUNING
325
1. Check whether any open transactions exist by using
DBCC OPENTRAN
If yes, then according to your project process find the blockings and speak to the apps team to
confirm which SPID needs to be killed, with approvals via email.
KILL <SPID>
If there are no blockings, continue with the next checks.
Note: By default SQL Server db engine will not capture any deadlock information DBA team
need to enable trace flags.
2. Logical Operation
7. Cache Size
8. Rebinds
9. Rewinds
326
10. Operator cost
Actual Execution Plan - (CTRL + M) - is created after execution of the query and contains the
steps that were performed
Estimated Execution Plan - (CTRL + L) - is created without executing the query and contains an
approximate execution plan
6. Check whether any indexes are created on the table or not. If not, then inform your
dev/apps team and suggest creating an index, which improves the performance of the query.
7. Check the fragmentation level: if there is fragmentation, rebuild or reorganize the index depending
on the percentage; if there is no fragmentation, move on to the next check.
327
8. Check for any missing indexes by using the missing-index DMVs (see the sketch below).
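A minimal sketch using the missing-index DMVs; ordering by estimated impact is an illustrative choice:
SELECT TOP (10)
       d.statement AS table_name,
       d.equality_columns, d.inequality_columns, d.included_columns,
       gs.avg_user_impact, gs.user_seeks
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs ON gs.group_handle = g.index_group_handle
ORDER BY gs.avg_user_impact * gs.user_seeks DESC;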
10. Check CPU and memory utilization. Get the top 10 queries which are consuming the most CPU and
memory in SQL Server, then share the details with the apps team.
11. Run profiler or perfmon tool to capture events or counters depending on type of
parameters.
13. Run the DTA [Database tuning advisor] to get any suggestions to improve the performance
Note: How to find which SQL Server instance is consuming more CPU or memory when multiple
instances in a single server.
Go to task manager> view > select columns > Select PID [Process id]. Take PID for SQL Server
and compare with configuration manager> Process id for specific instance. Compare both PID
for task manager and configuration manager > Then conclude the instance name which takes
more memory or CPU…..
1. Go to Task Manager > check whether the SQL Server process (sqlservr.exe) is using more CPU or not.
328
If it is, investigate further from inside SQL Server; if any other .exe is using the CPU, inform the
concerned team.
Check any long running or any open transactions are running in SQL Server causing high CPU.
2. If it’s SQL server then open perfmon and add the counter for
% Processor time
%user time
If you see the %privileged time to be high like covering around 40% and above then it could be
some driver issue.
3. First and very important one is to know what’s the architecture i.e. 64 bit (X64 or IA 64) or 32
bit(x86):
Note: An improper, missing or fragmented index can cause a query to run longer, which leads to high CPU.
Dbcc tracestatus (-1); It’s always good to check the reason of enabling the trace flags, if
enabled.
sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
sp_configure
GO
Priority Boost: - it’s not recommended to be enabled as it can cause high CPU utilization. If it’s
enabled (value =1) please disable (set value to 0) it - it’s a static parameter which needs SQL
restart
329
Max degree of parallelism: - Although this parameter doesn’t make much of a difference but
sometimes I have seen SPIDs stuck in ACCESS_METHODS_SCAN_RANGE_GENERATOR, CXPACKET
or other parallel wait types (visible in sys.dm_os_waiting_tasks). If you see any such waits when you check the current wait types:
Note: - If you have more than 8 processors, please make sure the max degree of parallelism
parameter is 8 or below 8.
8. DBCC FREEPROCCACHE: clears the plan cache so the query optimizer prepares new plans and recompilation starts
again, which may improve performance and reduce CPU utilization.
9. Check with developer on designing of query or Application design or any further bugs...
330
MAX DEGREE OF PARALLELISM [MAXDOP]
> The Max Degree of Parallelism, or MAXDOP, is a configuration indicating how the SQL Server
optimizer will use the CPUs. This is a server wide configuration that by default uses all of the
CPUs to have the available portions of the query executed in parallel. MAXDOP is very
beneficial in a number of circumstances, but what if you have a reporting like query that runs in
an OLTP system that monopolizes much of the CPU and adversely affects typical OLTP
transactions.
Or
> When SQL Server runs on a computer with more than one microprocessor or CPU, it detects
the best degree of parallelism, that is, the number of processors employed to run a single
statement, for each parallel plan execution. You can use the max degree of parallelism option
to limit the number of processors to use in parallel plan execution. To enable the server to
determine the maximum degree of parallelism, set this option to 0, the default value. Setting
maximum degree of parallelism to 0 allows SQL Server to use all the available processors up to
64 processors. To suppress parallel plan generation, set max degree of parallelism to 1. Set the
value to a number greater than 1 to restrict the maximum number of processors used by a
single query execution. The maximum value for the degree of parallelism setting is controlled
by the edition of SQL Server, CPU type, and operating system. If a value greater than the
number of available processors is specified, the actual number of available processors is used. If
the computer has only one processor, the max degree of parallelism value is ignored.
Complex/Long running queries – During query optimization, SQL Server looks for queries
that might benefit from parallel execution. It distinguishes between queries that benefit from
parallelism and those that do not benefit, by comparing the cost of an execution plan using a
single processor versus the cost of an execution plan using more than one processor and uses
the cost threshold for parallelism value as a boundary point to determine short or long query.
In a parallel query execution plan, the INSERT, UPDATE, and DELETE operators are executed
serially. However, the WHERE clause of an UPDATE or a DELETE statement, or the SELECT part of an INSERT statement, may be executed in parallel. The
actual data changes are then serially applied to the database.
331
Index data definition language (DDL) – Index operations that create or rebuild an index
(REBUILD only, not applicable to REORGANIZE), or drop a clustered index and queries that use
CPU cycles heavily are the best candidates for a parallel plan. For example, joins of large tables,
large aggregations, and sorting of large result sets are good candidates.
Other Operations – Apart from the above, this option also controls the parallelism of
DBCC CHECKTABLE, DBCC CHECKDB, and DBCC CHECKFILEGROUP.
The degree of parallelism value is set at the SQL server instance level and can be modified by
using the sp_configure system stored procedure (command shown below). You can override
this value for individual query or index statements by specifying the MAXDOP query hint or
MAXDOP index option. Note that this can be set differently for each instance of SQL Server. So
if you have multiple SQL Server instances in the same server, it is possible to specify a different
Maximum DOP value for each one.
-- 'max degree of parallelism' is an advanced option and can only be set when 'show advanced options' is 1
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sp_configure 'max degree of parallelism', 8;
GO
RECONFIGURE WITH OVERRIDE;
GO
By default, when SQL is installed on a server, the parallelism setting is set to 0 meaning that the
optimizer can utilize all the available processors to execute an individual query. This is not
necessarily the most optimal setting for the application and the types of queries it is designed
to support.
332
Scenario 1 – For an OLTP application, a typical setting of 1 would help. The reason for this is
that in an OLTP environment, most of the queries are expected to be point queries which
address one or a relatively small number of records. Such queries do not need parallelized
processing for efficient execution. If there are specific queries which have a need for a setting
greater than 1, then the source code needs to be examined to see if a MAXDOP hint can be
availed.
Scenario 2 – For an OLAP application, the setting should typically be the default 0 (up to 8
processors) or be greater than 1, because the queries such an application uses will typically
target thousands or millions of records, and there may also be a scenario where you drop an
index before an ETL operation and re-create it once the refreshed data is uploaded, as is typical in data
warehousing applications. There will definitely be performance advantages in using multiple
processors to do this work in parallel.
Note: Using a setting of 0 in these applications is not recommended, especially when there are
more than 8 processors in order to keep the coordination costs, context switching down to
manageable levels. It is typical to start with a value of 4 and experiment with the reporting
queries to see if this needs to be adjusted upwards.
333
MEMORY ISSUE
Steps: 1. Go to Task Manager > check how much memory sqlservr.exe (the SQL Server process) is utilizing.
2. If any other .exe is using more memory, inform the relevant team. If SQL Server is
consuming the memory, go to SSMS and check whether any SPID is consuming excessive memory, e.g. by using the DMV query sketched below.
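A minimal sketch of checking per-session memory from sys.dm_exec_sessions (memory_usage is reported in 8 KB pages):
SELECT TOP (10) session_id, login_name, host_name,
       memory_usage * 8 AS memory_kb
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
ORDER BY memory_usage DESC;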
3. If, for example, SQL Server is consuming 20 GB out of 24 GB of total physical memory, how do you
find the actual utilization from the SQL side?
Find the value of the "Stolen Potential" parameter (e.g. in DBCC MEMORYSTATUS output) [shows free memory at the SQL level].
Convert the value (reported in 8 KB pages): value * 8 / 1024 / 1024 = actual free memory in GB out of the total memory SQL Server has
consumed.
4. Please check whether memory capping or restriction is done at SQL Server level.
5. If not, ask the apps team to raise a change record and provide a suggestion for how much
memory should be dedicated to SQL Server.
As a rule of thumb, give about 80% of the total physical memory to SQL Server and leave the remaining
20% for the O/S.
6. Still memory issue, then enable the memory related counters to find the standard values.
7. Verify the execution plans; sometimes the query optimizer may prepare poor plans, which
leads to high CPU and high memory usage.
Note: please get approval from the application team before clearing the cache.
334
UPDATE STATISTICS
https://www.mssqltips.com/sqlservertip/2766/sql-server-auto-update-and-auto-create-
statistics-options/
https://www.mssqltips.com/sqlservertip/2904/sql-servers-auto-update-statistics-async-
option/
This is used after inserting or deleting a large number of rows in a table; the statistics no longer
reflect the actual data distribution, so running UPDATE STATISTICS brings them back up to
date.
Updating statistics is valuable for ensuring the SQL Server optimizer has the current
statistical information to most efficiently process the query results. As a best practice,
the UPDATE STATISTICS command should be issued on a regular basis to provide SQL
Server with the most recent data. Automatically updating statistics is possible with the
'Auto Update Statistics' database configuration, but this could cause performance issues if a
large table's statistics are updated during the middle of the day. To prevent this problem,
the 'Auto Update Statistics' option can be disabled on a per database basis. This paradigm
creates the need to rebuild all of the statistics in a manual manner. Does a happy medium
exist to let SQL Server automatically manage the statistics on small tables and let me
manage the statistics on the larger tables?
Solution
The simple answer is 'yes'. SQL Server statistics can be configured on a per database basis
with the ability to not automatically re-compute statistics on larger tables using the
NORECOMPUTE option of the UPDATE STATISTICS command. So let's break down the
configurations and commands to make this a reality.
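A minimal sketch of the NORECOMPUTE option mentioned above; dbo.BigTable is a placeholder table name:
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN, NORECOMPUTE;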
database->properties->options
Update statistics provide query optimizer with the best possible information for generating
query plans. Statistics are updated when they are created or modified. They are also
periodically updated by SQL SERVER , unless the administrator has disabled automatic statistics
updates.
335
If your queries are executing slower, then it is time to update the statistics. It is
recommended to update your statistics after you insert a larger amount of data into
ascending or descending key columns because in such cases the statistics histogram might
not have any information about the newly inserted values. It is also highly recommended to
update the statistics after maintenance except index maintenance (the data distribution
does not change if you rebuild, defragment or reorganize an index).
> By default, the query optimizer already updates statistics as necessary to improve the query
plan; in some cases you can further improve query performance by running UPDATE STATISTICS manually.
> Updating statistics ensures that queries compile with up-to-date statistics. However, updating
statistics causes queries to recompile. We recommend not updating statistics too frequently
because there is a performance tradeoff between improving query plans and the time it takes
to recompile queries
Note:
1. Updating statistics is one of the DBA maintenance activities and typically runs every day after business
hours.
2. After an index rebuild there is no need to run UPDATE STATISTICS again, because rebuilding an index
automatically updates its statistics.
3. Whenever you create an index, SQL Server first creates the statistics and then creates the index.
336
Problem
I am new to SQL Server statistics. Could you please describe what are the benefits of using
the Auto Create Statistics and Auto Update Statistics options? I am also curious about how
these statistics are automatically created and updated. Check out this tip to learn more.
Solution
How do the statistics increase the performance of your SQL Server query execution? The
statistical histograms are used by the Query Optimizer to choose the optimal query
execution plan. If a query predicate contains a column with statistics, the Query Optimizer
does not have to estimate the number of rows affected by that query, thus the Query
Optimizer has enough information to create the execution plan. The SQL Server creates
statistics in different ways:
When you set the AUTO_CREATE_STATISTICS option on, the Query Optimizer creates
statistics on individual columns used in a predicate, if these statistics are not already
available. These statistics are necessary to generate the query plan. They are created on
columns that do not have a histogram in an existing statistics object. The name of the auto-
created statistics includes the column name and the object ID in hexadecimal format:
_WA_Sys_<column_name>_<XXXX>. These statistics are used by the Query Optimizer to
determine the optimal Query Execution Plan. The characteristics of these statistics objects
are further discussed here.
You can enable the automatic statistics creation by running this SQL statement:
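For example (YourDatabase is a placeholder name):
ALTER DATABASE YourDatabase SET AUTO_CREATE_STATISTICS ON;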
Statistics are checked before query compilation or before executing a cached query plan.
Statistics are considered out-of-date when:
337
More detailed information about automatic statistics maintenance is available in MSDN.
You can turn on automatic statistics update by running this SQL statement:
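For example (again with a placeholder database name):
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;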
The outdated statistics can cause a lot of performance issues therefore it is recommended to
enable it. The default option is ON. The usual symptoms of non-updated statistics are
suboptimal query plans and degraded performance. Sometimes it is even worse to have
outdated statistics than no statistics at all!
If you enable this option then the Query Optimizer will run the query first and update the
outdated statistics afterwards. When you set this option to OFF, the Query Optimizer will
update the outdated statistics before compiling the query. This option can be useful in OLTP
environments while it can have negative effects in data warehouses. You can find additional
details about this option in this tip.
Problem
I have been trying to tune my SQL Server and I noticed a few database settings for
statistics. I also read the previous tip on automatic statistics updates. Could you please
describe the Auto Create Statistics Async option? How is this configured and when should I
use it and when should I avoid using it?
Solution
When this option is enabled, the Query Optimizer will not wait for the update of statistics,
but will run the query first and update the outdated statistics afterwards. Your query will
execute with the current statistics and a background process will start to update the
statistics in a separate thread. It should be noted that the Query Optimizer may choose a
338
suboptimal query plan if statistics are outdated when the query compiles. When this
background operation is complete, the new query requests will use the new updated
statistics.
When this option is disabled, the Query Optimizer will update the outdated statistics before
compiling the query therefore possibly getting a better plan based on the most current
statistics. This is referred to as synchronous statistics updates.
This option enables faster plan generation, but may create sub-optimal query plans because
the plan compilation may be based on stale statistics.
You can get more predictable query response times with asynchronous statistics updates if
your SQL Server frequently executes the same query or similar cached query plans.
The Query Optimizer can execute your queries without waiting for updated statistics when
the AUTO_UPDATE_STATISTICS_ASYNC option is used. This way none of your queries will
be delayed when the statistics are updated. It is also possible that some applications
experience timeouts when the statistics are updated before running the query. This situation
can be avoided by enabling asynchronous statistics updates.
Using synchronous statistics will ensure statistics are up-to-date before executing queries
on the changed data. It is recommended to use this when your queries can wait until the
statistics are updated. In most cases this is very minimal and you will not notice this, that
is why this is the default behavior.
ACTIVITY MONITOR
> Activity Monitor is used to get information about user connections to the database
engine and the locks that they hold. It is used to troubleshoot database
locking issues, and to terminate deadlocked or unresponsive processes.
339
> To use activity monitor: VIEW SERVER STATE permission on server and SELECT permission to
the sysprocesses & syslocks tables in the master database.
> To kill a process: membership in the sysadmin or processadmin server role is required to
KILL a process.
This overview is very helpful for getting a quick performance snapshot without the need to use
another monitoring tool for the same purpose.
% Processor Time – is the percentage of time the processors spend to execute threads that are
not idle
Waiting Tasks – is the number of tasks that are waiting for processor, I/O, or memory to be
released so the tasks can be processed
Database I/O – is the data transfer rate in MB/s from memory to disk, disk to memory, or disk
to disk
Batch Requests/sec – is the number of SQL Server batches received by the instance in a second
> The Processes pane shows the information about the currently running processes on the SQL
databases, who runs them, and from which application
340
Session ID – is a unique value assigned by Database Engine to every user connection. This is
the spid value returned by the sp_who procedure
User Process – 1 for user processes, 0 for system processes. The default filter is set to 1, so only
user processes are shown
Task State – the task state, blank for tasks in the runnable and sleeping state. The value can
also be obtained using the sys.dm_os_tasks view, as the task_state column. The states returned
can be:
“PENDING: Waiting for a worker thread.
RUNNABLE: Runnable, but waiting to receive a quantum.
RUNNING: Currently running on the scheduler.
SUSPENDED: Has a worker, but is waiting for an event.
DONE: Completed.
SPINLOOP: Stuck in a spinlock.” [2]
Command – the current command type. The value can also be obtained using
the sys.dm_exec_requests view, as the command column
Wait Time (ms) – how long in milliseconds the task is waiting for a resource. The value can also
be obtained using the sys.dm_os_waiting_tasks view, as the wait_duration_ms column
Wait Type – the last/current wait type. The value can also be obtained using
the sys.dm_os_waiting_tasks view, as the wait_type column. The waits can be resource, queue
and external waits
Wait Resource – the resource the connection is waiting for. The value can also be obtained
using the sys.dm_os_waiting_tasks view, as the resource description column
Blocked By – the ID of the session that is blocking the task. The value can also be obtained using
the sys.dm_os_waiting_tasks view, as the blocking_session_id column
Head Blocker – the session that causes the first blocking condition in a blocking chain
341
Memory Use (KB) – the memory used by the task. The value can also be obtained using
the sys.dm_exec_sessions view, as the memory usage column
Host Name – the name of the computer where the current connection is made. The value can
also be obtained using the sys.dm_exec_sessions view, as the host_name column
Workload Group – the name of the Resource Governor Workload group [3]. The value can also
be obtained using the sys.dm_resource_governor_workload_groups view, as the name column
Wait Category – the categories are created combining closely related wait types. The wait types
are shown in the Wait Type column of the Processes pane
Wait Time (ms/sec) – the time all waiting tasks are waiting for one or more resources
Recent Wait Time (ms/sec) – the average time all waiting tasks are waiting for one or more
resources
342
Average Waiter Count – is calculated for a typical point in time in the last sample interval and
represents the number of tasks waiting for one or more resources
Cumulative Wait Time (sec) – the total time waiting tasks have waited for one or more
resources since the last SQL Server restart, or DBCC SQLPERF last execution
Shows information about the database files on the SQL Server instance. For each database, all
database files are listed – MDF, LDF and NDF, their paths, and names
MB/sec Read – shows recent read activity for the database file
MB/sec Written – shows recent write activity for the database file
Response Time (ms) – average response time for recent read-and-write activity
Expensive queries are the queries that use much resources – memory, disk, and network. The
pane shows expensive queries executed in the last 30 seconds. The information is obtained
from the sys.dm_exec_requests and sys.dm_exec_query_stats views. A double-click on the
query opens the monitored statement
The context menu for the specific query provides options to open the query in Query Editor,
and show the execution plan
343
Query – the SQL query statement monitored
Executions/min – the number of executions per minute, since the last recompilation. The value
can also be obtained using the sys.dm_exec_query_stats view, as the execution_count column
CPU (ms/sec) – the CPU rate used, since the last recompilation. The value can also be obtained
using the sys.dm_exec_query_stats view, as the total_worker_time column
Physical Reads/sec, Logical Writes/sec, and Logical Reads/sec – the rate of physical
reads/logical writes/logical reads per second. The value can also be obtained using
the sys.dm_exec_query_stats view, as the total_physical_reads/ total_logical_writes/
total_logical_reads columns
Average Duration (ms) – average time that the query runs. Calculated based on
the total_elapsed_time and execution_count columns in the sys.dm_exec_query_stats view
Plan Count – the number of duplicate query plans. A large number requires investigation and
potential explicit query parameterization
EXECUTION PLAN
Execution plan graphically displays the data retrieval methods chosen by SQL Server. It
represents the execution cost of specific statements and queries in SQL Server. This graphical
approach is very useful for understanding the performance characteristics of a query.
What is an execution plan? When would you use it? How would you view the execution plan?
344
An execution plan is basically a road map that graphically or textually shows the data
retrieval methods chosen by SQL Server Query optimizer for a stored procedure or ad-hoc
query and is a very useful tool for a developer to understand the performance characteristics of
a query or stored procedure, since the plan is the one that SQL Server will place in its cache
and use to execute the stored procedure or query. Within Query Analyzer there is an option
called “Show Execution Plan” (located on the Query drop-down menu). If this option is turned on,
it will display the query execution plan in a separate window when the query is run.
1. Physical operations
2. Logical operations
3. Actual Number rows
4. Estimated I/O cost
5. Estimated CPU cost
6. Number of Executions
7. Estimated Number of Executions
8. Estimated Operator cost
9. Estimated Subtree cost
10. Estimated Number of Rows
11. Estimated Row Size
12. Actual rebinds
13. Actual rewinds
14. Key lookup
15. Nested look up
16. Index seek
17. Index scan
RAID LEVELS
RAID Levels: [RAID stands for Redundant Array of Inexpensive or Independent Disks]
> RAID is a disk system that contains multiple disk drives, called an array, to provide greater
performance, fault tolerance, storage capacity, at a moderate cost. While configuring your
server system, you typically have to make a choice between hardware RAID and software RAID
for the server’s internal disk drives
345
TYPES OF RAID:
1. RAID-0 (striping):
• Minimum 2 disks.
2. RAID-1 (mirroring):
• Minimum 2 disks.
• Excellent redundancy (as blocks are mirrored).
3. RAID-5 (striping with parity):
• Best cost-effective option providing both performance and redundancy. Use this for databases that
are heavily read oriented; write operations will be slow.
4. RAID-10 (mirroring plus striping):
• Minimum 4 disks.
• If you can afford the cost, this is the BEST option for any mission-critical applications
(especially databases).
Note: From Disk Management you can check whether RAID is configured: if two or three disks show the
same drive letter, RAID has been implemented.
TEMPDB ARCHITECTURE
– User Objects
– Internal Objects
– Version Stores
User Objects:
• Table variables
348
Note: These lists are not designed to be all inclusive.
Internal Objects:
• Work files needed for many GROUP BY, ORDER BY, UNION, and SELECT DISTINCT operations.
• Work files for sorts that result from creating or rebuilding indexes (SORT_IN_TEMPDB).
• Storing temporary large objects (LOBs) as variables or parameters (if they won’t fit into
memory).
Version Stores:
• The version store is a collection of pages used to store row level versioning of data.
2. Online-Index-Build Version Store: Used for online index builds or rebuilds. EE edition only.
349
• Tempdb cannot be backed up or restored, and no HA options can be implemented on it.
• Generally, there are three major problems you run into with Tempdb:
SOLUTION:
• sys.dm_db_file_space_usage: Returns one row for each data file in Tempdb showing space
usage.
• sys.dm_db_task_space_usage: Returns one row for each active task and shows the space
allocated and deallocated by the task.
• sys.dm_db_session_space_usage: Returns one row for each session, with cumulative values
for space allocated and deallocated by the session.
Performance Counters:
DMV
• sys.dm_db_file_space_usage
– 1101 or 1105: A session has to allocate more space in tempdb in order to continue
– 3967: The version store has been forced to shrink because tempdb is full.
Note: Be sure auto growth is turned on for tempdb, and ensure that you have enough available
free disk space.
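As a small example of using the DMVs above, the following query (a sketch, not a tuned monitoring script) summarizes how tempdb space is currently split between user objects, internal objects, the version store, and free space; each page is 8 KB.
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
FROM tempdb.sys.dm_db_file_space_usage;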
The following operations cannot be performed on the tempdb database:
> Removing the primary filegroup, primary data file, or log file.
> Setting the database to OFFLINE.
Auditing: Auditing an instance of SQL Server or a SQL Server database involves tracking and
logging events that occur on the system.
When working with SQL Server 2008 auditing, you need to keep four things in mind:
• Auditing is a new feature in SQL Server 2008 that enables database administrators to
capture events. It is intended to be lighter weight than third-party audit event
collectors.
• The audit object provides a manageable auditing framework that makes it easy to
define the events that should be logged and the locations where the log should be
stored.
• SQL Server helps you implement a comprehensive auditing solution to secure
your database and meet regulatory compliance requirements.
• Assign all users to meaningful logical groups.
• Assign permissions to groups.
• Always use Windows authentication mode if possible.
• Change the SA account password to a known value if you might ever need to use it.
Always use a strong password for the SA account and change the SA account password
periodically.
• Do not manage SQL Server by using the SA login account; assign the sysadmin privilege to a
known user or group.
• Rename the SA account to a different account name to prevent attacks on the SA
account by name.
• If a user has to perform any activities on the database, the DBA needs to provide the relevant
permissions to that user.
Creating an audit, and reviewing audit results using SSMS, is a four-step process, as outlined
in the previous section:
The first step is to create a new audit object. To create a new audit object using SSMS, go to
the SQL Server instance you want to audit, open up “Security,” and you will see the “Audits”
folder, as shown in Figure 2:
Figure 2: Choose “New Audit” to create an audit from within SSMS.
Right-click on the “Audits” folder and select “New Audit,” and the “Create Audit” dialog box
appears, as shown in Figure 3:
Figure 3: To create an audit, you have to assign it a name and specify where the audit data
will reside.
The first thing you need to do is to decide if you want to use the name that is automatically
generated for you as the audit object name, or to assign your own name. Since numbers
don’t mean much to me, I assigned it my own name.
Next, you have to provide a “Queue Delay” number. This refers to the amount of time after
an audit event has occurred before it is forced to be processed and written to the log. The
default value is 1000 milliseconds, or 1 second. While I am going to accept the default for
this demo, you might want to consider increasing this value if you have a very busy server.
The next option on the screen is called “Shut down server on audit log failure”. If you select
this option, and later SQL Server is restarted, and for whatever reason the audit data can’t
be logged, then SQL Server will not start, unless you manually start it at the command line
using a special parameter. This option should only be used in environments where very tight
auditing standards are followed and you have 24/7 staff available to deal with the problem,
should it occur.
Next, beside “Audit,” in the dialog box, there is a drop-down box with “File” selected by
default. This option is used to tell SQL Server where you want the audit logs to be stored.
Figure 4: There are three options for where you can store audit data.
SQL Server Audit allows you to store audit data in a file, in the Security Log, or the
Application Log. If you choose “File”, then you must also specify the location of the file,
along with additional information, such as how much data it can collect, and so on. If you
choose Security Log or Application Log, then the audit results are stored in these Windows
Operating System Event Logs. I am going to choose “Application Log”. Once this is done, the
dialog box should look as shown in Figure 5:
Figure 5: Once all the data has been provided, click “OK” to create the audit.
Now that the audit has been configured, click on “OK” to save it. It should then appear in
the SSMS Object Browser, as shown in Figure 6:
Figure 6: Notice the red arrow next to the newly created audit.
The red arrow next to the audit object means that it is not currently enabled. That’s OK for
now, we can enable it later.
Creating a Server or Database Audit Specification
Now that we have created the audit, we need to create the matching audit specification. If
we wanted to do an instance-wide audit, we would create a server audit specification. But
for this example, where the goal is to audit the SELECT activity on a single table in a single
database, a database audit specification is created.
To create a database audit specification using SSMS, open up the database to be audited,
then open up the security folder under it. Next, right-click on “Database Audit
Specifications” and select “New Database Audit Specification”, as shown in Figure 7:
Figure 7: To create a database audit specification, you must do so from within the database
you want to audit.
The “Create Database Audit Specification” dialog box appears, as shown in Figure 8:
Figure 8: The “Create Database Audit Specification” dialog box has many options to
complete.
You can either choose to accept the default name assigned to this database specification, or
you can enter your own. Next, select the appropriate audit object from the Audit dropdown
box, as shown in Figure 9:
Figure 9: The “Create Database Audit Specification” dialog box has many options to
complete.
In this case there is only one audit object, the “EmployeePayHistory”, as this is a newly
installed SQL Server and doesn’t have any other audit objects on it.
Next, you must specify the kind of audit activity you want to capture by selecting from the
“Audit Action Type” drop-down box, as shown in Figure 10:
Figure 10: You can select from many pre-defined audit actions.
For this example, I want to choose the “SELECT” “Audit Action Type,” as the goal is to record
all SELECT activity for the payroll table. Of course, you can choose any audit action type you
want, but you can only choose from those that are listed. You can’t create your own.
Now that the audit action type has been chosen, the “Object Class” must be chosen – see
Figure 11:
Figure 11: In this case, you can choose from three object classes.
The object class allows us to narrow down the scope of what we want to audit. For this
example, because we want to monitor activity on a table, “Object” is selected.
The next step is to specify the object, or the table name, that is to be audited. To do this,
click on the browse button under “Object Name,” and the “Select Objects” dialog box
appears, as shown in Figure 12:
Figure 12: The “Select Objects” dialog box allows you to select which object to audit.
Having clicked on the “Browse” button, the list of available objects will appear, as shown in
Figure 13:
Figure 13: Select the object to be audited from this list.
Browse through the “Browse for Object” dialog box until you find the object or objects you
want to audit, then select them. Above, I have selected a single table:
HumanResources.EmployeePayHistory.
Once the objects have been selected, click “OK,” and the “Select Object” dialog box
reappears, as shown in Figure 14:
Figure 14: The audited object has been selected.
Now that the object to be audited has been selected, click “OK,” and you are returned to
the original “Create Database Audit Specification” dialog box, as shown in Figure 15:
There is one last step, and that is to specify what security principals (user accounts) that we
want to monitor. To do this, click on the browse button under “Principal Name,” and
another “Select Object” dialog box appears.
I am going to spare you seeing this screen again, and skip immediately to the “Browse for
Object” dialog box, where you can see what principals you can choose from, as shown in
Figure 16:
Figure 16: Select the principal you want to audit.
In this case, public is chosen, because the goal of this audit is to identify anyone who runs a
SELECT against the payroll table. Optionally, you can select on specific users or roles. Click
on “OK” for this dialog box, then click on “OK” for the “Select Objects” dialog box, and we
reach the final screen, seen on Figure 17:
Figure 17: We are finally done creating the database audit specification.
Since we are only interested in auditing this one table for a single action, we will stop now.
If you wanted to, you could continue to add additional actions and objects to this audit
specification. Click on “OK,” and the database Audit Specification will be saved, and you can
view it in object explorer, as shown in Figure 18:
Figure 18: Notice the red arrow next to the specification, which tells us that it is turned off.
Once the new database audit specification has been created, it has a red arrow next to it,
indicating that it is turned off. We will turn it on in the next step.
http://bradmcgehee.com/2010/03/30/an-introduction-to-sql-server-2008-audit/
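The same configuration can also be scripted in T-SQL instead of clicking through SSMS. The following is only a sketch of the equivalent steps; the audit name PayrollAudit and the specification name PayrollSelectAudit are assumed names, not taken from the article above.
USE master;
GO
CREATE SERVER AUDIT PayrollAudit
TO APPLICATION_LOG
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);    -- same defaults as the SSMS demo
GO
ALTER SERVER AUDIT PayrollAudit WITH (STATE = ON);    -- enable the audit (clears the red arrow)
GO
USE AdventureWorks2008R2;
GO
CREATE DATABASE AUDIT SPECIFICATION PayrollSelectAudit
FOR SERVER AUDIT PayrollAudit
ADD (SELECT ON HumanResources.EmployeePayHistory BY public)   -- audit all SELECTs on the payroll table
WITH (STATE = ON);
GO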
NEW FEATURES LIST FOR 2005, 2008, 2008R2, 2012, 2014
SQL SERVER 2012: PowerView (see http://mcpmag.com/articles/2012/03/14/top-12-features-of-sql-server-2012.aspx)
SQL SERVER 2014: see http://sqlmag.com/sql-server-2014/sql-server-2014-important-new-features
DMV’S & SP & DBCC
> The main purpose of DMVs is that they let us monitor SQL Server without consuming hardware
resources the way DBCC queries do.
> The DMVs, newly introduced in SQL Server 2005, give the database administrator
information about the current state of the SQL Server machine.
> These values help the administrator diagnose problems and tune the server for
optimal performance.
> The DMVs in SQL Server are designed to give you a window into what's going on inside SQL
Server.
> They can provide information on what's currently happening inside the server as well as the
objects it's storing. They are designed to be used instead of system tables and various functions.
> DMVs are stored in the sys schema and their names start with dm_.
SELECT name, type, type_desc FROM sys.system_objects WHERE name LIKE 'dm_%' ORDER
BY name
Output:
List of DMV'S
Server-scoped dynamic management views and functions: These require VIEW SERVER STATE
permission on the server.
Database-scoped dynamic management views and functions: These require VIEW DATABASE
STATE permission on the database.
There are multiple categories, close to 17, into which these views and functions are
organized.
There are about 85 of these views and functions; to give a further split, 76 of these are views and 9 of
them are functions. Below are the four types of DMVs that are used most frequently.
> This section contains the dynamic management views that are associated with the SQL Server
Operating System (SQLOS). The SQLOS is responsible for managing operating system resources
that are specific to SQL Server.
Locks:
Blockings:
Sys.dm_os_waiting_tasks:
> Returns information about the wait queue of tasks that are waiting on some resource.
Sys.dm_os_wait_stats:
> Returns information about all the waits encountered by threads that executed. You can use
this aggregated view to diagnose performance issues with SQL Server and also with specific
queries and batches.
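For example, a simple sketch that lists the ten waits the instance has spent the most time on (the TOP value of 10 is an arbitrary choice):
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;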
2. Database Related DMV’S:
Mirroring:
1. Sys.dm_db_mirroring_auto_page_repair:
1. Returns a row for every automatic page-repair attempt on any mirrored database on the
server instance.
2. This view contains rows for the latest automatic page-repair attempts on a given mirrored
database, with a maximum of 100 rows per database.
2. Sys.dm_db_mirroring_connections:
avg_fragmentation_in_percent:
Missing Index:
sys.dm_db_index_usage_stats:
Returns counts of different types of index operations and the time each type of operation was
last performed in SQL Server.
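A hedged example of reading avg_fragmentation_in_percent from sys.dm_db_index_physical_stats in the current database; the 30 percent threshold is only a common rule of thumb, not a fixed rule from these notes:
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name                    AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 30      -- candidates for rebuild or reorganize
ORDER BY ps.avg_fragmentation_in_percent DESC;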
4. Execution related DMV’S:
Sys.dm_exec_cached_plans:
> Returns a row for each query plan that is cached by SQL Server for faster query execution. You
can use this dynamic management view to find cached query plans, cached query text, the
amount of memory taken by cached plans, and the reuse count of the cached plans.
Sys.dm_exec_connections:
> Returns information about the connections established to this instance of SQL Server and the
details of each connection.
Sys.dm_exec_sessions:
> shows information about all active user connections and internal tasks. This information
includes client version, client program name, client login time, login user, current session
setting, and more.
Sys.dm_exec_cursors:
> Returns information about the cursors that are open in various databases.
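These execution DMVs are often joined together. The sketch below also uses sys.dm_exec_requests and sys.dm_exec_sql_text, which are not described above but belong to the same family, to show what each active user session is currently running:
SELECT r.session_id,
       s.login_name,
       s.program_name,
       r.status,
       r.wait_type,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE s.is_user_process = 1;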
Sys.dm_repl_articles:
> Returns information about database objects published as articles in a replication topology.
Sys.dm_repl_tranhash:
Sys.dm_repl_schemas:
> Returns information about table columns published by replication.
Sys.dm_repl_traninfo:
STORED PROCEDURES:
> A stored procedure is a group of SQL statements that has been created and stored in the
database. A stored procedure accepts input parameters, so a single procedure can be used
over the network by several clients using different input data. Stored procedures reduce
network traffic and increase performance. If we modify a stored procedure, all the clients
get the updated stored procedure.
System stored procedures are stored in the master database and their names start with
the sp_ prefix. These procedures can be used to perform a variety of tasks that support SQL Server
functions for external application calls into the system tables, and to perform many
administrative and informational activities.
User-defined stored procedures are usually stored in a user database and are typically designed
to complete tasks in that database. While coding these procedures, do not use the sp_ prefix:
if the sp_ prefix is used, SQL Server first checks the master database and only then the
user-defined database.
Stored procedure are modules or routines that encapsulate code for reuse. A stored procedures
can take input parameters, return tabular or scalar results and messages to the client.
Extended stored procedures are procedures that call functions from DLL files that an
instance of Microsoft SQL Server can dynamically load and run. Nowadays extended stored
procedures are deprecated, so it is better to avoid using them.
USE AdventureWorks2008R2;
GO
-- Create a user procedure that (unwisely) uses the sp_ prefix
CREATE PROCEDURE dbo.sp_who
AS
SELECT FirstName, LastName FROM Person.Person;
GO
-- Without a schema, SQL Server resolves sp_who to the system procedure in master
EXEC sp_who;
-- The schema-qualified call runs the user-defined procedure instead
EXEC dbo.sp_who;
GO
DROP PROCEDURE dbo.sp_who;
GO
You can create the procedure once, store it in the database, and call it any number of times in
your program.
If an operation that requires a large amount of SQL code is performed repetitively, stored
procedures can be faster. They are parsed and optimized when they are first executed, and a
compiled version of the stored procedure remains in the memory cache for later use. This means
the stored procedure does not need to be reparsed and reoptimized with each use, resulting in
much faster execution times.
An operation requiring hundreds of lines of Transact-SQL code can be performed through a
single statement that executes the code in a procedure, rather than by sending hundreds of
lines of code over the network.
Users can be granted permission to execute a stored procedure even if they do not have
permission to execute the procedure's statements directly.
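A minimal sketch of those two points: a parameterized user procedure, and granting EXECUTE on it so a user can run it without direct SELECT permission on the underlying table. The procedure name and the TestUser principal are illustrative assumptions.
USE AdventureWorks2008R2;
GO
CREATE PROCEDURE dbo.usp_GetPersonByLastName
    @LastName nvarchar(50)                      -- input parameter supplied by each caller
AS
SELECT FirstName, LastName
FROM Person.Person
WHERE LastName = @LastName;
GO
GRANT EXECUTE ON dbo.usp_GetPersonByLastName TO TestUser;   -- TestUser needs no SELECT on Person.Person
GO
EXEC dbo.usp_GetPersonByLastName @LastName = N'Smith';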
DBCC COMMANDS:
> DBCC commands are used to check the consistency of the database or database objects.
While executing DBCC commands, the DB engine creates a database snapshot and then runs the
checks against this snapshot. After the DBCC command completes, the snapshot is dropped.
> DBCC commands do consume hardware resources. They are most useful for performance
and troubleshooting exercises. DBCC statements are grouped into the following categories:
Maintenance
Informational
Validation
Miscellaneous
MAINTENANCE COMMANDS:
CLEANTABLE:
> Reclaims space from dropped variable-length columns in tables or indexed views.
DBREINDEX:
> Rebuilds one or more indexes for a table in the specified database.
DROPCLEANBUFFERS:
> Removes all clean buffers from the buffer pool, which is useful for testing queries with a cold
buffer cache.
DBCC DROPCLEANBUFFERS
FREEPROCCACHE:
> Removes all elements from the plan cache, removes a specific plan from the plan cache by
specifying a plan handle or SQL handle, or removes all cache entries associated with a specified
resource pool.
DBCC FREEPROCCACHE
INDEXDEFRAG:
> Defragments the indexes of the specified table or view.
SHRINKDATABASE:
> Shrinks the size of the data and log files in the specified database.
SHRINKFILE:
> Shrinks the size of the specified data or log file for the current database, or empties a file by
moving the data from the specified file to other files in the same filegroup, allowing the file to
be removed from the database. You can shrink a file to a size that is less than the size specified
when it was created. This resets the minimum file size to the new value.
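Hedged usage examples for the two shrink commands above; the database and the logical log file name are placeholders, and shrinking should be used sparingly on production systems:
DBCC SHRINKDATABASE (AdventureWorks2008R2, 10);      -- shrink, leaving 10 percent free space
DBCC SHRINKFILE (AdventureWorks2008R2_Log, 512);     -- shrink the log file to 512 MB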
INFORMATIONAL COMMANDS:
Performs tasks that gather and display various types of information.
CONCURRENCYVIOLATION:
DBCC CONCURRENCYVIOLATION
UPDATEUSAGE:
> Reports and corrects pages and row count inaccuracies in the catalog views. These
inaccuracies may cause incorrect space usage reports returned by the sp_spaceused system
stored procedure.
INPUTBUFFER:
> Displays the last statement sent from a client to an instance of Microsoft SQL Server 2005.
OPENTRAN:
> Displays information about the oldest active transaction and the oldest distributed and
nondistributed replicated transactions.
OUTPUTBUFFER:
> Returns the current output buffer in hexadecimal and ASCII format for the specified
session_id.
PROCCACHE:
DBCC PROCCACHE
SHOW_STATISTICS:
> Displays the current distribution statistics for the specified target on the specified table.
SHOWCONTIG:
TABLE level scan performed.
> Displays fragmentation information for the data and indexes of the specified table or view.
SQLPERF:
> Provides the transaction log space usage statistics for all databases. It can also be used to
reset wait and latch statistics.
TRACESTATUS:
> Displays the status of trace flags.
USEROPTIONS:
> Returns the SET options active (set) for the current connection.
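Quick usage examples for some of the informational commands above (the session id 55 passed to INPUTBUFFER is only a placeholder for a real session id):
DBCC SQLPERF (LOGSPACE);     -- log space used per database
DBCC OPENTRAN;               -- oldest active transaction in the current database
DBCC USEROPTIONS;            -- SET options active for this connection
DBCC INPUTBUFFER (55);       -- last statement sent by session 55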
VALIDATION COMMANDS:
> Performs validation operations on a database, table, index, catalog, filegroup, or allocation of
database pages.
CHECKDB:
> Checks the logical and physical integrity of all the objects in the specified database.
Because DBCC CHECKDB includes these checks, the DBCC CHECKALLOC, DBCC CHECKTABLE, and DBCC
CHECKCATALOG commands do not have to be run separately from DBCC CHECKDB.
CHECKALLOC:
> Checks the consistency of disk space allocation structures for a specified database.
CHECKTABLE:
> Checks the integrity of all the pages and structures that make up the table or indexed view.
CHECKCATALOG:
DBCC CHECKCATALOG performs various consistency checks between system metadata tables. DBCC
CHECKCATALOG uses an internal database snapshot to provide the transactional consistency that it needs
to perform these checks.
If a snapshot cannot be created DBCC CHECKCATALOG acquires an exclusive database lock to obtain the
required consistency. If any inconsistencies are detected, they cannot be repaired and the database must
be restored from a backup.
CHECKCONSTRAINTS:
> Checks the integrity of a specified constraint or all constraints on a specified table in the
current database.
CHECKFILEGROUP:
> Checks the allocation and structural integrity of all tables and indexed views in the specified
file group of the current database.
CHECKIDENT:
> Checks the current identity value for the specified table and, if it is needed, changes the
identity value.
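Hedged examples of the validation commands above; the database and table names reuse the AdventureWorks sample from earlier and are only illustrative:
USE AdventureWorks2008R2;
GO
DBCC CHECKDB (AdventureWorks2008R2) WITH NO_INFOMSGS;        -- full logical and physical checks
DBCC CHECKTABLE ('HumanResources.EmployeePayHistory');       -- check a single table
DBCC CHECKIDENT ('Sales.SalesOrderHeader', NORESEED);        -- report the current identity value only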
LINKED SERVERS:
Configure a linked server to enable the SQL Server Database Engine to execute commands against OLE
DB data sources outside of the instance of SQL Server. Typically, linked servers are configured to enable
the Database Engine to execute a Transact-SQL statement that includes tables in another instance of SQL
Server, or another database product such as Oracle. Many types of OLE DB data sources can be configured
as linked servers, including Microsoft Access and Excel.
Some OLE DB providers allow SQL Server to update data on the OLE DB source. Others provide only read-
only data access. For information about each OLE DB provider, consult documentation for that OLE DB
provider.
An OLE DB provider is a DLL that manages and interacts with a specific data source. An OLE DB data
source identifies the specific database that can be accessed through OLE DB. Although data sources
queried through linked server definitions are ordinarily databases, OLE DB providers exist for a variety of
files and file formats. These include text files, spreadsheet data, and the results of full-text content
searches.
The Microsoft SQL Server Native Client OLE DB Provider (PROGID: SQLNCLI11) is the official OLE DB
provider for SQL Server.
Typically, linked servers are used to handle distributed queries. When a client application executes a
distributed query through a linked server, SQL Server parses the command and sends requests to OLE DB.
The rowset request may be in the form of executing a query against the provider or opening a base table
from the provider.
For a data source to return data through a linked server, the OLE DB provider (DLL) for that data source
must be present on the same server as the instance of SQL Server.
When a third-party OLE DB provider is used, the account under which the SQL Server service runs must
have read and execute permissions for the directory, and all subdirectories, in which the provider is
installed.
Managing Providers
There is a set of options that control how SQL Server loads and uses OLE DB providers that are specified
in the registry.
You can use stored procedures and catalog views to manage linked server definitions:
Create a linked server definition by running sp_addlinkedserver.
View information about the linked servers defined in a specific instance of SQL Server by running
a query against the sys.servers system catalog views.
Delete a linked server definition by running sp_dropserver. You can also use this stored
procedure to remove a remote server.
You can also define linked servers by using SQL Server Management Studio. In the Object Explorer, right-
click Server Objects, select New, and select Linked Server. You can delete a linked server definition by
right-clicking the linked server name and selecting Delete.
When you execute a distributed query against a linked server, include a fully qualified, four-part table
name for each data source to query. This four-part name should be in the
form linked_server_name.catalog.schema.object_name.
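A minimal T-SQL sketch of the same procedures; the linked server name LLSS and the data source reuse the example values from the walkthrough that follows, and the sa login mapping is only for illustration (a dedicated, least-privileged login is preferable):
EXEC sp_addlinkedserver
     @server     = N'LLSS',                           -- local name used in four-part queries
     @srvproduct = N'',
     @provider   = N'SQLNCLI',
     @datasrc    = N'192.168.100.134\Server2005';     -- network name of the remote instance
GO
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'LLSS',
     @useself     = 'FALSE',
     @rmtuser     = N'sa',
     @rmtpassword = N'MyS@P@ss';
GO
SELECT TOP (10) *
FROM LLSS.AdventureWorks2008R2.Person.Person;          -- four-part name: server.catalog.schema.object
GO
EXEC sp_dropserver @server = N'LLSS', @droplogins = 'droplogins';   -- clean up when no longer needed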
Introduction - How To Setup Linked Servers
If you want to use a database from another Server Instance in your queries, you should do some
workaround to reach your goal, or better say if you have distributed databases for an application and
you want to use distributed queries here is the simple, easy solution.
Background
Googling around would eventually get you to this solution, but it is important to know how much
time you would spend on it. For this issue, it was quite time consuming to find out how to really
configure the server to get it working; with the steps below, it no longer is.
2. Go to [Server Objects], right-click [Linked Servers], and select [New Linked Server...].
3. In the new window, on the [General] page, you have to set several parameters as below:
1. [Linked server]: LLSS (the name that will be used for addressing the shared server), for example:
[ZZZZZZZZZ]
[Server2005]
5. [Data source]: XXX.XXX.XXX.XXX\DDSS (the network name of the SQL Server that is going to be
shared on the current instance), for example:
[XXX.XXX.XXX.XXX\DDSS]
[192.168.100.134\Server2005]
4. On the [Security] page, provide the remote login and password to connect with, for example:
[XX] [sa]
[YY] [MyS@P@ss]
5. Press [OK] and you are ready to go … and use the Linked Server as mentioned in part 3.1.
REAL TIME CLASSES
PROCESS:
2 DB Maintains Activities(Daily\weekly\Monthly)
8 Capacity planning\management
9 Interview Handling
10 Day-to-Day Activities
11 General Responsibility
13 Escalation Matrix
14 KT Questions
15 On-call & Bridge call
Service operations (priority: response time / resolution time [SLA tier]):
P1: 15 min / 4 hrs [Platinum]
P4: 1 day / 3 days [Bronze]
P5: 1 day / 7 days [Plastic]
Servers:
Note:
1. Every DBA has a user name and password to log in to the BMC Remedy ticketing tool.
2. Alert calls are assigned to the DBA queue after the L1 team distributes the calls based on
resource availability.
System raised
User raised
System raised: This type of alert is raised by a monitoring tool, i.e. BMC Patrol or SCOM
(System Center Operations Manager).
How it works:
When any error is written to the SQL Server logs, Event Viewer, or Agent error logs, the Patrol
agent on each server detects it and raises an alert into BMC Remedy based on the priority.
User raised: The user raises a call based on the requirement and selects the priority depending
on the impact.
Note: Always work on incidents based on the priority and the SLAs.
Change Management:
BMC Remedy
Normal Changes (CR): e.g. file movement, adding a file, changing memory parameters,
high availability configurations.
Standard Changes (S-CR): patch management, DR testing.
Emergency Changes (E-CR): disk full, 100% CPU utilization, log file full,
database/SQL service down, master database down.
1. Get all the requirements, including downtime details, from the apps team.
2. Raise a change as per the plan.
3. Once the change is raised, document all technical steps, including pre-installation, implementation,
verification, and the back-out plan.
4. Once completed, get technical-side approval.
5. Once approved, get SDM approval [delivery manager approval/client approval].
6. Implement the change on the planned implementation date and finish within the agreed timelines.
7. If the implementation succeeds, the change is marked successful; otherwise it is marked failed.
Change States:
Raise Change> Impact assessment> File the technical steps> Pass for technical approval> Pass
to SDM approval> Once Approved,
Problem Management:
Capacity planning\management:
Rules:
Once every year, one resource moves to another location and works from there, and at the end
of the same day has to give a clear confirmation to the BCP team; again, this is an ITIL process.
RACI Matrix: Defines Roles and responsibilities from each team which relate to each team.
R- Responsibility
A-Accountable
C-Consult
I-Inform
RACI is a simple Excel sheet
Daily Maintenance: differential backups, transaction log backups, blocking jobs, CPU utilization.
Weekly Maintenance: full backups, rebuild indexes, reorganize indexes, DBCC CHECKDB, purging
jobs.
Escalation structure:
Delivery manager
Level 3 (L3)
Level 1 (L1)
When you raise a complaint against a person, ensure you follow the process as per the reporting
structure.
SLA [Service Level Agreement]: This is the agreement between the client and the company.
SLA:
> This defines a business which needs to run continuously without having any service
interruption.
Real time: DR test will happen every 1 year or 6 months.
Team size: 12 (24x7 support)
L1: 3
L2: 4
L3: 3
Team lead: 1
Manager: 1
Shift Management:
Shift handover:
4. How many tickets are closed?
How many prod and dev\uat\pre-prod servers? 350+ Prod and UAT\PRE-PROD\DEV-150+
Backup strategy:
Logshipping: 100 +
Mirroring: 200+
Replication: 2000+
In disk: 7 days
In Tape: 2 months
Old backups automatically deleted from disk or tape by using clean up job.
DBA Daily Maintenance Activities:
Database growth report generated by a monthly job, scheduled on the 1st of every month.
Database refresh
Do we have a client interaction?
How many changes are implemented today or last week or last month?
The SQL Server Agent error log can be retained for 10 days.
4. In technical roles, performing SQL Server installations of various versions (2005, 2008, 2008 R2
and 2012) along with their configuration.
5. Applying service packs, hotfixes and cumulative updates for various versions and
troubleshooting any failures accordingly.
6. Whenever there are issues with files and filegroups, resolving them by adding overflow files
to the database.
7. As a part of security, creation of users, login and providing relevant Permissions to the
requestor.
8. Implementing best security hardening rules in production servers.
9. Configured different types of backups for various versions and involved in recovery scenarios
if database crashes or depending on the requirement.
10. Defining the best backup strategy especially for production servers.
11. Advising the application team on recovery models for databases to resolve log-file-related issues,
especially on production servers.
12. Performed system and user database file movements whenever there was a disk space
crunch.
13. Involved in recovering system and user databases when corruption happened, especially
master rebuilding, model, msdb and tempdb.
14. Performed attach\detach, import\export, and copy database techniques.
15. Configured jobs, maintenance plans, and DB Mail to send notifications to end users on
production servers.
16. As a part of high availability and disaster recovery solutions, configured log shipping, database
mirroring, replication and clustering.
17. Involved in upgrade and migration activities and resolved orphan user issues, if any.
18. As a part of performance tuning,
- Resolving blockings
- Resolving deadlocks
- Maintain indexes
- Query tuning
- Running Sql server profiler to capture event classes
- Running performance monitor tool to capture counters
Process roles:
We need to write
Daily Responsibilities:
6) Perform any Change Requests during your shift (if any scheduled).
Health Checks:
4) Checking Backups
6) HA Synchronization/failure checks
7) Event Viewer
9) Database settings
12) Storage
This is the list of tasks that must be done by a DBA on a daily basis.
1) Check System Event Logs and SQL Server Error Logs for unusual events.
3) Confirm that backups have been made successfully and moved to the backup storage or
secure location.
4) Monitor disk space to make sure SQL Servers will not run out of disk space.
5) Periodically monitor performance of database and system using both System Monitor and
SQL Server Profiler.
7) Keep a log of any changes you make to servers, instance settings, database settings and
monitoring scripts.
8) Create SQL Server alerts to notify you of problems and have them e-mailed to you. Take action
as needed.
9) Regularly restore backups to a test server in order to verify that you can restore them.
P1 Tickets:
2) Corruptions (Disk)
3) Hardware Failures
P2 Tickets:
3) Blockings
4) Deadlocks
7) Space Issues
9) Permission issues