
25 August 2010

DBA INTERVIEW QUESTIONS - Part2

How to restart SQL Server in single user mode? How to start SQL Server in minimal configuration mode?

SQL Server can be started from the command line using SQLSERVR.EXE. This executable has some very important parameters with which a DBA should be familiar. -m starts SQL Server in single user mode and -f starts SQL Server in minimal configuration mode.

As a part of your job, what are the DBCC commands that you commonly use for database maintenance?

DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKCATALOG, DBCC CHECKALLOC, DBCC SHOWCONTIG, DBCC SHRINKDATABASE, DBCC SHRINKFILE, etc. But there are many more DBCC commands which are very useful for DBAs.
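As a rough sketch of routine use (the database and object names below, AdventureWorks and Sales.SalesOrderDetail, are only placeholders):

USE AdventureWorks;   -- placeholder database name
GO
DBCC CHECKDB WITH NO_INFOMSGS;                  -- full logical and physical consistency check of the current database
DBCC CHECKCATALOG;                              -- catalog consistency
DBCC SHOWCONTIG ('Sales.SalesOrderDetail');     -- fragmentation report for one table (placeholder name)
DBCC SHRINKFILE (AdventureWorks_Log, 100);      -- shrink the log file to about 100 MB; use sparingly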

Define candidate key, alternate key, composite key.

A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.

What is RAID and what are different types of RAID configurations?

RAID stands for Redundant Array of Inexpensive Disks, used to provide fault tolerance to database servers. There are six RAID levels, 0 through 5, offering different levels of performance and fault tolerance.

What is blocking and how would you troubleshoot it?

Blocking happens when one connection from an application holds a lock and a second connection requires a conflicting lock type. This forces the second connection to wait, blocked on the first.
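As a hedged sketch of one way to find who is blocking whom on SQL Server 2005 and later (the DMVs below are standard; on SQL Server 2000 you would use sp_who2 and DBCC INPUTBUFFER instead):

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

Once the blocking session is identified, you can review what it is running, whether it left a transaction open, and whether better indexing, shorter transactions or a different isolation level would remove the contention.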

Can you have a nested transaction?

Yes, very much. Check out BEGIN TRAN, COMMIT, ROLLBACK, SAVE TRAN and @@TRANCOUNT.
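A small illustrative sketch (the temporary table #t is just for demonstration):

CREATE TABLE #t (id int);
BEGIN TRAN;                      -- @@TRANCOUNT = 1
    INSERT #t VALUES (1);
    BEGIN TRAN;                  -- nested: @@TRANCOUNT = 2
        INSERT #t VALUES (2);
        SAVE TRAN sp1;           -- savepoint for a partial rollback
        INSERT #t VALUES (3);
        ROLLBACK TRAN sp1;       -- undoes only the insert of 3
    COMMIT TRAN;                 -- @@TRANCOUNT back to 1
COMMIT TRAN;                     -- outermost commit makes the work durable
SELECT * FROM #t;                -- returns 1 and 2

Note that ROLLBACK TRAN without a savepoint name rolls back the entire transaction, regardless of nesting level.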

What are statistics, under what circumstances they go out of date, how do you update them?

Statistics determine the selectivity of the indexes. If an indexed column has unique values then the selectivity of that index is higher, as opposed to an index with non-unique values. The query optimizer uses these statistics in determining whether to choose an index or not while executing a query.

Some situations under which you should update statistics:
1) If there is significant change in the key values in the index
2) If a large amount of data in an indexed column has been added, changed, or removed (that is, if the distribution of key values has changed), or the table has been truncated using the TRUNCATE TABLE statement and then repopulated
3) Database is upgraded from a previous version


Look up SQL Server books online for the following commands: UPDATE STATISTICS, STATS_DATE, DBCC SHOW_STATISTICS, CREATE STATISTICS, DROP STATISTICS, sp_autostats, sp_createstats, sp_updatestats
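A brief sketch, assuming a hypothetical table dbo.Orders with an index IX_Orders_CustomerID:

-- When were the statistics last updated?
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');

-- Inspect the histogram and density for one index
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID);

-- Refresh statistics manually
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
EXEC sp_updatestats;   -- or refresh all out-of-date statistics in the database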

DBA INTERVIEW QUESTIONS - Part1

# How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?

* One-to-One relationships can be implemented as a single table and rarely as two tables with primary and foreign key relationships. One-to-Many relationships are implemented by splitting the data into two tables with primary key and foreign key relationships. Many-to-Many relationships are implemented using a junction table with the keys from both tables forming the composite primary key of the junction table. It is a good idea to read a database design fundamentals textbook.
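A minimal sketch of a many-to-many design, using hypothetical Student and Course tables:

CREATE TABLE Student (StudentID int PRIMARY KEY, Name varchar(50));
CREATE TABLE Course  (CourseID  int PRIMARY KEY, Title varchar(50));
CREATE TABLE StudentCourse (                    -- junction table
    StudentID int NOT NULL REFERENCES Student(StudentID),
    CourseID  int NOT NULL REFERENCES Course(CourseID),
    CONSTRAINT PK_StudentCourse PRIMARY KEY (StudentID, CourseID)   -- composite primary key
);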

# What’s the difference between a primary key and a unique key?

* Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.
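For example, in a hypothetical Employee table:

CREATE TABLE Employee (
    EmployeeID int NOT NULL PRIMARY KEY,   -- clustered index by default, NULLs not allowed
    SSN        char(9) NULL UNIQUE         -- nonclustered index by default, one NULL allowed
);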

# What are user defined datatypes and when you should go for them?

* User defined datatypes let you extend the base SQL Server datatypes by providing a descriptive name and format to the database. For example, suppose your database has a column called Flight_Num which appears in many tables, and in all these tables it should be varchar(8). In this case you could create a user defined datatype called Flight_num_type of varchar(8) and use it across all your tables. See sp_addtype and sp_droptype in Books Online.
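A quick sketch using the procedures mentioned above (Flight_num_type is the example name from the answer):

EXEC sp_addtype @typename = 'Flight_num_type', @phystype = 'varchar(8)', @nulltype = 'NOT NULL';
GO
CREATE TABLE Flights (Flight_Num Flight_num_type, DepartureDate datetime);
-- EXEC sp_droptype 'Flight_num_type';   -- remove it once no table references it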

# What is bit datatype and what’s the information that can be stored inside a bit column?

* The bit datatype is used to store boolean information like 1 or 0 (true or false). Until SQL Server 6.5 the bit datatype could hold either a 1 or 0 and there was no support for NULL. But from SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL.

# Define candidate key, alternate key, composite key.

* A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them will become the primary key, and the rest are called alternate keys. A key formed by combining two or more columns is called a composite key.

16 August 2010

Lock escalation


A lock escalation occurs when the number of locks held on rows and tables in the database equals the percentage of the lock list specified by the maxlocks database configuration parameter. Lock escalation might not affect the table that acquires the lock triggering the escalation. To reduce the number of locks to about half the number held when the lock escalation began, the database manager begins converting many small row locks to table locks for all active tables, beginning with any locks on large object (LOB) or long VARCHAR elements. An exclusive lock escalation is a lock escalation in which the table lock acquired is an exclusive lock.

For partitioned tables, lock escalation is to the data partition level. This allows the table to be accessible to other transactions even if the data partition is escalated to share, exclusive, or super exclusive, as other non-escalated data partitions are unaffected. The transaction may continue to row lock on other data partitions after escalation for a given data partition. For partitioned tables, the notification log messages for escalations will include the data partition escalated as well as the table name.

Triggering Lock Escalation:

A lock escalation is triggered when any of the following conditions is true:

· The number of locks held (as distinct from acquired) on a single index or heap within a statement exceeds the threshold (currently approximately 5000). These locks include the intent locks as well. Note that lock escalation will not trigger if:

o The transaction acquires 2,500 locks each on two different indexes/heaps in a single statement.

o The transaction acquires 2,500 locks on the non-clustered index and 2,500 locks on the corresponding base table in a single statement.

o The same heap/index is referenced more than once in a statement; the locks on each instance are counted separately. So, for example, in the case of a self-join on a table t1, if each instance has 3000 locks within the statement, it will not trigger lock escalation.

· The memory taken by lock resources is greater than 40% of the non-AWE (32-bit) or regular (64-bit) memory when the locks configuration option is set to 0, the default value. In this case, the lock memory is allocated dynamically as needed.

· The memory taken by lock resources is greater than 40% of the configured lock memory (i.e. when a non-zero value is set for the locks configuration option). When the locks configuration option is used, the lock memory is statically allocated when SQL Server starts.

When lock escalation is triggered, SQL Server attempts to escalate the lock to table level, but the attempt may fail if there are conflicting locks. For example, if SH locks need to be escalated to the table level and there are concurrent X locks on one or more rows/pages of the target table, the lock escalation attempt will fail. However, SQL Server periodically, for approximately every 1250 new locks acquired by the lock owner (e.g. the transaction), attempts the escalation again. If the lock escalation succeeds, SQL Server releases the lower-granularity locks, and the associated lock memory, on the index or the heap. A successful lock escalation can potentially lead to blocking of future concurrent access to the index or the heap by transactions in a conflicting lock mode (because at the time of lock escalation there cannot be any conflicting access). So lock escalation is not always a good idea for all applications.



Disabling Lock Escalation:

SQL Server 2005 supports disabling lock escalation using two trace flags, as follows:

· TraceFlag-1211: This disables lock escalation at the current threshold (5000) on a per index/heap per statement basis. When this trace flag is in effect, locks are never escalated. It also instructs SQL Server to ignore the memory acquired by the lock manager, up to a maximum of the statically allocated lock memory or 60% of the dynamically allocated non-AWE (32-bit)/regular (64-bit) memory; once that limit is reached, an out-of-lock-memory error is generated. This can potentially be damaging, as a misbehaving application can exhaust SQL Server memory by acquiring a large number of locks. In the worst case this can stall the server or degrade its performance to an unacceptable level. For these reasons, caution must be exercised when using this trace flag.

· TraceFlag-1224: This trace flag is similar to trace flag 1211 with one key difference: it re-enables lock escalation when the lock manager acquires 40% of the statically allocated memory or 40% of the dynamically allocated non-AWE (32-bit)/regular (64-bit) memory. Additionally, if this memory cannot be allocated because other components are taking up more memory, lock escalation can be triggered earlier. SQL Server will generate an out-of-memory error when the memory allocated to the lock manager exceeds the statically allocated memory, or 60% of the non-AWE (32-bit)/regular memory for dynamic allocation.



If both trace flags (1211 and 1224) are set at the same time, trace flag 1211 takes precedence. You can use the DBCC TRACESTATUS (-1) command to find the status of all trace flags enabled in SQL Server.
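As a quick sketch (these are documented trace flags, but test on a non-production instance first):

DBCC TRACEON (1224, -1);    -- enable trace flag 1224 globally
DBCC TRACESTATUS (-1);      -- list all trace flags currently enabled
DBCC TRACEOFF (1224, -1);   -- turn it back off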

Please also refer to http://support.microsoft.com/kb/323630/en-us.

Limitations of Lock Escalation:

There are some limitations in the current lock escalation mechanism in SQL Server. One or more of these limitations may be removed in the future.

· Trace flags can only be used to disable lock escalation at the instance level. More often than not, you want to disable lock escalation at the object level. You can get around this issue by starting a dummy transaction and locking a resource (e.g. a row) to prevent lock escalation; see the sketch after this list.

· The lock escalation trigger is hard coded to approximately 5000 locks, which may be too many locks for a small table and too few for a large table.

· Locks are not escalated to individual table partitions, but to the table instead. So two users accessing distinct partitions of a table in conflicting modes may get blocked immediately after lock escalation.
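A hedged sketch of the dummy-transaction workaround mentioned above, along the lines of the KB article referenced earlier (dbo.BigTable is a hypothetical table name):

-- Run in a separate session and leave the transaction open while the heavy work runs elsewhere
BEGIN TRAN;
SELECT * FROM dbo.BigTable WITH (UPDLOCK, HOLDLOCK) WHERE 1 = 0;
-- The intent lock held at the table level conflicts with a full table lock,
-- so lock escalation attempts by other sessions on dbo.BigTable fail and
-- they keep their row/page locks.
-- COMMIT TRAN;   -- when the workaround is no longer needed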

Live Lock

What is Live Lock?

A livelock is a situation where a request for an exclusive lock is denied repeatedly because a series of overlapping shared locks keeps interfering; the processes keep changing their status in response to each other, which prevents any of them from completing its task. In SQL Server, a livelock occurs when a stream of read transactions on a table causes a write transaction to wait indefinitely. This is different from a deadlock, in which both processes wait on each other.

A human example of livelock is two people who meet face-to-face in a corridor: each moves aside to let the other pass, but they end up stepping from side to side without making any progress, because they always move the same way at the same time and never get past each other.

PERFMON TOOL

There are several ways to improve the performance of SQL Server database applications, such as query execution plans, SQL Profiler, DTA (Database Tuning Advisor), server-level properties, object-level improvements (indexes, statistics, other maintenance activities) and server-level operations.

Database performance can be analyzed at three levels: server level (operating system, networking protocols); database level (SQL Server database engine); and object level (objects within the database).

Perfmon is mainly used to identify server-level activity; based on the results we can take appropriate action to improve performance, whether that is hardware, memory or system changes.


PERFMON : PERFMON is a built-in Windows tool which reports the workload of the resources running on the system. It can be used to collect data on Windows resources as well as SQL Server resources. With PERFMON we can analyze the following:


• Understand workload and its effect on your system's resources.
• Test configuration changes or other tuning efforts by monitoring the results.


Start by monitoring the activity of the following components in order:
• Memory
• Processors
• Disks
• Network


The following counters can be helpful for tracing the data:

1:
Component : Disk
Performance aspect being monitored : Usage
Counters to monitor : PhysicalDisk\Disk Reads/sec, PhysicalDisk\Disk Writes/sec, LogicalDisk\% Free Space
Note : Interpret the % Disk Time counter carefully. Because the _Total instance of this counter may not accurately reflect utilization on multiple-disk systems, it is important to use the % Idle Time counter as well. Note that these counters cannot display a value exceeding 100%.


2 :
Component : Disk
Performance aspect being monitored : Hindrances
Counters to Monitor : Physical Disk\Avg. Disk Queue Length (all instances)


3:
Component : Memory
Performance aspect being monitored : Usage
Counters to Monitor : Memory\Available Bytes, Memory\Cache Bytes


4:
Component : Memory
Performance aspect being monitored : Hindrances
Counters to Monitor : Memory\Pages/sec, Memory\Page Reads/sec, Memory\Transition Faults/sec, Memory\Pool Paged Bytes, Memory\Pool Nonpaged Bytes.
Although not specifically Memory object counters, the following are also useful for memory analysis: Paging File\% Usage object (all instances), Cache\Data Map Hits %, Server\Pool Paged Bytes and Server\Pool Nonpaged Bytes


5:
Component : Network
Performance aspect being monitored : Throughput
Counters to Monitor : Protocol transmission counters (varies with networking protocol); for TCP/IP: Network Interface\Bytes total/sec, Network Interface\ Packets/sec, Server\Bytes Total/sec, or Server\Bytes Transmitted/sec and Server\Bytes Received/sec


6:
Component : Processor
Performance aspect being monitored : Usage
Counters to Monitor : Processor\% Processor Time (all instances)


7:
Component : Processor
Performance aspect being monitored : Hindrances
Counters to Monitor : System\Processor Queue Length (all instances),
Processor\ Interrupts/sec, System\Context switches/sec



How to create and run a counter log :
1. Go to RUN and type PERFMON then Enter
2. Double-click Performance Logs and Alerts, and then double-click Counter Logs. Any existing logs will be listed in the details pane. A green icon indicates that a log is running; a red icon indicates that a log has been stopped.
3. Right-click a blank area of the details pane, and click New Log Settings.
4. In Name, type the name of the log, and then click OK.
5. On the General tab, click Add Objects and select the performance objects you want to add, or click Add Counters to select the individual counters you want to log.

It can be run for a certain time period. The data can be saved as .CSV or text files. Once you have the data you can build charts in Excel and work out what action needs to be taken to improve performance.
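As a side note, many of SQL Server's own Perfmon counters can also be read directly from T-SQL; a minimal sketch for SQL Server 2005 and later:

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Buffer cache hit ratio', 'Page life expectancy', 'Batch Requests/sec');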

04 August 2010

Blocked Process Report
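Note: the blocked process report event fires only when the blocked process threshold server option is set to a non-zero number of seconds; a minimal sketch (30 seconds is just an example value):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold', 30;   -- seconds
RECONFIGURE;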

USE [msdb]
GO
IF EXISTS (SELECT name FROM msdb.dbo.sysoperators WHERE name = N'TEST')
EXEC msdb.dbo.sp_delete_operator @name=N'TEST'

USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'TEST',
@enabled=1,
@weekday_pager_start_time=90000,
@weekday_pager_end_time=180000,
@saturday_pager_start_time=90000,
@saturday_pager_end_time=180000,
@sunday_pager_start_time=90000,
@sunday_pager_end_time=180000,
@pager_days=0,
@category_name=N'[Uncategorized]'

/* creating the table to capture the Event information */

USE Master
GO

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[dbo].[BLOCKED_PROCESS_REPORT]') AND OBJECTPROPERTY(id, N'IsUserTable') = 1)
DROP TABLE [dbo].[BLOCKED_PROCESS_REPORT]
GO

CREATE TABLE [dbo].[BLOCKED_PROCESS_REPORT] (
[PostTime] [datetime] NOT NULL ,
[SQLInstance] varchar(20),
[Databaseid] int,
[computerName] Varchar(20),
[SessionLoginName] Varchar(30),
[SPID] int,
[TransactionID] int,
[EventSequence] int,
[objectID] int,
[IndexID] int,
[TextData] nvarchar(4000) ,
[duration] int,
[RecordID] [int] IDENTITY (1,1) NOT FOR REPLICATION NOT NULL,
[Flag] [int] NOT NULL CONSTRAINT [DF_BLOCKED_PROCESS_REPORT_Flag] DEFAULT ((0))
) ON [PRIMARY]
GO

CREATE INDEX [BLOCKED_PROCESS_REPORT_IDX01] ON [dbo].[BLOCKED_PROCESS_REPORT]([Posttime]) WITH FILLFACTOR = 100 ON [PRIMARY]
GO

/*Creating the Job that will enter values into the Deadlockevents table created above*/
/*Service account and sql operator option are optional*/

USE [msdb]
GO

IF EXISTS (SELECT job_id FROM msdb.dbo.sysjobs_view WHERE name = N'Capture BLOCKED_PROCESS_REPORT Event')
EXEC msdb.dbo.sp_delete_job @job_name = N'Capture BLOCKED_PROCESS_REPORT Event', @delete_unused_schedule=1

GO
--DECLARE @ServiceAccount varchar(128)
--SET @ServiceAccount = N''
--DECLARE @SQLOperator varchar(128)
--SET @SQLOperator = N''

BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0

IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'Capture BLOCKED_PROCESS_REPORT Event',
@enabled=1,
@notify_level_eventlog=2,
@notify_level_email=3,
@notify_level_netsend=0,
@notify_level_page=0,
@delete_level=0,
@description=N'Job for responding to blocking events',
@category_name=N'[Uncategorized (Local)]',
--@owner_login_name=@ServiceAccount,
--@notify_email_operator_name=@SQLOperator,
@job_id = @jobId OUTPUT

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

/* Insert graph into LogEvents*/

EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert data into LogEvents',
@step_id=1,
@cmdexec_success_code=0,
@on_success_action=1,
@on_success_step_id=0,
@on_fail_action=2,
@on_fail_step_id=0,
@retry_attempts=0,
@retry_interval=0,
@os_run_priority=0, @subsystem=N'TSQL',
@command=N'

INSERT INTO BLOCKED_PROCESS_REPORT (
[PostTime] ,
[SQLInstance] ,
[Databaseid] ,
[computerName],
[SessionLoginName],
[SPID] ,
[TransactionID] ,
[EventSequence] ,
[objectID] ,
[IndexID] ,
[TextData],
[duration]
)

VALUES (
GETDATE(),
N''$(ESCAPE_NONE(WMI(SQLInstance)))'',
N''$(ESCAPE_NONE(WMI(Databaseid)))'',
N''$(ESCAPE_NONE(WMI(ComputerName)))'',
N''$(ESCAPE_NONE(WMI(SessionLoginname)))'',
N''$(ESCAPE_NONE(WMI(SPID)))'',
N''$(ESCAPE_NONE(WMI(TransactionID)))'',
N''$(ESCAPE_NONE(WMI(EventSequence)))'',
N''$(ESCAPE_NONE(WMI(objectid)))'',
N''$(ESCAPE_NONE(WMI(indexid)))'',
N''$(ESCAPE_SQUOTE(WMI(Textdata)))'',
N''$(ESCAPE_NONE(WMI(Duration)))''
)',
@database_name=N'master',
@flags=0

IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION

GOTO EndSave

QuitWithRollback:
IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

/*Creating the alert and associating it with the Job to be fired */

USE [msdb]
GO

IF EXISTS (SELECT name FROM msdb.dbo.sysalerts WHERE name = N'Respond to BLOCKED_PROCESS_REPORT')
EXEC msdb.dbo.sp_delete_alert @name=N'Respond to BLOCKED_PROCESS_REPORT'

GO

DECLARE @server_namespace varchar(255)
IF ISNULL(CHARINDEX('\', @@SERVERNAME), 0) > 0
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\' + SUBSTRING(@@SERVERNAME, ISNULL(CHARINDEX('\', @@SERVERNAME), 0) + 1, LEN(@@SERVERNAME) - ISNULL(CHARINDEX('\', @@SERVERNAME), 0))
ELSE
SET @server_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER'

EXEC msdb.dbo.sp_add_alert @name=N'Respond to BLOCKED_PROCESS_REPORT',
@enabled=1,
@notification_message=N'Your Message',
@wmi_namespace=@server_namespace,
@wmi_query=N'SELECT * FROM BLOCKED_PROCESS_REPORT Where Duration > 30',
@job_name='Capture BLOCKED_PROCESS_REPORT Event' ;

EXEC msdb.dbo.sp_add_notification @alert_name=N'Respond to BLOCKED_PROCESS_REPORT', @operator_name=N'Test', @notification_method = 1
GO
select * from master..BLOCKED_PROCESS_REPORT

Memory Consumption - Cached Pages Count

-- Lists how many buffer pool pages are cached per object/index in the current database
SELECT COUNT(*) AS cached_pages_count,
       obj.name, obj.index_id
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
    -- In-row and row-overflow data (type 1 and 3): join to partitions on hobt_id
    SELECT OBJECT_NAME(p.object_id) AS name,
           p.index_id, au.allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
       AND (au.type = 1 OR au.type = 3)
    UNION ALL
    -- LOB data (type 2): join to partitions on partition_id
    SELECT OBJECT_NAME(p.object_id) AS name,
           p.index_id, au.allocation_unit_id
    FROM sys.allocation_units AS au
    INNER JOIN sys.partitions AS p
        ON au.container_id = p.partition_id
       AND au.type = 2
) AS obj
    ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE bd.database_id = DB_ID()
GROUP BY obj.name, obj.index_id
ORDER BY cached_pages_count DESC