
31 January 2011

SQL SERVER NUMA

NUMA :- Non-Uniform Memory Access
============================

Each group of processors has its own memory and possibly its own I/O channels. However, each CPU can access memory associated with the other groups in a coherent way. Each group is called a NUMA node.

The number of CPUs within a NUMA node depends on the hardware vendor. It is faster to access local memory than the memory associated with other NUMA nodes. This is the reason for the name, non-uniform memory access architecture.

On NUMA hardware, some regions of memory are on physically different buses from other regions. Because NUMA uses local and foreign memory, it will take longer to access some regions of memory than others. Local memory and foreign memory are typically defined relative to the currently running thread: local memory is the memory on the same node as the CPU currently running the thread, and any memory that does not belong to that node is foreign (also known as remote) memory.

The ratio of the cost to access foreign memory over that for local memory is called the NUMA ratio. If the NUMA ratio is 1, the architecture is symmetric multiprocessing (SMP). The greater the ratio, the more it costs to access the memory of other nodes. Windows applications that are not NUMA aware (including SQL Server 2000 SP3 and earlier) sometimes perform poorly on NUMA hardware.

The main benefit of NUMA is scalability. The NUMA architecture was designed to surpass the scalability limits of the SMP architecture. With SMP, all memory access is posted to the same shared memory bus. This works fine for a relatively small number of CPUs, but not when you have dozens, even hundreds, of CPUs competing for access to the shared memory bus. NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high speed interconnection.

[Figure: NUMA node with 4 processors]
Hardware-NUMA vs. Soft-NUMA

NUMA can match memory with CPUs through specialized hardware (hardware NUMA) or by configuring SQL Server memory (soft-NUMA). During startup, SQL Server configures itself based on the underlying operating system and hardware configuration or the soft-NUMA setting. For both hardware and soft-NUMA, when SQL Server starts in a NUMA configuration, the SQL Server log records a multinode configuration message for each node, along with the CPU mask.

Hardware NUMA

Computers with hardware NUMA have more than one system bus, each serving a small set of processors. Each group of processors has its own memory and possibly its own I/O channels, but each CPU can access memory associated with other groups in a coherent way. Each group is called a NUMA node. The number of CPUs within a NUMA node depends on the hardware vendor. Your hardware manufacturer can tell you if your computer supports hardware NUMA.

If you have hardware NUMA, it may be configured to use interleaved memory instead of NUMA. In that case, Windows and therefore SQL Server will not recognize it as NUMA. Run the following query to find the number of memory nodes available to SQL Server:

SELECT DISTINCT memory_node_id
FROM sys.dm_os_memory_clerks

If SQL Server returns only a single memory node (node 0), either you do not have hardware NUMA, or the hardware is configured as interleaved (non-NUMA). If you think your hardware NUMA is configured incorrectly, contact your hardware vendor to enable NUMA. SQL Server ignores NUMA configuration when hardware NUMA has four or fewer CPUs and at least one node has only one CPU.
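
On SQL Server 2008 you can also inspect the scheduler nodes directly. A quick check, assuming you have VIEW SERVER STATE permission:

SELECT node_id, node_state_desc, memory_node_id, online_scheduler_count
FROM sys.dm_os_nodes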

Soft-NUMA

SQL Server allows you to group CPUs into nodes referred to as soft-NUMA. You usually configure soft-NUMA when you have many CPUs and do not have hardware NUMA, but you can also use soft-NUMA to subdivide hardware NUMA nodes into smaller groups. Only the SQL Server scheduler and SQL Server Network Interface (SNI) are soft-NUMA aware. Memory nodes are created based on hardware NUMA and therefore not impacted by soft-NUMA. So, for example, if you have an SMP computer with eight CPUs and you create four soft-NUMA nodes with two CPUs each, you will only have one memory node serving all four NUMA nodes. Soft-NUMA does not provide memory to CPU affinity.

The benefits of soft-NUMA include reducing I/O and lazy writer bottlenecks on computers with many CPUs and no hardware NUMA. There is a single I/O thread and a single lazy writer thread for each NUMA node. Depending on the usage of the database, these single threads may be a significant performance bottleneck. Configuring four soft-NUMA nodes provides four I/O threads and four lazy writer threads, which could increase performance.

You cannot create a soft-NUMA node that includes CPUs from different hardware NUMA nodes. For example, if your hardware has eight CPUs (0..7) and you have two hardware NUMA nodes (0-3 and 4-7), you can create soft-NUMA nodes by combining CPU(0,1) and CPU(2,3). You cannot create a soft-NUMA node using CPU(1,5), but you can use CPU affinity to affinitize an instance of SQL Server to CPUs from different NUMA nodes. So in the previous example, if SQL Server uses CPUs 0-3, you will have one I/O thread and one lazy writer thread. If, in the previous example, SQL Server uses CPUs 1, 2, 5, and 6, you will access two NUMA nodes and have two I/O threads and two lazy writer threads.

HOW TO SET NUMA ON SQL SERVER

Consider the following example. A computer with eight CPUs does not have hardware NUMA. Three soft-NUMA nodes are configured. Database Engine instance A is configured to use CPUs 1 through 4. A second instance of the Database Engine (instance B) is installed and configured to use CPUs 5 through 8. The example can be visually represented as:

CPUs         1  2  3  4  5  6  7  8

Soft-NUMA    <--N0--><--N1--><------N2------>

SQL Server   <--instance A--><--instance B-->

Instance A, which experiences significant I/O, now has two I/O threads and two lazy writer threads, while instance B, which performs processor-intensive operations, has only one I/O thread and one lazy writer thread. Differing amounts of memory can be assigned to the instances, but unlike hardware NUMA, they both receive memory from the same operating system memory block and there is no memory-to-processor affinity.

Note:

The Soft-NUMA registry keys are not copied when you upgrade an instance of SQL Server.

Set the CPU affinity mask

  1. Run the following statement on instance A to configure it to use CPUs 1, 2, 3, and 4 by setting the CPU affinity mask:

    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU=1 TO 4
  2. Run the following statement on instance B to configure it to use CPUs 5, 6, 7, and 8 by setting the CPU affinity mask:

    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU=5 TO 8

Map soft-NUMA nodes to CPUs

  • Using the Registry Editor program (regedit.exe), add the following registry keys to map soft-NUMA node 0 to CPUs 1 and 2, soft-NUMA node 1 to CPUs 3 and 4, and soft-NUMA node 2 to CPUs 5, 6, 7, and 8.
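
How the CPUMask values below are derived (assuming the lowest mask bit corresponds to the first CPU): each mask is the sum of 2^(bit position) for the CPUs in the node. CPUs 1 and 2 occupy bits 0 and 1 (0x01 + 0x02 = 0x03), CPUs 3 and 4 occupy bits 2 and 3 (0x04 + 0x08 = 0x0C), and CPUs 5 through 8 occupy bits 4 through 7 (0x10 + 0x20 + 0x40 + 0x80 = 0xF0). Note that soft-NUMA registry changes take effect only after the SQL Server instance is restarted.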

    SQL Server 2005

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration\Node0
            Type: DWORD    Value name: CPUMask    Value data: 0x03

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration\Node1
            Type: DWORD    Value name: CPUMask    Value data: 0x0c

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration\Node2
            Type: DWORD    Value name: CPUMask    Value data: 0xf0

    SQL Server 2008

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node0
            Type: DWORD    Value name: CPUMask    Value data: 0x03

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node1
            Type: DWORD    Value name: CPUMask    Value data: 0x0c

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node2
            Type: DWORD    Value name: CPUMask    Value data: 0xf0

    SQL Server 2008 R2

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node0
            Type: DWORD    Value name: CPUMask    Value data: 0x03
            Type: DWORD    Value name: Group      Value data: 0

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node1
            Type: DWORD    Value name: CPUMask    Value data: 0x0c
            Type: DWORD    Value name: Group      Value data: 0

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\NodeConfiguration\Node2
            Type: DWORD    Value name: CPUMask    Value data: 0xf0
            Type: DWORD    Value name: Group      Value data: 0

27 January 2011

Types of Restore - 2008R2

Thanks to :- http://msdn.microsoft.com/en-us/library/ms191253.aspx

Restore and Recovery Overview (SQL Server)

SQL Server 2008 R2

SQL Server supports restoring data on the following levels:

  • The database (a complete database restore)

    The whole database is restored and recovered, and the database is offline for the duration of the restore and recovery operations.

  • The data file (a file restore)

    A data file or a set of files is restored and recovered. During a file restore, the filegroups that contain the files are automatically offline for the duration of the restore. Any attempt to access an offline filegroup causes an error.

    Note:

    You cannot back up or restore individual tables.

  • The data page (a page restore)

    Under the full recovery model or bulk-logged recovery model, you can restore individual data pages. Page restores can be performed on any database, regardless of the number of filegroups.

Note:

In SQL Server 2008, you can restore a database backup that was created by using SQL Server 2000, SQL Server 2005, or SQL Server 2008. However, backups of master, model and msdb that were created by using SQL Server 2000 or SQL Server 2005 cannot be restored by SQL Server 2008. Also, SQL Server 2008 backups cannot be restored by any earlier version of SQL Server.

SQL Server backup and restore work across all supported operating systems, whether they are 64-bit or 32-bit systems. For information about the supported operating systems, see Hardware and Software Requirements for Installing SQL Server 2008 R2.

Restore Scenarios

A restore scenario in SQL Server is the process of restoring data from one or more backups and then recovering the database. The supported restore scenarios depend on the recovery model of the database and the edition of SQL Server.

The following table introduces the possible restore scenarios that are supported for different recovery models.

Complete database restore

  • Under the simple recovery model: This is the basic restore strategy. A complete database restore might involve simply restoring and recovering a full database backup. Alternatively, a complete database restore might involve restoring a full database backup followed by restoring and recovering a differential backup. For more information, see Performing a Complete Database Restore (Simple Recovery Model).

  • Under the full/bulk-logged recovery models: This is the basic restore strategy. A complete database restore involves restoring a full database backup and, optionally, a differential backup (if any), followed by restoring all subsequent log backups (in sequence). The complete database restore is finished by restoring and recovering the last log backup (RESTORE WITH RECOVERY). For more information, see Performing a Complete Database Restore (Full Recovery Model).

File restore *

  • Under the simple recovery model: Restore one or more damaged read-only files, without restoring the entire database. File restore is available only if the database has at least one read-only filegroup.

  • Under the full/bulk-logged recovery models: Restore one or more files, without restoring the entire database. File restore can be performed while the database is offline or, for some editions of SQL Server 2005 and later versions, while the database remains online. During a file restore, the filegroups that contain the files that are being restored are always offline.

Page restore

  • Under the simple recovery model: Not applicable.

  • Under the full/bulk-logged recovery models: Restores one or more damaged pages. Page restore can be performed while the database is offline or, for some editions of SQL Server 2005 and later versions, while the database remains online. During a page restore, the pages that are being restored are always offline. An unbroken chain of log backups must be available, up to the current log file, and they must all be applied to bring the page up to date with the current log file. For more information, see Performing Page Restores.

Piecemeal restore *

  • Under the simple recovery model: Restore and recover the database in stages at the filegroup level, starting with the primary and all read/write secondary filegroups.

  • Under the full/bulk-logged recovery models: Restore and recover the database in stages at the filegroup level, starting with the primary filegroup.

* Online restore is supported only in SQL Server 2005 Enterprise Edition and later versions.

Regardless of how data is restored, before a database can be recovered, the SQL Server Database Engine guarantees that the whole database is logically consistent. For example, if you restore a file, you cannot recover it and bring it online until it has been rolled far enough forward to be consistent with the database.
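
To make the restore sequence concrete, here is a minimal Transact-SQL sketch of a complete database restore under the full recovery model (the database name and backup paths are hypothetical):

-- restore the full backup, leaving the database in the restoring state
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_full.bak' WITH NORECOVERY;
-- apply each subsequent log backup in sequence
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log1.trn' WITH NORECOVERY;
-- recover while applying the last log backup to bring the database online
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log2.trn' WITH RECOVERY;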

Advantages of a File or Page Restore

Restoring and recovering files or pages, instead of the whole database, provides the following advantages:

  • Restoring less data reduces the time required to copy and recover it.

  • On SQL Server 2005 Enterprise Edition and later versions, restoring files or pages might allow other data in the database to remain online during the restore operation.

Recovery Models and Supported Restore Operations

The restore operations that are available for a database depend on its recovery model. The following table summarizes whether and to what extent each of the recovery models supports a given restore scenario.

Data recovery

  • Full recovery model: Complete recovery (if the log is available).
  • Bulk-logged recovery model: Some data-loss exposure.
  • Simple recovery model: Any data since the last full or differential backup is lost.

Point-in-time restore

  • Full recovery model: Any time covered by the log backups.
  • Bulk-logged recovery model: Disallowed if the log backup contains any bulk-logged changes.
  • Simple recovery model: Not supported.

File restore *

  • Full recovery model: Full support.
  • Bulk-logged recovery model: Sometimes.**
  • Simple recovery model: Available only for read-only secondary files.

Page restore *

  • Full recovery model: Full support.
  • Bulk-logged recovery model: Sometimes.**
  • Simple recovery model: None.

Piecemeal (filegroup-level) restore *

  • Full recovery model: Full support.
  • Bulk-logged recovery model: Sometimes.**
  • Simple recovery model: Available only for read-only secondary files.

* Available only in SQL Server 2005 Enterprise Edition and later versions.

** For the required conditions, see Restore Restrictions Under the Simple Recovery Model.

25 January 2011

Reporting Services Catalog Error.

Thanks to :- http://www.sqlcoffee.com/Troubleshooting028.htm


Applies to: Microsoft SQL Server 2008 Release Candidate 0 (RC0).

Error Message.

SQL Server 2008 RC 0 Setup shows the following error messages after running a set of rules to determine if the installation process will be blocked:

  • Rule "Reporting Services Catalog Database File Existence" failed.
  • Rule "Reporting Services Catalog Temporary Database File Existence" failed.

Cause.

The Setup program found the Reporting Services Catalog database file and the Reporting Services Catalog temporary database file in the following path:

C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA



Solution.

Remove the following files from the "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA" directory:

  • ReportServer.mdf
  • ReportServer_log.LDF
  • ReportServerTempDB.mdf
  • ReportServerTempDB_log.LDF

After deleting the files, click on the "Re-run" button of the Installation Rules setup page, and click "Next" after passing all the installation rules.


Log Shipping Jobs

Log shipping involves four jobs, which are handled by dedicated SQL Server Agent jobs. These jobs include the backup job, copy job, restore job, and alert job.

The user controls how frequently log backups are taken, how frequently they are copied to each secondary server, and how frequently they are applied to the secondary database. To reduce the work required to bring a secondary server online, for example after the production system fails, you can copy and restore each transaction log backup soon after it is created. Alternatively, perhaps on a second secondary server, you can delay applying transaction log backups to the secondary database. This delay provides an interval during which you can notice and respond to a failure on the primary, such as accidental deletion of critical data.

Backup Job

A backup job is created on the primary server instance for each primary database. It performs the backup operation, logs history to the local server and the monitor server, and deletes old backup files and history information. By default, this job will run every 15 minutes, but the interval is customizable.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Backup" is created on the primary server instance.

SQL Server 2008 Enterprise and later versions support backup compression. When creating a log shipping configuration, you can control the backup compression behavior of log backups. For more information, see Backup Compression (SQL Server).

Copy Job

A copy job is created on each secondary server instance in a log shipping configuration. This job copies the backup files from the primary server to a configurable destination on the secondary server and logs history on the secondary server and the monitor server. The copy job schedule, which is customizable, should approximate the backup schedule.

When log shipping is enabled, the SQL Server Agent job category "Log Shipping Copy" is created on the secondary server instance.

Restore Job

A restore job is created on the secondary server instance for each log shipping configuration. This job restores the copied backup files to the secondary databases. It logs history on the local server and the monitor server, and deletes old files and old history information. The SQL Server job category "Log Shipping Restore" is created on the secondary server instance when log shipping is enabled.

On a given secondary server instance, the restore job can be scheduled as frequently as the copy job, or the restore job can be delayed. Scheduling these jobs with the same frequency keeps the secondary database as closely aligned with the primary database as possible to create a warm standby database.

In contrast, delaying restore jobs, perhaps by several hours, can be useful in the event of a serious user error, such as a dropped table or inappropriately deleted table row. If the time of the error is known, you can move that secondary database forward to a time soon before the error. Then you can export the lost data and import it back into the primary database.

Alert Job

If a monitor server is used, an alert job is created on the monitor server instance. This alert job is shared by the primary and secondary databases of all log shipping configurations using this monitor server instance. Any change to the alert job (such as rescheduling, disabling, or enabling the job) affects all databases using that monitor server. This job raises alerts (for which you must specify alert numbers) for primary and secondary databases when backup and restore operations have not completed successfully within specified thresholds. You must configure these alerts to have an operator receive notification of the log shipping failure. The SQL Server Agent job category "Log Shipping Alert" is created on the monitor server instance when log shipping is enabled.

If a monitor server is not used, alert jobs are created locally on the primary server instance and each secondary server instance. The alert job on the primary server instance raises errors when backup operations have not completed successfully within a specified threshold. The alert job on the secondary server instance raises errors when local copy and restore operations have not completed successfully within a specified threshold.

What is a deadlock and what is a live lock?

What is a deadlock and what is a live lock? How will you go about resolving deadlocks?

A deadlock occurs when two user processes/transactions hold locks on two separate objects and each process is trying to acquire a lock on the object held by the other. In such a scenario, each process is waiting for the other process to release its lock. A livelock occurs when a request for an exclusive lock is denied again and again because a series of overlapping shared locks keeps interfering; the processes keep changing status to adapt to each other, and no one makes progress.

One way to handle deadlocks is with TRY...CATCH blocks. If the code inside the TRY block is chosen as a deadlock victim, control passes to the CATCH block, where the transaction can be rolled back and the work retried.
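
As a minimal sketch of this approach (the table and values are hypothetical), the CATCH block can test for error 1205, the deadlock-victim error, roll back, and retry:

DECLARE @retries INT;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- the work that may be chosen as a deadlock victim
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
        COMMIT TRANSACTION;
        SET @retries = 0;                    -- success; leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1;     -- deadlock victim; try again
        ELSE
        BEGIN
            SET @retries = 0;                -- give up and surface the error
            RAISERROR('Transaction failed and will not be retried.', 16, 1);
        END
    END CATCH
END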

VLF IN LDF

SQL SERVER – Detect Virtual Log Files (VLF) in LDF

In one of my recent training engagements, I was asked whether it is true that there are multiple small log files inside the large log file (LDF). I found this question very interesting, as the answer is yes. Multiple small virtual log files, commonly known as VLFs, together make up an LDF file. VLFs are written sequentially, so the LDF file is written sequentially as well. This leads to another point: in most cases one does not need more than one log file.

In short, you can use the following DBCC command to find out how many virtual log files, or VLFs, are present in your log file.

DBCC LOGINFO

The result of the above query looks similar to the following image. In the Status column, a value of 2 means the VLF is active, and a value of 0 means it is inactive.
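
If you want the VLF count as a number rather than reading the grid, here is a small sketch; it assumes the SQL Server 2005/2008 DBCC LOGINFO column layout:

CREATE TABLE #loginfo
(
    FileId INT, FileSize BIGINT, StartOffset BIGINT,
    FSeqNo INT, Status INT, Parity INT, CreateLSN NUMERIC(25, 0)
);
INSERT INTO #loginfo EXEC('DBCC LOGINFO');
SELECT COUNT(*) AS total_vlfs,
       SUM(CASE WHEN Status = 2 THEN 1 ELSE 0 END) AS active_vlfs
FROM #loginfo;
DROP TABLE #loginfo;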

24 January 2011

DONT KILL SPID

Killing a SQL Server thread? Don’t!

Sometimes, when you kill a session (i.e. a spid) in a SQL Server instance, the spid just refuses to go away, and not because it's doing a rollback. Perhaps it's stuck on a dependency on something external to SQL Server, or it's simply stuck for some indecipherable reason. The spid may hang around for as long as the instance is online and will only go away when the instance is restarted.

On more than a few occasions, I have heard people wondering if they can just kill the thread of the SQL Server process from the OS, that is, the OS thread corresponding to the SQL Server spid.

That is a very bad idea! Killing such an OS thread will likely crash the SQL Server instance.

You can easily demonstrate the harm of doing this with Process Explorer. You’ll need to run Process Explorer locally on the server of your test SQL Server instance, and needless to say that this should not be done on a production instance.

First, connect to the test SQL Server instance, and open a New Query window in SQL Server Management Studio. Note the spid for the connection, and run the following query (replacing <spid> with the spid for the connection) to find the kpid value in the result set:

select * from master..sysprocesses where spid = <spid>

This kpid is the id the OS assigns to the thread that is currently bound to the worker executing a task or idling on the spid.

Then, fire up Process Explorer and locate the OS process for the SQL Server instance. If you have multiple SQL Server instances running on the server, you can mouse over each sqlservr.exe to identify the right instance.

Double-click on the process, and select the Threads tab. This will give you a list of all the OS threads for the SQL Server instance. Highlight the Thread ID corresponding to the kpid found above, and click on the Kill button to terminate the thread.

After you have killed the thread from Process Explorer, you may still be able to use the SQL Server instance for a while. You’ll for sure see stack dump error messages in the errorlog, and you may see error messages such as the following:

2010-02-04 14:45:31.03 Server Potential image corruption/hotpatch detected. This may be a sign of a hardware problem or caused by presence of CLR/jitted images on the stack. Check SQLDUMPER_ERRORLOG.log for details.

The session related to the kpid value would be dead. But the SQL Server instance itself may or may not continue to function. If you repeat the above steps to kill the threads for the other user sessions, eventually this will crash the SQL Server instance, i.e. the SQL Server instance will become unresponsive. After that, the only way to get the instance back online is to restart it. Most likely, you'll have to first kill the corresponding process from the OS because the instance probably won't respond to any of the normal restart methods.

17 January 2011

Top 10 SQL Server 2008 Features for the Database Administrator (DBA)

Author: Mike Weiner

Contributor: Burzin Patel,Sanjay Mishra

Reviewers: Lubor Kollar, Kevin Cox, Bill Emmert, Greg Husemeier, Paul Burpo, Joseph Sack, Denny Lee, Lindsey Allen, Mark Souza

Microsoft SQL Server 2008 provides a number of enhancements and new functionality, building on previous versions. Administration, database maintenance, manageability, availability, security, and performance, among others, all fall into the roles and responsibilities of the database administrator. This article provides the top ten new features of SQL Server 2008 (referenced in alphabetical order) that can help DBAs fulfill their responsibilities. In addition to a brief description of each feature, we include how this feature can help and some important use considerations.

1

Activity Monitor

When troubleshooting a performance issue or monitoring a server in real time, it is common for the DBA to execute a number of scripts or check a number of sources to collect general information about what processes are executing and where the problem may be. SQL Server 2008 Activity Monitor consolidates this information by detailing running and recently executed processes, graphically. The display gives the DBA a high-level view and the ability to drill down on processes and view wait statistics to help understand and resolve problems.

To open up Activity Monitor, just right-click on the registered server name in Object Explorer and then click Activity Monitor, or utilize the standard toolbar icon in SQL Server Management Studio. Activity Monitor provides the DBA with an overview section producing output similar to Windows Task Manager and drilldown components to look at specific processes, resource waits, data file I/Os, and recent expensive queries, as noted in Figure 1.

Figure 1: Display of SQL Server 2008 Activity Monitor view from Management Studio


NOTE: There is a refresh interval setting accessed by right-clicking on the Activity Monitor. Setting this value to a low threshold, under 10 seconds, in a high-volume production system can impact overall system performance.

DBAs can also use Activity Monitor to perform the following tasks:

  • Pause and resume Activity Monitor with a simple right-click. This can help the DBA to “save” a particular point-in-time for further investigation without it being refreshed or overwritten. However, be careful, because if you manually refresh, expand, or collapse a section, the data will be refreshed.
  • Right-click a line item to display the full query text or graphical execution plan via Recent Expensive Queries.
  • Execute a Profiler trace or kill a process from the Processes view. Profiler events include RPC:Completed, SQL:BatchStarting, and SQL:BatchCompleted events, and Audit Login and Audit Logout.

Activity Monitor also provides the ability to monitor activity on any SQL Server 2005 instance, local or remote, registered in SQL Server Management Studio.

2

[SQL Server] Audit

Having the ability to monitor and log events, such as who is accessing objects, what changes occurred, and what time changes occurred, can help the DBA to meet compliance standards for regulatory or organizational security requirements. Gaining insight into the events occurring within their environment can also help the DBA in creating a risk mitigation plan to keep the environment secure.

Within SQL Server 2008 (Enterprise and Developer editions only), SQL Server Audit provides automation that allows the DBA and others to enable, store, and view audits on various server and database components. The feature allows for auditing at a granularity of the server and/or database level.

There are server-level audit action groups, such as:

  • FAILED_LOGIN_GROUP, which tracks failed logins.
  • BACKUP_RESTORE_GROUP, which shows when a database was backed up or restored.
  • DATABASE_CHANGE_GROUP, which audits when a database is created, altered, or dropped.

Database-level audit action groups include:

  • DATABASE_OBJECT_ACCESS_GROUP, which is raised whenever a CREATE, ALTER, or DROP statement is executed on database objects.
  • DATABASE_OBJECT_PERMISSION_CHANGE_GROUP, which is raised when GRANT, REVOKE, or DENY is utilized for database objects.

There are also audit actions, such as SELECT, DELETE, or EXECUTE. For more information, including a full list of the audit groups and actions, see SQL Server Audit Action Groups and Actions.

Audit results can be sent to a file or event log (Windows Security or System) for viewing. Audit information is created utilizing Extended Events, another new SQL Server 2008 feature.

By using SQL Server 2008 audits, the DBA can now answer questions that were previously very difficult to retroactively determine, such as “Who dropped this index?”, “When was the stored procedure modified?”, “What changed which might not be allowing this user to access this table?”, or even “Who ran SELECT or UPDATE statements against the [dbo.Payroll] table?”

For more information about using SQL Server Audit and some examples of implementation, see the SQL Server 2008 Compliance Guide.

3

Backup Compression

This feature has long been a popular request of DBAs for SQL Server. The wait is finally over, and just in time! Many factors, including increased data retention periods and the need to physically store more data have contributed to the recent explosion in database size. Backing up a large database can require a significant time window to be allotted to backup operations and a large amount of disk space allocated for use by the backup file(s).

With SQL Server 2008 backup compression, the backup file is compressed as it is written out, thereby requiring less storage, less disk I/O, and less time. In lab tests conducted with real customer data, we observed in many cases a reduction in the backup file size between 70% and 85%. Testing also revealed around a 45% reduction in the backup and restore time. It is important to note that the additional processing results in higher processor utilization. To help segregate the CPU-intensive backup and minimize its effect on other processes, one might consider utilizing another feature mentioned in this paper, Resource Governor.

The compression is achieved by specifying the WITH COMPRESSION clause in the BACKUP command (for more information, see SQL Server Books Online) or by selecting it in the Options page in the Back Up Database dialog box. To prevent having to modify all existing backup scripts, there is also a global setting to enable compressing all backups taken on a server instance by default. (This setting is accessed by using the Database Settings page of the Server Properties dialog box or by running sp_configure with backup compression default set to 1.) While the compression option on the backup command needs to be explicitly specified, the restore command automatically detects that a backup is compressed and decompresses it during the restore operation.
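
For example, a minimal sketch (the database name and backup path are hypothetical):

-- compress a single backup explicitly
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks.bak'
WITH COMPRESSION;

-- or make compression the instance-wide default
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;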

Backup compression is a very useful feature that can help the DBA save space and time. For more information about tuning backup compression, see the technical note on Tuning the Performance of Backup Compression in SQL Server 2008. NOTE: Creating compressed backups is only supported in SQL Server 2008 Enterprise and Developer editions; however, every SQL Server 2008 edition allows for a compressed backup to be restored.

4

Central Management Servers

DBAs are frequently responsible for managing not one but many SQL Server instances in their environment. Having the ability to centralize the management and administration of a number of SQL Server instances from a single source can allow the DBA to save significant time and effort. The Central Management Servers implementation, which is accessed via the Registered Servers component in SQL Server Management Studio, allows the DBA to perform a number of administrative tasks on SQL Servers within the environment, from a single management console.

Central Management Servers allow the DBA to register a group of servers and apply functionality to the servers, as a group, such as:

  • Multiserver query execution: A script can now be executed from one source, across multiple SQL Servers, and be returned to that source, without the need to distinctly log into every server. This can be extremely helpful in cases where data from tables on two or more SQL Servers needs to be viewed or compared without the execution of a distributed query. Also, as long as the syntax is supported in earlier server versions, a query executed from the Query Editor in SQL Server 2008 can run against SQL Server 2005 and SQL Server 2000 instances as well. For more information, see the SQL Server Manageability Team Blog, specifically Multiple Server Query Execution in SQL Server 2008 .
  • Import and evaluate policies across servers: As part of Policy-Based Management (another new SQL Server 2008 feature discussed in this article), SQL Server 2008 provides the ability to import policy files into particular Central Management Server Groups and allows policies to be evaluated across all of the servers registered in the group
  • Control Services and bring up SQL Server Configuration Manager: Central Management Servers help provide a central place where DBAs can view service status and even change status for the services, assuming they have the appropriate permissions
  • Import and export the registered servers: Servers within Central Management Servers can be exported and imported for use between DBAs or different SQL Server Management Studio instance installations. This is an alternative to DBAs importing or exporting into their own local groupings within SQL Server Management Studio.

Be aware that permissions are enforced via Windows authentication, so a user might have different rights and permissions depending on the server registered within the Central Management Server group. For more information, see Administering Multiple Servers Using Central Management Servers and a Kimberly Tripp blog: SQL Server 2008 Central Management Servers-have you seen these?

5

Data Collector and Management Data Warehouse

Performance tuning and troubleshooting are time-consuming tasks that can require in-depth SQL Server skills and an understanding of database internals. Windows System Monitor (Perfmon), SQL Server Profiler, and dynamic management views (DMVs) helped with some of this, but they were often intrusive, laborious to use, or the dispersed data collection methods were cumbersome to easily summarize and interpret.

To provide actionable performance insight, SQL Server 2008 delivers a fully extensible performance data collection and warehouse tool, also known as the data collector. The tool includes several out-of-the-box data collection agents, a centralized data repository for storing performance data called the management data warehouse, and several precanned reports to present the captured data. The data collector is a scalable tool that can collect and assimilate data from multiple sources, such as dynamic management views, Perfmon, and Transact-SQL queries, using a fully customizable data collection frequency. The data collector can be extended to collect data for any measurable attribute of an application.

Another helpful feature of the management data warehouse is that it can be installed on any SQL Server and then collect data from one or more SQL Server instances within the environment. This can help minimize the performance impact on production systems and improve the scalability in terms of monitoring and collecting data from a number of servers. In lab testing we observed around a 4% reduction in throughput when running the agents and the management data warehouse on a server running at capacity (via an OLTP workload). The impact can vary based on the collection interval (as the test was over an extended workload with 15-minute pulls into the warehouse), and it can be exacerbated during intervals of data collection. Finally, some capacity planning should be done, because the DCExec.exe process will take up some memory and processor resources, and writes to the management data warehouse will increase the I/O workload and the space allocation needed where the data and log files are located.

The diagram (Figure 2) below depicts a typical data collector report.

Figure 2: Display of SQL Server 2008 Data Collector Report


This report shows SQL Server processing over the period of time data was collected. Events such as waits, CPU, I/O, memory usage, and expensive query statistics are collected and displayed. A DBA can also drill down into the reports to focus on a particular query or operation to further investigate, detect, and resolve performance problems. This data collection, storage, and reporting can allow the DBA to establish proactive monitoring of the SQL Server(s) in the environment and go back over time to understand and assess changes to performance over the time period monitored. The data collector and management data warehouse feature is supported in all editions (except SQL Server Express) of SQL Server 2008.

6

Data Compression

The ability to easily manage a database can greatly enhance the opportunity for DBAs to accomplish their regular task lists. As table, index, and file sizes grow and very large databases (VLDBs) become commonplace, the management of data and unwieldy file sizes has become a growing pain point. Also, with more data being queried, the need for large amounts of memory or the necessity to do physical I/O can place a larger burden on DBAs and their organizations. Many times this results in DBAs and organizations securing servers with more memory and/or I/O bandwidth or having to pay a performance penalty.

Data compression, introduced in SQL Server 2008, provides a resolution to help address these problems. Using this feature, a DBA can selectively compress any table, table partition, or index, resulting in a smaller on-disk footprint, smaller memory working-set size, and reduced I/O. The act of compression and decompression will impact CPU; however, this impact is in many cases offset by the gains in I/O savings. Configurations that are bottlenecked on I/O can also see an increase in performance due to compression.

In some lab tests, enabling data compression resulted in a 50-80% saving in disk space. The space savings did vary significantly with minimal savings on data that did not contain many repeating values or where the values required all the bytes allocated by the specified data type. There were also workloads that did not show any gains in performance. However, on data that contained a lot of numeric data and many repeating values, we saw significant space savings and observed performance increases from a few percentage points up to 40-60% on some sample query workloads.

SQL Server 2008 supports two types of compression: row compression, which compresses the individual columns of a table, and page compression, which compresses data pages using row, prefix, and dictionary compression. The amount of compression achieved is highly dependent on the data types and data contained in the database. In general we have observed that using row compression results in lower overhead on the application throughput but saves less space. Page compression, on the other hand, has a higher impact on application throughput and processor utilization, but it results in much larger space savings. Page compression is a superset of row compression, implying that an object or partition of an object that is compressed using page compression also has row compression applied to it. Also, SQL Server 2008 does support the vardecimal storage format of SQL Server 2005 SP2. However, because this storage format is a subset of row compression, it is a deprecated feature and will be removed from future product versions.

Both row and page compression can be applied to a table or index in an online mode that is without any interruption to the application availability. However, a single partition of a partitioned table cannot be compressed or uncompressed online. In our testing we found that using a hybrid approach, where only the largest few tables were compressed, resulted in the best performance in terms of saving significant disk space while having a minimal negative impact on performance. Because there are disk space requirements, similar to what would be needed to create or rebuild an index, care should be taken in implementing compression as well. We also found that compressing the smallest objects first, from the list of objects you desire to compress, minimized the need for additional disk space during the compression process.

Data compression can be implemented via Transact-SQL or the Data Compression Wizard. To determine how compressing an object will affect its size, you can use the sp_estimate_data_compression_savings system stored procedure or the Data Compression Wizard to calculate the estimated space savings. Database compression is only supported in SQL Server 2008 Enterprise and Developer editions. It is implemented entirely within the database and does not require any application modification.
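
As a minimal sketch (dbo.Sales is a hypothetical table), estimate the savings first and then compress:

EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'Sales',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = N'PAGE';

ALTER TABLE dbo.Sales REBUILD WITH (DATA_COMPRESSION = PAGE);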

For more information about using compression, see Creating Compressed Tables and Indexes.

7

Policy-Based Management

In a number of business scenarios, there is a need to maintain certain configurations or enforce policies either within a specific SQL Server, or many times across a group of SQL Servers. A DBA or organization may require a particular naming convention to be implemented on all user tables or stored procedures that are created, or a required configuration change to be defined across a number of servers in the same manner.

Policy-Based Management (PBM) provides DBAs with a wide variety of options in managing their environment. Policies can be created and checked for compliance. If a target (such as a SQL Server database engine, a database, a table, or an index) is out of compliance, the administrator can automatically reconfigure it to be in compliance. There are also a number of evaluation modes (of which many are automated) that can help the DBA check for policy compliance, log and notify when a policy violation occurs, and even roll back the change to keep in compliance with the policy. For more information about evaluation modes and how they are mapped to facets (a PBM term also discussed in the blog), see the SQL Server Policy-Based Management blog.

The policies can be exported and imported as .xml files for evaluation and implementation across multiple server instances. Also, in SQL Server Management Studio and the Registered Servers view, policies can be evaluated across multiple servers if they are registered under a local server group or a Central Management Server group.

Not all of the functionality of Policy-Based Management can be implemented on earlier versions of SQL Server. However, the policy reporting feature can be utilized on SQL Server 2005 and SQL Server 2000. For more information about administering servers by using Policy-Based Management, see Administering Servers by Using Policy-Based Management in SQL Server Books Online. For more information about the technology itself, including examples, see the SQL Server 2008 Compliance Guide.

8

Predictable Performance and Concurrency

A significant problem many DBAs face is trying to support SQL Servers with ever-changing workloads, and achieving some level of predictable performance (or minimizing variance in plans and performance). Unexpected query performance, plan changes, and/or general performance issues can come about due to a number of factors, including increased application load running against SQL Server or version upgrades of the database itself. Getting predictable performance from queries or operations run against SQL Server can greatly enhance the DBA's ability to meet and maintain availability, performance, and/or business continuity goals (OLAs or SLAs).

SQL Server 2008 provides a few feature changes that can help provide more predictable performance. In SQL Server 2008, there exist some enhancements to the SQL Server 2005 plan guides (or plan freezing) and a new option to control lock escalation at a table level. Both of these enhancements can provide a more predictable and structured interaction between the application and the database.

First, plan guides:

SQL Server 2005 enabled greater query performance stability and predictability by providing a new feature called plan guides to enable specifying hints for queries that could not be modified directly in the application. For more information, see the Forcing Query Plans white paper. While a very powerful feature, the USE PLAN query hint only supported SELECT DML operations, and plan guides were often cumbersome to use due to their sensitivity to statement formatting.

SQL Server 2008 builds on the plan guides mechanism in two ways: It expands the support for the USE PLAN query hint to cover all DML statements (INSERT, UPDATE, DELETE, MERGE), and it introduces a new plan freezing feature that can be used to directly create a plan guide for (freeze) any query plan that exists in the SQL Server plan cache, as in the following example.

EXEC sp_create_plan_guide_from_handle
    @name = N'MyQueryPlan',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;
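
A hedged sketch of how @plan_handle and @offset might be obtained from the plan cache before the call above (the LIKE filter is a hypothetical placeholder for the statement you want to freeze):

DECLARE @plan_handle VARBINARY(64), @offset INT;

SELECT TOP (1) @plan_handle = qs.plan_handle,
       @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%<statement text>%';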

A plan guide created by either means has a database scope and is stored in the sys.plan_guides table. Plan guides are only used to influence the query plan selection process of the optimizer and do not eliminate the need for the query to be compiled. A new function, sys.fn_validate_plan_guide, has also been introduced to validate existing SQL Server 2005 plan guides and ensure their compatibility with SQL Server 2008. Plan freezing is available in the SQL Server 2008 Standard, Enterprise, and Developer editions.

Next, lock escalation:

Lock escalation has often caused blocking and sometimes even deadlocking problems, which the DBA is forced to troubleshoot and resolve. Previous versions of SQL Server permitted controlling lock escalation (trace flags 1211 and 1224), but this was only possible at an instance-level granularity. While this helped some applications work around the problem, it caused severe issues for others. Another problem with the SQL Server 2005 lock escalation algorithm was that locks on partitioned tables were directly escalated to the table level, rather than the partition level.

SQL Server 2008 offers a solution for both of these problems. A new option has been introduced to control lock escalation at a table level. By using an ALTER TABLE command with the LOCK_ESCALATION option, locks can be specified to not escalate, or to escalate to the partition level for partitioned tables. Both these enhancements help improve scalability and performance without having negative side-effects on other objects in the instance. Lock escalation is specified at the database-object level and does not require any application change. It is supported in all editions of SQL Server 2008.
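
For example (dbo.Orders is a hypothetical partitioned table):

-- escalate to the partition level instead of the table level
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);

-- or suppress lock escalation on the table altogether
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);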

9

Resource Governor

Maintaining a consistent level of service by preventing runaway queries and guaranteeing resources for mission-critical workloads has been a challenge. In the past there was no way of guaranteeing a certain amount of resources to a set of queries and prioritizing the access. All queries had equal access to all the available resources.

SQL Server 2008 introduces a new feature called Resource Governor, which helps address this issue by enabling users to differentiate workloads and allocate resources as they are requested. Resource Governor limits can easily be reconfigured in real time with minimal impact on the workloads that are executing. The allocation of the workload to a resource pool is configurable at the connection level, and the process is completely transparent to the application.

The diagram below depicts the resource allocation process. In this scenario three workload pools (Admin Workload, OLTP Workload, and Report Workload) are configured, and the OLTP Workload pool is assigned a high priority. In parallel, two resource pools (Admin Pool and Application Pool) are configured with specific memory and processor (CPU) limits as shown. As a final step the Admin Workload is assigned to the Admin Pool and the OLTP and Report workloads are assigned to the Application Pool.

[Figure: Resource Governor workload groups and resource pools]
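
A hedged Transact-SQL sketch of the scenario in the diagram (the pool limits, login names, and classifier logic are illustrative assumptions):

CREATE RESOURCE POOL AdminPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
CREATE RESOURCE POOL ApplicationPool
    WITH (MAX_CPU_PERCENT = 80, MAX_MEMORY_PERCENT = 80);

CREATE WORKLOAD GROUP AdminWorkload USING AdminPool;
CREATE WORKLOAD GROUP OLTPWorkload WITH (IMPORTANCE = HIGH) USING ApplicationPool;
CREATE WORKLOAD GROUP ReportWorkload USING ApplicationPool;
GO

-- classifier function (must live in master) routing sessions by login name
USE master;
GO
CREATE FUNCTION dbo.fnWorkloadClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME;
    IF SUSER_SNAME() = N'AdminLogin'        SET @grp = N'AdminWorkload';
    ELSE IF SUSER_SNAME() = N'OltpAppLogin' SET @grp = N'OLTPWorkload';
    ELSE                                    SET @grp = N'ReportWorkload';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnWorkloadClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;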

Below are some other points you need to consider when using Resource Governor.

  • Resource Governor relies on login credentials, host name, or application name as a ‘resource pool identifier’, so using a single login for an application, depending on the number of clients per server, might make creating pools more difficult.
  • Database-level object grouping, in which the resource governing is done based on the database objects being referenced, is not supported.
  • Resource Governor only allows resource management within a single SQL Server instance. For managing multiple SQL Server instances or processes within a server from a single source, Windows System Resource Manager should be considered.
  • Only processor and memory resources can be configured. I/O resources cannot be controlled.
  • Dynamically switching workloads between resource pools once a connection is made is not possible.
  • Resource Governor is only supported in SQL Server 2008 Enterprise and Developer editions and can only be used for the SQL Server database engine; SQL Server Analysis Services (SSAS), SQL Server Integration Services (SSIS), and SQL Server Reporting Services (SSRS) cannot be controlled.

10

Transparent Data Encryption (TDE)

Security is one of the top concerns of many organizations. There are many different layers to securing one of the most important assets of an organization: its data. In most cases, organizations do well at securing their active data via the use of physical security, firewalls, and tightly controlled access policies. However, when the physical medium such as the backup tape or disk on which the data resides is compromised, the above security measures are of no use, because a rogue user can simply restore the database and get full access to the data.

SQL Server 2008 offers a solution to this problem by way of transparent data encryption (TDE). TDE performs real-time I/O encryption and decryption of the data and log files by using a database encryption key (DEK). The DEK is a symmetric key secured by using a certificate stored in the master database of the server, or an asymmetric key protected by an Extensible Key Management (EKM) module.
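
A minimal sketch of enabling TDE (the database name, certificate name, and password are hypothetical):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_128
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE MyDatabase SET ENCRYPTION ON;

Be sure to back up the certificate and its private key; without them, a backup of the encrypted database cannot be restored on another server.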

TDE is designed to protect data ‘at rest’, which means the data stored in the .mdf, .ndf, and .ldf files cannot be viewed using a hex editor or other means. However, data that is not at rest, such as the results of a SELECT statement in SQL Server Management Studio, will continue to be visible to users who have rights to view the table. Also, because TDE is implemented at the database level, the database can leverage indexes and keys for query optimization. TDE should not be confused with column-level encryption, which is a separate feature that allows encryption of data even when it is not at rest.

Encrypting a database is a one-time process that can be initiated via a Transact-SQL command or SQL Server Management Studio, and it is executed as a background thread. You can monitor the encryption or decryption status using the sys.dm_database_encryption_keys dynamic management view. In a lab test we conducted, we were able to encrypt a 100 GB database using the AES_128 encryption algorithm in about an hour. While the overhead of using TDE is largely dictated by the application workload, in some of the testing conducted that overhead was measured to be less than 5%. One potential performance impact to be aware of is this: If any database within the instance has TDE applied, the tempdb system database is also encrypted. Finally, of note when combining features:

  • When backup compression is used to compress an encrypted database, the size of the compressed backup is larger than if the database were not encrypted, because encrypted data does not compress well.
  • Encrypting the database does not affect data compression (row or page).

TDE enables organizations to meet the demands of regulatory compliance and overall concern for data privacy. TDE is only supported in the SQL Server 2008 Enterprise and Developer editions and can be enabled without changing existing applications. For more information, see Database Encryption in SQL Server 2008 Enterprise Edition or the SQL Server 2008 Compliance Guide discussion on Using Transparent Data Encryption.

In conclusion, SQL Server 2008 offers features, enhancements, and functionality to help improve the Database Administrator experience. While a Top 10 list was provided above, there are many more features included within SQL Server 2008 that help improve the experience for DBAs and other users alike. For a Top 10 feature set for other SQL Server focus areas, see the other SQL Server 2008 Top 10 articles on this site. For a full list of features and detailed descriptions, see SQL Server Books Online and the SQL Server 2008 Overview Web site.

Courtesy :- SQLCAT

14 January 2011

Changing Roles Between Primary and Secondary Servers

Performing the Initial Role Change

The first time you want to fail over to the secondary database and make it your new primary database, there is a series of steps you must take. After you have followed these initial steps, you will be able to swap roles between the primary database and the secondary database easily.

  1. Manually fail over from the primary database to a secondary database. Be sure to back up the active transaction log on your primary server with NORECOVERY (a Transact-SQL sketch follows this list). For more information, see Failing Over to a Log Shipping Secondary.

  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.

  3. On your secondary database (the database you want to be the new primary), configure log shipping using SQL Server Management Studio. For more information, see How to: Enable Log Shipping (SQL Server Management Studio). Include the following steps:

    • Use the same share for creating backups that you created for the original primary server.

    • When adding the secondary database, in the Secondary Database Settings dialog box, enter the name of the original primary database in the Secondary database box.

    • In the Secondary Database Settings dialog box, select No, the secondary database is initialized.
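
A Transact-SQL sketch of the tail-log backup in step 1 (the database and share names are hypothetical):

-- leaves the old primary in the restoring state, ready to become a secondary
BACKUP LOG PrimaryDB
TO DISK = N'\\fileserver\logshipping\PrimaryDB_tail.trn'
WITH NORECOVERY;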

Swapping Roles

After you have completed the steps above for the initial role change, you can change roles between the primary database and the secondary database by following the steps in this section. To perform a role change, follow these general steps:

  1. Bring the secondary database online, backing up the transaction log on the primary server with NORECOVERY.

  2. Disable the log shipping backup job on the original primary server, and the copy and restore jobs on the original secondary server.

  3. Enable the log shipping backup job on the secondary server (the new primary server), and the copy and restore jobs on the primary server (the new secondary server).

Important note Important

When you change a secondary database to the primary database, to provide a consistent experience to users and applications, you might have to re-create some or all of the metadata for the database, such as logins and jobs, on the new primary server instance. For more information, see Managing Metadata When Making a Database Available on Another Server Instance.

Failing Over to a Log Shipping Secondary

Failing over to a log shipping secondary is useful if the primary server instance fails or requires maintenance.

Preparing for a Controlled Failover

Typically, the primary and secondary databases are unsynchronized, because the primary database continues to be updated after its latest backup job. Also, in some cases, recent transaction log backups have not been copied to the secondary server instances, or some copied log backups might still not have been applied to the secondary database. We recommend that you begin by synchronizing all of the secondary databases with the primary database, if possible.

For information about log shipping jobs, see Log Shipping Overview.

Failing Over

To fail over to a secondary database:

  1. Copy any uncopied backup files from the backup share to the copy destination folder of each secondary server.

  2. Apply any unapplied transaction log backups in sequence to each secondary database. For more information, see How to: Apply a Transaction Log Backup (Transact-SQL).

  3. If the primary database is accessible, back up the active transaction log and apply the log backup to the secondary databases.

    If the original primary server instance is not damaged, back up the tail of the transaction log of the primary database using WITH NORECOVERY. This leaves the database in the restoring state and therefore unavailable to users. Eventually you will be able to roll this database forward by applying transaction log backups from the replacement primary database.

    For more information, see Working with Transaction Log Backups.

  4. After the secondary servers are synchronized, you can fail over to whichever one you prefer by recovering its secondary database and redirecting clients to that server instance. Recovering puts the database into a consistent state and brings it online; a sketch follows these steps. For more information, see How to: Recover a Database from a Backup Without Restoring Data (Transact-SQL).

    Note

    When you make a secondary database available, you should ensure that its metadata is consistent with the metadata of the original primary database. For more information, see Managing Metadata When Making a Database Available on Another Server Instance.

  5. After you have recovered a secondary database, you can reconfigure it to act as a primary database for other secondary databases. For more information, see Changing Roles Between Primary and Secondary Servers.

    If no other secondary database is available, see How to: Enable Log Shipping (SQL Server Management Studio) or How to: Enable Log Shipping (Transact-SQL).
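A minimal sketch of steps 2 through 4, assuming a database named MyDatabase and placeholder file paths (the real names come from your log shipping configuration):

-- On the secondary server: apply any remaining log backups in sequence
RESTORE LOG MyDatabase
    FROM DISK = N'C:\LogShipping\Copy\MyDatabase_1.trn'
    WITH NORECOVERY;

-- If the primary is still accessible, back up the tail of its log
-- (BACKUP LOG ... WITH NORECOVERY, as shown earlier) and apply that
-- backup here in the same way.

-- Recover the secondary database, bringing it online as the new primary
RESTORE DATABASE MyDatabase WITH RECOVERY;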

Log Shipping Tables and Stored Procedures

Primary Server Tables

  • log_shipping_monitor_alert: Stores alert job ID. This table is only used on the primary server if a remote monitor server has not been configured.

  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs associated with this primary server.

  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs associated with this primary server.

  • log_shipping_monitor_primary: Stores one monitor record for this primary database.

  • log_shipping_primary_databases: Contains configuration information for primary databases on a given server. Stores one row per primary database.

  • log_shipping_primary_secondaries: Maps primary databases to secondary databases.
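These tables live in msdb. As an example, a quick way to check for recent log shipping errors on the primary (column names as documented for SQL Server 2008; verify them on your build):

-- Most recent log shipping errors recorded on this server
SELECT database_name, log_time, message
FROM msdb.dbo.log_shipping_monitor_error_detail
ORDER BY log_time DESC;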

Primary Server Stored Procedures

  • sp_add_log_shipping_primary_database: Sets up the primary database for a log shipping configuration, including the backup job, local monitor record, and remote monitor record.

  • sp_add_log_shipping_primary_secondary: Adds a secondary database name to an existing primary database.

  • sp_change_log_shipping_primary_database: Changes primary database settings, including the local and remote monitor records.

  • sp_cleanup_log_shipping_history: Cleans up history locally and on the monitor based on the retention period.

  • sp_delete_log_shipping_primary_database: Removes the log shipping configuration of a primary database, including the backup job and the local and remote history.

  • sp_delete_log_shipping_primary_secondary: Removes a secondary database name from a primary database.

  • sp_help_log_shipping_primary_database: Retrieves primary database settings and displays the values from the log_shipping_primary_databases and log_shipping_monitor_primary tables.

  • sp_help_log_shipping_primary_secondary: Retrieves secondary database names for a primary database.

  • sp_refresh_log_shipping_monitor: Refreshes the monitor with the latest information for the specified log shipping agent.
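For example, to inspect a primary database's log shipping settings (the database name is a placeholder):

-- Run on the primary server
EXEC sp_help_log_shipping_primary_database @database = N'MyDatabase';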

Secondary Server Tables

  • log_shipping_monitor_alert: Stores alert job ID. This table is only used on the secondary server if a remote monitor server has not been configured.

  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs associated with this secondary server.

  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs associated with this secondary server.

  • log_shipping_monitor_secondary: Stores one monitor record per secondary database associated with this secondary server.

  • log_shipping_secondary: Contains configuration information for the secondary databases on a given server. Stores one row per secondary ID.

  • log_shipping_secondary_databases: Stores configuration information for a given secondary database. Stores one row per secondary database.

Note

Secondary databases on the same secondary server for a given primary database share the settings in the log_shipping_secondary table. If a shared setting is altered for one secondary database, the setting is altered for all of them.

Secondary Server Stored Procedures

  • sp_add_log_shipping_secondary_database: Sets up a secondary database for log shipping.

  • sp_add_log_shipping_secondary_primary: Sets up the primary information, adds local and remote monitor links, and creates copy and restore jobs on the secondary server for the specified primary database.

  • sp_change_log_shipping_secondary_database: Changes secondary database settings, including the local and remote monitor records.

  • sp_change_log_shipping_secondary_primary: Changes secondary database settings such as the source and destination directories and the file retention period.

  • sp_cleanup_log_shipping_history: Cleans up history locally and on the monitor based on the retention period.

  • sp_delete_log_shipping_secondary_database: Removes a secondary database and its local and remote history.

  • sp_delete_log_shipping_secondary_primary: Removes the information about the specified primary server from the secondary server.

  • sp_help_log_shipping_secondary_database: Retrieves secondary database settings from the log_shipping_secondary, log_shipping_secondary_databases, and log_shipping_monitor_secondary tables.

  • sp_help_log_shipping_secondary_primary: Retrieves the settings for a given primary database on the secondary server.

  • sp_refresh_log_shipping_monitor: Refreshes the monitor with the latest information for the specified log shipping agent.
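For example, to inspect a secondary database's settings (the database name is a placeholder):

-- Run on the secondary server
EXEC sp_help_log_shipping_secondary_database @secondary_database = N'MyDatabase';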

Monitor Server Tables

  • log_shipping_monitor_alert: Stores alert job ID.

  • log_shipping_monitor_error_detail: Stores error detail for log shipping jobs.

  • log_shipping_monitor_history_detail: Stores history detail for log shipping jobs.

  • log_shipping_monitor_primary: Stores one monitor record per primary database associated with this monitor server.

  • log_shipping_monitor_secondary: Stores one monitor record per secondary database associated with this monitor server.

Monitor Server Stored Procedures

  • sp_add_log_shipping_alert_job: Creates a log shipping alert job if one has not already been created.

  • sp_delete_log_shipping_alert_job: Removes a log shipping alert job if there are no associated primary databases.

  • sp_help_log_shipping_alert_job: Returns the job ID of the alert job.

  • sp_help_log_shipping_monitor_primary: Returns monitor records for the specified primary database from the log_shipping_monitor_primary table.

  • sp_help_log_shipping_monitor_secondary: Returns monitor records for the specified secondary database from the log_shipping_monitor_secondary table.
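For example, to pull the monitor records for one primary database (server and database names are placeholders):

-- Run on the monitor server
EXEC sp_help_log_shipping_monitor_primary
    @primary_server = N'PRIMARYSRV',
    @primary_database = N'MyDatabase';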



12 January 2011

2 Jobs - Replication

Distribution clean up

If a subscription is not synchronized within the maximum distribution retention period (default of 72 hours) and there are changes in the distribution database that have not been delivered to the Subscriber, the subscription will be marked as deactivated by the Distribution clean up job that runs on the Distributor. The subscription must be reinitialized.
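If deactivations happen because the window is too short, the maximum distribution retention can be raised at the Distributor with sp_changedistributiondb; the database name and the 96-hour value below are placeholders:

-- Run at the Distributor
EXEC sp_changedistributiondb
    @database = N'distribution',        -- your distribution database name
    @property = N'max_distretention',
    @value = N'96';                     -- hours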

Expired subscription clean up

If a subscription is not synchronized within the publication retention period (default of 336 hours), the subscription will expire and be dropped by the Expired subscription clean up job that runs on the Publisher. The subscription must be recreated and synchronized.

If a push subscription expires, it is completely removed, but pull subscriptions are not. You must clean up pull subscriptions at the Subscriber.
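Similarly, the publication retention period can be changed at the Publisher with sp_changepublication; the publication name and the value are placeholders:

-- Run at the Publisher, in the publication database
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'retention',
    @value = N'96';   -- hours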