
OS Tools – CPU Analysis

Monitoring your CPU performance regularly is important: it tells you whether your machines’ computing power is sufficient for your business needs, warns you in advance about bottlenecks that might degrade your systems’ performance in the future, and lets you prevent problems by planning your resource allocation better.

Usually, users and managers check CPU performance only when they notice a slowdown in the system or when something is not working right, yet most of these problems can be prevented by monitoring CPU performance as a routine maintenance task.

AimBetter enables you to easily analyze CPU processes, services and performance. It automatically monitors all your servers and helps you both routinely monitor your CPU performance and easily analyze it whenever you encounter any performance issue.

When most users and managers encounter a slowdown in a server, they usually open Task Manager. There they can see CPU usage per process but not per service, and the total number of threads and processes but not threads per process. They also have no way to know, for any process, how much CPU is used by the kernel and how much by the user, not to mention the various performance counters that provide critical information for troubleshooting.
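For the SQL Server process itself, one rough cross-check that requires no extra tooling is the scheduler monitor ring buffer exposed by the sys.dm_os_ring_buffers DMV, which records recent samples of the SQL Server process’s CPU utilization and the system’s idle percentage. This is only a sketch, and it does not provide the per-process, kernel-versus-user breakdown discussed below:

-- Recent CPU samples: SQL Server process utilization vs. the rest of the system (idle %)
SELECT TOP (10)
    x.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_process_cpu_pct,
    x.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS system_idle_pct
FROM (
    SELECT CAST(record AS XML) AS record, [timestamp]
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS x
ORDER BY x.[timestamp] DESC;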

In AimBetter, the CPU section of OS Tools is divided into three subsections: Process, Services and System.
The CPU section of OS Tools complements the CPU section of the Performance screen: it gives you detailed information about CPU usage at the process level, the service level and the performance counter level, and it also shows statistics about each counter for the most recent round minute.

The process subsection presents the following information for all the processes: process name, process ID, threads launched, threads used, kernel CPU%, user CPU% and total CPU%.


The services subsection presents the following information for all the services: process executable, process ID, CPU%.


In the services subsection there is also a drilldown of services per process:


The system subsection presents the following information for each performance counter: performance counter name, mean value, minimum value and maximum value.


As shown above, the following performance counters (statistics) are monitored: % registry quota in use, context switches/second, exception dispatches/second, file control bytes/second, file control operations/second, file data operations/second, file read bytes/second, file read operations/second, file write bytes/second, file write operations/second, active processes, process queue length, system calls/second, system up time and threads.

AimBetter provides thorough information about all the CPU metrics affected by your operating system and displays every detail that matters for your systems’ performance. It immediately shows how well your hardware matches your software requirements and your systems’ operational demands, and helps you plan hardware changes according to your business plans and growth expectations. It also tells you which servers have capacity for more tasks and which are overloaded, so you can balance the load between your servers and optimize the total performance of your systems.


OS Tools – Disk Analysis

Monitoring your disks regularly is important: it tells you whether any of your disks is expected to run out of free space, how to allocate disk space to the programs that demand it (or are expected to demand it), and how to plan which programs use which disks in order to optimize disk usage across the organization’s systems.

Usually, users and managers perform disk analysis only when they encounter problems that are suspected or known to be related to the disks, such as rapid disk space consumption, long program loading times, inability to save files, halted transactions and so on.

AimBetter enables you to easily analyze your disks’ performance, usage and activity, gives you any disk-related information at any time, and helps you plan which programs should use which disks and when.

As the following case study shows – http://ryltech.wpengine.com/articles/case-study-system-timeouts/ – regular disk monitoring could have prevented a serious timeout problem that had been bothering one of our clients for a few days before he contacted us for help. Luckily, that customer had AimBetter installed on his main server, so we discovered the cause of the problem within five minutes. Reading the linked case study is highly recommended.

In AimBetter, the Disk section of OS Tools is divided into three subsections: Hot Files, Disk Breakdown and Physical Disk.

The Disk section of OS Tools complements the Disk section of the Performance screen: it gives you detailed information about disk usage, the files causing the most disk IO, the processes on each disk and the disk metrics.
The hot files subsection presents the following information about each of the top 20 IO-consuming files on your disks: disk number, file path and name, reads/second, KB/read, writes/second, KB/write.

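For SQL Server data and log files specifically, a rough analogue of this hot-files view is available from the documented sys.dm_io_virtual_file_stats DMV. The following sketch reports cumulative figures since instance startup rather than per-second rates, so it complements rather than reproduces the screen described above:

-- Cumulative per-file I/O for SQL Server data and log files, heaviest files first
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.physical_name,
    vfs.num_of_reads,                -- cumulative read operations
    vfs.num_of_bytes_read,
    vfs.num_of_writes,               -- cumulative write operations
    vfs.num_of_bytes_written,
    vfs.io_stall_read_ms,            -- total time spent waiting on reads
    vfs.io_stall_write_ms            -- total time spent waiting on writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
INNER JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;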


The disk breakdown subsection presents the following information about each process running on the disk: process or executable file name, process ID, authority (user), reads/second, KB/read, writes/second, KB/write.

The physical disk subsection presents the following information: physical disk counter metrics, physical disk idle time percentage and physical disk performance counters (Average Disk seconds/Read, Average Disk seconds/Transfer, Average Disk seconds/Write).



As shown above, the following counters are monitored: % disk read time, % disk time, % disk write time, average disk bytes/read, average disk bytes/transfer, average disk bytes/write, average disk read queue length, average disk write queue length, current disk queue length, disk bytes/second, disk read bytes/second, disk reads/second, disk transfers/second, disk write bytes/second, disk writes/second and split IO/second. For each counter you receive the following information: counter name, counter instance (which indicates its meaning), mean value, minimum value and maximum value. All values refer to the most recent round minute.

AimBetter provides thorough information about all the disk metrics and displays every detail that matters for your machines’ performance. It immediately points out bottlenecks that need attention and lets you examine your disks’ performance at any time, whether in real time or historically, since AimBetter keeps its logs for as long as you configure it to. You therefore don’t need to reproduce a problem in order to investigate its cause; for more information, please read the case study linked above.


What is Jitter and how to monitor it?

Jitter is the variation in latency as measured in the variability over time of the packet latency across a network. A network with constant latency has no jitter.

In quantitative terms, jitter is the average of the deviation from the network mean latency. The standards-based term for network jitter is Packet Delay Variation (PDV). PDV is an important Quality of Service (QoS) factor in assessment of network performance.
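As a toy illustration of that definition, assuming a hypothetical table dbo.PacketLatency that stores one measured latency in milliseconds per packet, jitter can be computed as the mean absolute deviation from the mean latency:

-- dbo.PacketLatency and latency_ms are hypothetical; jitter = average of |latency - mean latency|
SELECT AVG(ABS(CAST(p.latency_ms AS float) - m.mean_latency)) AS jitter_ms
FROM dbo.PacketLatency AS p
CROSS JOIN (SELECT AVG(CAST(latency_ms AS float)) AS mean_latency
            FROM dbo.PacketLatency) AS m;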

The delay is measured from the start of the packet being transmitted at the source to the start of the packet being received at the destination. Any component of the delay that does not vary from packet to packet can be ignored; hence, if the packet sizes are the same and packets always take the same time to be processed at the destination, the packet arrival time at the destination can be used instead of the time the end of the packet is received.

For interactive real-time applications, PDV can be a serious issue and hence VoIP and video streaming may need QoS-enabled networks to provide a high quality channel.

The effects of PDV in multimedia can be removed by a properly sized play-out buffer at the receiver, which can only cause a detectable delay before the start of media playback.

There are many tools in the market that monitor jitter, but most of them monitor network transportation only and nothing more. Among the best known network monitoring tools are: PRTG network monitor, OpManager, Cisco IOS SAA, Iperf and SolarWinds VNQM.

AimBetter is an all-in-one monitoring tool that monitors operating system servers, database servers and web servers including all the relevant metrics for each type.

In the following screenshot you can see the network jitter of a few servers:


In the following screenshot you can see the jitter trend of a server:


While most network monitoring tools only let you view the current state, AimBetter records your metrics every minute and stores them for as long as you configure, so you can analyze trends and troubleshoot even when the problem is not happening in real time. Moreover, AimBetter enables you to compare trends over equal periods, as shown in the following screenshot:


AimBetter not only allows you to view the average of each presented time frame, as most trend-presenting tools do, but also lets you view the peaks of each time frame, as presented in the following screenshot:


If your systems include VoIP, video streaming and/or applications that use UDP transport, it is very important to monitor jitter in order to assure high QoS for these applications, and AimBetter is designed to make that as easy as possible for you.


Memory Optimized Tables in SQL 2014

In-Memory OLTP can significantly improve OLTP database application performance. In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP.
To use In-Memory OLTP, you need to define a heavily accessed table as memory optimized. Memory-optimized tables are fully transactional, durable, and are accessed using Transact-SQL in the same way as disk-based tables.
A query can reference both memory-optimized tables and disk-based tables. A transaction can update data in memory-optimized tables and disk-based tables. Stored procedures that only refer to memory-optimized tables can be natively compiled into machine code for further performance improvements.
The In-Memory OLTP engine is designed for extremely high session concurrency for OLTP transactions driven from a highly scaled-out middle-tier. To achieve this, it uses latch-free data structures and an optimistic, multi-version concurrency control. The result is predictable, sub-millisecond low latency and high throughput with linear scaling for database transactions.
The actual performance gain depends on many factors, but performance improvements of about 10 times are common.
The following workload patterns benefit the most from using In-Memory OLTP:

  • High data insertion rate from multiple concurrent connections – typically a primarily append-only store that cannot keep up with the insert workload. In-Memory OLTP reduces logging and eliminates contention.
  • Read performance and scale with periodic batch inserts and updates – scale-up requirements cannot be met, and each server request performs multiple read operations. In-Memory OLTP lowers data-retrieval latency, minimizes code execution time and eliminates contention when new data arrives.
  • Intensive business logic processing in the database server – intensive computation inside stored procedures, read and write contention, and an insert, update and delete workload. In-Memory OLTP minimizes code execution time for reduced latency and improved throughput, and eliminates contention.
  • Low latency – the business transactions require latencies that typical database solutions cannot achieve. In-Memory OLTP eliminates contention, minimizes code execution time and provides low-latency code execution and efficient data retrieval.
  • Session state management – high-scale load from numerous stateless web servers, with frequent inserts, updates and point lookups. In-Memory OLTP provides efficient data retrieval, eliminates contention and can optionally reduce or remove IO by using non-durable tables.

In-Memory OLTP improves programming patterns that involve high concurrency, point lookups, workloads with many inserts and updates, and business logic in stored procedures.

Integration with SQL Server means you can have both memory-optimized tables and disk-based tables in the same database, and query across both types of tables.
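As a hedged sketch of what this looks like in practice (the table, column, index and procedure names are illustrative, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup), a memory-optimized table and a natively compiled stored procedure that accesses it could be declared as follows:

-- Fully durable memory-optimized table; SQL Server 2014 requires an index, here a hash primary key.
CREATE TABLE dbo.SessionState
(
    SessionId  INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId     INT       NOT NULL,
    LastUpdate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled stored procedure: references only memory-optimized tables and runs as one atomic block.
CREATE PROCEDURE dbo.usp_TouchSession
    @SessionId INT,
    @UserId    INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET UserId = @UserId, LastUpdate = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
GO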

In SQL Server 2014 there are limitations in Transact-SQL surface area supported for In-Memory OLTP.
In-Memory OLTP achieves significant performance and scalability gains by using:

  • Algorithms that are optimized for accessing memory-resident data.
  • Optimistic concurrency control that eliminates logical locks.
  • Lock free objects that eliminate all physical locks and latches. Threads that perform transactional work don’t use locks or latches for concurrency control.
  • Natively compiled stored procedures, which have significantly better performance than interpreted stored procedures, when accessing a memory-optimized table.

SQL Server 2014 is designed to improve OLTP database application performance.


SQL Server Backup to URL in SQL Server 2014

The Windows Azure Blob storage service enables you to back up SQL Server databases to a URL. The backup and restore functionality is the same as, or similar to, using DISK or TAPE, with a few differences that are presented in this article.
Security
When creating a container for the Windows Azure Blob storage service, it is recommended you set the access to private. Setting the access to private restricts the access to users or accounts able to provide the necessary information to authenticate to the Windows Azure account.
The user account that is used to issue BACKUP or RESTORE commands should be in the db_backupoperator database role and have ALTER ANY CREDENTIAL permissions.

Introduction to Key Components and Concepts

The following two sections introduce the Windows Azure Blob storage service, and the SQL Server components used when backing up to or restoring from the Windows Azure Blob storage service. It is important to understand the components and the interaction between them to do a backup to or restore from the Windows Azure Blob storage service.
Creating a Windows Azure account is the first step to this process. SQL Server uses the Windows Azure storage account name and its access key values to authenticate and write and read blobs to the storage service. The SQL Server Credential stores this authentication information and is used during the backup or restore operations.

Windows Azure Blob Storage Service

Storage Account: The storage account is the starting point for all storage services. To access the Windows Azure Blob Storage service, first create a Windows Azure storage account. The storage account name and its access key properties are required to authenticate to the Windows Azure Blob Storage service and its components.
Container: A container provides a grouping of a set of Blobs, and can store an unlimited number of Blobs. To write a SQL Server backup to the Windows Azure Blob service, you must have at least the root container created.
Blob: A file of any type and size. There are two types of blobs that can be stored in the Windows Azure Blob storage service: block and page blobs. SQL Server backup uses page Blobs as the Blob type. Blobs are addressable using the following URL format: https://<storage account>.blob.core.windows.net/<container>/<blob>

SQL Server Components

URL: A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The URL is used to provide the location and name of the SQL Server backup file. In this implementation, the only valid URL is one that points to a page Blob in a Windows Azure storage account. The URL must point to an actual Blob, not just a container. If the Blob does not exist, it is created. If an existing Blob is specified, BACKUP fails, unless the “WITH FORMAT” option is specified.

Here is a sample URL value: http[s]://ACCOUNTNAME.blob.core.windows.net/<container>/<filename.bak>. HTTPS is not required, but is recommended.
Credential: A SQL Server credential is an object that is used to store the authentication information required to connect to a resource outside of SQL Server. Here, the SQL Server backup and restore processes use the credential to authenticate to the Windows Azure Blob storage service. The credential stores the name of the storage account and the storage account access key values. Once the credential is created, it must be specified in the WITH CREDENTIAL option when issuing the BACKUP/RESTORE statements.
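As a hedged illustration of these two pieces working together (the storage account name, access key, container and database name below are placeholders), the credential is created once and then referenced by the backup:

-- The IDENTITY is the storage account name; the SECRET is its access key (placeholder here).
CREATE CREDENTIAL myAzureCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';
GO

-- Back up directly to a page blob in the container; WITH CREDENTIAL is required for BACKUP TO URL.
BACKUP DATABASE AdventureWorks2014
TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2014.bak'
WITH CREDENTIAL = 'myAzureCredential',
     COMPRESSION,
     STATS = 10;
GO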

Limitations

  • The maximum backup size supported is 1 TB.
  • You can issue backup or restore statements by using TSQL, SMO, or PowerShell cmdlets. A backup to or restoring from the Windows Azure Blob storage service by using SQL Server Management Studio Backup or Restore wizard is not currently enabled.
  • Creating a logical device name is not supported, so adding a URL as a backup device using sp_addumpdevice or through SQL Server Management Studio is not supported.
  • Appending to existing backup blobs is not supported. Backups to an existing Blob can only be overwritten by using the WITH FORMAT option.
  • Backup to multiple blobs in a single backup operation is not supported.
  • Specifying a block size with BACKUP is not supported.
  • Specifying MAXTRANSFERSIZE is not supported.
  • Specifying backupset options – RETAINDAYS and EXPIREDATE are not supported.
  • SQL Server has a maximum limit of 259 characters for a backup device name. The BACKUP TO URL consumes 36 characters for the required elements used to specify the URL – ‘https://.blob.core.windows.net//.bak’, leaving 223 characters for account, container, and blob names put together.
Backup/Restore Statement Supported Exceptions Comments
BACKUP Yes BLOCKSIZE, and MAXTRANSFERSIZE are not supported. Requires WITH CREDENTIAL specified
RESTORE Yes   Requires WITH CREDENTIAL specified
RESTORE FILELISTONLY Yes   Requires WITH CREDENTIAL specified
RESTORE HEADERONLY Yes   Requires WITH CREDENTIAL specified
RESTORE LABELONLY Yes   Requires WITH CREDENTIAL specified
RESTORE VERIFYONLY Yes   Requires WITH CREDENTIAL specified
RESTORE REWINDONLY No    

Support for Backup Arguments

Argument Supported Exceptions Comments
DATABASE Yes    
LOG Yes    
TO (URL) Yes Unlike DISK and TAPE, URL does not support specifying or creating a logical name. This argument is used to specify the URL path for the backup file.
MIRROR TO No    
WITH OPTIONS:
CREDENTIAL Yes   WITH CREDENTIAL is only supported when using BACKUP TO URL option to back up to the Windows Azure Blob storage service.
DIFFERENTIAL Yes    
COPY_ONLY Yes    
COMPRESSION|NO_COMPRESSION Yes    
DESCRIPTION Yes    
NAME Yes    
EXPIREDATE | RETAINDAYS No    
NOINIT | INIT No   This option is ignored if used. Appending to blobs is not possible. To overwrite a backup use the FORMAT argument.
NOSKIP | SKIP No    
NOFORMAT | FORMAT Yes   This option is ignored if used. A backup taken to an existing blob fails unless WITH FORMAT is specified. The existing blob is overwritten when WITH FORMAT is specified.
MEDIADESCRIPTION Yes    
MEDIANAME Yes    
BLOCKSIZE No    
BUFFERCOUNT Yes    
MAXTRANSFERSIZE No    
NO_CHECKSUM | CHECKSUM Yes    
STOP_ON_ERROR | CONTINUE_AFTER_ERROR Yes    
STATS Yes    
REWIND | NOREWIND No    
UNLOAD | NOUNLOAD No    
NORECOVERY | STANDBY Yes    
NO_TRUNCATE Yes    

Support for Restore Arguments

Argument Supported Exceptions Comments
DATABASE Yes    
LOG Yes    
FROM (URL) Yes   The FROM URL argument is used to specify the URL path for the backup file.
WITH OPTIONS:
CREDENTIAL Yes   WITH CREDENTIAL is only supported when using RESTORE FROM URL option to restore from Windows Azure Blob Storage service.
PARTIAL Yes    
RECOVERY | NORECOVERY | STANDBY Yes    
LOADHISTORY Yes    
MOVE Yes    
REPLACE Yes    
RESTART Yes    
RESTRICTED_USER Yes    
FILE Yes    
PASSWORD Yes    
MEDIANAME Yes    
MEDIAPASSWORD Yes    
BLOCKSIZE Yes    
BUFFERCOUNT No    
MAXTRANSFERSIZE No    
CHECKSUM | NO_CHECKSUM Yes    
STOP_ON_ERROR | CONTINUE_AFTER_ERROR Yes    
FILESTREAM Yes    
STATS Yes    
REWIND | NOREWIND No    
UNLOAD | NOUNLOAD No    
KEEP_REPLICATION Yes    
KEEP_CDC Yes    
ENABLE_BROKER | ERROR_BROKER_CONVERSATIONS | NEW_BROKER Yes    
STOPAT | STOPATMARK | STOPBEFOREMARK Yes    
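To illustrate the restore side under the same assumptions as the backup example above (the URL, credential name, logical file names and target paths are placeholders), a restore from the Windows Azure Blob storage service looks roughly like this:

-- WITH CREDENTIAL is required for RESTORE FROM URL; MOVE relocates the data and log files.
RESTORE DATABASE AdventureWorks2014
FROM URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/AdventureWorks2014.bak'
WITH CREDENTIAL = 'myAzureCredential',
     MOVE 'AdventureWorks2014_Data' TO 'D:\SQLData\AdventureWorks2014.mdf',
     MOVE 'AdventureWorks2014_Log'  TO 'E:\SQLLog\AdventureWorks2014_log.ldf',
     STATS = 10;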

Using Backup Task in SQL Server Management Studio

The Backup task in SQL Server Management Studio has been enhanced to include URL as one of the destination options, and other supporting objects required to backup to Windows Azure storage like the SQL Credential.
The following steps describe the changes made to the Back Up Database task to allow for backing up to Windows Azure storage.

  1. Start SQL Server Management Studio and connect to the SQL Server instance. Right-click the database you want to back up, point to Tasks, and select Back Up… This opens the Back Up Database dialog box.
  2. On the general page the URL option is used to create a backup to Windows Azure storage. When you select this option, you see other options enabled on this page:
    a. File Name: Name of the backup file.
    b. SQL Credential: You can either specify an existing SQL Server Credential, or create a new one by clicking Create next to the SQL Credential box.
    c. Azure storage container: The name of the Windows Azure storage container to store the backup files.
    d. URL prefix: This is built automatically from the information specified in the fields described in the previous steps. If you edit this value manually, make sure it matches the other information you provided previously. For example, if you modify the storage URL, make sure the SQL Credential is set to authenticate to the same storage account.

SQL Server Backup to URL Using Maintenance Plan Wizard

The Maintenance Plan Wizard in SQL Server Management Studio has been enhanced to include URL as one of the destination options, and other supporting objects required to backup to Windows Azure storage like the SQL Credential.

Restoring from Windows Azure storage Using SQL Server Management Studio

If you are restoring a database using SSMS 2014, URL is included as a device to restore from. The following steps describe the changes in the Restore task that allow restoring from Windows Azure storage:

  1. When you select Devices in the General page of the Restore task in SQL Server Management Studio, this takes you to the Select backup devices dialog box which includes URL as a backup media type.
  2. When you select URL and click Add, this opens the Connect to Azure storage dialog. Specify the SQL Credential information to authenticate to Windows Azure storage.
  3. SQL Server then connects to Windows Azure storage using the SQL Credential information you provided and opens the Locate Backup File in Windows Azure dialog. The backup files residing in the storage are displayed on this page. Select the file you want to restore from and click OK. This takes you back to the Select Backup Devices dialog, and clicking OK there takes you back to the main Restore dialog, where you can complete the restore.

Managing the Lock Priority of Online Operations in SQL Server 2014

The ONLINE = ON option now contains a WAIT_AT_LOW_PRIORITY option which permits you to specify how long the rebuild process should wait for the necessary locks. The WAIT_AT_LOW_PRIORITY option also allows you to configure the termination of blocking processes related to that rebuild statement.
Use the MAXDOP option to limit the number of processors used in parallel plan execution. The maximum is 64 processors.

ALTER TABLE added parameters in SQL Server 2014:

MAXDOP can be one of the following values:
  • 1 – Suppresses parallel plan generation.
  • >1 – Restricts the maximum number of processors used in a parallel index operation to the specified number.
  • 0 (default) – Uses the actual number of processors or fewer, based on the current system workload.

ALTER INDEX added parameters in SQL Server 2014:
ONLINE = { ON | OFF }

Specifies whether underlying tables and associated indexes are available for queries and data modification during the index operation. The default is OFF. REBUILD can be performed as an ONLINE operation.
ON
Long-term table locks are not held for the duration of the index operation. During the main phase of the index operation, only an Intent Share (IS) lock is held on the source table. This enables queries or updates to the underlying table and indexes to continue. At the start of the operation, a Shared (S) lock is held on the source object for a very short time. At the end of the operation, for a short time, an S (Shared) lock is acquired on the source if a non-clustered index is being created; or an SCH-M (Schema Modification) lock is acquired when a clustered index is created or dropped online and when a clustered or non-clustered index is being rebuilt. ONLINE cannot be set to ON when an index is being created on a local temporary table. Only single-threaded heap rebuild operations are allowed.
To execute the DDL for SWITCH or an online index rebuild, all active blocking transactions running on the particular table must be completed. While executing, the SWITCH or rebuild operation prevents new transactions from starting and might significantly affect workload throughput and temporarily delay access to the underlying table.
OFF
Table locks are applied for the duration of the index operation. An offline index operation that creates, rebuilds, or drops a clustered index, or rebuilds or drops a non-clustered index, acquires a Schema modification (Sch-M) lock on the table. This prevents all user access to the underlying table for the duration of the operation. An offline index operation that creates a non-clustered index acquires a Shared (S) lock on the table. This prevents updates to the underlying table but allows read operations, such as SELECT statements. Multi-threaded heap rebuild operations are allowed.

WAIT_AT_LOW_PRIORITY

An online index rebuild has to wait for blocking operations on the table. WAIT_AT_LOW_PRIORITY indicates that the online index rebuild operation will wait for low priority locks, allowing other operations to proceed while the online index build operation is waiting. Omitting the WAIT_AT_LOW_PRIORITY option is equivalent to WAIT_AT_LOW_PRIORITY (MAX_DURATION = 0 minutes, ABORT_AFTER_WAIT = NONE).

MAX_DURATION = time [MINUTES ]

The wait time (an integer value specified in minutes) that the online index rebuild locks will wait with low priority when executing the DDL command. If the operation is blocked for the MAX_DURATION time, one of the ABORT_AFTER_WAIT actions will be executed. MAX_DURATION time is always in minutes, and the word MINUTES can be omitted.

ABORT_AFTER_WAIT = { NONE | SELF | BLOCKERS }

NONE
Continue waiting for the lock with normal (regular) priority.
SELF
Exit the online index rebuild DDL operation currently being executed without taking any action.
BLOCKERS
Kill all user transactions that block the online index rebuild DDL operation so that the operation can continue. The BLOCKERS option requires the login to have ALTER ANY CONNECTION permission.
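Putting these options together, here is a hedged sketch (the table and index names are illustrative) of an online index rebuild that waits at low priority for up to five minutes and then aborts its blockers, with parallelism capped at four processors:

-- Online rebuild: wait at low priority for 5 minutes, then kill the blocking transactions.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH
(
    ONLINE = ON
    (
        WAIT_AT_LOW_PRIORITY
        (
            MAX_DURATION = 5 MINUTES,
            ABORT_AFTER_WAIT = BLOCKERS
        )
    ),
    MAXDOP = 4
);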

New Wait Types in SQL Server 2014 in the sys.dm_os_wait_stats DMV:

Wait Type Description
LCK_M_BU_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Bulk Update (BU) lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_BU_LOW_PRIORITY Occurs when a task is waiting to acquire a Bulk Update (BU) lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IS_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Intent Shared (IS) lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IS_LOW_PRIORITY Occurs when a task is waiting to acquire an Intent Shared (IS) lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IU_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Intent Update (IU) lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IU_LOW_PRIORITY Occurs when a task is waiting to acquire an Intent Update (IU) lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IX_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_IX_LOW_PRIORITY Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_NL_ABORT_BLOCKERS Occurs when a task is waiting to acquire a NULL lock with Abort Blockers on the current key value, and an Insert Range lock with Abort Blockers between the current and previous key. A NULL lock on the key is an instant release lock. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_NL_LOW_PRIORITY Occurs when a task is waiting to acquire a NULL lock with Low Priority on the current key value, and an Insert Range lock with Low Priority between the current and previous key. A NULL lock on the key is an instant release lock. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_S_ABORT_BLOCKERS Occurs when a task is waiting to acquire a shared lock with Abort Blockers on the current key value, and an Insert Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_S_LOW_PRIORITY Occurs when a task is waiting to acquire a shared lock with Low Priority on the current key value, and an Insert Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_U_ABORT_BLOCKERS Task is waiting to acquire an Update lock with Abort Blockers on the current key value, and an Insert Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_U_LOW_PRIORITY Task is waiting to acquire an Update lock with Low Priority on the current key value, and an Insert Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_X_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Exclusive lock with Abort Blockers on the current key value, and an Insert Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RIn_X_LOW_PRIORITY Occurs when a task is waiting to acquire an Exclusive lock with Low Priority on the current key value, and an Insert Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RS_S_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Shared lock with Abort Blockers on the current key value, and a Shared Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RS_S_LOW_PRIORITY Occurs when a task is waiting to acquire a Shared lock with Low Priority on the current key value, and a Shared Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RS_U_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Update lock with Abort Blockers on the current key value, and an Update Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RS_U_LOW_PRIORITY Occurs when a task is waiting to acquire an Update lock with Low Priority on the current key value, and an Update Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_S_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Shared lock with Abort Blockers on the current key value, and an Exclusive Range with Abort Blockers lock between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_S_LOW_PRIORITY Occurs when a task is waiting to acquire a Shared lock with Low Priority on the current key value, and an Exclusive Range with Low Priority lock between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_U_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Update lock with Abort Blockers on the current key value, and an Exclusive range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_U_LOW_PRIORITY Occurs when a task is waiting to acquire an Update lock with Low Priority on the current key value, and an Exclusive range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_X_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Exclusive lock with Abort Blockers on the current key value, and an Exclusive Range lock with Abort Blockers between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_RX_X_LOW_PRIORITY Occurs when a task is waiting to acquire an Exclusive lock with Low Priority on the current key value, and an Exclusive Range lock with Low Priority between the current and previous key. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_S_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Shared lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_S_LOW_PRIORITY Occurs when a task is waiting to acquire a Shared lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SCH_M_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Schema Modify lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SCH_M_LOW_PRIORITY Occurs when a task is waiting to acquire a Schema Modify lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SCH_S_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Schema Share lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SCH_S_LOW_PRIORITY Occurs when a task is waiting to acquire a Schema Share lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SIU_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Shared With Intent Update lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SIU_LOW_PRIORITY Occurs when a task is waiting to acquire a Shared With Intent Update lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SIX_ABORT_BLOCKERS Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_SIX_LOW_PRIORITY Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_U_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Update lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_U_LOW_PRIORITY Occurs when a task is waiting to acquire an Update lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_UIX_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Update With Intent Exclusive lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_UIX_LOW_PRIORITY Occurs when a task is waiting to acquire an Update With Intent Exclusive lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_X_ABORT_BLOCKERS Occurs when a task is waiting to acquire an Exclusive lock with Abort Blockers. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
LCK_M_X_LOW_PRIORITY Occurs when a task is waiting to acquire an Exclusive lock with Low Priority. (Related to the low priority wait option of ALTER TABLE and ALTER INDEX.)
PWAIT_HADR_OFFLINE_COMPLETED An AlwaysOn drop availability group operation is waiting for the target availability group to go offline before destroying Windows Server Failover Clustering objects.
PWAIT_HADR_ONLINE_COMPLETED An AlwaysOn create or failover availability group operation is waiting for the target availability group to come online.
WAIT_XTP_CKPT_CLOSE Occurs when waiting for a checkpoint to complete.
WAIT_XTP_CKPT_ENABLED Occurs when checkpointing is disabled, and waiting for checkpointing to be enabled.
WAIT_XTP_CKPT_STATE_LOCK Occurs when synchronizing checking of checkpoint state.
WAIT_XTP_GUEST Occurs when the database memory allocator needs to stop receiving low-memory notifications.
WAIT_XTP_HOST_WAIT Occurs when waits are triggered by the database engine and implemented by the host.
WAIT_XTP_OFFLINE_CKPT_LOG_IO Occurs when offline checkpoint is waiting for a log read IO to complete.
WAIT_XTP_OFFLINE_CKPT_NEW_LOG Occurs when offline checkpoint is waiting for new log records to scan.
WAIT_XTP_PROCEDURE_ENTRY Occurs when a drop procedure is waiting for all current executions of that procedure to complete.
WAIT_XTP_TASK_SHUTDOWN Occurs when waiting for an In-Memory OLTP thread to complete.
WAIT_XTP_TRAN_COMMIT Occurs when execution of a natively compiled stored procedure is waiting for an XTP transaction to commit (waiting for transactions dependent on for instance).
WAIT_XTP_TRAN_DEPENDENCY Occurs when waiting for transaction dependencies.
XTPPROC_CACHE_ACCESS Occurs when accessing all natively compiled stored procedure cache objects.
XTPPROC_PARTITIONED_STACK_CREATE Occurs when allocating per-NUMA node natively compiled stored procedure cache structures (must be done single threaded) for a given procedure.

Efficiently monitoring DB Backups

Efficiently monitoring your database backups is crucial, because without valid backups you won’t be able to restore your system in case of data loss or data corruption, and you won’t be able to follow changes in your data over time. For these reasons alone, you must make sure that all of your databases are backed up regularly, each by a service that best matches the database’s role in the system and your organizational needs.

SQL Server Maintenance Plans or third party software are currently the most popular ways to create backup plans. Third party software is also used for monitoring the database.

The following are the most common backup devices:

  • local hard drive
  • backup tape
  • network device

To verify that a backup finished successfully, most DBAs configure an automatic email to be sent when the job fails and check those emails regularly, so they know when a backup process fails. Though this is the most common practice, it is not the best one, because a backup process might be halted for any number of reasons; in such a case there would be neither a success notification nor a failure notification. Consequently, it is vital that DBAs verify the success of their backups independently, without relying on automatic reports alone.

Ask yourself these questions to make sure that your database backup finished successfully:

  • Do my database backups match my backup plans and policies?
  • Where are my backups and where do I backup each database to?
  • What is my backup type and what was the recovery mode of my database during the backup process?
  • Who ran the backup? Was it my service or maybe an SQL injection stealing my data?
  • What were the times, durations and sizes of each backup?
  • Where can I get alerts when the backup size is abnormal or when the duration is out of the ordinary?
  • Did I run the backup during the middle of the day (FULL BACKUP) within working hours?
  • If I run several backup operators (Backup local, Network Path, Backup Tape, Virtual SNAPSHOT), are any of them blocked? Do they block each other?
  • Did I backup the Log Database using a different operator? We need all log files in order to be able to restore successfully.

Normally, in order to check all of the above you’d need to query MSDB. The MSDB system database is the primary repository for storage of SQL Agent, backup management, Service Broker, Database Mail, Log Shipping, restore manager, and maintenance plan metadata. Here are a few tables that document the backup process:

  • backupset: provides information concerning the most-granular details of the backup process
  • backupmediafamily: provides metadata for the physical backup files as they relate to backup sets
  • backupfile: this system view provides the most-granular information for the physical backup files

Using the following script you can get basic results regarding your backups
-- This is an example of getting information about Database Backups for all databases during the previous 3 days:


SELECT
CONVERT(CHAR(100), SERVERPROPERTY('Servername')) AS Server,
msdb.dbo.backupset.database_name,
msdb.dbo.backupset.backup_start_date,
msdb.dbo.backupset.backup_finish_date,
msdb.dbo.backupset.expiration_date,
CASE msdb.dbo.backupset.type
WHEN 'D' THEN 'Database'
WHEN 'L' THEN 'Log'
END AS backup_type,
msdb.dbo.backupset.backup_size,
msdb.dbo.backupmediafamily.logical_device_name,
msdb.dbo.backupmediafamily.physical_device_name,
msdb.dbo.backupset.name AS backupset_name,
msdb.dbo.backupset.description
FROM msdb.dbo.backupmediafamily
INNER JOIN msdb.dbo.backupset ON msdb.dbo.backupmediafamily.media_set_id = msdb.dbo.backupset.media_set_id
WHERE (CONVERT(datetime, msdb.dbo.backupset.backup_start_date, 102) >= GETDATE() - 3)

Of course, the code above is a specific example for a specific case – if you’d like more specific information, or to filter the results by particular database(s), backup type, backup device, day(s) and so on, you’ll have to alter the query accordingly.

Using AimBetter you can get all of this information, along with relevant alerts.

A few screenshots that illustrate the points listed above:

Optional filters


Detailed results


Alerts on missed backups


AimBetter lets you see all the backup information you need, in a simple way and on one screen, so you don’t have to memorize queries in order to monitor your backups efficiently – a more efficient and more convenient alternative to today’s outdated (albeit most common) backup-monitoring practice.


Deadlocks, detection and handling using AimBetter

A deadlock in SQL Server occurs when two or more sessions inside the database engine end up waiting for access to locked resources held by each other. In a deadlock situation, none of the sessions can continue to execute until one of them releases its locks, allowing the other session(s) access to the locked resource. Multiple processes persistently blocking each other in an irresolvable state will eventually bring processing inside the database engine to a halt. A common misconception is that DBAs need to intervene to kill one of the processes involved in a deadlock. In fact, SQL Server is designed to detect and resolve deadlocks automatically through the Lock Monitor, a background process that is started when the SQL Server instance starts.

With AimBetter you can monitor your locks much more easily, immediately see your deadlock victims and also see the deadlock survivors in the drilldown screen. Using AimBetter you can see the code causing deadlocks and handle it far more simply than with trace flags and XML deadlock graphs. Unlike the conventional approach, you can also easily see historical deadlocks and analyze your system for long-term trends, so you can see the effects of your deadlock-handling policy.

SQL Profiler XML Deadlock Graph event

In SQL Server 2005 and later, the Deadlock Graph event in SQL Trace captures the deadlock graph information without writing it to the SQL Server error log. The Deadlock Graph event is part of the Locks event category and can be added to a SQL Server Profiler trace by selecting the event in Profiler’s Trace Properties dialog, as shown in the following screenshot. SQL Profiler can also be configured to save the deadlock graphs separately, into XDL files, as shown in the following screenshot. AimBetter shows you the same information as the Profiler, but in a much easier way: all you need to do is click on SQL Tools and then on Deadlocks, and you see all the details.

Handling Deadlocks to Prevent Errors

In most cases, the same issues that cause severe blocking in the database, such as poor database design, lack of indexing, poorly designed queries, inappropriate isolation level and so on, are also the common causes of deadlocking. In most cases, by fixing such issues, we can prevent deadlocks from occurring. Unfortunately, by the time deadlocks become a problem, it may not be possible to make the necessary design changes to correct them. Therefore, an important part of application and database design is defensive programming; a technique that anticipates and handles exceptions as a part of the general code base for an application or database. Defensive programming to handle deadlock exceptions can be implemented in two different ways:

  • database-side, through the use of T-SQL TRY…CATCH blocks (see the sketch after this list)
  • application-side, through the use of application TRY…CATCH blocks.
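A hedged sketch of the database-side option, assuming a hypothetical dbo.Accounts table: catch error 1205 (the error raised when a session is chosen as the deadlock victim), roll back, and retry a limited number of times.

DECLARE @retry INT = 3;
WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
        COMMIT TRANSACTION;
        SET @retry = 0;                          -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retry > 1  -- 1205 = chosen as the deadlock victim
            SET @retry = @retry - 1;             -- retry the whole transaction
        ELSE
        BEGIN
            SET @retry = 0;
            THROW;                               -- out of retries, or a different error: re-raise it
        END
    END CATCH
END;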

Since AimBetter lets you view and edit the code, you won’t necessarily need the TRY…CATCH blocks – you’ll find the deadlock causes immediately:

Summary

This article has covered how to capture and interpret deadlock graph information in SQL Server in order to troubleshoot deadlocking. Most often, deadlocks are the result of a design problem in the database or code that can be fixed to prevent the deadlock from occurring. However, when changes to the database cannot be made to resolve the deadlock, adding appropriate error handling in the application code reduces the impact of deadlocks. The information in this article should allow rapid and efficient troubleshooting of most deadlocks in SQL Server. AimBetter is the best tool on the market for handling deadlocks: it doesn’t require complex code to troubleshoot them, and it monitors all of your database operations automatically, saving you a lot of time and effort and making it a best practice for your database maintenance.


Locks and long queries

General Description

Database locking is a varied, evolving, complicated, and technical topic.

A lock is used when multiple users need to access a database concurrently. This prevents data from being corrupted or invalidated when multiple users try to read while others write to the database. Any single user can only modify those database records (that is, items in the database) to which they have applied a lock that gives them exclusive access to the record until the lock is released. Locking not only provides exclusivity to writes but also prevents (or controls) reading of unfinished modifications (AKA uncommitted data).
Locks can occur for the following types of items: Tables, Data Rows, Data Blocks, Cached Items, Connections and entire systems.

Most of what we’re calling transactional locking relates to the ability of a database management system (DBMS) to ensure reliable transactions that adhere to these ACID properties. ACID is an acronym that stands for Atomicity, Consistency, Isolation, and Durability. However, all of these properties are related and must be considered together. They are more like different views of the same object than independent things.

Factor in slowness

In multi-user systems, locks are a significant factor in slowness: requests for access to items that are already locked by other users for reading or writing are forced to wait until each item is unlocked. It is critical to understand that locks often remain after a statement has finished executing; a transaction may be busy with different, subsequent activity but still hold locks on a table due to an earlier statement. Transactions may even be idle. This is especially dangerous if the application allows user think time within database transactions.

There are a number of problems that can be caused by database locking. They can generally be broken down into 4 categories: Lock Contention, Long Term Blocking, Database Deadlocks, and System Deadlocks.

In order to avoid these problems as much as possible, locks should be monitored regularly and each issue handled individually. The AimBetter system monitors your database in real time and allows you to see the locks and long queries running on your databases at any time.
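Independent of any monitoring product, the standard dynamic management views give a point-in-time picture of blocked and long-running requests. The following sketch (the 30-second threshold is arbitrary) shows roughly the kind of information such a screen is built on:

-- Currently blocked requests, plus anything running longer than 30 seconds.
SELECT
    r.session_id,
    r.blocking_session_id,                  -- 0 means the request is not blocked
    r.wait_type,
    r.wait_time,                            -- milliseconds spent in the current wait
    r.total_elapsed_time,                   -- milliseconds since the request started
    t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
   OR r.total_elapsed_time > 30000;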


Difficulty in identifying past issues

When we don’t regularly monitor locks and long queries we cannot identify problems in running queries and procedures in until we catch them just in time of occurrence and even then it is possible that while we run our tests the problem ends and we fail to catch it in time.

AimBetter records all the events in your system and keeps them for as long as you set, so when you come to handle locks and long queries you can view events that occurred in the past and deal with each case individually.

Here is an example of locks and long queries stored in history:


Remote access from outside the office

AimBetter is a complete Web system that allows you to connect to it via any Internet-connected device and monitor your system from anywhere, even when you’re not in the office.

Analysis and pattern recognition

AimBetter lets you see all the long queries and locks currently active on your system and allows you to copy them to SQL Server Management Studio, make changes and simulate them in order to explore ways to resolve them:


Users committing exceptionally heavy transactions

AimBetter allows you to identify users and applications that run exceptionally long transactions and deal with each case accordingly. The user name appears in the Session column:


Collisions between application and user activity

The AimBetter system allows you to see all users running long queries or locking elements, and all applications competing for the same resources. You can identify them using the Session and Program columns on the screen.

Summary

With the AimBetter system you can monitor locks and long queries in real time, know about such events that occurred in the past, and use this information to optimize your systems and improve performance – significantly reducing the frequency of locks and shortening the duration of long-running queries.


How to check free disk space in the system and its importance

Having enough free disk space on your servers is critical for routine processes such as saving data, backups and other processes running on the SQL server. Consequently, it is important to know how to check whether you have enough free disk space on your server or whether more is needed.

The most common way to check whether there is enough free disk space on the server is through SQL Server Management Studio, and it works as follows:
The extended stored procedure sys.xp_fixeddrives (SQL Server 2005 and up) reports the free disk space per drive. With this information you still need to know whether your free disk space is greater than the minimum required for a given process. You can figure this out using the following stored procedure:

CREATE PROCEDURE dbo.spExec_SufficientDiskSpace @MinMBFree int, @Drive char(1) AS
/*
----------------------------------------------------------------------------
-- Object Name: dbo.spExec_SufficientDiskSpace
-- Dependent Objects: master.sys.xp_fixeddrives
-- Called By: Admin Scripts
--------------------------------------------------------------------------------------
*/
SET NOCOUNT ON
-- 1 - Declare variables
DECLARE @MBfree int

-- 2 - Initialize variables
SET @MBfree = 0

-- 3 - Create temp tables
CREATE TABLE #tbl_xp_fixeddrives
(Drive varchar(2) NOT NULL,
[MB free] int NOT NULL)

-- 4 - Populate #tbl_xp_fixeddrives
INSERT INTO #tbl_xp_fixeddrives(Drive, [MB free])
EXEC master.sys.xp_fixeddrives

-- 5 - Initialize the @MBfree value
SELECT @MBfree = [MB free]
FROM #tbl_xp_fixeddrives
WHERE Drive = @Drive

-- 6 - Determine if sufficient free space is available
IF @MBfree > @MinMBFree
BEGIN
RETURN
END
ELSE
BEGIN
RAISERROR ('*** ERROR *** - Insufficient disk space.', 16, 1)
END

-- 7 - DROP TABLE #tbl_xp_fixeddrives
DROP TABLE #tbl_xp_fixeddrives

SET NOCOUNT OFF
GO

Using this method of checking, you would have to add, to every process that requires disk space, an EXEC command that runs the stored procedure with the relevant parameter values: @MinMBFree would contain the minimal free disk space needed for the process and @Drive would contain the letter of the drive used by the process. If you would like to receive an email notification when there is not enough free disk space, you would also have to embed the SQL code in HTML in order to enable emailing.
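For example, a call like the following (the drive letter and the 10 GB threshold are illustrative) could be placed at the start of a disk-consuming job step, so that the step fails with the error above when there is not enough free space:

-- Fail if drive E has less than 10 GB (10240 MB) free.
EXEC dbo.spExec_SufficientDiskSpace @MinMBFree = 10240, @Drive = 'E';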

In contrast to this traditional, time-consuming approach to checking available disk space, the AimBetter system monitors the free disk space of all the disks on all the defined servers and alerts the AimBetter team about any decrease of free disk space below the threshold we’ve set, whether at the system level, the server level or the specific disk level. With AimBetter you can also compare free-disk-space trends on a daily, weekly or monthly basis, so you know whether the pattern is routine or exceptional and can respond accordingly.


The AimBetter system samples every predefined server every minute so that whenever you are logged into the system you can view the free disk space of any disk in any server monitored by it with no need to manually run any scripts whatsoever.


In addition to telling you how much disk space is used on any monitored disk, AimBetter also tells you how busy each disk is as a percentage (in other words, what share of the disk’s time is currently taken by READ/WRITE operations).
Lastly, the AimBetter system can also send email alerts according to your settings, so that you know immediately about any breach of your thresholds without having to manually check your free disk space before any disk-space-consuming process takes place. The system can also alert you about exceptional decreases in free disk space, as defined by your settings.

If free disk space is a critical factor or problem for your company, AimBetter can provide a convenient, comprehensive solution by monitoring your disk space, highlighting usage trends and alerting you about any decrease below your thresholds and/or exceptional decreases as indicated by your settings.