
March 1, 2010

SCSI-3 Persistent Reservations, Part I

Filed under: Linux, Networking, Storage — admin @ 2:50 pm

I’ve recently started working extensively with iSCSI-based SAN storage, having previously worked only with Fibre Channel storage.  One unexpected benefit of iSCSI-based storage is the ease of capturing SCSI traffic for analysis.  Because iSCSI runs over the TCP/IP stack with Ethernet as the physical layer, capturing traffic is as trivial as installing Wireshark.  Capturing Fibre Channel traffic, on the other hand, typically requires a fiber tap and expensive dedicated hardware.

I have had experience with SCSI-3 Persistent Reservations in the past, but while recently troubleshooting a SCSI-3 PR issue, I decided to capture sample SCSI-3 Persistent Reservation traffic with Wireshark for reference analysis.  I built a CentOS 5u3 virtual machine, installed and configured the iscsi-initiator-utils package, and then installed the sg3_utils package.  sg3_utils provides a number of extremely useful tools that let you manipulate devices directly via SCSI commands.  The tool I wanted is called sg_persist, and it is used to send PROUT and PRIN commands (more on those in a moment).
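As a quick sketch of what sg_persist looks like in practice (the device node /dev/sdb here is an assumed name for the iSCSI LUN on my test VM; substitute your own), the read-only PRIN side can be queried like this:

```shell
# PRIN READ KEYS: list the reservation keys registered with the LUN
sg_persist --in --read-keys /dev/sdb

# PRIN READ RESERVATION: show the current reservation holder and type, if any
sg_persist --in --read-reservation /dev/sdb
```

These are read-only queries and are safe to run against a live LUN; on a freshly provisioned device both will typically report zero registered keys and no reservation.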

Before we go any further, a little background is required…

The SCSI protocol standards, maintained by the T10 technical committee of the International Committee for Information Technology Standards (INCITS), are split into a large number of specifications covering the various aspects of SCSI.  The over-arching standard is known as the SCSI Architecture Model, or SAM, now in its fifth generation (SAM-5).  The SAM ties together all of the SCSI standards and provides the requirements that the myriad SCSI specifications and standards must meet.

Of primary concern to us now is the SPC, or SCSI Primary Commands standard, which defines SCSI commands common to all classes of SCSI devices.  A given SCSI device will conform, at a minimum, to the SPC and to a standard specific to its class of device.  For example, a basic disk drive will conform to the SPC and the SBC (SCSI Block Commands) specifications.

Device reservations are handled in one of two ways: the older RESERVE/RELEASE method specified in the SCSI-2 specs, and the newer persistent reservation method of SCSI-3.  Reservations allow a SCSI device (typically a SAN-based storage array) to maintain a list of initiators which can and cannot issue commands to a particular device.  Reservations, whether the older method or PR, are what allow more than one server to access a shared set of storage without stepping on one another.

SCSI-3 Persistent Reservations offer some advantages over the older RESERVE/RELEASE method, primarily that the reservation data is preserved across server reboots, initiator failures, and the like.  The array holds the reservation for the LUN until it is released or preempted.

One important thing to keep in mind is that the function of persistent reservations is to prevent a node from writing to a disk when it does not hold the reservation.  The reservation system will not prevent another node from preempting the existing reservation and then writing to the disk.  It is the responsibility of the server-side application making use of persistent reservations to ensure that cluster nodes act appropriately when dealing with reservations.
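To make the preemption point concrete, here is a hedged sketch of the PROUT side using sg_persist.  The device node /dev/sdb and the keys 0x1 and 0x2 are purely illustrative; prout-type 1 is a Write Exclusive reservation.  Nothing in the protocol stops node B from taking the reservation away from node A:

```shell
# Node A registers its key and takes a Write Exclusive reservation
sg_persist --out --register --param-sark=0x1 /dev/sdb
sg_persist --out --reserve --param-rk=0x1 --prout-type=1 /dev/sdb

# Node B registers its own key, then preempts node A's reservation.
# The array permits this; only the cluster software can decide it is wrong.
sg_persist --out --register --param-sark=0x2 /dev/sdb
sg_persist --out --preempt --param-rk=0x2 --param-sark=0x1 --prout-type=1 /dev/sdb
```

This is exactly why the cluster application, not the array, must arbitrate which node is entitled to preempt.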

In part II we will look at the two SPC commands dealing with persistent reservations, PRIN (Persistent Reserve In) and PROUT (Persistent Reserve Out), and their associated service actions.

March 13, 2008

Why Have More Companies Not Embraced 64-bit?

Filed under: IT Industry, Linux — admin @ 3:47 pm

It seems to me that 64-bit computing is the wave of the future and requires very little effort to adopt, yet from where I stand, fewer companies are going the 64-bit route than I would have thought.

Nearly every processor sold today, if not every one, implements the AMD64 or EM64T (IA-32E) architecture.  Memory prices continue to drop, making it more and more common to see x86 (or technically x86_64) based systems with 32GB, 64GB, or even 96GB of RAM.

Through Physical Address Extension (PAE), 32-bit processors have long been able to address more than 4GB of physical memory, given proper OS support.  The addition of just 4 bits of physical addressing (36 bits in total) allows a 32-bit processor to support up to 64GB of RAM.
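The arithmetic is easy to check with shell arithmetic:

```shell
# 32 bits of physical address: 2^32 bytes
echo "$(( (1 << 32) / (1024 * 1024 * 1024) ))GB"   # prints 4GB

# PAE widens physical addresses to 36 bits: 2^36 bytes
echo "$(( (1 << 36) / (1024 * 1024 * 1024) ))GB"   # prints 64GB
```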

In Red Hat Enterprise Linux (hereafter referred to as RHEL), support for up to 64GB of memory was added in RHEL4 via the hugemem kernel.  The RHEL4 Release Notes state that the hugemem kernel provides a 4GB per-process address space and a 4GB kernel space.  It is also noted, though, that running the hugemem kernel carries a performance impact, as the kernel must move from one address lookup table to another when switching from kernel to user space and vice versa.  The release notes do not quantify this, but I have heard conjecture that the performance impact could be up to 30%.

RHEL5, the latest major release of Red Hat Enterprise Linux, actually removes support for the hugemem kernel.  32-bit RHEL5 will support at most 16GB of memory; see the RHEL Comparison Chart for details.  I cannot find specific references as to why hugemem was removed in RHEL5, but I have heard that the performance impact of hugemem was a hassle to deal with from a support perspective.  (Not to mention the assertion that 32-bit is dead!)

So, if a user is running 32-bit RHEL4 with more than 16GB of memory, the upgrade path to 32-bit RHEL5 is blocked by the 16GB maximum in RHEL5.  One would have to do a fresh 64-bit install of RHEL5 to take advantage of the increased memory.

64-bit, on the other hand, can address a full 2TB of memory.  There is also no longer a distinction between LOWMEM and HIGHMEM in the kernel.  The elimination of the 1GB/3GB split (or the 4GB/4GB translation with hugemem) increases the stability of the kernel when dealing with memory-intensive loads.  I’ve seen many cases where 32-bit systems with 8, 12, or even 16GB of RAM have fallen over because the kernel could no longer assemble contiguous blocks of memory fast enough from its limited 1GB address space, while HIGHMEM sat with many GB of usable memory pages.
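You can watch this split directly on a running system.  On a 32-bit kernel with HIGHMEM enabled, /proc/meminfo breaks memory out into low and high regions, and a starving kernel shows up as LowFree collapsing while HighFree stays large.  On a 64-bit kernel these fields simply do not exist, which is the point:

```shell
# Show the LOWMEM/HIGHMEM accounting if the kernel has the split,
# otherwise note its absence (as on any x86_64 kernel)
grep -E '^(LowTotal|LowFree|HighTotal|HighFree):' /proc/meminfo \
  || echo "no LOWMEM/HIGHMEM split (64-bit kernel)"
```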

In cases like this, a migration to 64-bit has nearly always resolved the issue with kernel memory starvation.  (In a couple of cases there was a runaway app that consumed every available page of memory on the system, so there was no difference between 32- and 64-bit.)

The moral of the story?  Go 64-bit with any new server implementations.  Begin putting plans into place now to migrate legacy 32-bit systems to 64-bit in the near future.  Having a solid, actionable plan will go a long way to ensure a smooth transition.

