Storage system with LUN virtualization

Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output command process

Reexamination Certificate


Details

US Classification: C710S036000
Status: active
Patent Number: 06816917


BACKGROUND OF THE INVENTION
Storage systems are commonly configured into Storage Area Networks (SANs) to facilitate connection of multiple server hosts on a network, enabling shared access to various connected storage resources.
A common protocol for transferring data between a host processor and one or more storage devices is the Small Computer Systems Interface (SCSI) protocol, used, for example, under the UNIX™ and Windows NT™ operating systems. A host bus adapter (HBA) plugs into a slot on the server's internal bus and connects a SCSI cable to the storage device or subsystem, creating a connection between the server and the storage devices. The host bus adapter enables a host or server to function as an initiator that begins transfers of data to and from a target device. Traditionally, the SCSI protocol supports a combination of initiators and targets on a common bus, although configurations are conventionally limited to one initiator due to the lack of shared-device management in available operating systems and the physical limits of cabling.
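A minimal sketch of the initiator/target relationship described above, in Python; the classes, method names, and behavior are illustrative assumptions, not part of the patent:

```python
# Minimal sketch (not from the patent) of the SCSI initiator/target
# relationship. All class and method names are hypothetical.

class Target:
    """A SCSI target such as a tape drive or disk; executes commands."""

    def __init__(self, name: str):
        self.name = name

    def execute(self, command: str, data: bytes | None = None) -> bytes:
        # A real target would decode a CDB and move data; here we
        # simply acknowledge the command.
        print(f"{self.name}: executing {command}")
        return data or b""


class Initiator:
    """A host (via its HBA) that begins transfers to and from a target."""

    def __init__(self, name: str, target: Target):
        self.name = name
        self.target = target

    def write(self, data: bytes) -> None:
        self.target.execute("WRITE", data)

    def read(self) -> bytes:
        return self.target.execute("READ")


# One initiator per bus, as in the traditional configuration.
drive = Target("tape0")
host = Initiator("backup-server", drive)
host.write(b"block 0")
host.read()
```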
Attempts to share target devices such as SCSI tape drives among multiple initiators expose several difficulties in conventional SAN configurations. Typically, configurations with multiple shared targets are designed primarily for inclusion of only a single initiator. Attempts to expand the number of initiators in a configuration by various techniques, such as addition of a bridge or router, fail to solve problems of availability, data integrity, and performance.
Routers handle multiple-initiator operations by queuing commands. Queuing enables processing of each initiator's commands but creates timing problems because each initiator assumes it owns the target. An initiator that fails to receive a response from a target within timing specifications typically responds to the timing violation by initiating error recovery operations. In turn, the error recovery operations may affect another initiator's data transfer. Near-simultaneous data transfer requests by multiple initiators can instigate multiple recovery loops and multiple failed backup/restore operations, resulting in loss of availability of the target resource to a user or subscriber.
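The following hypothetical sketch simulates the failure mode just described: a router serializes two initiators' jobs, the queued initiator exceeds its (invented) timeout, and its recovery reset destroys the in-flight transfer of the other initiator. All timing constants are assumptions:

```python
# Hypothetical sketch of the timing failure: a router queues commands
# from two initiators, the queued initiator times out while waiting,
# and its recovery (a target reset) aborts the other initiator's job.

TIMEOUT = 30  # seconds an initiator waits before starting error recovery

def run_queued_transfers(jobs):
    """jobs: list of (initiator, duration). The router runs them serially."""
    clock = 0
    for i, (initiator, duration) in enumerate(jobs):
        # Every job queued behind the current one waits while it runs.
        if clock > TIMEOUT:
            print(f"{initiator}: timed out after {clock}s -> target reset")
            # The reset tears down the in-progress transfer of the
            # previous initiator, failing its backup as well.
            print(f"{jobs[i - 1][0]}: transfer aborted by reset")
            return
        print(f"{initiator}: transfer starts at t={clock}s, runs {duration}s")
        clock += duration

run_queued_transfers([("hostA", 120), ("hostB", 45)])
```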
Queuing can also cause data integrity errors in a multiple-initiator configuration. An initiator that begins a data transfer command sequence to a tape drive operates on a presumption of specific state information concerning the drive, including media position. These presumptions are invalidated when queuing interleaves commands from different initiators. Command interleaving has the potential to change the drive's state, causing data transfer failure and possibly data corruption on the tape.
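As a rough illustration (again an assumption-laden sketch, not the patent's mechanism), the fragment below shows how one interleaved rewind invalidates another initiator's presumed media position and silently overwrites its data:

```python
# Illustrative sketch of how command interleaving invalidates an
# initiator's presumed media position on a shared tape drive.

class TapeDrive:
    def __init__(self):
        self.position = 0
        self.blocks: dict[int, str] = {}

    def write(self, data: str) -> None:
        self.blocks[self.position] = data   # write at the current head
        self.position += 1

    def rewind(self) -> None:
        self.position = 0                   # changes the shared drive state

drive = TapeDrive()
drive.write("A: block 0")   # initiator A presumes position 1 comes next
drive.rewind()              # initiator B's command, interleaved by the queue
drive.write("A: block 1")   # A unknowingly overwrites block 0
print(drive.blocks)         # {0: 'A: block 1'} -> data corrupted on the tape
```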
Queuing can also degrade performance. Interleaved commands that change drive state introduce delays while the drive returns to the appropriate state. Even interleaved commands that do not change device state, such as inquiry and log sense commands, can impact performance for tape drives that are optimized to operate in a faster mode during an uninterrupted sequence of data transfer commands.
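To make the cost concrete, the sketch below assumes an invented drive model in which any non-data command drops the drive out of its faster mode and each restart costs a fixed repositioning delay; all numbers are hypothetical:

```python
# Invented model: the drive streams at full rate only during an
# uninterrupted run of data transfer commands; every interruption
# costs a fixed repositioning delay before streaming resumes.

STREAM_RATE = 100        # MB/s while streaming uninterrupted
REPOSITION_COST = 2.0    # seconds lost whenever streaming restarts

def transfer_time(commands, mb_per_write=10):
    t, streaming = 0.0, False
    for cmd in commands:
        if cmd == "WRITE":
            if not streaming:
                t += REPOSITION_COST     # restart the streaming pipeline
                streaming = True
            t += mb_per_write / STREAM_RATE
        else:
            streaming = False            # e.g. INQUIRY or LOG SENSE interleaved
    return t

uninterrupted = ["WRITE"] * 10
interleaved = ["WRITE"] * 5 + ["INQUIRY"] + ["WRITE"] * 5
print(transfer_time(uninterrupted))  # 3.0 s
print(transfer_time(interleaved))    # 5.0 s: one interruption, one extra reposition
```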
Difficulties raised by multiple initiators in a Storage Area Network are addressed using various storage device management methods. Conventional methods include access controls, switch zoning, SCSI reserve/release commands from initiators, and inquiry caching. Other methods include custom target reset handling in a router, dual initiator identifiers, requiring homogeneous backup applications, and management processes that manually protect a tape drive from competing resource requests during a backup/restore window. None of these methods is a general solution to the difficulties of using multiple initiators in a SAN. Most of these techniques were designed to solve other problems and merely happen to mitigate multiple-initiator difficulties in some applications. Some methods, for example inquiry caching, address a common cause of availability problems but fail to address other multiple-initiator difficulties. Even combinations of the various techniques are ineffective against the full range of multiple-initiator difficulties and generally solve problems only in special cases.
Management access controls enable enterprises to restrict management service access to a specific set of end points, for example IP addresses, device ports, or switch World Wide Names (WWNs). Access controls are typically implemented in router firmware and restrict access to devices behind the router to specified initiators. Access controls can disable front-panel access to switches and manage device and switch connections. Device Connection Controls (DCCs), such as WWN Access Control Lists (ACLs) or Port ACLs, enable binding of individual device ports to a set of one or more switch ports. Device ports are specified by WWN and typically represent server Host Bus Adapters (HBAs). DCCs secure server-to-fabric connections for normal operations and management functions. A DCC binds a specific WWN to a specific switch port or set of ports to prevent a port in another physical location from assuming the identity of that WWN, controlling shared switch environments by allowing only an authorized set of WWNs to access particular ports in a fabric. Switch Connection Controls (SCCs) restrict fabric connections to a WWN-designated set of switches that are mutually authenticated for switch-to-switch connectivity, for example using digital certificates and unique public/private keying.
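A DCC-style check might look like the following sketch, where a table binds each HBA WWN to the switch ports it may log in through; the WWNs, port numbers, and function name are invented for illustration:

```python
# Hypothetical Device Connection Control check: each HBA WWN is bound
# to the switch ports it may log in through. All identifiers invented.

PORT_ACL = {
    "10:00:00:05:1e:aa:bb:01": {0, 1},   # backup server HBA -> ports 0 and 1
    "10:00:00:05:1e:aa:bb:02": {4},      # application server HBA -> port 4 only
}

def allow_login(wwn: str, switch_port: int) -> bool:
    """Reject a login if this WWN is not bound to the requesting port,
    preventing a device elsewhere from assuming the WWN's identity."""
    return switch_port in PORT_ACL.get(wwn, set())

print(allow_login("10:00:00:05:1e:aa:bb:01", 1))   # True: bound port
print(allow_login("10:00:00:05:1e:aa:bb:01", 4))   # False: wrong port
print(allow_login("10:00:00:05:1e:ff:ff:ff", 0))   # False: unknown WWN
```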
Access controls are generally useful for limiting tape drive access to backup servers, blocking access by all other servers on the SAN. Access controls fail to address availability, data integrity, and performance issues, however, because a SAN can contain multiple backup servers and thus still have multiple initiators.
Switch zoning, typically implemented in switch firmware, is commonly used to restrict access to a router, and to the library devices connected to it, to initiators at specified switch ports. Switch zoning is a SAN-partitioning technique that narrows traffic through a storage-networking device so that specific ports on a switch or hub can access only other specific ports. Switch zoning applies masking down to the node-port level for nodes accessible through a switch. Logical Unit Numbers (LUNs) attached to a port node can be masked from hosts that do not access that port. Switch zoning cannot mask individual LUNs arranged behind a port; instead, all hosts connected to the same port can access all LUNs addressed through that port. In essence, switch zoning converts the physical topology of a network into a logical representation consisting of multiple separate networks.
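The port-level granularity can be sketched as follows; the zone and LUN layouts are hypothetical, and the point is that any host zoned to the storage port sees every LUN behind it:

```python
# Sketch of switch zoning's port-level granularity: zones control which
# ports may communicate, but every host zoned to a storage port sees
# every LUN behind that port. Zone and LUN layouts are invented.

ZONES = {"backup_zone": {"host1_port", "host2_port", "router_port"}}
LUNS_BEHIND = {"router_port": ["lun0_tape", "lun1_tape", "lun2_changer"]}

def visible_luns(host_port: str, target_port: str):
    # Hosts and targets can communicate only if they share a zone...
    shared = any(host_port in m and target_port in m for m in ZONES.values())
    # ...but zoning stops at the port: it cannot mask individual LUNs.
    return LUNS_BEHIND.get(target_port, []) if shared else []

print(visible_luns("host1_port", "router_port"))  # all three LUNs
print(visible_luns("host2_port", "router_port"))  # the same three LUNs
print(visible_luns("rogue_port", "router_port"))  # []: not zoned
```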
Fabric switches require any node that attaches to a switch to log in to the switch and register the node's World Wide Name (WWN) with the Simple Name Server (SNS) function of the switch, which assigns a unique address to the WWN. Host drivers can then discover targets through SNS lookup rather than surveying the entire network. The SNS can be zoned by WWN or by port. WWN zoning facilitates dynamic changes to suit conditions. For example, a tape library can be moved to different zones at various times to restrict access during backup. Also, with WWN zoning a node can be moved to a different port address without changing zones.
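A toy model of the fabric login and SNS-based discovery flow, with WWN-based zoning, might look like this; identifiers, addresses, and the zoning filter are assumptions for illustration:

```python
# Toy model of fabric login and SNS discovery with WWN-based zoning.
# All identifiers and addresses are assumptions for illustration.

class SimpleNameServer:
    def __init__(self):
        self.registry: dict[str, int] = {}   # WWN -> assigned fabric address
        self.next_addr = 0x010000

    def login(self, wwn: str) -> int:
        """Register a node's WWN and hand back a unique address."""
        self.registry[wwn] = self.next_addr
        self.next_addr += 1
        return self.registry[wwn]

WWN_ZONES = {"backup": {"wwn:host1", "wwn:library"}}

sns = SimpleNameServer()
sns.login("wwn:host1")
sns.login("wwn:library")

# A host driver discovers targets via SNS lookup instead of scanning
# the whole network, filtered to WWNs that share a zone with it.
targets = [w for w in sns.registry
           if w != "wwn:host1"
           and any({"wwn:host1", w} <= z for z in WWN_ZONES.values())]
print(targets)             # ['wwn:library']

# Because zoning is by WWN, re-logging in at a new port keeps the zone.
sns.login("wwn:library")   # library moved: new address, same zone membership
```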
Unfortunately, switch zoning can be breached by unauthorized use of a WWN. Another difficulty is that switch zoning applies only to initiators and targets that attach to a switch and provides no security beyond the port level of a storage subsystem. Switch zoning cannot mask LUNs from initiators that access the same storage port.
Switch zoning does not address issues of availability, data integrity, and performance because multiple initiators can still be zoned to access the library. Switch zoning may also restrict flexibility of the SAN by limiting the backup servers that can access backup devices.
