TSC-VSP Storage Platform Guide: RAID Controllers, OS Support and iSCSI
Agenda
1 – What is not covered by this TOI
2 – Controller types and functionality
3 – Controller firmware and patches
4 – Operating system connectivity
5 – Disk size and geometry
6 – RAID levels, building and recovery
7 – iSCSI concepts
8 – iSCSI requirements and building
9 – Abbreviations, issues & useful links
Building RAID - LSI1020/1030
1 – Select Controller
2 – Select RAID Properties
3 – Add Primary Disk
4 – Add Members
5 – Exit Configuration
The disk is now resyncing.
Editor's Notes
Slide 34 – Sun SAS/SATA Platforms Now that you understand the technology, let's talk about the implementation of SAS and SATA in the Sun product range. What you will see from this table is that many platforms share a common or similar SAS or SATA controller. This is deliberate and makes driver development and maintenance more manageable. From the left, we have the product marketing name, internal product code, device storage type, controller, RAID levels supported and the drive form factor. DISCLAIMER: Some platforms are still under development, so specifications may have changed since the platform documentation was created. Blade servers, for example, change design very frequently due to market requirements. At the bottom of the slide, the legend details the marketing name and ASIC name for the Nvidia-class MCPs, otherwise known as Media Communications Processors. Media Communications Processors are the central bus generators for Nvidia chipsets. These extended bridge chips generate most platform buses, including the SATA controller. What is important to note with the LSI controllers is the trailing letter, or to be exact, the lack of one. Typically, the PCI Express version of the controller ends with an “e”. The PCI and PCI-X version of the HBA has no letter but is more commonly referred to as the “x” version.
Slide 35 – LSI 1064x The LSI 1064, or 1064x, controller is the most common among x64 and SPARC platforms. The 1064 is a member of the MPT Fusion family of HBAs, first seen at Sun on the v20z and v440 platforms. The v20z had a single-bus LSI 1020 controller and the v440 had a dual-channel LSI 1030 controller. The MPT Fusion range of HBAs fuses an ARM-compliant processor with memory and the physical disk interface. ARM-compliant processors are commonly found in PDAs, cell phones and set-top boxes around the home. There are two variants of the 4-port LSI 1064 that we at Sun use: the 1064 and the 1064e. The 1064 usually sits on a parallel PCI-X bus, but in some implementations we use the HBA on a standard PCI bus. The 1064e is mounted on a serial PCI Express bus. Time to market of the LSI 1064e meant that most products used the 1064 on a PCI-X bus; this was due to a few bugs on the PCI Express version that were resolved in late 2005. Products like the T2000 Ontario had to use a 1064 on a PCI-X card while the PCI Express version was being fixed. Here is an overview of the LSI 1064 specifications.
Slide 36 – LSI 1064e The LSI 1064e is similar to the 1064 but interfaces with the host computer using the serial PCI Express protocol. The 1064e can be connected to 1, 4 or 8 PCI Express lanes from the NVIDIA or NEC PCI Express bridge-chip bus generator. Speeds and the model number of the ARM CPU differ slightly, but the overall performance is similar. The PCI Express implementation is point-to-point, which compensates in throughput for the higher latency of the serial bus link. It is important to update the firmware when new releases become available, both for recognition of new disks and their capacities and for generic bug fixes.
Slide 39 – LSI 1068x and e The LSI 1068 variant of the LSI HBA is basically a 1064 controller with 8 PHYs rather than 4, in the form of 2 transport modules. The core-logic ARM CPU is the same as in the 1064 controller, but the 1068 adds a second 4-port transport module into the mix. Specifications are similar for both the parallel PCI-X bus and serial PCI Express bus versions. As before, the v440 family of platforms is first to implement this new HBA, with the v445 server. raidctl is still used to create and administer RAID arrays, but remember to patch raidctl, as the base operating system's revision of raidctl often does not support the latest LSI ASIC.
Slide 40 – LSI 1078x and e The LSI 1078 HBA is marketed as a ROC design, meaning RAID on Chip. This controller is an option on the v445 platform and adds new support for RAID 5. System administrators using this controller will have to use raidctl with the switch -r 5 to generate a RAID 5 disk array.
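A minimal sketch of creating a RAID 5 volume with raidctl on a 1078-based controller. The disk names here (c0t1d0 and so on) are illustrative placeholders; substitute the targets attached to your controller.

```shell
# Create a RAID 5 (distributed parity) volume across three member disks.
# RAID 5 requires at least three disks; names below are examples only.
raidctl -c -r 5 c0t1d0 c0t2d0 c0t3d0

# List volumes to confirm creation and watch the initial synchronisation.
raidctl -l
```

Note that the newly created volume appears to the OS as a single disk; the member disks disappear from format output once the array is built.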
Slide 37 – Solaris Patches Although the LSI SAS controllers are supported on Solaris 10 and some Solaris 9 platforms, LSI MPT Fusion patches and firmware exist for Solaris 8 as well, because the v440 supports Solaris 8. Solaris 9 versions of the x64 and SPARC patches are available on SunSolve, and an ever-increasing number of new platforms are being qualified for Solaris 9 due to customer pressure driven by slow migration to Solaris 10. The LSI SAS and SATA drivers have now become part of the jumbo kernel update for Solaris 10 x64; SPARC platform patches are expected to follow. The above patch revisions were correct at the time of presentation creation. Patch increments may have changed since.
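A quick sketch of how to check the installed patch and driver state for the MPT Fusion stack on a Solaris host. Patch IDs change with each revision, so the grep pattern is deliberately broad; always verify the current ID on SunSolve.

```shell
# List installed patches and filter for anything mentioning mpt
# (the LSI MPT Fusion driver). Patch numbers vary by release.
showrev -p | grep -i mpt

# Confirm the revision of the mpt driver actually loaded in the kernel.
modinfo | grep -i mpt
```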
Slide 41 – Nvidia NF2050 and 2200 The Nvidia HBA is not actually a single ASIC. The SATA controller is built into the Nforce 2200 Media Communications Processor, which is also responsible for generating PCI, PCIe, USB and legacy buses. The Nforce 2050 companion chip is similar in design to the 2200 but with reduced functionality; it provides another 4 PHYs and a fixed PCI Express lane configuration. RAID 0 and 1 are available with this HBA; however, the RAID levels it provides are more of a software implementation than a hardware RAID solution. When the NVRAID BIOS is enabled, the RAID array looks for and executes a special boot sector which includes detailed disk-member information. This boot block contains the RAID configuration, which is later read by the special Nvidia storage driver. Without the storage driver, RAID will not work correctly on the platform, and the individual disks will be seen rather than the group array.
Slide 42 – Nvidia NF3050/3400 The Nforce 3400 Media Communications Processor and its companion, the Nforce 3050 I/O, are the new chipset found in Opteron AM2 and Socket F platforms. The chips contain an updated set of features and more functionality than the original NF2200 and NF2050 MCPs. RAID 5 is added as an array option for this controller.
Slide 43 – Marvell 88SX6081 The Marvell 88SX6081 controller is a low-cost ASIC built into the Thumper x4500 platform. The x4500 was originally earmarked to use the LSI 1068; however, time to market and the cost of 6 full SAS/SATA ASICs meant that the Marvell SATA-only controller was a better choice for Thumper. This SATA controller does not incorporate any hardware RAID functions; it was selected because Thumper uses ZFS for RAID functions.
Slide 44 – Uli M1575 The Uli M1575 PCI Express bridge is similar in design to the Nvidia Nforce 2200 MCP. This low-cost bridge chip comes with 4 SATA 2 ports and 2 IDE ports, as well as a host of other buses. RAID levels include 0, 1, 0+1 and 5; however, Sun does not use the RAID features of this controller. The Netra CT 900 platform's CP3060 Montoya blade uses this controller as a low-cost alternative to adding a separate LSI 1064 controller, since the Uli already contains SATA logic.
Slide 45 – Solaris SATA Driver Solaris 10 Update 2 included SATA driver 1.3, which replaces the traditional ata driver for connecting to SATA disks. Improvements include correct device-number addressing, improved DMA, support for SATA features such as Native Command Queueing, and support for the full SATA 2 specification.
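As a quick sanity check on a Solaris 10 Update 2 or later host, you can confirm that the new SATA framework, rather than the legacy ata driver, is handling the disks. This is a hedged sketch; module names differ per platform (for example the Nvidia and Marvell HBA drivers sit on top of the common framework).

```shell
# Show loaded kernel modules and filter for the SATA framework module;
# the version string reported here reflects the installed driver revision.
modinfo | grep -i sata
```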
Slide 46 – Solaris “raidctl” Usage As you can see from this slide, the new raidctl command includes support for a new switch, -r, which allows the user to define the RAID level to be created. Depending on platform specifics, -r 0 can be used to create a stripe, -r 1 to create a mirror, and -r 5 to create a distributed-parity array. The firmware of the HBA can be upgraded using the -F switch, followed by the firmware file and location.
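The switches described above can be sketched as follows. Disk names, the controller number and the firmware file path are all illustrative; substitute values from your own system before running anything.

```shell
# Create a stripe (RAID 0) from two example disks.
raidctl -c -r 0 c1t2d0 c1t3d0

# Create a mirror (RAID 1) from two example disks.
raidctl -c -r 1 c1t0d0 c1t1d0

# Flash HBA firmware from a file; the final argument is the
# controller number. Both the path and the number are placeholders.
raidctl -F /var/tmp/fw.img 1
```

Remember that older raidctl revisions lack the -r switch entirely, which is why patching raidctl alongside the controller firmware matters.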
iSCSI - The Internet Small Computer Systems Interface (iSCSI) protocol defines the rules and processes for transmitting and receiving block storage traffic over TCP/IP networks by encapsulating SCSI commands into TCP and transporting them over the network via IP. iSCSI describes:
* A transport protocol for SCSI which operates on top of TCP
* A new mechanism for encapsulating SCSI commands on an IP network
* A protocol for a new generation of data storage systems that natively use TCP/IP
The pure SCSI architecture is based on the client/server model. A client, for example a server or workstation, initiates read or write requests to a target, for example a data storage system. Commands sent by the client and processed by the target are placed into the Command Descriptor Block (CDB). The target executes the command, and completion is indicated by a special signal alert. The main function of iSCSI is the encapsulation and reliable delivery of CDB transactions between initiators and targets over the TCP/IP network, a medium untypical for SCSI and a potentially unreliable one. The diagram depicts a model of the iSCSI protocol layers, showing the encapsulation order of SCSI commands for delivery through a physical carrier. The iSCSI protocol controls the data block transfer and confirms that I/O operations are truly completed; this, in turn, is provided via one or several TCP connections.
Benefits of IP storage
* IP storage leverages the large installed base of Ethernet/TCP-IP networks and enables storage to be accessed over LAN, MAN or WAN environments without needing to alter storage applications.
* It lets IT managers use the existing Ethernet/IP knowledge base and management tools.
* It provides for consolidation of data storage systems: data backup, server clustering, replication, and recovery in emergency conditions.
* To transfer data to storage devices with the iSCSI interface, it is possible to use not only dedicated carriers but also the switches and routers of an existing LAN/WAN, and ordinary network cards on the client side.
* The concept of building a worldwide storage area network fits well with the development of modern IP storage technologies.
* It maximizes storage resources by making them available to more applications.
* Existing storage applications (backup, disaster recovery and mirroring) can be used without modification.
* IP-based storage networks can be managed with existing tools and IT expertise.
How iSCSI works iSCSI defines the rules and processes for transmitting and receiving block storage traffic over TCP/IP networks. At the physical layer, iSCSI supports a Gigabit Ethernet interface, so systems supporting iSCSI interfaces can be directly connected to standard Gigabit Ethernet switches and/or IP routers. The iSCSI protocol sits above the physical and data-link layers and interfaces to the operating system's standard SCSI Access Method command set. iSCSI enables SCSI-3 commands to be encapsulated in TCP/IP packets and delivered reliably over IP networks. iSCSI can be supported over any physical media that supports TCP/IP as a transport, but today's iSCSI implementations run on Gigabit Ethernet. The iSCSI protocol runs on both the host initiator and the receiving target device. iSCSI can run in software over a standard Gigabit Ethernet network interface card (NIC), or can be optimized in hardware for better performance on an iSCSI host bus adapter (HBA). iSCSI also enables access to block-level storage residing on Fibre Channel SANs over an IP network, via iSCSI-to-Fibre Channel gateways such as storage routers and switches. In the diagram, each server, workstation and storage device supports the Ethernet interface and a stack of the iSCSI protocol; IP routers and Ethernet switches are used for network connections.
Limitations of iSCSI
* In IP, packets are not delivered in strict order, and IP is also in charge of data recovery, which takes more resources. In SCSI, as a channel interface, all packets must be delivered one after another without delay, and a breach of order may result in data loss. iSCSI solves this problem to some degree by requiring a longer packet header; the header includes additional information which greatly speeds up packet reassembly.
* There is considerable expenditure of processor power on the client side when a standard network card is used. According to the developers, a software iSCSI implementation can reach Gigabit Ethernet data rates only at a significant, around 100%, CPU load. That is why special network cards are recommended which support mechanisms for offloading TCP stack processing from the CPU.
* Latency issues: although many means have been developed to reduce the influence of parameters which cause delays in IP packet processing, the iSCSI technology is positioned for middle-level systems.
Address and Naming Conventions As iSCSI devices are participants in an IP network, they have individual Network Entities. Such a Network Entity can have one or several iSCSI nodes. An iSCSI node is an identifier of SCSI devices (in a network entity) available through the network. Each iSCSI node has a unique iSCSI name (up to 255 bytes) which is formed according to the rules adopted for Internet nodes, for example iqn.com.ustar.storage.itdepartment.161. Such a name has an easy-to-perceive form and can be processed by the Domain Name System (DNS). An iSCSI name provides correct identification of an iSCSI device irrespective of its physical location, while for the actual data transfer between devices it is more convenient to use a combination of an IP address and a TCP port, which are provided by a Network Portal. Alongside iSCSI names, the protocol supports aliases, which are reflected in administration systems for easier identification and management by system administrators.
Session Management The iSCSI session consists of a Login Phase and a Full Feature Phase, and is completed with a special logout command. The iSCSI Login Phase is analogous to the Fibre Channel Port Login process (PLOGI): it is used to negotiate various parameters between two network entities and confirm the access rights of an initiator. If the iSCSI Login Phase completes successfully, the target confirms the login for the initiator; otherwise the login is not confirmed and the TCP connection is closed. As soon as the login is confirmed, the iSCSI session moves to the Full Feature Phase. If more than one TCP connection has been established, iSCSI requires that each command/response pair go through a single TCP connection. Thus each separate read or write command is carried out without any need to trace requests across different flows; however, different transactions can be delivered through different TCP connections within one session. At the end of a transaction, the initiator sends or receives the last data and the target sends a response confirming that the data was transferred successfully. The iSCSI logout command is used to complete a session; it delivers information on the reasons for its completion and can also indicate which connection should be interrupted in case of a connection error, in order to close troublesome TCP connections.
Error Handling Because of the high probability of delivery errors in some IP networks, especially WANs where iSCSI may operate, the protocol provides a great many measures for handling errors. For error handling and recovery to work correctly, both the initiator and the target must be able to buffer commands until they are confirmed, and each endpoint must be able to selectively recover a lost or damaged PDU within a transaction in order to resume data transfer. The hierarchy of error handling and recovery in iSCSI is as follows:
1. The lowest level: identification of an error and data recovery at the SCSI task level, for example repeated transfer of a lost or damaged PDU.
2. The next level: the TCP connection carrying a SCSI task can itself fail, in which case an attempt is made to recover the connection.
3. Finally, the iSCSI session itself can be damaged. Termination and recovery of a session are usually not required if recovery works correctly at the other levels; otherwise, all TCP connections must be closed, all tasks and incomplete SCSI commands completed, and the session restarted via a repeated login.
Security As iSCSI can be used in networks where data may be accessed illegally, the specification allows for different security methods. Encryption mechanisms such as IPsec, which operate at lower levels, do not require additional negotiation because they are transparent to higher levels, including iSCSI. Various solutions can be used for authentication, for example Kerberos or private key exchange, and an iSNS server can be used as a repository of keys.
Use 'iscsitadm' to set up iSCSI target devices. You will need to provide an equivalently sized ZFS or UFS file system as the backing store for the iSCSI daemon. Then use 'iscsiadm' on the initiator to identify your iSCSI targets; it will discover and use the iSCSI target device.
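A hedged sketch of the workflow above on Solaris 10, using a ZFS volume as the backing store. The pool name, volume size, target name and IP address are all placeholders for illustration; substitute your own values.

```shell
# --- On the target host ---
# Create a ZFS volume to act as the backing store (names are examples).
zfs create -V 10g tank/iscsivol

# Expose the volume as an iSCSI target and verify it.
iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol mytarget
iscsitadm list target -v

# --- On the initiator host ---
# Point discovery at the target host (example address) and enable
# SendTargets discovery, then rebuild iSCSI device nodes.
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi

# The discovered LUN should now be listed and visible in format(1M).
iscsiadm list target
```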
Slide 47 – Sun Fire x4500 prtdiag Output Solaris 10 Update 2 x64 included a much improved ACPI layer, similar in design to the PICL platform libraries found in the SPARC implementation of the OS. This allows an x64 version of prtdiag to be included. This slide shows an example of the x4500 prtdiag output; as you can see, the 6 Marvell controllers are listed towards the end of the output.