Additional Features

Additional Features refer to optional testing that Server Certified Devices and Systems can attain. The Additional Features designation replaces the "Additional Qualifications" displayed for previous versions of the Windows Server operating system. A server device or system can achieve an Additional Features designation only if it has earned Certified for Windows Server 2012 (or a later operating system version) status. There are many Additional Features available for server devices, and several available for the various Windows Server operating system versions:

Devices

Graphics

Display Only This driver type enables IHVs to write a WDDM-based kernel-mode driver that is capable of driving display-only devices. The OS handles the 2D or 3D rendering using a software-simulated GPU. The device can function as the primary graphics boot device. An example is the Microsoft-provided Basic Display Driver.
Render Only This driver type enables IHVs to write a WDDM driver that supports only rendering functionality. Render-only graphics drivers cannot be the primary graphics boot device and do not display to the local monitor. An example would be a GPU and driver used for vector processing and complex mathematical operations.
Full Feature This is the full version of the WDDM graphics driver that supports hardware-accelerated 2D and 3D operations. This driver is fully capable of handling all render, display, and video functions, and the device can function as the primary graphics boot device.

Networking

LAN Cards
Data Center Bridging (DCB) Data Center Bridging (DCB) is a suite of Institute of Electrical and Electronics Engineers (IEEE) standards that enable Converged Fabrics in the data center, where storage, data networking, cluster IPC, and management traffic all share the same Ethernet network infrastructure. DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE). Priority-based flow control is essential if the upper-layer protocol, such as Fibre Channel, assumes a lossless underlying transport.
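To make the bandwidth-allocation idea concrete, here is a minimal Python sketch that reserves a fixed share of a converged link per traffic class; the class names and percentages are hypothetical, and real DCB allocation (Enhanced Transmission Selection, IEEE 802.1Qaz) is enforced in NIC and switch hardware rather than in software like this:

    # Toy model of DCB-style bandwidth allocation across converged traffic types.
    LINK_CAPACITY_GBPS = 10.0

    traffic_classes = {
        "storage_fcoe": 0.40,   # lossless class, paired with priority-based flow control
        "cluster_ipc":  0.20,
        "lan_data":     0.30,
        "management":   0.10,
    }

    def allocate(offered_load_gbps):
        """Give each class its guaranteed share; redistribute unused bandwidth."""
        alloc = {}
        spare = 0.0
        for cls, share in traffic_classes.items():
            guaranteed = share * LINK_CAPACITY_GBPS
            used = min(offered_load_gbps.get(cls, 0.0), guaranteed)
            alloc[cls] = used
            spare += guaranteed - used
        # Work-conserving: classes with excess demand may borrow unused bandwidth.
        for cls in traffic_classes:
            excess = offered_load_gbps.get(cls, 0.0) - alloc[cls]
            if excess > 0 and spare > 0:
                borrow = min(excess, spare)
                alloc[cls] += borrow
                spare -= borrow
        return alloc

    print(allocate({"storage_fcoe": 6.0, "lan_data": 2.0, "management": 0.2}))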
FibreChannel Interface ANSI developed the FC Standard in 1988 as a practical and expandable method of using fiber optic cabling to transfer data among desktop computers, workstations, mainframes, supercomputers, storage devices, and display devices. ANSI later changed the standard to support copper cabling; today, some kinds of FC use two-pair copper wire to connect the outer four pins of a nine-pin type connector.
FibreChannel-over-Ethernet Interface Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use Ethernet networks while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009. FCoE maps Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE is meant to integrate with existing Fibre Channel networks and management software. FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer and will not work across routed IP networks. Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism (to reduce frame loss from congestion); the IEEE developed these enhancements in its Data Center Bridging Task Group.
Internet Protocol Security (IPSec) Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP) communications by authenticating and encrypting each IP packet of a communication session. IPsec includes protocols for establishing mutual authentication between agents at the beginning of the session and negotiation of cryptographic keys to be used during the session. IPsec can be used in protecting data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host). IPsec is an end-to-end security scheme operating in the Internet Layer of the Internet Protocol Suite, while some other Internet security systems in widespread use, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS) and Secure Shell (SSH), operate in the upper layers of the TCP/IP model. Hence, IPsec protects any application traffic across an IP network. Applications do not need to be specifically designed to use IPsec.
iSCSI Interface iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure. iSCSI was submitted as a draft standard in March 2000.
Kernel Mode Remote Direct Memory Access (kRDMA) Kernel RDMA uses the Network Direct Kernel Provider Interface (NDKPI), which is an extension to NDIS that allows IHVs to provide kernel-mode Remote Direct Memory Access (RDMA) support in a network adapter. To expose the adapter's RDMA functionality, the IHV must implement the NDKPI interface as defined in the NDKPI Reference. A NIC vendor implements RDMA as a combination of software, firmware, and hardware. The hardware and firmware portion is a network adapter that provides NDK/RDMA functionality. This type of adapter is also called an RDMA-enabled NIC (RNIC). The software portion is an NDK-capable miniport driver, which implements the NDKPI interface. NDK providers must support Network Direct connectivity via both IPv4 and IPv6 addresses assigned to NDK-capable miniport adapters.
Receive Segment Coalescing (RSC) RSC is a stateless offload technology that helps reduce CPU utilization for network processing on the receive side by offloading tasks from the CPU to an RSC-capable network adapter. CPU saturation due to networking-related processing can limit server scalability, which in turn reduces the transaction rate, raw throughput, and efficiency. RSC enables an RSC-capable network interface card to parse multiple TCP/IP packets and strip the headers from the packets while preserving the payload of each packet, join the payloads of the multiple packets into one packet, and send that single packet, which contains the payload of multiple packets, to the network stack for subsequent delivery to applications. The network interface card performs these tasks based on rules that are defined by the network stack, subject to the hardware capabilities of the specific network adapter. This ability to receive multiple TCP segments as one large segment significantly reduces the per-packet processing overhead of the network stack. As a result, RSC significantly improves the receive-side performance of the operating system (by reducing CPU overhead) under network-I/O-intensive workloads.
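As an illustration of the coalescing step, the following Python sketch merges contiguous, in-order TCP segments of a single flow into one large segment; it is a software toy model only, since real RSC is performed by the adapter under rules supplied by the network stack:

    # Minimal illustration of receive segment coalescing for one TCP flow.
    def coalesce(segments):
        """segments: list of (sequence_number, payload bytes) for a single flow."""
        if not segments:
            return []
        segments = sorted(segments)
        merged = [list(segments[0])]
        for seq, payload in segments[1:]:
            last_seq, last_payload = merged[-1]
            if seq == last_seq + len(last_payload):       # strictly contiguous
                merged[-1][1] = last_payload + payload    # strip header, join payloads
            else:
                merged.append([seq, payload])             # gap: start a new segment
        return [(seq, payload) for seq, payload in merged]

    packets = [(1000, b"A" * 1460), (2460, b"B" * 1460), (3920, b"C" * 1000)]
    print([(seq, len(p)) for seq, p in coalesce(packets)])   # one 3920-byte segment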
Receive Side Scaling (RSS) Receive side scaling (RSS) is a network driver technology that enables the efficient distribution of network receive processing across multiple physical cores (not hyper-threaded logical processors) in multiprocessor systems. To process received data efficiently, a miniport driver's receive interrupt service function schedules a deferred procedure call (DPC). Without RSS, a typical DPC indicates all received data within the DPC call, so all of the receive processing that is associated with the interrupt runs on the CPU where the receive interrupt occurs. With RSS, the NIC and miniport driver provide the ability to schedule receive DPCs on other processors. The RSS design also ensures that the processing associated with a given connection stays on an assigned CPU.
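A minimal sketch of the distribution idea: a hash of the connection 4-tuple indexes an indirection table that selects the CPU for that flow. Python's built-in hash() stands in for the Toeplitz hash that real adapters use, and the table contents are illustrative only:

    # Sketch of RSS flow-to-CPU steering via a hash and an indirection table.
    NUM_CPUS = 4
    INDIRECTION_TABLE = [i % NUM_CPUS for i in range(128)]   # populated by the OS

    def rss_cpu(src_ip, src_port, dst_ip, dst_port):
        """Pick the processor for this flow; the same flow always maps to one CPU."""
        flow_hash = hash((src_ip, src_port, dst_ip, dst_port)) & 0x7FFFFFFF
        return INDIRECTION_TABLE[flow_hash % len(INDIRECTION_TABLE)]

    # All packets of one connection land on the same CPU (preserving ordering),
    # while different connections are spread across the available cores.
    print(rss_cpu("10.0.0.5", 50432, "10.0.0.9", 443))
    print(rss_cpu("10.0.0.7", 51200, "10.0.0.9", 443))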
Single Root I/O Virtualization (SR-IOV) The single root I/O virtualization (SR-IOV) interface is an extension to the PCI Express (PCIe) specification. SR-IOV allows a device, such as a network adapter, to separate access to its resources among various PCIe hardware functions. These functions consist of the following types:
A PCIe Physical Function (PF). This function is the primary function of the device and advertises the device's SR-IOV capabilities. The PF is associated with the Hyper-V parent partition in a virtualized environment.
One or more PCIe Virtual Functions (VFs). Each VF is associated with the device's PF. A VF shares one or more physical resources of the device, such as memory and a network port, with the PF and other VFs on the device. Each VF is associated with a Hyper-V child partition in a virtualized environment. Each PF and VF is assigned a unique PCI Express Requester ID (RID) that allows an I/O memory management unit (IOMMU) to differentiate between different traffic streams and apply memory and interrupt translations between the PF and VFs. This allows traffic streams to be delivered directly to the appropriate Hyper-V parent or child partition. As a result, non-privileged data traffic flows from the PF to a VF without affecting other VFs. SR-IOV enables network traffic to bypass the software switch layer of the Hyper-V virtualization stack. Because the VF is assigned to a child partition, the network traffic flows directly between the VF and the child partition. As a result, the I/O overhead in the software emulation layer is reduced, and network performance is nearly the same as in non-virtualized environments.
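The following sketch illustrates, purely conceptually, how a per-RID table lets DMA from each function be steered to the right partition; the requester IDs, partition names, and functions below are invented for illustration, and the real mapping is programmed by the virtualization stack and enforced by the IOMMU:

    # Conceptual model of Requester ID-based steering for SR-IOV traffic.
    rid_to_partition = {
        "02:00.0": "parent",    # Physical Function -> Hyper-V parent partition
        "02:00.1": "vm-web01",  # Virtual Function 1 -> child partition
        "02:00.2": "vm-sql01",  # Virtual Function 2 -> child partition
    }

    def route_dma(requester_id, guest_physical_address):
        """Apply the per-RID translation so DMA lands in the right partition."""
        partition = rid_to_partition.get(requester_id)
        if partition is None:
            raise PermissionError(f"RID {requester_id} has no IOMMU mapping")
        # A real IOMMU would also translate guest-physical to host-physical here.
        return partition, guest_physical_address

    print(route_dma("02:00.1", 0x1000))   # delivered directly to vm-web01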
Virtual Machine Queue (VMQ) Virtual machine queue (VMQ) is a feature available on computers that have the Hyper-V server role installed and VMQ-capable network hardware. VMQ uses hardware packet filtering to deliver packet data from an external virtual machine network directly to virtual machines, which reduces the overhead of routing packets and copying them to the virtual machine. When VMQ is enabled, a dedicated queue is established on the physical network adapter for each virtual network adapter that has requested a queue. As packets arrive for a virtual network adapter, the physical network adapter places them in that network adapter's queue. When packets are indicated up, all the packet data in the queue is delivered directly to the virtual network adapter. Packets arriving for virtual network adapters that don't have a dedicated queue, as well as all multicast and broadcast packets, are delivered to the virtual network in the default queue. The virtual network handles routing of these packets to the appropriate virtual network adapters as it normally would.
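A toy Python model of the queue-selection behavior described above, with made-up MAC addresses and queue numbers; real VMQ filtering happens in the physical adapter, not in software:

    # Toy model of VMQ packet steering by destination MAC address.
    from collections import defaultdict

    DEFAULT_QUEUE = 0
    mac_to_queue = {
        "00:15:5d:01:02:03": 1,   # vNIC of VM A
        "00:15:5d:01:02:04": 2,   # vNIC of VM B
    }
    queues = defaultdict(list)

    def receive(dest_mac, frame, is_multicast=False):
        if is_multicast or dest_mac not in mac_to_queue:
            queues[DEFAULT_QUEUE].append(frame)              # virtual switch routes these
        else:
            queues[mac_to_queue[dest_mac]].append(frame)     # delivered straight to the vNIC

    receive("00:15:5d:01:02:03", b"unicast to VM A")
    receive("ff:ff:ff:ff:ff:ff", b"broadcast", is_multicast=True)
    print({queue: len(frames) for queue, frames in queues.items()})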

Storage

Disk Drives
ATA Interface Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection of storage devices such as hard disks, floppy drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. Parallel ATA cables have a maximum allowable length of only 18 in (457 mm), and only two devices are supported per cable. In addition, PATA does not support hot removal or addition of drives. Because of these limitations, Parallel ATA has largely been replaced by Serial ATA (SATA) in newer systems.
Boot 2.2TB+ Disk Volume Controllers supporting a boot device with a capacity greater than 2.2 terabytes must comply with the following requirements (a short arithmetic sketch of the 2.2 TB boundary appears after this list):
Small Computer System Interface (SCSI) and SCSI-compatible storage controllers must comply with section 14, "SCSI Driver Model", of UEFI specification version 2.3.1.
The Internet Small Computer System Interface (iSCSI) boot initiator must comply with section 15, "iSCSI Boot", of UEFI specification version 2.3.
The storage controller must support the T10 SBC-3 READ CAPACITY (16) command in the UEFI device driver and the Windows device driver. If an Advanced Technology Attachment (ATA) or ATA with Packet Interface (ATAPI) storage controller or disk drive is used, the controller firmware or driver must implement SCSI ATA Translation according to the T10 SAT-3 specification.
The storage controller must report the exact size of the boot disk drive in the EFI shell and in the Windows operating system.
Boot Device Option ROMs in host controllers and adapters for any interface type, including RAID controllers, that provide boot support must fully support extended Int13h functions (functions 4xh) as defined in BIOS Enhanced Disk Drive Services - 3 [T13-D1572], Revision 3 or later. Logical block addressing is the only addressing mechanism supported.
It is recommended that controllers also support booting using the Extensible Firmware Interface (EFI) and implement device paths as defined in EDD-3.
SD/eMMC/NAND flash controllers do not have an Option ROM, so EFI support is required.
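The 2.2 TB threshold in these requirements corresponds to the ceiling of a 32-bit logical block address with 512-byte sectors (the limit of MBR partitioning and the READ CAPACITY (10) command), which is why larger boot volumes need READ CAPACITY (16), a 64-bit LBA, and UEFI/GPT support. A quick arithmetic check in Python:

    # Why the 2.2 TB boundary exists for legacy 32-bit LBA addressing.
    SECTOR = 512
    max_bytes_32bit_lba = (2 ** 32) * SECTOR
    print(max_bytes_32bit_lba)              # 2,199,023,255,552 bytes
    print(max_bytes_32bit_lba / 10 ** 12)   # ~2.199 TB (decimal terabytes)

    max_bytes_64bit_lba = (2 ** 64) * SECTOR
    print(max_bytes_64bit_lba / 10 ** 21)   # ~9.4 zettabytes with 64-bit LBAs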
FibreChannel Interface ANSI developed the FC Standard in 1988 as a practical and expandable method of using fiber optic cabling to transfer data among desktop computers, workstations, mainframes, supercomputers, storage devices, and display devices. ANSI later changed the standard to support copper cabling; today, some kinds of FC use two-pair copper wire to connect the outer four pins of a nine-pin type connector.
iSCSI Interface iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure. iSCSI was submitted as a draft standard in March 2000.
Multi-Path I/O The Microsoft MPIO architecture supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS) SAN connectivity by establishing multiple sessions or connections to the storage array. Multipathing solutions use redundant physical path components (adapters, cables, and switches) to create logical paths between the server and the storage device. In the event that one or more of these components fails, causing the path to fail, multipathing logic uses an alternate path for I/O so that applications can still access their data. Each network interface card (in the iSCSI case) or HBA should be connected by using redundant switch infrastructures to provide continued access to storage in the event of a failure in a storage fabric component.
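The failover behavior can be sketched as follows; the path names, failure condition, and helper functions are invented for illustration, and real policy decisions (failover, round robin, and so on) are made by the Windows MPIO framework and the vendor's device-specific module, not by code like this:

    # Minimal sketch of multipath failover: retry I/O on an alternate path.
    class PathFailed(Exception):
        pass

    paths = ["HBA0->SwitchA->Controller0", "HBA1->SwitchB->Controller1"]

    def send_io(path, request):
        if "SwitchA" in path:                      # simulate a failed fabric component
            raise PathFailed(path)
        return f"completed {request} via {path}"

    def issue_with_failover(request):
        last_error = None
        for path in paths:                         # try each redundant path in turn
            try:
                return send_io(path, request)
            except PathFailed as err:
                last_error = err                   # mark path failed, try the next one
        raise RuntimeError(f"all paths down: {last_error}")

    print(issue_with_failover("READ LBA 2048"))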
Offloaded Data Transfer (ODX) Offloaded data transfer (ODX) is a Microsoft-developed data transfer technology. Instead of using buffered read and buffered write operations, Windows ODX starts the copy operation with an offload read and retrieves a token representing the data from the storage device, then uses an offload write command with the token to request data movement from the source disk to the destination disk. The copy manager of the storage device performs the data movement according to the token. Note that this feature works only on storage devices that implement the SPC-4 and SBC-3 specifications.
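A conceptual sketch of the token-based sequence, with a hypothetical Array class standing in for the storage device's copy manager; the method names and token format are invented for illustration:

    # Sketch of the ODX flow: offload read returns a token, offload write uses it.
    import secrets

    class Array:
        def __init__(self):
            self._tokens = {}

        def offload_read(self, source_lun, lba_range):
            token = secrets.token_hex(16)                # opaque representation of the data
            self._tokens[token] = (source_lun, lba_range)
            return token

        def offload_write(self, token, dest_lun, dest_lba):
            source_lun, lba_range = self._tokens.pop(token)
            # Data movement happens inside the storage device; no host buffers involved.
            return f"copied {lba_range} from {source_lun} to {dest_lun}@{dest_lba}"

    array = Array()
    tok = array.offload_read("LUN1", (0, 1_000_000))
    print(array.offload_write(tok, "LUN2", 0))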
SAS Interface Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer storage devices such as hard drives and tape drives. SAS replaces the older parallel SCSI (Small Computer System Interface) bus technology that first appeared in the mid-1980s. SAS uses the standard SCSI command set. The T10 technical committee of the International Committee for Information Technology Standards (INCITS) develops and maintains the SAS protocol; the SCSI Trade Association (SCSITA) promotes the technology. A typical Serial Attached SCSI system consists of the following basic components:
An Initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses for the same requests from other target devices. Initiators may be provided as an on-board component on the motherboard (as is the case with many server-oriented motherboards) or as an add-on host bus adapter.
A Target: a device containing logical units and target ports that receives device service and task management requests for processing and sends responses for the same requests to initiator devices. A target device could be a hard disk or a disk array system.
A Service Delivery Subsystem: the part of an I/O system that transmits information between an initiator and a target. Typically cables connecting an initiator and target with or without expanders and backplanes constitute a service delivery subsystem.
Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices. Expanders facilitate the connection of multiple SAS End devices to a single initiator port.
The SAS bus operates point-to-point while the SCSI bus is "multi-drop" (electrically parallel), reducing contention. SAS has no termination issues and does not require terminator packs like parallel SCSI. SAS eliminates clock skew. SAS allows up to 65,535 devices through the use of expanders, while parallel SCSI has a limit of 8 or 16 devices on a single channel. SAS allows higher transfer speeds than most parallel SCSI standards. SAS devices feature dual ports, allowing for redundant backplanes and multipath I/O.
SATA Interface Serial ATA (SATA) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA replaces Parallel ATA or PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command-set as legacy ATA devices.
SCSI Interface Small Computer System Interface (SCSI) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices.
Thin Provisioning (TP) Thin provisioning is the act of using virtualization technology to give the appearance of having more physical resources than are actually available, on a just-enough and just-in-time basis. If a system always has enough resources to simultaneously support all of the virtualized resources, then it is not thin provisioned. The efficiency of thin versus thick (fat) provisioning is a function of the use case, not the technology. Thick provisioning is typically more efficient when the amount of resource used is very close to the amount of resource allocated. Thin provisioning is more efficient where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used. Just-in-time allocation is not the same as thin provisioning: most file systems back files just in time but are not thin provisioned. Over-allocation is also not the same as thin provisioning; resources can be over-allocated or oversubscribed without using virtualization technology.
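A toy model of the allocate-on-write behavior, with invented sizes; real arrays allocate in fixed-size extents and reclaim space through standards-based unmap commands rather than a Python dictionary:

    # Toy thin-provisioned volume: advertises a large size, backs blocks on demand.
    class ThinVolume:
        def __init__(self, logical_blocks):
            self.logical_blocks = logical_blocks   # size the host sees
            self.allocated = {}                    # block -> data, allocated on demand

        def write(self, block, data):
            if block >= self.logical_blocks:
                raise IndexError("beyond advertised size")
            self.allocated[block] = data           # physical space is consumed only here

        def physical_usage(self):
            return len(self.allocated)

    vol = ThinVolume(logical_blocks=1_000_000)     # looks like a large volume
    vol.write(0, b"boot")
    vol.write(42, b"data")
    print(vol.logical_blocks, "blocks advertised,", vol.physical_usage(), "blocks actually backed")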
Trim Trim is the ability to reclaim storage that is no longer needed, in compliance with Windows certification requirements and industry standards; it allows Windows to inform a solid-state drive (SSD) which blocks of data are no longer considered in use and can be wiped internally.
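A small, hypothetical sketch of the information Trim conveys: after the file system frees blocks, the freed LBA ranges are passed down to the drive, which may then erase or reuse that flash internally. The Ssd class and block counts are invented for illustration:

    # Sketch of trim: the OS reports freed LBA ranges, the drive drops them.
    class Ssd:
        def __init__(self, blocks):
            self.valid = set(range(blocks))        # blocks the drive must preserve

        def trim(self, lba_ranges):
            for start, count in lba_ranges:
                # These blocks are now free for internal garbage collection.
                self.valid -= set(range(start, start + count))

    ssd = Ssd(blocks=1024)
    # A file occupying LBAs 100-199 and 300-349 is deleted; the OS issues trim.
    ssd.trim([(100, 100), (300, 50)])
    print(len(ssd.valid), "blocks still hold live data")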
Storage Arrays
FibreChannel Interface ANSI developed the FC Standard in 1988 as a practical and expandable method of using fiber optic cabling to transfer data among desktop computers, workstations, mainframes, supercomputers, storage devices, and display devices. ANSI later changed the standard to support copper cabling; today, some kinds of FC use two-pair copper wire to connect the outer four pins of a nine-pin type connector.
Fibre Channel over Ethernet Interface Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use Ethernet networks while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009. FCoE maps Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE is meant to integrate with existing Fibre Channel networks and management software. FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer and will not work across routed IP networks. Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism (to reduce frame loss from congestion); the IEEE developed these enhancements in its Data Center Bridging Task Group.
iSCSI Interface iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure. iSCSI was submitted as a draft standard in March 2000.
Multi-Path I/O The Microsoft MPIO architecture supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS) SAN connectivity by establishing multiple sessions or connections to the storage array. Multipathing solutions use redundant physical path components (adapters, cables, and switches) to create logical paths between the server and the storage device. In the event that one or more of these components fails, causing the path to fail, multipathing logic uses an alternate path for I/O so that applications can still access their data. Each network interface card (in the iSCSI case) or HBA should be connected by using redundant switch infrastructures to provide continued access to storage in the event of a failure in a storage fabric component.
Offloaded Data Transfer (ODX) Offloaded data transfer (ODX) is a Microsoft-developed data transfer technology. Instead of using buffered read and buffered write operations, Windows ODX starts the copy operation with an offload read and retrieves a token representing the data from the storage device, then uses an offload write command with the token to request data movement from the source disk to the destination disk. The copy manager of the storage device performs the data movement according to the token. Note that this feature works only on storage devices that implement the SPC-4 and SBC-3 specifications.
RAID RAID (redundant array of independent disks) is a storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. Windows Server Certification Requirements specify that RAID controllers and RAID arrays must support, at a minimum, one of RAID 1, RAID 5, RAID 6, or RAID 1/0, with RAID levels greater than RAID 0 providing protection against unrecoverable (sector) read errors as well as whole-disk failures.
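As a worked example of the redundancy these levels provide, the XOR parity used by RAID 5 can rebuild a lost member from the surviving members; the three data blocks below are illustrative, and real controllers rotate parity across the drives:

    # RAID 5 parity example: any single lost block is recoverable by XOR.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_blocks([d0, d1, d2])

    # Drive holding d1 fails: reconstruct it from the remaining data and parity.
    rebuilt = xor_blocks([d0, d2, parity])
    assert rebuilt == d1
    print("recovered:", rebuilt)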
SAS Interface Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer storage devices such as hard drives and tape drives. SAS replaces the older parallel SCSI (Small Computer System Interface) bus technology that first appeared in the mid-1980s. SAS uses the standard SCSI command set. The T10 technical committee of the International Committee for Information Technology Standards (INCITS) develops and maintains the SAS protocol; the SCSI Trade Association (SCSITA) promotes the technology. A typical Serial Attached SCSI system consists of the following basic components:
An Initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses for the same requests from other target devices. Initiators may be provided as an on-board component on the motherboard (as is the case with many server-oriented motherboards) or as an add-on host bus adapter.
A Target: a device containing logical units and target ports that receives device service and task management requests for processing and sends responses for the same requests to initiator devices. A target device could be a hard disk or a disk array system.
A Service Delivery Subsystem: the part of an I/O system that transmits information between an initiator and a target. Typically cables connecting an initiator and target with or without expanders and backplanes constitute a service delivery subsystem.
Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices. Expanders facilitate the connection of multiple SAS End devices to a single initiator port.
The SAS bus operates point-to-point while the SCSI bus is "multi-drop" (electrically parallel), reducing contention. SAS has no termination issues and does not require terminator packs like parallel SCSI. SAS eliminates clock skew. SAS allows up to 65,535 devices through the use of expanders, while parallel SCSI has a limit of 8 or 16 devices on a single channel. SAS allows higher transfer speeds than most parallel SCSI standards. SAS devices feature dual ports, allowing for redundant backplanes and multipath I/O.
SATA Interface Serial ATA (SATA) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA replaces Parallel ATA or PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command-set as legacy ATA devices.
SCSI Interface Small Computer System Interface (SCSI) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols and electrical and optical interfaces. SCSI is most commonly used for hard disks and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices.
Thin Provisioning (TP) Thin provisioning is the act of using virtualization technology to give the appearance of having more physical resources than are actually available, on a just-enough and just-in-time basis. If a system always has enough resources to simultaneously support all of the virtualized resources, then it is not thin provisioned. The efficiency of thin versus thick (fat) provisioning is a function of the use case, not the technology. Thick provisioning is typically more efficient when the amount of resource used is very close to the amount of resource allocated. Thin provisioning is more efficient where the amount of resource used is much smaller than allocated, so that the benefit of providing only the resource needed exceeds the cost of the virtualization technology used. Just-in-time allocation is not the same as thin provisioning: most file systems back files just in time but are not thin provisioned. Over-allocation is also not the same as thin provisioning; resources can be over-allocated or oversubscribed without using virtualization technology.
Storage Adapters and Controllers
ATA Interface Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection of storage devices such as hard disks, floppy drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. Parallel ATA cables have a maximum allowable length of only 18 in (457 mm), and only two devices are supported per cable. In addition, PATA does not support hot removal or addition of drives. Because of these limitations, Parallel ATA has largely been replaced by Serial ATA (SATA) in newer systems.
Boot 2.2TB+ Disk Volume Controllers supporting a boot device with a capacity greater than 2.2 terabytes must comply with the following requirements:
Small Computer System Interface (SCSI) and SCSI-compatible storage controllers must comply with section 14, "SCSI Driver Model", of UEFI specification version 2.3.1.
The Internet Small Computer System Interface (iSCSI) boot initiator must comply with section 15, "iSCSI Boot", of UEFI specification version 2.3.
The storage controller must support the T10 SBC-3 READ CAPACITY (16) command in the UEFI device driver and the Windows device driver. If an Advanced Technology Attachment (ATA) or ATA with Packet Interface (ATAPI) storage controller or disk drive is used, the controller firmware or driver must implement SCSI ATA Translation according to the T10 SAT-3 specification.
The storage controller must report the exact size of the boot disk drive in the EFI shell and in the Windows operating system.
Boot Device Option ROMs in host controllers and adapters for any interface type, including RAID controllers, that provide boot support must fully support extended Int13h functions (functions 4xh) as defined in BIOS Enhanced Disk Drive Services - 3 [T13-D1572], Revision 3 or later. Logical block addressing is the only addressing mechanism supported.
It is recommended that controllers also support booting using the Extensible Firmware Interface (EFI) and implement device paths as defined in EDD-3.
SD/eMMC/NAND flash controllers do not have an Option ROM, so EFI support is required.
FibreChannel Interface ANSI developed the FC Standard in 1988 as a practical and expandable method of using fiber optic cabling to transfer data among desktop computers, workstations, mainframes, supercomputers, storage devices, and display devices. ANSI later changed the standard to support copper cabling; today, some kinds of FC use two-pair copper wire to connect the outer four pins of a nine-pin type connector.
FibreChannel-over-Ethernet Interface Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use Ethernet networks while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009. FCoE maps Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE is meant to integrate with existing Fibre Channel networks and management software. FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a consequence, FCoE is not routable at the IP layer and will not work across routed IP networks. Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism (to reduce frame loss from congestion); the IEEE developed these enhancements in its Data Center Bridging Task Group.
iSCSI Interface iSCSI is an acronym for Internet Small Computer System Interface, an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval. The protocol allows clients (called initiators) to send SCSI commands (CDBs) to SCSI storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (such as database and web servers) with the illusion of locally attached disks. Unlike Fibre Channel, which requires special-purpose cabling, iSCSI can be run over long distances using existing network infrastructure. iSCSI was submitted as a draft standard in March 2000.
N-Port ID Virtualization N-Port ID Virtualization, or NPIV, is a Fibre Channel facility that allows multiple N-Port IDs to share a single physical N-Port. N-Port sharing allows multiple Fibre Channel initiators to utilize a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are used. NPIV is defined by the Technical Committee T11 within the INCITS standards body. NPIV allows end users to effectively virtualize the Fibre Channel HBA functionality such that each Virtual Machine (VM) running on a server can share a pool of HBAs, yet have independent access to its own protected storage. This sharing enables administrators to leverage standard SAN management tools and best practices, such as fabric zoning and LUN mapping/masking, and enables the full use of fabric-based quality-of-service and accounting capabilities. It also provides efficient utilization of the HBAs in the server while providing a high level of data protection.
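A conceptual sketch of the sharing model follows, with invented WWPNs and a toy fabric object; FDISC is the Fibre Channel request used to obtain an additional N_Port ID, but the rest of the code is purely illustrative:

    # Toy model of NPIV: one physical HBA port, one N_Port ID per virtual machine.
    import itertools

    class Fabric:
        _next_id = itertools.count(0x010001)

        def fdisc(self, wwpn):
            """Each additional login (FDISC) returns a distinct N_Port ID."""
            return next(self._next_id)

    fabric = Fabric()
    physical_port = "20:00:00:25:b5:00:00:01"   # WWPN of the shared physical HBA port

    vm_wwpns = {
        "vm-sql01": "20:00:00:25:b5:aa:00:01",
        "vm-web01": "20:00:00:25:b5:aa:00:02",
    }

    # Every VM gets its own N_Port ID on the shared port, so zoning and LUN
    # masking can be applied per VM rather than per physical server.
    assignments = {vm: fabric.fdisc(wwpn) for vm, wwpn in vm_wwpns.items()}
    print(f"physical port {physical_port} hosts:",
          {vm: hex(nport_id) for vm, nport_id in assignments.items()})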
RAID Adapter RAID (redundant array of independent disks) is a storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the specific level of redundancy and performance required. Windows Server Certification Requirements specify that RAID controllers and RAID systems must support, at a minimum, one of RAID 1, RAID 5, RAID 6, or RAID 1/0, with RAID levels greater than RAID 0 providing protection against unrecoverable (sector) read errors as well as whole-disk failures.
SAS Interface Serial Attached SCSI (SAS) is a point-to-point serial protocol that moves data to and from computer storage devices such as hard drives and tape drives. SAS replaces the older parallel SCSI (Small Computer System Interface) bus technology that first appeared in the mid-1980s. SAS uses the standard SCSI command set. The T10 technical committee of the International Committee for Information Technology Standards (INCITS) develops and maintains the SAS protocol; the SCSI Trade Association (SCSITA) promotes the technology. A typical Serial Attached SCSI system consists of the following basic components:
An Initiator: a device that originates device-service and task-management requests for processing by a target device and receives responses for the same requests from other target devices. Initiators may be provided as an on-board component on the motherboard (as is the case with many server-oriented motherboards) or as an add-on host bus adapter.
A Target: a device containing logical units and target ports that receives device service and task management requests for processing and sends responses for the same requests to initiator devices. A target device could be a hard disk or a disk array system.
A Service Delivery Subsystem: the part of an I/O system that transmits information between an initiator and a target. Typically cables connecting an initiator and target with or without expanders and backplanes constitute a service delivery subsystem.
Expanders: devices that form part of a service delivery subsystem and facilitate communication between SAS devices. Expanders facilitate the connection of multiple SAS End devices to a single initiator port.
The SAS bus operates point-to-point while the SCSI bus is "multi-drop" (electrically parallel), reducing contention. SAS has no termination issues and does not require terminator packs like parallel SCSI. SAS eliminates clock skew. SAS allows up to 65,535 devices through the use of expanders, while parallel SCSI has a limit of 8 or 16 devices on a single channel. SAS allows higher transfer speeds than most parallel SCSI standards. SAS devices feature dual ports, allowing for redundant backplanes and multipath I/O.
SATA Interface Serial ATA (SATA) is a computer bus interface that connects host bus adapters to mass storage devices such as hard disk drives and optical drives. Serial ATA replaces Parallel ATA or PATA, offering several advantages over the older interface: reduced cable size and cost (seven conductors instead of 40), native hot swapping, faster data transfer through higher signaling rates, and more efficient transfer through an (optional) I/O queuing protocol. SATA host adapters and devices communicate via a high-speed serial cable over two pairs of conductors. To ensure backward compatibility with legacy ATA software and applications, SATA uses the same basic ATA and ATAPI command-set as legacy ATA devices.

Servers

Dynamic Partitioning (DP) Some high-end, highly scalable server systems contain partition units of memory, processors, and I/O which can be grouped together by the server's management console into partitions. Each partition is, in effect, an independent server, and the system is capable of hosting several such partitions, each running an independent operating system. Such servers are referred to as partitionable. Some partitionable servers are dynamically partitionable, which means partition units can be reassigned to different partitions without requiring a system shutdown.

Windows Server 2008 R2 Datacenter and Windows Server 2008 R2 for Itanium-Based Systems support both hot-add of processor, memory, and I/O partition units and hot-replace of such units on supporting hardware. Hot-add allows a partition facing increasing resource demands to be given additional resources. Hot-replace allows supporting systems to swap out partition units in the event of hardware failure while the system stays up and running and continues providing services to users.
Enhanced Power Management The Enhanced Power Management feature identifies servers that support the next-generation power management technology available with Windows Server 2008 R2 and later versions of the Windows Server operating system. It covers the software infrastructure and management interfaces in those operating system versions that help improve the power efficiency of the server platform and enable remote monitoring of power consumption and remote control of the power profile. There are three major requirements for a system to qualify for this Additional Feature:
  1. The server system provides a system power meter and system power budget capability in hardware.
  2. The server system supports the power metering and budgeting interface defined in the ACPI 4.0 specification (Windows Server 2012 only).
  3. The server system enables control of processor performance states by the operating system.
These features in Windows Server 2008 R2 and later versions provide cost savings associated with reducing power consumption on each server. They also help with capacity planning by making power consumption and power budget information available to administrators, which enables more efficient allocation of power and cooling infrastructure in the data center. System Center Operations Manager (SCOM) provides a Management Pack that takes advantage of these features in Windows Server. Any server that earns the Enhanced Power Management Additional Feature has native support for the features in this Management Pack.
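To illustrate the control loop these requirements enable, here is a hypothetical sketch in which measured power is compared against a budget and processor performance states are stepped down when the budget is exceeded; the numbers, state names, and functions are invented, and the real mechanism is the ACPI 4.0 power metering interface plus Windows processor power management, not this code:

    # Hypothetical sketch of budget-driven processor performance-state capping.
    def read_power_meter_watts():
        return 412.0                    # pretend reading from the platform power meter

    POWER_BUDGET_WATTS = 400.0
    p_states = ["P0 (max)", "P1", "P2", "P3 (min)"]
    current = 0

    def enforce_budget():
        global current
        if read_power_meter_watts() > POWER_BUDGET_WATTS and current < len(p_states) - 1:
            current += 1                # step processors down to a slower, lower-power state
        return p_states[current]

    print("running at", enforce_budget())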
Fault Tolerance (FT) Fault-tolerant hardware contains redundant components, such as extra power supplies, fans, and even memory and processors, which can take over in the event of a hardware failure. These features help improve server reliability.