Appendix A. Configuring and deploying InfiniBand.
A high-speed communication network, such as InfiniBand.
This configuration can also run across an InfiniBand network.
This appendix describes the high-level steps for deploying InfiniBand.
The HCAs, InfiniBand cables, and InfiniBand switch form a subnet.
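Once everything is cabled, you can confirm the subnet's membership from any attached node with the ibnetdiscover utility from the OFED diagnostics, which walks the fabric and lists every switch and HCA it finds. A minimal sketch; the output described in the comments is typical, not guaranteed:

    # Scan the InfiniBand subnet and list its members (run as root).
    ibnetdiscover
    # The output names each Switch and Ca (channel adapter) entry,
    # its GUID, and the ports that link them to one another.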
InfiniBand is a switched-fabric communication link used for high-speed communication.
Complete the following steps to set up the InfiniBand network interface on each LPAR.
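The exact commands depend on the operating system running in the LPAR. As a minimal sketch, assuming a Linux LPAR with the OFED stack installed and an illustrative address of 192.168.100.11/24 (both are assumptions, not values from this appendix):

    # Load the IPoIB driver if it is not already loaded.
    modprobe ib_ipoib

    # Assign an address to the first InfiniBand interface and bring it up.
    ip addr add 192.168.100.11/24 dev ib0
    ip link set ib0 up

    # Confirm the interface is up and addressed.
    ip addr show ib0

Repeat on each LPAR, giving every interface a unique address in the same subnet.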
Setting up and configuring the cluster requires expertise in UNIX®, InfiniBand, and SAN storage.
To improve performance, the interface nodes are connected to redundant storage nodes over an InfiniBand network.
Assign each LPAR a unique GUID index within each physical computer to distinguish the different LPARs that use InfiniBand.
Deploy the InfiniBand switch with an active subnet manager from the hardware management console, as shown in Listing 10.
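As an illustrative alternative to a switch-embedded subnet manager, a host-based subnet manager can be started on one node with OFED's opensm; this sketch shows that substitute approach, not the procedure from Listing 10:

    # Start a host-based subnet manager as a background daemon.
    opensm -B

    # Confirm that a subnet manager is now active on the fabric.
    sminfo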
You can validate the state of the InfiniBand on each node by running the ibv_devinfo or ibstatus commands as a root user.
您可以作为根用户运行ibv_devinfo或ibstatus命令,来在每个节点上验证InfiniBand的状态。
Optionally, validate the state of the InfiniBand ports on each LPAR by running the ibstat -v command as a user with root privileges.
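For example, assuming the OFED diagnostics are installed (the state strings in the comments are the typical healthy values, not guaranteed output):

    # Query each HCA, its GUIDs, and the state of its ports.
    ibv_devinfo

    # Summarize the link state and rate of each InfiniBand port.
    ibstatus

    # On AIX LPARs, the equivalent verbose query:
    ibstat -v

    # In each case, a healthy port reports a state of ACTIVE
    # (PORT_ACTIVE) and a physical state of LinkUp.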
Select the Medium (50%) capability as the InfiniBand resource allocation for each LPAR; this is the middle setting from Step 1.
RMM also works over InfiniBand and shared memory, and can operate in either multicast (one-to-many) or unicast (one-to-one) mode.
Of course, compared to the 100 to 110 microseconds when using TCP over Gigabit Ethernet, InfiniBand still provides a significant improvement.
These are constructed from multiple standalone systems connected by a high-speed interconnect (such as 10 Gb Ethernet, Fibre Channel, or InfiniBand).
For information on how to configure 10 gigabit Ethernet, refer to Appendix A; to learn more about deploying an InfiniBand network, refer to Appendix B.
Testing has also measured very low latencies: 30 microseconds for 120-byte messages delivered at 10,000 messages per second over InfiniBand, and 61 microseconds over Ethernet.
Complete the following steps from the hardware management console's graphical interface to allocate the InfiniBand resources so that each LPAR is assigned 50%.
This interconnect has significantly higher bandwidth and lower latency than previous interconnects, such as the InfiniBand interconnect that is standard on many other supercomputers.
The Clustrix dual quad-core appliance includes two 1 Gbps Ethernet front-end ports and two 20 Gbps InfiniBand back-end ports, along with 32 GB of RAM and seven 160 GB solid state drives.