HMC view of LPARs on the 9119-FHA.
They will go to the VIO client LPARs.
The LPARs' shared memory profile settings
The LPARs are assigned "logical" shared memory.
My environment consists of close to 100 AIX LPARs.
Log in to the LPARs and search for runaway processes.
Multiple LPARs on a server connected to the IB fabric.
None of my AIX LPARs had any dedicated physical hardware.
But how do I tell if the LPARs really need all that memory?
In this environment, we would need to create just five LPARs.
Before converting them to shared-memory LPARs, I ran some numbers.
However, LPARs can be used to complement other availability strategies.
These LPARs span the range of environments and application types at UPMC.
Basically, this shifts memory dynamically from idle LPARs to active ones.
The pSeries test machine used was a p570+ configured into multiple LPARs.
Production, test, and development LPARs are all mixed on all Power servers.
Modify the LPARs' properties to map the new virtual adapters to LPAR adapters.
Refer to the output below, from one of the LPARs before and after the upgrade.
LPARs can be moved to different physical servers to help balance workload demands.
Unused memory can be used to build more LPARs or allocated to the LPARs that need it.
Likewise, the VIOS used to serve DS8300 storage to client LPARs will also remain.
If you set the image to read-only, you can present it to several LPARs simultaneously.
The next step was to migrate my existing dedicated-memory LPARs to shared-memory LPARs.
The VIO client LPARs will go to the other VIO server for their disk and network resources.
The working set for both LPARs came to roughly 9.3GB, which would fit nicely into the pool.
Only a little over 160MB of data exists for around 100 LPARs with a sample rate of one hour.
In this step, you configure the DHCP server to provide network boot information to new LPARs.
The install data for each of the LPARs in the cell is included in the same consistency group.
In addition to these workloads, several other, smaller LPARs were running on this system as well.
In this method, disks are assigned to the VIO servers and mapped directly to the VIO client LPARs.