HMC view of LPARs on the 9119-FHA.
They will go to the VIO client LPARs.
The LPARs' shared memory profile settings
The LPARs are assigned "logical" shared memory.
My environment consists of close to 100 AIX LPARs.
Log in to the LPARs and search for runaway processes.
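One quick way to surface runaway processes after logging in is to sort the process list by CPU usage. A minimal sketch, assuming BSD-style `ps` flags (which AIX accepts as well as Linux):

```shell
# List the five processes consuming the most CPU.
# Column 3 of `ps aux` output is %CPU, so sort numerically,
# descending, on that column and keep the top entries.
ps aux | sort -rn -k3 | head -5
```

A persistently high %CPU figure for an unexpected process is the usual signature of a runaway.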
Multiple LPARs on a server connected to the IB fabric.
These were the rootvg disks that belonged to my client LPARs.
Configured on the blade were a VIOS (IVM) and two AIX 6.1 LPARs.
Before converting them to shared-memory LPARs, I ran some numbers.
However, LPARs can be used to complement other availability strategies.
These LPARs span the range of environments and application types at UPMC.
Basically, this shifts memory dynamically from idle LPARs to active ones.
The pSeries test machine used was a p570+ configured into multiple LPARs.
Modify the LPARs' properties to map the new virtual adapters to LPAR adapters.
Refer to the output below, from one of the LPARs before and after the upgrade.
LPARs can be moved to different physical servers to help balance workload demands.
Unused memory can be used to build more LPARs or allocated to those that need it.
Likewise, the VIOS used to serve DS8300 storage to client LPARs will also remain.
Using the previous steps in this article, your new LPARs load the Linux installer.
The next step was to migrate my existing dedicated-memory LPARs to shared-memory LPARs.
The VIO client LPARs will go to the other VIO server for their disk and network resources.
The working set for both LPARs came to roughly 9.3GB, which would fit nicely into the pool.
Only a little over 160MB of data exists for around 100 LPARs with a sample rate of one hour.
In this step, you configure the DHCP server to provide network boot information to new LPARs.
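The DHCP side of that step might look like the following ISC `dhcpd.conf` sketch. All addresses, the host name, and the MAC are hypothetical, and the boot loader file depends on your installer; this only illustrates the shape of a network-boot entry for a new LPAR.

```
# Hypothetical subnet serving network boot information to new LPARs.
subnet 192.168.10.0 netmask 255.255.255.0 {
  next-server 192.168.10.5;                 # boot/install server (assumed address)
  host new-lpar1 {                          # one entry per new LPAR
    hardware ethernet 00:09:6b:aa:bb:cc;    # the LPAR's virtual Ethernet MAC
    fixed-address 192.168.10.21;
    filename "yaboot";                      # network boot loader (installer-dependent)
  }
}
```

With an entry like this in place, the LPAR's firmware broadcast is answered with an address, the boot server, and the file to fetch.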
The install data for each of the LPARs in the cell is included in the same consistency group.
You perform some workload analysis and find that not all of the LPARs are used at the same time.
In addition to these workloads, several other, smaller LPARs were running on this system as well.
In this method, disks are assigned to the VIO servers and mapped directly to the VIO client LPARs.
Memory savings can be used to add more LPARs or additional workloads to the system to do more work.