Overall average disk-read bandwidth is 344 MB/second over the entire Throughput test.
The performance test of TPC-H consists of two tests: the Power test and the Throughput test.
The Throughput test executes multiple concurrent streams of database queries, with each stream executing the queries in sequential order.
To set a throughput baseline for a test case, if the case has been tested in a previous release, we often use its previous result as our baseline.
Figure 4 shows the throughput chart in the stress test report.
For each number of available cores, a step-up test was performed to establish the maximum possible throughput.
Based on the test result from the previous release, we set the throughput baseline at 11,580 transactions per hour and estimated a 10% performance improvement in the new release.
This section contains throughput results when running a number of test cases.
Implement an early, end-to-end technical prototype, test its performance and throughput, and refine capacity estimates.
Figure 3 shows the network stream throughput and CPU utilization for the bidirectional scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 6 shows the network stream throughput and CPU utilization for the bidirectional scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 12 shows the network stream throughput and CPU utilization for the test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
The throughput shown is the sum throughput of all utilized Ethernet adapters for each test run; CPU utilization shown is the system average for the duration of each test run.
We can use our test report to identify throughput degradation easily, but finding its root cause requires thorough investigation.
Figure 1 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 4 shows the network stream throughput and CPU utilization for the netserver scalability test runs while utilizing the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 10 shows the network stream throughput and CPU utilization for the netserver scalability test runs while utilizing the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 11 shows the network stream throughput and CPU utilization for the netperf scalability test runs while utilizing the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 5 shows the network stream throughput and CPU utilization for the netperf scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 8 shows the network stream throughput and CPU utilization for the netperf scalability test runs while utilizing the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 7 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 2 shows the network stream throughput and CPU utilization for the netperf scalability test runs while utilizing the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Under some circumstances, such as low network throughput, the Ajax call does not complete within the pause interval, which causes the test cases to fail.
Table 1 shows some CommonStore throughput numbers in a test environment with low-end, rather aged PC servers.
In our experience, the main causes of throughput degradation are code issues, database issues, and test data or method issues.
In the event of a failure, you simply quiesce the test workload and let the DB2 standby assume the load of the failed server without concern for throughput reduction.