The overall average disk-read bandwidth over the entire Throughput test is 344 MB/second.
The Throughput test executes multiple concurrent streams of database queries, with each stream executing the queries in sequential order.
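As a rough illustration of that structure, the following minimal sketch runs several concurrent streams, each executing its queries strictly in order; the `run_query` helper and the query list are hypothetical stand-ins, not the benchmark's actual harness.

```python
import threading

# Hypothetical placeholder for issuing one query against the database.
def run_query(stream_id: int, query: str) -> None:
    print(f"stream {stream_id}: executing {query}")

def query_stream(stream_id: int, queries: list[str]) -> None:
    # Each stream runs its queries strictly in sequential order.
    for q in queries:
        run_query(stream_id, q)

def throughput_test(num_streams: int, queries: list[str]) -> None:
    # All streams execute concurrently against the same database.
    streams = [
        threading.Thread(target=query_stream, args=(i, queries))
        for i in range(num_streams)
    ]
    for t in streams:
        t.start()
    for t in streams:
        t.join()

if __name__ == "__main__":
    throughput_test(num_streams=4, queries=["Q1", "Q2", "Q3"])
```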
During the throughput testing, the 8500e interpreted the massive amount of data sent by the Ixia over its six ports as a DoS attack and immediately blocked all traffic.
I mentioned earlier how throughput and page rate provide a consistent measurement for many of the performance tests that you will run.
To set a throughput baseline for a test case that has been tested in a previous release, we often use its previous result as our baseline.
Figure 4 shows the throughput chart in the stress test report.
Tests indicate that this is not the case; the same throughput can still be achieved with this configuration.
For each number of available cores, a step-up test was performed to establish the maximum possible throughput.
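A step-up test of that kind can be organized along the following lines; the load levels, the 5% flattening threshold, and the `measure_throughput` function below are illustrative assumptions, not the procedure used for the published results.

```python
# Hypothetical load generator: returns measured throughput (e.g. requests/sec)
# for a given number of concurrent clients. Replace with a real driver.
def measure_throughput(clients: int) -> float:
    return min(clients * 95.0, 1500.0)  # stand-in response curve

def step_up_max_throughput(max_clients: int = 64, min_gain: float = 0.05) -> float:
    """Increase the offered load step by step until throughput stops improving."""
    best = 0.0
    clients = 1
    while clients <= max_clients:
        observed = measure_throughput(clients)
        if best and (observed - best) / best < min_gain:
            break  # gains have flattened out; treat the current best as the maximum
        best = max(best, observed)
        clients *= 2
    return best

if __name__ == "__main__":
    print(f"maximum sustainable throughput approx. {step_up_max_throughput():.0f} requests/sec")
```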
Based on the test results from the previous release, we set the throughput baseline at 11,580 transactions per hour and estimated a 10% performance improvement in the new release.
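At that estimate, the new release would be expected to sustain about 11,580 × 1.10 = 12,738 transactions per hour.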
This section contains the throughput results obtained when running a number of test cases.
Implement an early, end-to-end technical prototype, test its performance and throughput, and refine capacity estimates.
Figure 3 shows the network stream throughput and CPU utilization for the bidirectional scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 6 shows the network stream throughput and CPU utilization for the bidirectional scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 12 shows the network stream throughput and CPU utilization for the test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
In the event of a failure, you simply quiesce the test workload and let the DB2 standby assume the load of the failed server, without having to worry about throughput reduction.
The full test run measured network stream throughput and system CPU utilization for 15 different send message sizes, running for 3 minutes per message size.
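A sweep of that shape could be scripted roughly as follows, assuming a netperf/netserver pair is already running; the host address, the doubling size ladder, and the exact option set are assumptions for illustration rather than the configuration used in the report.

```python
import subprocess

REMOTE_HOST = "192.0.2.10"      # hypothetical netserver address
DURATION_SECONDS = 180          # 3 minutes per message size
# 15 send message sizes: 64 bytes up to 1 MB, doubling at each step.
MESSAGE_SIZES = [64 * 2**i for i in range(15)]

for size in MESSAGE_SIZES:
    # Global options: -H remote host, -l test length, -t test type,
    # -c/-C request local/remote CPU utilization reporting.
    # Options after "--" are test-specific; -m sets the send message size.
    cmd = [
        "netperf", "-H", REMOTE_HOST, "-l", str(DURATION_SECONDS),
        "-t", "TCP_STREAM", "-c", "-C", "--", "-m", str(size),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"send size {size} bytes:\n{result.stdout}")
```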
Gradual throughput degradation is a systemic problem you may encounter in SVT testing.
Performance testing: throughput or response time.
You can see that the I/O rate (the yellow line in Figure 3) remains consistently high throughout the test.
We can use our test report to identify throughput degradation easily, but finding its root cause requires thorough investigation.
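One simple way to surface gradual degradation from periodic throughput samples is to fit a trend line and flag a sustained negative slope; the sketch below uses made-up sample values and is only illustrative of that idea.

```python
import numpy as np

def degradation_slope(samples_per_interval: list[float]) -> float:
    """Fit a straight line to per-interval throughput and return its slope."""
    x = np.arange(len(samples_per_interval))
    slope, _intercept = np.polyfit(x, samples_per_interval, 1)
    return float(slope)

# Hypothetical per-interval throughput samples (transactions per hour).
samples = [11500, 11420, 11390, 11280, 11150, 11010, 10890, 10760]
slope = degradation_slope(samples)
if slope < 0:
    print(f"throughput is trending down by about {-slope:.0f} transactions/hour per interval")
```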
Figure 1 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 4 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 10 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 11 shows the network stream throughput and CPU utilization for the netperf scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 5 shows the network stream throughput and CPU utilization for the netperf scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 8 shows the network stream throughput and CPU utilization for the netperf scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 7 shows the network stream throughput and CPU utilization for the netserver scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Figure 2 shows the network stream throughput and CPU utilization for the netperf scalability test runs while using the system board Ethernet adapters on 1, 2, and 4 nodes of the SUT.
Under some circumstances, such as slow network throughput, the Ajax call may not complete within the specified pause, which causes the test case to fail.
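If the harness happens to be Selenium-based, a common mitigation is to replace the fixed pause with an explicit wait that polls for the Ajax result up to a timeout; the element id and URL below are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("https://example.com/page-under-test")  # placeholder URL

# Instead of sleeping for a fixed time, poll until the element populated by
# the Ajax call appears, failing only after 30 seconds. "ajax-result" is hypothetical.
element = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.ID, "ajax-result"))
)
print(element.text)
driver.quit()
```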