All modern CPUs use local memory caches to minimize the latency of fetching instructions and data from memory.
Matching work partitions to their data partitions will increase the probability of a cache hit within the server and decrease remote accesses to the database, which introduce considerable latency.
Due to the latency difference between main memory and on-chip memory cache, POWER7 was designed with three levels of on-chip cache (see Figure 1).
This increases the latency of the task's memory access until its data is in the cache of the new CPU.
No, you can't without resorting to 'tricks' such as a tunnel, which may be OK for testing but will kill any real benefit of using a super-fast cache because of the added latency and overhead.
Also, to achieve a further massive performance jump, things like doubling the frequency, doubling the secondary cache, and improving ALU latency could take higher priority.