鉴于这些趋势,预计内存延迟将成为压倒性的计算机性能的瓶颈。
Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.
通过延迟这些引用或者减少对一些类的引用,您有可能会节约内存和启动时间。
You can potentially save memory and startup time by deferring these references or by reducing references to some classes.
用于应用程序管理堆外内存区域和避免垃圾收集(GC)导致的延迟的机制就是一个此类的扩展。
One such extension is a mechanism for applications to manage areas of memory outside of the heap and avoid delays that are caused by garbage collection (GC).
如果用户希望访问较大的文档中前面的数个字节或数千字节,则延迟构建功能将改善该应用程序的内存需求情况。
If a user wants to access only the first few bytes or kilobytes of a larger document, the deferred building capability improves the application's memory footprint.
内存结构的滞后初始化可以节省CPU时间并延迟其他插件的激活。
Lazy initialization of memory structures will save CPU time and may defer the activation of other plug-ins.
使用延迟的随机写操作,当内存中的页面数量超过指定的数量,则将所有后续的页面写入到磁盘。
With random write-behind, once the number of pages in memory exceeds a specified amount, all subsequent pages are written to disk.
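The write-behind policy this pair describes can be sketched in a few lines of Python. This is a minimal illustration, not the actual operating-system implementation; the class name, the `max_pages` threshold, and the `flush` callback are all hypothetical.

```python
class WriteBehindBuffer:
    """Sketch of random write-behind: hold dirty pages in memory and
    flush them to disk once a threshold is exceeded."""

    def __init__(self, max_pages, flush):
        self.max_pages = max_pages   # threshold for pages held in memory
        self.flush = flush           # callback that writes pages to disk
        self.pages = []              # dirty pages currently in memory

    def write(self, page):
        self.pages.append(page)
        # Once the in-memory page count exceeds the threshold,
        # write all buffered pages out and empty the buffer.
        if len(self.pages) > self.max_pages:
            self.flush(self.pages)
            self.pages = []
```

Buffering writes this way trades a small risk of data loss on crash for far fewer, larger disk operations.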
相比基于磁盘和网络的访问,基于内存和CPU的访问能提供更低的延迟和更高的吞吐量。
Memory- and CPU-based access can provide much lower latency and greater throughput than disk- and network-based access.
每周定期查看数据,并监控反映cpu利用率、磁盘使用、邮件发送延迟(如果运行邮件的话)、内存利用率和网络利用率的统计数据。
Review your data on a week-to-week basis and monitor stats that reflect CPU utilization, disk usage, mail delivery latency (if you are running mail), memory utilization, and network utilization.
经验表明,内存数据库可以提供出色的性能和很低的延迟。
Experience shows that in-memory databases provide superior performance and very low latency.
在使用延迟的顺序写操作时,在syncd守护进程运行之前,页面并不留在内存中,这可能会导致实际的瓶颈。
With sequential write-behind, pages do not stay in memory until the syncd daemon runs, which can cause real bottlenecks.
吞吐量,延迟与内存使用被进一步优化,从而改进应用程序的性能。
Throughput, latency, and memory usage have been further optimized to improve performance for your applications.
这就增加了任务的内存访问延迟,这些时间用来将其数据移入新cpu的内存中。
This increases the latency of the task's memory access until its data is in the cache of the new CPU.
直到不久之前,人们还认为这种两个系统具有相同或者按一定比例延迟的情况只能在很小的限定范围内存在。
Until recently, this was thought to occur only for a very small subset of parameters in which the delays are identical or have a certain ratio.
由于主内存和芯片级内存缓存之间的延迟差别,POWER 7设计了三种级别的芯片级缓存机制(见图1)。
Due to the latency difference between main memory and on-chip memory cache, POWER7 was designed with three levels of on-chip cache (see Figure 1).
CMP紧密耦合的本质使处理器与内存之间的物理距离很短,因此可提供最小的内存访问延迟和更高的性能。
The tightly coupled nature of the CMP allows very short physical distances between processors and memory and, therefore, minimal memory access latency and higher performance.
例如,每个处理器拥有自己的内存,访问共享内存时具有不同的访问延迟。
For example, each processor has its own memory but also accesses shared memory with a different access latency.
对于较大的文档(大约 100K 或者更大),延迟DOM 的性能要好于非延迟 DOM,但是要使用更多的内存。
For larger documents (approximately 100K and higher), the deferred DOM offers better performance than the non-deferred DOM but uses more memory.
如果在堆的顶部分配的内存块不在缓存中,执行会在内存内容装入缓存的过程中出现延迟。
If you allocate a block of memory on the heap that is not already in the cache, execution will stall while the contents of that memory are brought into the cache.
当延迟时间变长时,通常表示JVM要出现问题了(例如,内存耗尽)。
When delays start to get long, this is a common indicator that the JVM is about to have a problem (for example, running out of memory).
每个处理器可同等地访问共享内存(具有相同的内存空间访问延迟)。
Each processor has equal access to the shared memory (the same access latency to the memory space).
延迟加载:它的目的是优化数据服务器的内存利用率,当程序启动后优先加载那些需要加入到内存的对象,不需要加载的推后加载。
Lazy loading: its purpose is to optimize the memory utilization of data servers; when a program starts, only the objects that need to be in memory are loaded first, and those that are not needed are loaded later.
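The lazy-loading idea described in this pair can be sketched as a small Python wrapper that defers building an expensive object until it is first requested. The class name and the `factory` parameter are illustrative assumptions, not part of any real server's API.

```python
class LazyResource:
    """Defer building an expensive in-memory object until first access."""

    def __init__(self, factory):
        self._factory = factory   # callable that builds the object on demand
        self._value = None
        self._loaded = False

    def get(self):
        # Load on first access only; later calls reuse the cached object.
        if not self._loaded:
            self._value = self._factory()
            self._loaded = True
        return self._value
```

Objects that are never requested are never built, which is exactly how deferred loading saves memory and startup time.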
将延迟式启动、较低的CPU和内存优先级与后台磁盘优先级结合后,大大减少了与用户登录间的相互冲突。
The combination of delayed start, low CPU and memory priority, and background disk priority greatly reduces interference with a user's logon.
控制器的计算标准内存的DDR2SDRAM,并允许的可能性,方案延迟。
The controller is designed for standard DDR2 SDRAM memory and allows the latency to be programmed.
片外存储系统的访存延迟主要由DRAM延迟决定,带宽则是由内存总线的数据传输率所决定。
Off-chip memory latency is mainly determined by DRAM latency, and memory bandwidth is determined by data transfer rate through the memory bus.
为了降低防危技术对实时系统响应时间的影响,本文还在经典的伙伴系统内存管理算法基础上提出了延迟合并伙伴系统。
To reduce the impact of the safety technique on the response time of real-time systems, this paper proposes a delayed-coalescing buddy system based on the classical buddy-system memory-management algorithm.
用户内存被清除时的延迟将因系统和内存大小的物理配置。
The delay users experience while memory is cleared will vary with the physical configuration of the system and the amount of memory.
如果你建立自己的系统,我们推荐用低CAS延迟的内存。
If you're building your own system, we recommend using memory with low CAS latency.