处理器通过内存总线连接到物理内存。
The processor is connected to physical memory by the memory bus.
通过使用更多的内存总线,并令每条总线上处理器更少,NUMA减少了系统共享内存总线的冲突。
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
片外存储系统的访存延迟主要由DRAM延迟决定,带宽则是由内存总线的数据传输率所决定。
Off-chip memory latency is mainly determined by DRAM latency, and memory bandwidth is determined by data transfer rate through the memory bus.
分布式仿真系统可采用的联接方式主要有物理共享内存总线、消息传递网络和复制共享内存网络三种。
Physical shared-memory bus, message-passing network, and replicated shared-memory network are the main interconnection technologies used in distributed real-time simulation.
NUMA通过限制任何一条内存总线上的CPU数量并依靠高速互连来连接各个节点,从而缓解了这些瓶颈状况。
NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high speed interconnection.
NUMA解决的是系统中特定的处理器(取决于它们在总线上的位置)访问内存中特定区域所需时间比其他处理器更长的问题。
NUMA addresses the problem that arises when certain processors in a system, depending upon where they are on the bus, take longer to reach certain regions of memory than other processors.
增加的数据总线宽度实现了对32位架构上通常可用的4 GB以上可寻址内存空间的支持。
The increased data bus width enables support for addressable memory space above the 4 GB generally available on 32-bit architectures.
探路者号上有一个“数据总线”,可以理解为一块共享内存,用于不同组件之间传递信息。
Pathfinder contained an "information bus", which you can think of as a shared memory area used for passing information between different components of the spacecraft.
每个总线成员内都有许多消息传递引擎,可以管理队列等运行时资源并能够在文件、内存或数据库内存储消息。
Within each bus member, there are a number of messaging engines that manage runtime resources like queues and are capable of storing messages in a file, memory, or database.
每个pod具有自己的处理器和内存,并通过一条高速缓存一致性互连总线(cache - coherent interconnect bus)连接到较大的系统。
Each pod has its own processors and memory, and is connected to the larger system through a cache-coherent interconnect bus.
因为大多数设备通过总线与CPU相隔,而跨总线发送数据要比写入CPU寄存器或(缓存)内存慢得多。
Most devices are separated from the CPU by a bus, and sending data across that bus is much slower than writing to CPU registers or (cached) memory.
当访问内存位置或寄存器时,在地址总线上的真实的地址。
The actual address that is placed on the address bus when accessing a memory location or register.
这种结构提高了总线效率,并且减小了内存中解码帧缓冲器和通道中FIFO的面积。
This structure improves bus efficiency and reduces the area of the decoded-frame buffer in memory and of the FIFO in the DMA channel.
报告可以为您提供诸如就绪百分比、已用百分比、已用内存交换量和磁盘总线重置次数等度量标准。
Reports give you metrics like percent ready, percent used, memory swap used, and disk bus resets.
据我们观察,只有旗舰型号提高了频率,其余加速器则通过工作频率、访问内存的系统总线及内存容量来加以区分。
We observe that only the flagship models get higher frequencies, while the remaining accelerators are differentiated by operating frequency, the system bus used to access memory, and the amount of memory.
地址总线为数据传输指明内存位置(地址)。
The address bus specifies the memory locations (addresses) for the data transfers.
特定的内存位置和数据值必须被写入由主机的总线管理器选择,不可能仅仅是一个任意值。
The particular memory location and data value that must be written are selected by the host's bus manager; they cannot be just arbitrary values.
借助pci地址映射机制,本课题基于CPCI总线实现了主备间的内存互访,并依此实现了主备间的一致性。
With the PCI address-mapping mechanism, this research implemented mutual memory access between the host and the backup over the CPCI bus, thereby achieving coherence between them.
显示所有IRQ号和内存地址,就像PCI总线上的卡所看到的一样,而不是内核看到的内容。
Show all IRQ numbers and memory addresses as seen by the cards on the PCI bus, rather than as seen by the kernel.