The processor is connected to physical memory by the memory bus.
处理器通过内存总线连接到物理内存。
NUMA reduces the contention for a system's shared memory bus by having more memory buses and fewer processors on each bus.
通过使用更多的内存总线,并令每条总线上处理器更少,NUMA减少了系统共享内存总线的冲突。
Off-chip memory latency is mainly determined by DRAM latency, and memory bandwidth is determined by the data transfer rate of the memory bus.
片外存储系统的访存延迟主要由DRAM延迟决定,带宽则是由内存总线的数据传输率所决定。
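As a rough illustration of the bandwidth side (the figures are illustrative, not from the sentence above): a 64-bit bus carries 8 bytes per transfer, so at 1600 million transfers per second its peak bandwidth is about 8 B × 1600 × 10^6 /s ≈ 12.8 GB/s.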
NUMA alleviates these bottlenecks by limiting the number of CPUs on any one memory bus and connecting the various nodes by means of a high-speed interconnect.
NUMA通过限制任何一条内存总线上的CPU数量并依靠高速互连来连接各个节点,从而缓解了这些瓶颈状况。
The main interconnection technologies used in distributed real-time simulation are the physical shared-memory bus, the message-passing network, and the replicated shared-memory network.
分布式仿真系统可采用的联接方式主要有物理共享内存总线、消息传递网络和复制共享内存网络三种。
This allows for a mobile development platform where the compressed root file system fits on a standard Universal Serial Bus (USB) memory stick.
这样就有了一个可移动的开发平台,压缩后的根文件系统完全可以放在一个标准usb记忆棒中。
NUMA addresses the problem that arises when certain processors in a system, depending upon where they are on the bus, take longer to reach certain regions of memory than other processors.
NUMA解决的是系统中特定的处理器(取决于它们在总线上的位置)访问内存中特定区域所需时间比其他处理器更长的问题。
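A minimal sketch of how software can respond to this, assuming Linux with libnuma installed (compile with -lnuma); the 64 MiB buffer size is arbitrary. It places an allocation on the memory node local to the current CPU, so accesses stay on the nearby memory bus rather than crossing the interconnect.

/* Node-local allocation with libnuma: keep the buffer on the memory
 * node of the CPU we are running on.                                  */
#define _GNU_SOURCE
#include <numa.h>      /* numa_available, numa_alloc_onnode, numa_free */
#include <sched.h>     /* sched_getcpu */
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "this system is not NUMA-aware\n");
        return 1;
    }
    int cpu  = sched_getcpu();          /* CPU this thread runs on       */
    int node = numa_node_of_cpu(cpu);   /* memory node local to that CPU */

    size_t size = 64UL * 1024 * 1024;   /* arbitrary 64 MiB buffer       */
    void *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }
    printf("allocated %zu bytes on node %d (cpu %d)\n", size, node, cpu);
    numa_free(buf, size);
    return 0;
}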
Within each bus member, there are a number of messaging engines that manage runtime resources like queues and are capable of storing messages in a file, memory, or database.
每个总线成员内都有许多消息传递引擎,可以管理队列等运行时资源并能够在文件、内存或数据库内存储消息。
The address bus is used by the processor to select a specific memory location or register within a particular peripheral.
地址总线被处理器用来选择在特定外设中的存储器地址或寄存器。
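A minimal sketch of what that selection looks like from C on a bare-metal target; UART_BASE and the register offsets are hypothetical, not taken from any specific chip. Each access through the volatile pointer places the register's address on the address bus.

#include <stdint.h>

/* Hypothetical peripheral: base address and register offsets are made up. */
#define UART_BASE    0x40001000u
#define UART_STATUS  (*(volatile uint32_t *)(UART_BASE + 0x0))  /* status reg  */
#define UART_TXDATA  (*(volatile uint32_t *)(UART_BASE + 0x4))  /* tx data reg */
#define TX_READY     (1u << 0)

static void uart_putc(char c)
{
    while ((UART_STATUS & TX_READY) == 0)
        ;                           /* wait until the transmitter is ready  */
    UART_TXDATA = (uint32_t)c;      /* this store puts UART_BASE+0x4 on the
                                       address bus and selects the register */
}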
The increased data bus width enables support for addressable memory space above the 4 GB generally available on 32-bit architectures.
数据总线宽度的增加,使系统能够支持超出32位架构通常可用的4 GB的可寻址内存空间。
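For context, a 32-bit address can distinguish 2^32 bytes = 4 GiB, so reaching memory above that point requires wider addresses (for example, 36-bit PAE or full 64-bit addressing).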
Your write system call will be interrupted by the bus error signal SIGBUS, because you performed a bad memory access.
此时write系统调用会被进程接收到的SIGBUS信号中断,因为当前进程访问了非法内存地址。
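A minimal sketch of one common way to provoke SIGBUS, assuming Linux/POSIX (the file name data.tmp is made up): mapping a file and then truncating it leaves the mapped page without backing storage, so the next access raises a bus error rather than a segmentation fault.

#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static void on_sigbus(int sig)
{
    (void)sig;
    /* Only async-signal-safe calls here: report and exit. */
    const char msg[] = "caught SIGBUS: bad memory access\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigbus;
    sigaction(SIGBUS, &sa, NULL);

    int fd = open("data.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0 || ftruncate(fd, 4096) < 0) { perror("setup"); return 1; }

    char *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    ftruncate(fd, 0);   /* shrink the file: the mapped page loses its backing */
    map[100] = 'x';     /* access past end of file -> kernel delivers SIGBUS  */

    puts("not reached");
    return 0;
}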
Pathfinder contained an "information bus", which you can think of as a shared memory area used for passing information between different components of the spacecraft.
探路者号上有一个“数据总线”,可以理解为一块共享内存,用于不同组件之间传递信息。
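A minimal sketch of the idea using POSIX shared memory (link with -lrt on older glibc); the name /info_bus and the record layout are made up for illustration and are not Pathfinder's actual design.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical record passed between components over the "bus". */
struct info_record {
    int    sensor_id;
    double value;
};

int main(void)
{
    int fd = shm_open("/info_bus", O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, sizeof(struct info_record)) < 0) {
        perror("shm setup");
        return 1;
    }
    struct info_record *bus = mmap(NULL, sizeof *bus,
                                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (bus == MAP_FAILED) { perror("mmap"); return 1; }

    /* One component publishes; another process mapping the same name
     * would see the update (a real system adds locking around this).  */
    bus->sensor_id = 7;
    bus->value     = 42.0;
    printf("published sensor %d = %.1f\n", bus->sensor_id, bus->value);

    munmap(bus, sizeof *bus);
    close(fd);
    shm_unlink("/info_bus");
    return 0;
}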
Traditionally, a workstation's throughput depends on its bus and memory architecture, as well as its CPU speed.
传统上讲,工作站的吞吐量与其总线和存储器体系结构以及CPU的速度有关。
Each pod has its own processors and memory, and is connected to the larger system through a cache-coherent interconnect bus.
每个pod具有自己的处理器和内存,并通过一条高速缓存一致性互连总线(cache - coherent interconnect bus)连接到较大的系统。
A memory address consists of binary data output on an appropriate bus, which we call the address bus.
一个存储器地址是由输出到适宜的总线上的二进制数据所组成。这个总线我们称为地址总线。
The actual address that is placed on the address bus when accessing a memory location or register.
当访问内存位置或寄存器时,在地址总线上的真实的地址。
It boots from the microSD card with no flash memory and hosts new interfaces, including a DB-9 serial connector, an integrated 4-port Universal Serial Bus (USB) hub, and an integrated Ethernet port.
它没有闪存,而是从microSD卡引导,并提供许多新接口,包括一个DB-9串行连接器、一个集成的4端口USB集线器,以及集成的Ethernet端口。
Because most devices are separated from the CPU by a bus, and sending data across that bus is much slower than writing to CPU registers or (cached) memory.
因为大多数设备与CPU之间隔着一条总线,通过总线发送数据比写入CPU寄存器或(缓存)内存要慢得多。
As an off-chip interface of the processor, the system bus directly affects the efficiency of the memory system.
作为处理器的片外接口,系统总线部件直接影响着存储系统的效能。
Other devices are not memory mapped on the processor bus.
其他的设备没有被映射到处理机总线上。
By using bus cycle stealing and distributed shared memory, the system exploits the strengths of both microcomputers and achieves high-speed communication in a tightly coupled fashion.
该系统能发挥两种微机的优势,利用总线周期窃用和分散型共享存储器技术,实现紧耦合方式的高速通信。
Thus, it is important to study the protocols and implementation of the system bus in order to hide memory latency and increase memory access speed.
因此研究系统总线协议及其实现技术对于隐藏访存延迟和提高访存速度具有重要意义。
The new method improves bus efficiency and reduces the size of the decoded frame buffer in memory and of the FIFO in the DMA channel.
这种结构提高了总线效率,并且减小了内存中解码帧缓冲器和通道中FIFO的面积。
The memory structure, the composition of the data communication channel, and the system bus are analyzed, and algorithm partitioning, mapping onto the multiprocessor, and scheduling are discussed.
对系统的存储器结构、数据通信通道组成和系统总线结构进行了分析; 讨论了算法划分、算法的多处理器映射及调度;
Port 0 is also the multiplexed low-order address and data bus during access to external program and data memory.
在访问外部程序和数据存储器时,端口0还用作复用的低位地址/数据总线。
This is necessary to avoid loading redundant data and therefore to use the video memory bandwidth and that of the AGP bus as efficiently as possible.
另外,为了尽可能高效的利用显存的带宽和AGP总线带宽,应该极力避免载入冗余数据。