If this number increases quickly between successive SHOW STATUS commands, you should look at increasing your thread cache.
Generally, it does this by maintaining awareness or state of the mapped entity class instances it is responsible for by keeping a first-level cache of instances, valid for a single thread.
This instance is used for all messages that are sent into the partition, making members of this class thread safe and providing the opportunity to cache required resources.
When a thread is running on a CPU and gets interrupted, it usually gets placed back on the same CPU because the processor's cache might still have lines belonging to the thread.
Since it is thread safe by design, you might cache it in a public static final variable, or wrap it in a singleton pattern after creation for later access.
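A minimal sketch of both caching idioms. The `Parser` class here is hypothetical, standing in for any object documented as thread safe by design:

```java
// Sketch: two common ways to cache an instance that is thread safe by
// design. Parser is a hypothetical stand-in for such an object.
public class ParserHolder {

    /** Stand-in for an immutable, thread-safe object. */
    public static final class Parser {
        public int parse(String s) { return Integer.parseInt(s.trim()); }
    }

    // Option 1: cache in a public static final variable.
    public static final Parser SHARED = new Parser();

    // Option 2: lazy singleton via the initialization-on-demand holder idiom;
    // the JVM guarantees Lazy is initialized safely on first use.
    private static final class Lazy {
        static final Parser INSTANCE = new Parser();
    }
    public static Parser getInstance() { return Lazy.INSTANCE; }

    private ParserHolder() {}
}
```

Either option is safe only because the cached instance itself is thread safe; the holder adds no synchronization of its own.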
Uncontested unlock operations return a locked mutex to the thread lock cache.
When a thread exits a synchronized block as part of releasing the associated monitor, the JMM requires that the local processor cache be flushed to main memory.
You could consider these "thread-local" copies of variables to be similar to a cache, helping the thread avoid checking main memory each time it needs to access the variable's value.
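A small sketch of this "thread-local copy as a cache" effect. Declaring the flag `volatile` forbids the worker from relying on a stale cached copy, so the write becomes visible and the loop terminates; the class and method names are illustrative:

```java
// Sketch: without a synchronization action, a worker thread may keep
// reading its cached copy of `running` and never observe the update.
// The volatile modifier forces every read back to the shared value.
public class VisibilityDemo {
    private static volatile boolean running = true;

    /** Returns true if the worker observed the write and terminated. */
    public static boolean demo() {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-spin until the main thread clears the flag
            }
        });
        worker.start();
        try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        running = false;  // visible to the worker because the field is volatile
        try { worker.join(2000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return !worker.isAlive();
    }
}
```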
The JVM itself has internal locks used to serialize access to critical JVM resources such as the thread list and the global lock cache.
Linux attempts to execute a thread on the same processor for cache performance reasons.
Future accesses to num_proc1 by the parent thread cause the data to be read in from that cache line.
If the variables in your applications are not going to change, then a thread-local cache makes sense.
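A sketch of such a thread-local cache, using `SimpleDateFormat` because it is expensive to create and not thread safe; the class name is illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Sketch: a per-thread cache of a non-thread-safe, costly-to-create
// object. Each thread lazily gets its own SimpleDateFormat, so no
// locking is needed as long as the cached value never changes.
public class DateFormatCache {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date d) {
        return FORMAT.get().format(d);  // always this thread's own instance
    }
}
```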
The cache will store items for a fixed time and will spin up a thread to prune the items as needed, as shown in Figure 2.
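One way such a cache might be sketched, with a single daemon thread doing the pruning; the class name, TTL, and prune interval are all illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: entries live for a fixed TTL; one daemon thread prunes
// expired entries periodically.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis, long pruneEveryMillis) {
        this.ttlMillis = ttlMillis;
        ScheduledExecutorService pruner = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "ttl-cache-pruner");
            t.setDaemon(true);  // never keeps the JVM alive
            return t;
        });
        pruner.scheduleAtFixedRate(
            () -> map.values().removeIf(e -> e.expiresAt <= System.currentTimeMillis()),
            pruneEveryMillis, pruneEveryMillis, TimeUnit.MILLISECONDS);
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        // expiry is also checked on read, so a stale entry is never returned
        // even if the pruner has not run yet
        if (e == null || e.expiresAt <= System.currentTimeMillis()) return null;
        return e.value;
    }
}
```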
The global lock cache feeds the per-thread lock caches.
To help minimize mutex allocation and locking time, the JVM manages a global lock cache and a per-thread lock cache where each cache contains unallocated pthread_mutexes.
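The two-level scheme described above can be sketched as a pair of free lists: each thread draws from its own cache and refills from the global one, so most allocations avoid the contended global structure. The `Mutex` class and cache layout here are illustrative, not the JVM's actual internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: a per-thread lock cache fed by a global lock cache.
public class LockCaches {
    /** Stand-in for an unallocated pthread_mutex. */
    public static final class Mutex {}

    private static final Deque<Mutex> GLOBAL = new ArrayDeque<>();
    private static final ThreadLocal<Deque<Mutex>> LOCAL =
        ThreadLocal.withInitial(ArrayDeque::new);

    public static Mutex allocate() {
        Mutex m = LOCAL.get().poll();
        if (m != null) return m;        // fast path: per-thread cache, no locking
        synchronized (GLOBAL) {         // slow path: refill from the global cache
            m = GLOBAL.poll();
        }
        return m != null ? m : new Mutex();
    }

    /** An uncontested release returns the mutex to the thread's own cache. */
    public static void release(Mutex m) {
        LOCAL.get().push(m);
    }
}
```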
One thing to think about: you say you want to use the cache because it is thread safe.
When receiving media data, it uses an asynchronous receiving mode, multiple buffers, and multiple threads to improve system performance.
You should be aware that the cache object itself is thread safe, but caching a non-thread-safe object in it does not automatically make the cached object thread safe.
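A sketch of that pitfall and one mitigation, using `SimpleDateFormat` as the classic non-thread-safe cached value; the class name is illustrative:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the ConcurrentHashMap is thread safe, but the SimpleDateFormat
// stored inside it is not. Two threads that get() the same instance and
// call format() concurrently can corrupt its internal state; the cache
// itself offers no protection, so callers must synchronize on the cached
// object (or cache immutable / thread-safe values instead).
public class UnsafeContentsDemo {
    private static final ConcurrentHashMap<String, SimpleDateFormat> CACHE =
        new ConcurrentHashMap<>();

    public static String format(String pattern, Date d) {
        SimpleDateFormat f = CACHE.computeIfAbsent(pattern, SimpleDateFormat::new);
        synchronized (f) {  // guard the non-thread-safe cached object
            return f.format(d);
        }
    }
}
```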
A kernel-thread-based asynchronous cache writing method is also proposed to improve the efficiency of cache writes.
Under heavy system loads, specifying which processor should run a specific thread can improve performance by reducing the number of times the processor cache is reloaded.