If this number increases quickly between successive SHOW STATUS commands, you should consider increasing the size of your thread cache.
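The decision above boils down to watching how fast the Threads_created counter grows between two samples. Here is a minimal sketch of that check in Java; the counter name is a real MySQL status variable (reported by SHOW GLOBAL STATUS), but the sampling values and the growth threshold used here are assumptions for illustration.

```java
public class ThreadCacheCheck {
    // Returns true if Threads_created grew fast enough between two samples
    // that the thread cache looks too small. The threshold of more than one
    // new thread per second is an assumed rule of thumb, not a MySQL default.
    public static boolean shouldGrowThreadCache(long createdBefore,
                                                long createdAfter,
                                                long intervalSeconds) {
        long perSecond = (createdAfter - createdBefore) / intervalSeconds;
        return perSecond > 1;
    }

    public static void main(String[] args) {
        // 600 threads created over 60 seconds: the cache is too small.
        System.out.println(shouldGrowThreadCache(1000, 1600, 60)); // prints true
        // 10 threads created over 60 seconds: the cache is keeping up.
        System.out.println(shouldGrowThreadCache(1000, 1010, 60)); // prints false
    }
}
```

In practice the two samples would come from running SHOW GLOBAL STATUS LIKE 'Threads_created' twice, a known interval apart.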
The fourth line starts with the number of times we have exceeded locks, user threads, or buffers (all of which are 0 and should generally remain so).
For more information, see Use of a global connection factory and thread-level connection caching below.
Listing 5 shows how to determine whether you have enough threads cached.
Since it is thread safe by design, you can cache it after creation in a public static final variable, or wrap it in a singleton pattern, for later access.
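The pattern described above can be sketched as follows; the holder class and its counter are stand-ins for whatever expensive, thread-safe object (for example, a connection factory) you want created once and shared.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: cache a thread-safe object in a public static final field so
// every caller reuses the single, eagerly created instance.
public class SharedCounterHolder {
    public static final SharedCounterHolder INSTANCE = new SharedCounterHolder();

    private final AtomicLong hits = new AtomicLong();

    private SharedCounterHolder() {}  // prevent construction from outside

    public long recordHit() { return hits.incrementAndGet(); }

    public static void main(String[] args) {
        // Every caller sees the same cached instance.
        System.out.println(SharedCounterHolder.INSTANCE == SharedCounterHolder.INSTANCE); // prints true
        System.out.println(SharedCounterHolder.INSTANCE.recordHit()); // prints 1
    }
}
```

The static final field gives the same safe-publication guarantee a classic singleton accessor would, with less ceremony.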
On a busy server where connections are set up and torn down quickly, caching threads for later use speeds up the initial connection.
The cache stores items for a fixed time and spins up a thread to prune the items as needed, as shown in Figure 2.
If the variables in your application are not going to change, then a thread-local cache makes sense.
Generally, it does this by maintaining awareness of, and state for, the mapped entity class instances it is responsible for, keeping a first-level cache of instances that is valid for a single thread.
The caching of proxy and invoker objects, along with the pooling of threads in the executor, supported efficient use of resources.
This instance is used for all messages that are sent into the partition, making the members of this class thread safe and providing the opportunity to cache required resources.
You can think of these "thread-local" copies of variables as similar to a cache, helping the thread avoid checking main memory each time it needs to access the variable's value.
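In Java, this per-thread-copy idea is made explicit by the ThreadLocal class: each thread that touches the value gets its own copy, so reads never consult state shared with other threads. A minimal sketch (the buffer is just an example payload):

```java
// Sketch: a ThreadLocal acting as a per-thread cache. Each thread sees
// its own independent copy of the cached value.
public class ThreadLocalCacheDemo {
    private static final ThreadLocal<StringBuilder> BUFFER =
        ThreadLocal.withInitial(StringBuilder::new);

    // Returns the calling thread's own cached buffer.
    static StringBuilder perThreadBuffer() { return BUFFER.get(); }

    public static void main(String[] args) throws InterruptedException {
        perThreadBuffer().append("main");
        Thread t = new Thread(() -> {
            // A new thread gets its own, initially empty copy.
            System.out.println(perThreadBuffer().length()); // prints 0
        });
        t.start();
        t.join();
        System.out.println(perThreadBuffer()); // prints "main": unchanged by t
    }
}
```

Repeated calls from the same thread return the same object, which is exactly the cache-like behavior the sentence describes.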
Such an access is serviced by the processor's L1 cache, and the data is read or written all at once; it cannot be affected halfway through by other processors or threads.
Similar to the cache for tables, there is also a cache for threads.
Uncontested unlock operations return a locked mutex to the thread's lock cache.
The code might have implemented its own threading, caching, connection pooling, or even (heaven forbid) security infrastructure.
The threads are maintained in a cached pool of threads per persistence unit.
When a thread running on a CPU gets interrupted, it is usually placed back on the same CPU, because that processor's cache may still hold lines belonging to the thread.
They are traditionally lightweight and are used for executing tasks such as cache cleanup and object cleanup.
The JVM itself has internal locks used to serialize access to critical JVM resources, such as the thread list and the global lock cache.
Once obtained, the connection should be cached and reused each time the same thread executes the evaluate method to perform a unit of work.
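One common way to keep a per-thread connection around between calls is a thread-local holder. The sketch below assumes this approach; the Connection class here is a hypothetical stand-in (not java.sql.Connection), and evaluate is modeled on the method named above, not a real API.

```java
// Sketch: thread-level connection caching. Each thread opens at most one
// connection and reuses it across calls to evaluate().
public class ThreadConnectionCache {
    // Hypothetical stand-in for a real connection type.
    static class Connection {
        static int opened = 0;                 // counts expensive setups
        Connection() { opened++; }
        String query(String s) { return "ok:" + s; }
    }

    // First use on a thread opens a connection; later uses are cache hits.
    private static final ThreadLocal<Connection> CACHED =
        ThreadLocal.withInitial(Connection::new);

    public static String evaluate(String work) {
        return CACHED.get().query(work);
    }

    public static void main(String[] args) {
        evaluate("a");
        evaluate("b");
        System.out.println(Connection.opened); // prints 1: second call reused it
    }
}
```
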
For cache performance reasons, Linux attempts to keep a thread executing on the same processor.
Benchmark tests comparing cached thread pool performance show that the new nonblocking synchronous queue implementation offers nearly three times the speed of the current implementation.
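For context, a cached thread pool in Java is a ThreadPoolExecutor that hands tasks off through a SynchronousQueue, the structure whose nonblocking rewrite the benchmark refers to. This sketch builds one with the same parameters Executors.newCachedThreadPool uses:

```java
import java.util.concurrent.*;

// Sketch: a cached thread pool built directly on a SynchronousQueue.
public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = new ThreadPoolExecutor(
            0, Integer.MAX_VALUE,              // grow on demand, no lower bound
            60L, TimeUnit.SECONDS,             // idle threads cached for 60s
            new SynchronousQueue<Runnable>()); // direct handoff, no buffering
        Future<Integer> f = pool.submit(() -> 6 * 7);
        System.out.println(f.get()); // prints 42
        pool.shutdown();
    }
}
```

Because the queue holds no elements, every submitted task is handed straight to a cached idle thread, or triggers creation of a new one.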
Rather than using JMS-based process navigation, threads are created through the work manager, where server affinity enables the reuse of cached objects.
The global lock cache feeds its contents to the single-thread lock caches.
Such applications typically use techniques that can greatly improve performance, such as partitioning, multithreading, and write-through caching.
Using a global connection factory and thread-level connection caching.
Future accesses to num_proc1 by the parent thread result in the data being read in from that cache line.
For non-threaded db2fmp processes, this parameter represents the number of processes cached.
Although threading is still disallowed in general, these backend instances can maintain in-memory caches and serve multiple requests at once through the max-concurrent-requests parameter setting.