Set per-thread breakpoints.
Any per-thread cleanup should be performed.
The global lock cache feeds the per-thread lock caches.
The process is spawning a new thread. Any per-thread initialization should be performed.
In this way, we can think of a ThreadLocal as allowing us to create a per-thread-singleton.
This initial call ensures that the per-thread initialization can complete without deadlock.
Using ThreadLocal makes sense when you need to store variable instances on a per-thread basis.
Per-thread method activation stacks are represented using the host operating system's stack and thread model.
You can use ThreadLocal variables to store any sort of per-request context information using the per-thread-singleton technique described earlier.
By using a ThreadLocal in our Singleton, as shown in Listing 3, we can allow any class in our program to easily acquire a reference to a per-thread Connection.
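Listing 3 itself is not reproduced here, so the following is only a self-contained sketch of the per-thread-singleton idea it describes. A trivial `Session` class stands in for a real JDBC `Connection` (an assumption made purely to keep the example runnable without a database); class and method names are illustrative.

```java
// A lightweight stand-in for java.sql.Connection, so the sketch
// runs without a database. Records which thread created it.
class Session {
    final long owner = Thread.currentThread().getId();
}

public class SessionHolder {
    // Each thread lazily gets its own Session on first access.
    private static final ThreadLocal<Session> SESSION =
            ThreadLocal.withInitial(Session::new);

    // Any class in the program can call this and receive the
    // Session belonging to the current thread.
    public static Session get() {
        return SESSION.get();
    }
}
```

Repeated calls from the same thread return the same instance, while each new thread transparently receives its own; callers never need to pass the connection around explicitly.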
Also, because the references to the per-thread values are stored in the owning thread object, when the thread gets garbage collected, so can its per-thread values.
Other applications for ThreadLocal in which pooling would not be a useful alternative include storing or accumulating per-thread context information for later retrieval.
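A minimal sketch of that accumulate-then-retrieve pattern, assuming an illustrative `PerThreadLog` class (not from any particular library): each thread appends entries to its own private list, with no shared state to synchronize.

```java
import java.util.ArrayList;
import java.util.List;

public class PerThreadLog {
    // Every thread gets its own ArrayList on first use.
    private static final ThreadLocal<List<String>> LOG =
            ThreadLocal.withInitial(ArrayList::new);

    // Accumulate context for the current thread only.
    public static void record(String entry) {
        LOG.get().add(entry);
    }

    // Later retrieval: returns only this thread's entries.
    public static List<String> retrieve() {
        return LOG.get();
    }
}
```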
To help minimize mutex allocation and locking time, the JVM manages a global lock cache and a per-thread lock cache where each cache contains unallocated pthread_mutexes.
HP-UX 11.31 has per-thread locks, and as a result the latest version of HP-UX delivers significant performance gains: up to 30% more than 11iv2.
This depends on your ability to recognize the methods and classes in question, not to recognize the thread itself, per se.
This breaks the one thread per request model, as the thread for a request never gets freed up.
For example, one job per process consumes many more resources than one job per thread.
Thus it is relatively straightforward to implement the classic one thread per connection model.
Because some processing can take a long time, we need multiple processing threads per production line to make sure that a thread is always available to work on the latest measurement.
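One way to sketch "multiple processing threads per production line" is a small fixed pool per line, so a slow measurement never blocks the next one. The class name, the pool size, and the placeholder processing (doubling the raw value) are all illustrative assumptions.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LineProcessor {
    // A dedicated pool of worker threads for this production line.
    private final ExecutorService workers;

    public LineProcessor(int threadsPerLine) {
        workers = Executors.newFixedThreadPool(threadsPerLine);
    }

    // Each measurement is processed asynchronously on the pool;
    // the "processing" here just doubles the raw value.
    public CompletableFuture<Double> submit(double raw) {
        return CompletableFuture.supplyAsync(() -> raw * 2.0, workers);
    }

    public void shutdown() {
        workers.shutdown();
    }
}
```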
On AIX, this uses at least 256KB per thread.
Although the amount of memory used per thread is quite small, for an application with several hundred threads, the total memory use for thread stacks can be large.
This approach is a little more limited, because it does not give you stateless performance — you are in fact using a separate thread per user.
These handles may be available globally, per process, or per thread.
An alternative is to tie up the thread per request so that it can be returned to the thread pool once the request processing is done.
Normally, .NET allocates threads up to the minimum thread count as soon as they are needed. From then on, no more than 2 threads per second are created until you reach the maximum thread count.
A global thread (per server process) is responsible for creating and managing each user connection thread.
The following command will load 500k records to the "UserGrid" grid using 10 threads with a rate of 200 requests per thread.
This in turn results in one thread per user on Tomcat, while the NIO server handles the same load with a constant number of threads.
The OpenSessionInView interceptor and filter allow the use of the same session per thread across different components.
Instead of allocating one thread per open socket, we place all requests into a generic queue serviced by a set of RequestHandlerThread instances.
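The queue-serviced design can be sketched as follows: requests go into one shared blocking queue, and a fixed set of handler threads (standing in for the text's `RequestHandlerThread` instances, whose actual code is not shown here) drains it. `RequestDispatcher` and the handler count are illustrative.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RequestDispatcher {
    // One generic queue shared by all handler threads.
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public RequestDispatcher(int handlers) {
        for (int i = 0; i < handlers; i++) {
            Thread t = new Thread(() -> {
                try {
                    // Block until a request arrives, then run it.
                    while (true) {
                        queue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.setDaemon(true); // don't keep the JVM alive
            t.start();
        }
    }

    // Called per request instead of spawning a thread per socket.
    public void submit(Runnable request) {
        queue.add(request);
    }
}
```

The thread count stays constant regardless of how many sockets are open, which is the point of the design.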