The spirit of the RTSJ is that all queues of threads are FIFO and priority based.
On multiprocessor systems, Linux tries to emulate the behavior of a single global queue of RT threads that are dispatched to available processors.
The task queue, the list of active threads, and the queue of idle threads are all member variables of the thread pool and are maintained by it; tasks to be performed are added to the task queue through a thread pool member function.
There would be a bit of work involved in deciding (and programming) exactly how an arbitrary jump would interact with a linear thread queue, but nothing about that work is particularly mysterious.
In nearly every server application, the question of thread pools and work queues comes up.
Another common threading model is to have a single background thread and task queue for tasks of a certain type.
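A minimal sketch of that model, assuming a Java context (the class and task names here are illustrative, not from the original text): a single-threaded executor owns the task queue, and all tasks of one type are submitted to it and drained in order by the background thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A single background thread with its own task queue for one kind of work.
public class LogWriter {
    // newSingleThreadExecutor() backs one thread with an unbounded FIFO task queue.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Callers hand tasks to the queue; the background thread processes them in order.
    public void append(String line) {
        worker.submit(() -> writeToDisk(line));
    }

    private void writeToDisk(String line) {
        // actual I/O would go here
        System.out.println("wrote: " + line);
    }

    public void shutdown() {
        worker.shutdown();
    }
}
```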
But what happens if multiple threads try to append data to the queue?
Because the pattern demonstrated above is so effective, it is relatively simple to extend it by chaining additional thread pools with queues.
Normally, each thread reads from and writes to only its own queue.
A run queue is a list of runnable threads, sorted by thread priority value.
While this basic pattern is relatively simple, it can be used to solve a wide range of problems by chaining queues and thread pools together.
Multiple threads of control can simultaneously try to push data onto the queue or remove data from it, so you need a mutex object to manage the synchronization.
If events are building up in this queue, but not in the queue in front of the collaboration thread pool, then increase the number of threads configured for the source adapter controller.
If necessary, a waiting thread is signaled to pick up the newly enqueued object.
A thread is dispatched from the front of the highest-priority nonempty run queue.
However, if its queue is empty, then it will steal items from another thread's queue.
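One way to sketch that "steal when my own queue is empty" behavior in Java (the worker and queue names are illustrative; production code would more likely rely on ForkJoinPool, which implements work stealing internally): each worker polls its own deque from the head and, when that comes up empty, takes an item from the tail of a peer's deque.

```java
import java.util.List;
import java.util.concurrent.ConcurrentLinkedDeque;

// Each worker owns a deque; an idle worker steals from the tail of a peer's deque.
public class StealingWorker implements Runnable {
    private final int id;
    private final List<ConcurrentLinkedDeque<Runnable>> queues;

    public StealingWorker(int id, List<ConcurrentLinkedDeque<Runnable>> queues) {
        this.id = id;
        this.queues = queues;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runnable task = queues.get(id).pollFirst();   // own queue first
            if (task == null) {
                task = steal();                           // otherwise try to steal
            }
            if (task != null) {
                task.run();
            } else {
                Thread.onSpinWait();                      // nothing anywhere; back off briefly
            }
        }
    }

    private Runnable steal() {
        for (int i = 0; i < queues.size(); i++) {
            if (i == id) continue;
            Runnable stolen = queues.get(i).pollLast();   // take from a victim's tail
            if (stolen != null) return stolen;
        }
        return null;
    }
}
```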
In its simplest mode, it acts as a thread-safe queue in which consumers block while the queue is empty.
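That simplest mode maps directly onto a blocking queue from java.util.concurrent; a small sketch (the messages and thread setup are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimpleConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();   // blocks while the queue is empty
                    System.out.println("consumed: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        queue.put("hello");   // wakes the blocked consumer
        queue.put("world");
        Thread.sleep(100);    // give the consumer a moment before the demo exits
    }
}
```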
Be sure to check the Resources section for details on POSIX threads and concurrent queue algorithms.
The queue is then read by a thread pool, which acquires the measurement data and completes the trace process.
These queues are used to temporarily hold requests when worker threads aren't available.
If coded incorrectly, it is possible for notifications to be lost, resulting in threads remaining in an idle state even though there is work in the queue to be processed.
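A sketch of the usual discipline that avoids lost notifications, assuming Java's built-in wait/notify mechanism (the class name is illustrative): the worker re-checks the queue in a loop while holding the lock, and the producer notifies only after the item is actually in the queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// wait()/notify() used so that a notification cannot slip past an idle worker:
// both the emptiness check and the wait happen while holding the same lock.
public class WorkBuffer {
    private final Deque<Runnable> work = new ArrayDeque<>();

    public synchronized void add(Runnable task) {
        work.addLast(task);
        notify();                    // signal while still holding the lock
    }

    public synchronized Runnable next() throws InterruptedException {
        while (work.isEmpty()) {     // a while loop, never an if: re-test after waking
            wait();                  // releases the lock until notified
        }
        return work.removeFirst();
    }
}
```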
Instead, consider creating a few writer threads with higher scheduling priorities, and hand the data that must reliably be pushed into the queue over to those threads.
The logical worker threads take jobs from the queue, process them, and send back the results by using some of the functions provided by the class.
By dispatching threads from the head of a queue and placing expired threads at its tail, the scheduler runs threads in a round-robin fashion within each priority.
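A compact sketch of that dispatch rule (the number of priority levels and the use of plain strings as thread placeholders are illustrative): one FIFO deque per priority; the dispatcher takes from the head of the highest-priority non-empty deque, and a thread whose timeslice expires is appended to the tail of its own priority's deque.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Round-robin within each priority: dispatch from the head, expire to the tail.
public class PriorityRunQueues {
    private final Deque<String>[] queues;   // one FIFO run queue per priority level

    @SuppressWarnings("unchecked")
    public PriorityRunQueues(int levels) {
        queues = new Deque[levels];
        for (int i = 0; i < levels; i++) {
            queues[i] = new ArrayDeque<>();
        }
    }

    public void enqueue(String thread, int priority) {
        queues[priority].addLast(thread);
    }

    // Pick the next thread: head of the highest-priority non-empty queue.
    public String dispatch() {
        for (int p = queues.length - 1; p >= 0; p--) {
            if (!queues[p].isEmpty()) {
                return queues[p].pollFirst();
            }
        }
        return null;   // nothing runnable
    }

    // A thread whose quantum expired goes back to the tail of its own queue.
    public void expire(String thread, int priority) {
        queues[priority].addLast(thread);
    }
}
```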
Work queues can have higher latency than tasklets but include a richer API for work deferral.
In a bounded blocking queue, the writer thread also needs to wait if the queue is full.
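A small sketch of that bounded case using ArrayBlockingQueue (the capacity and timings are illustrative): put() blocks the writer once the queue holds its maximum number of items, just as take() blocks the reader when it is empty.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity of 2: the third put() blocks until a reader makes room.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        queue.put(1);
        queue.put(2);
        System.out.println("queue is full: " + (queue.remainingCapacity() == 0));

        // Drain one element on another thread so the blocked writer can proceed.
        new Thread(() -> {
            try {
                Thread.sleep(200);
                System.out.println("took: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        queue.put(3);   // blocks here until the reader above removes an element
        System.out.println("third element enqueued");
    }
}
```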
A main thread will listen to the buffered queue and will service the requests it receives.
A thread pool worker thread pulls the next WWEC from the input queue and runs it.
In a blocking queue, only the reader thread needs to wait when there is no data in the queue.
The thread will in fact attempt another remove call on the queue and will block until the next request becomes available.