该参数控制Web容器允许排队的最大请求数。
This parameter controls the maximum number of requests a Web container can queue.
.NET性能计数器中关于排队请求和平均等待时间的统计信息?
.NET performance counters for stats regarding queued requests and average wait times?
确定排队是否是请求失败的原因。
Determine whether queuing is the cause of the request failures.
在下一步中,路由器会将排队的请求与可用的服务策略相关,并可能会更改请求流,以满足性能目标。
In the next step, the router correlates the queued requests to the available service policies, and possibly changes the request flow to meet the performance goals.
然后,作业假脱机程序每次处理一个排队的请求。
These queued requests are then processed one at a time by the job spooler.
使用衍生进程和消息传递,您就可以对来自服务器的请求进行排队,以便存储、更新和检索信息。
Using a spawned process and messaging you could queue the requests from the server to store, update and retrieve information.
注意:如果创建新的代理超出了这一限制,那么连接会排队等待下一个可用的代理来服务请求。
Note: If creating a new agent exceeds this limit, then the connection is queued up and waits for the next available agent to service the request.
多个来源可以生成系统部署请求,作业假脱机程序接收它们(排队)。
System deployment requests can be generated by multiple sources and accepted (queued) by the job spooler.
请求者可以选择为请求排队但是不等待处理的完成。当执行其他工作时,要定时检查响应队列中的响应。
The requester can choose to enqueue a request without waiting for processing to complete, and then periodically check the response queue for a response while performing other work.
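This enqueue-and-poll pattern can be sketched in Python. The queue names, the doubling "processing" step, and the polling timeout are all illustrative assumptions, not from any particular product:

```python
import queue
import threading

request_q = queue.Queue()
response_q = queue.Queue()

def worker():
    # Processes queued requests one at a time and posts results.
    while True:
        req = request_q.get()
        if req is None:          # sentinel: shut down
            break
        response_q.put(req * 2)  # placeholder for real processing

threading.Thread(target=worker, daemon=True).start()

request_q.put(21)                # enqueue the request; do not wait

answer = None
while answer is None:            # periodically check for a response
    try:
        answer = response_q.get(timeout=0.1)
    except queue.Empty:
        pass                     # other work would happen here

request_q.put(None)              # stop the worker
```

The requester thread never blocks on the worker directly; it only polls the response queue between units of other work.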
服务请求者将请求排队放入WSDL文档中指定的端口,但是不需要等待。相反,在响应到达之前,它做其他的工作。
The service requestor enqueues a request to the port specified in the WSDL document but doesn’t wait; instead, it does other work until the response arrives.
实现AIO的驱动程序必须包含正确的锁操作(以及可能的排队),让这些请求不会相互干扰。
A driver which implements AIO will have to include proper locking (and, probably queueing) to keep these requests from interfering with each other.
第51个请求必须排队,直到50个请求中完成一个为止。
The 51st request will be queued until one of the 50 requests is finished.
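A counting semaphore is one common way to get exactly this behavior. In the Python sketch below, the limit of 50 mirrors the example sentence; everything else (names, the trivial request body) is an assumption. The 51st caller blocks in `acquire` until one of the 50 in-flight requests releases its slot:

```python
import threading

MAX_CONCURRENT = 50  # limit from the example; requests beyond it queue

slots = threading.BoundedSemaphore(MAX_CONCURRENT)
results = []

def handle_request(req_id):
    with slots:                  # blocks here if 50 are already running
        results.append(req_id)   # stand-in for real request handling

threads = [threading.Thread(target=handle_request, args=(i,))
           for i in range(51)]   # one more request than the limit
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All 51 requests eventually complete; the semaphore only serializes the excess, it does not reject it.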
一般来说,如果两个或多个用户以写模式并发访问文件,第一个用户保证能以写模式访问文件,其他用户的写模式访问请求将排队。
In general, if two or more users access the file in write mode concurrently, the first user is granted write-mode access to the file and the remaining users' write-mode access requests are queued.
但是,这种措施只能用于ARFM正在让此服务策略的请求流排队的情况,而前面的示例不是这种情况。
However, this action can only be taken if ARFM was queuing traffic for the service policy, which was not the case in the previous example.
最后,正如前面提到的,ARFM的主要处理能力管理机制是,对那些可以推迟同时仍然满足其服务目标的请求进行排队。
Finally, as discussed earlier, ARFM's primary capacity management mechanism is to queue incoming requests that can be delayed and still meet their service goal.
完全基于这些研究请求的数量,将会有大量学者排队等候,更为密切地考查这一重要的文化与技术档案库。
Based on the sheer number of research requests, there are going to be plenty of scholars lined up to have a closer examination of this important cultural and technological archive.
当命令被写入时,DMA请求会被排队放入MFC,条件是 MFC具有可用的槽 —— 本例中自然如此,原因是本例没有进行任何并发的 DMA请求。
When the command is written, the DMA request is enqueued into the MFC provided it has available slots -- yours certainly does as you are not doing any other concurrent DMA requests.
这可能导致其他请求排队,直到那个长时间请求返回,让出并发位置。
This may result in the queuing of additional requests until the long-lived request returns, freeing up the concurrency slot.
在默认情况下,这个值设置为90%,这意味着在应用服务器CPU利用率即将超过90%之前ARFM不会让任何请求流排队。
By default, this is set to 90% meaning that ARFM will not queue any traffic until the application server CPU utilization would otherwise exceed 90%.
请求将排队进行部署。
The request will be queued for deployment.
如果检测到后端服务器可能不能处理其他请求,ARFM可能会对请求排队。
ARFM may queue requests if it detects that the backend server may not be able to handle additional requests.
如果还没有为系统配置适当的服务策略定义,应该考虑禁用ARFM的请求排队功能。
Until a system is properly configured with appropriate service policy definitions, consider disabling the traffic queuing function of ARFM.
排队的请求的响应时间会增加,因为在得到服务之前它们会在队列中花费一段时间。
The queued traffic sees an increased response time since it spends some amount of time in a queue prior to being serviced.
实际上,这会导致ARFM让更多请求排队,因为它实际可用的后端处理能力少了。
In effect, this will cause ARFM to queue more traffic because there is effectively less backend capacity available to it.
最后要做的一件事,就是激活poa,使客户机请求开始排队,并强制服务器输入其事件循环,以接收这些传入的请求。
The last thing we need to do is activate our POA to start queuing client requests and force the server to enter its event loop to receive those incoming requests.
调优连接池的目标是确保各线程都有一个数据库连接,并且请求不需要排队以等待访问数据库。
The goal of tuning the connection pool is to ensure that each thread that needs a connection to the database has one, and that requests are not queued up waiting to access the database.
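A minimal pool sketch in Python, assuming a hypothetical `ConnectionPool` class with a dummy connection factory. The point is that `acquire()` blocks, i.e. the request queues, only when the pool is exhausted, so sizing the pool to the thread count avoids queuing:

```python
import queue

class ConnectionPool:
    """Toy pool: pre-creates `size` connections; acquire blocks when empty."""

    def __init__(self, size, factory):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=None):
        # Blocks (queues the caller) when all connections are checked out.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Dummy factory stands in for a real database connection.
pool = ConnectionPool(size=4, factory=lambda: object())

conn = pool.acquire()   # returns immediately: pool is not exhausted
pool.release(conn)      # connection goes back for the next thread
```

With a real driver, the factory would open a database connection, and the pool size would be tuned to the number of worker threads so that `acquire` never has to wait.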
请求排队通过每个步骤,完成一个步骤之后,再排队进入下一个步骤。
Requests queue up to pass through each of these steps, and once a step has completed, each request queues for the next step.
如果默认处理模式可用,aspnet_isapi会将请求排队并将其分发到工作进程。
If the default process model is enabled, aspnet_isapi queues the request and assigns it to the worker process.
POA管理器是一种封装了POA处理状态的对象,所以,我们使用POA管理器,将发给servant的请求排队。
The POA manager is an object that encapsulates the processing state of the POA. So, we will use the POA manager to start the queuing of requests to our servants.