Create a new parallel job.
Create a parallel job and outline its contents.
Add two DB2 Connector stages to the parallel job.
Drag the DB2 Connector stage to the parallel job.
Locate the XML Input stage and drag it to the parallel job.
Step 2: Create a DataStage parallel job and outline its contents.
Add a DB2 Connector stage to the top right portion of the parallel job.
Place these two connectors at opposite sides of the parallel job canvas.
Locate the Transformer stage and drag this icon to the parallel job pane.
You now have a parallel job skeleton for the first part of your DataStage job.
Add a Sequential File stage to the top left portion of the job design area for the new parallel job.
A configuration file defines the nodes where processing and disk space are allocated for use in a parallel job.
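As an illustrative sketch of such a configuration file (the node names, host name, and paths below are assumptions for illustration, not taken from the source), a minimal two-node DataStage parallel configuration might look like:

```
{
  node "node1"
  {
    fastname "etl-host"                       /* host that runs this node   */
    pools ""
    resource disk "/data/ds/node1" {pools ""} /* persistent data sets       */
    resource scratchdisk "/tmp/ds" {pools ""} /* temporary/sort space       */
  }
  node "node2"
  {
    fastname "etl-host"
    pools ""
    resource disk "/data/ds/node2" {pools ""}
    resource scratchdisk "/tmp/ds" {pools ""}
  }
}
```

Each `node` entry declares one logical processing node along with the disk and scratch space it may use; a job run against this file would execute with two-way parallelism.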
Experiments show that the load-balancing method using these algorithms can effectively improve the execution performance of parallel jobs.
Verify that your parallel job design is similar to Figure 5, which shows the various stages linked together, as described in Step 6.
Combining the Parallel Job Manager (PJM), N-tier caching structures, and affinity routing provides a powerful, high-performance compute grid.
Compute Grid includes a feature called the Parallel Job Manager (PJM), which you can use to define rules for decomposing large jobs into many small jobs.
Each job within the four use-case scenarios is a parallel job designed to take full advantage of WebSphere DataStage's parallel processing capabilities.
Multiple instances of that application can be dispatched using the Parallel Job Manager (PJM), where each partition processes a different section of the data.
At the same time, through an analysis of the parallel relations among multiprocessor tasks, a lower bound on the optimal schedule is derived.
The Parallel Job Manager tier, also part of Compute Grid, decomposes large jobs into smaller partitions and provides operational control over the partitioned jobs executing across the cluster.
If you have several job databases, you need to do the same analysis for each CSLD job database and add up the parallel thread counts.
Instead, the administrator should have finer operational control and stop the logical job, whereupon the infrastructure would stop the many jobs running in parallel.
This parallel clustering system is called pseudo-remote threads because the threads are scheduled on the job dispatcher, but the code within the threads is executed on a remote machine.
The principles of grid computing, including the parallel execution of a job across a grid of endpoints, emerge with the next generation of enterprise applications.
Additionally, such rules can reduce the ability of the job to run completely in parallel.
By default, the job executes in parallel on all logical nodes declared in the InfoSphere DataStage configuration.
In summary, parallel processing, in the context of DataStage, is an internal characteristic of a job that is measured by the number of data or processing partitions that are utilized.
Start the DataStage and QualityStage Designer, open a project, and create a new parallel DataStage job.
MapReduce applications must have the characteristic of "Map" and "Reduce," meaning that the task or job can be divided into smaller pieces to be processed in parallel.
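The "Map" and "Reduce" idea above can be sketched in a few lines of plain Python. This is not DataStage, Hadoop, or any real MapReduce framework: `map_chunk` and `run_job` are hypothetical names, and a thread pool stands in for the distributed workers a real framework would dispatch.

```python
# Minimal sketch of the map/reduce pattern: divide a word-count job into
# smaller pieces, process them in parallel, then merge the partial results.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def map_chunk(lines):
    # "Map": count the words in one piece of the input.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def run_job(lines, workers=2):
    # Divide the job into `workers` smaller pieces.
    chunks = [lines[i::workers] for i in range(workers)]
    # Process the pieces in parallel (a real framework would use
    # separate processes or machines rather than threads).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(map_chunk, chunks))
    # "Reduce": merge the partial counts into a single result.
    total = Counter()
    for partial in partials:
        total += partial
    return total
```

Because each piece is counted independently, the merge step is the only point where the partial results need to meet, which is what makes the job divisible in the first place.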
So, it would be working in parallel, but its job would be to represent the consumer, and that sounds like a good idea to me.