Use PuTTY to SSH to the Master node.
This node doesn't have to be a master node.
Add the master node's IP address to the masters file.
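The step above can be sketched as a shell command. This is a minimal sketch only: the `HADOOP_CONF` path is an assumption (older Hadoop releases kept a `conf/masters` file), and the IP address is taken from the example sentences below.

```shell
# Sketch: append the Master node's IP address to the "masters" file.
# HADOOP_CONF is an assumed location; adjust it for your installation.
HADOOP_CONF=${HADOOP_CONF:-./conf}
mkdir -p "$HADOOP_CONF"
echo "170.224.193.137" >> "$HADOOP_CONF/masters"
```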
The Hadoop Master node instance must be provisioned first.
Last, announce this node as the master node in the cluster: tyd.
In this example, the Hadoop Master node IP address is 170.224.193.137.
Repeat the first three steps above for initializing the master node.
One of its most important features is that it does not have a master node.
The master node assembles the results, further distributes work, and so on.
In a typical setup, the master node is where the applications are initiated.
Figure 15 shows a summary of what has been configured for the Hadoop Master node.
Meanwhile, maintenance can be performed on the master node to analyze why it went down.
In such a cluster, apart from the master node, you don't need to run a GUI on the slaves.
The worker node processes that smaller task and passes the answer back to the master node.
Then you can define the master replicate using the -m option to specify the master node as g_pkcdr.1.
Heartbeat is configured to switch over to the backup node if the master node happens to go down.
Therefore, in this example the same data center chosen for the Master node (Markham, Canada) is used.
It works in unattended mode, so its operation is fully controlled by the master node (using Linux).
The master node must have connectivity with all the servers in the Enterprise Replication topology.
In this example, the IP address that was assigned to this Hadoop Master node instance is 170.224.193.137.
Hadoop's NameNode and JobTracker reside on the master node and might not need scaling at this stage.
In the map step, the document is taken by the master node and the problem is divided into subproblems.
For all the other settings, keep the defaults or choose the same values as you did for the Hadoop Master node.
During failover, one of the children becomes the master node and all replication starts flowing from that node.
You should now have all the components necessary to handle a failure of the master node, if it should occur.
The --master option (-m) requires you to specify the master node where the master table dictionary is located.
There is a slight chance that each node will need to be rebooted while working on a job submitted by the master node.
This command uses the master dictionary that was created when the master node was specified with the cdr define template command.
This example presents a simple HA cluster that consists of three nodes: a master node, a backup node, and a management node.