The mergeXML() method allows us to merge the data obtained in the current extraction with an archive file of past extraction data.
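The actual mergeXML() API belongs to the product being described and is not shown here. As a rough sketch of the underlying idea, assuming simple `<records>`/`<record>` documents (both element names are assumptions for illustration), the merge could look like this in Python:

```python
import xml.etree.ElementTree as ET

def merge_extractions(archive_xml: str, current_xml: str) -> str:
    """Append records from a current extraction to an archive document.

    Illustrative only; the 'records'/'record' element names are assumptions,
    not the real mergeXML() schema.
    """
    archive = ET.fromstring(archive_xml)
    current = ET.fromstring(current_xml)
    for record in current.findall("record"):
        archive.append(record)
    return ET.tostring(archive, encoding="unicode")

merged = merge_extractions(
    "<records><record id='1'/></records>",
    "<records><record id='2'/></records>",
)
```

The archive then carries both the historical and the newly extracted records in a single document.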
Using this approach, you can simplify some of the data extraction requirements for your views and supporting applications.
It has no effect on this data extraction operation.
The sample files support three techniques for extracting data by asset group.
The next step is to create an access definition, which is the template or roadmap for data extraction.
First, we cover creating an infrastructure, or object model, that starts with a request, an access definition, and column maps for data extraction.
Both applications, however, use the same set of data extraction utilities.
But recovery operations often require extraction of large amounts of data when DBAs don't have much time.
After the data extraction completes successfully, review the output files for the status of the data movement, warnings, errors, and other potential issues.
After completing the extraction of DDL and DATA, you will notice several new files created in the working directory.
After extraction of the DDL and DATA, you have three different ways of deploying the extracted objects in DB2.
Text extraction: Separation of translatable text from layout data.
Then there are the data extraction tools which allow you to extract data from tables, lists and other data structures on the webpage.
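A minimal sketch of such table extraction, using only Python's standard-library HTML parser (the HTML snippet and class name are illustrative, not any particular tool's API):

```python
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collects the text of each <td> cell, grouped row by row."""

    def __init__(self):
        super().__init__()
        self.rows = []        # completed rows
        self._row = []        # cells of the row being parsed
        self._in_cell = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag == "td":
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

parser = TableExtractor()
parser.feed("<table><tr><td>alpha</td><td>1</td></tr>"
            "<tr><td>beta</td><td>2</td></tr></table>")
# parser.rows now holds the extracted table cells, one list per row.
```

Real-world pages are messier (nested tables, `<th>` headers, attributes), so production tools add heuristics on top of this basic walk.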
Now that the MTK is aware of the source database and the tables which need to have data extracted, you can generate data extraction scripts and data load scripts for DB2.
Data extraction accelerators can be re-used for baseline assessment and for development.
After making sure all the prerequisites described above are in place, follow these steps to implement the data extraction.
In Perl 6, the regular expression would be much simpler and the processing would be easier because this version distinguishes between grouping for matching and grouping for data extraction.
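The same distinction exists in Python's re module, shown here only as an analogy to the Perl 6 feature the sentence describes: `(?:...)` groups for matching without capturing, while `(...)` both groups and captures for extraction.

```python
import re

# (?:GET|POST) groups the alternation purely for matching;
# (\w+) is a capturing group used for data extraction.
m = re.match(r"(?:GET|POST) /(\w+)", "GET /users")
# Only one capture exists, and it holds "users"; the method
# alternation matched but produced no group of its own.
path = m.group(1)
```

Separating the two roles keeps capture indices stable even as the matching-only structure of the pattern grows.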
This job can run only in sequential mode on the DataStage conductor node, and it is suited for small-volume data extraction.
We will demonstrate how data extraction from these pages was handled by an earlier framework (Struts 1.x) and how migrating to the OGNL framework of Struts 2 helps minimize development effort.
We will define hierarchically structured web pages and show how to perform data extraction with minimal code using the Struts 2 OGNL framework.
As shown in Figure 18, specify the following parameters for the data extraction operation.
It introduces the data loading, data extraction, and lookup features of the Teradata connector.
Finally, InfoSphere Optim High Performance Unload provides a rapid data extraction tool that can significantly reduce restoration and migration times for large volumes of data.
In the early phase of the pilot project, the project team investigated the IDoc, BAPI, and LDB interfaces and concluded that none of these matched the specific data extraction requirements.
Lotus Enterprise Integrator is a declarative, server-based tool for data movement, synchronization, and extraction.
The topic of this article is the selection and extraction of data from an Apache Derby database.
Data acquisition: ETL (Extraction, Transformation, and Loading) processes to acquire, clean, transfer, and integrate data.
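A toy end-to-end sketch of the three ETL stages, using an in-memory CSV string as the source and SQLite as the target; both are illustrative stand-ins, not the systems a real acquisition pipeline would use:

```python
import csv
import io
import sqlite3

# Extract: read raw rows from a CSV source (here, an in-memory string).
raw = io.StringIO("name,score\n alice ,10\nbob,\n")
rows = list(csv.DictReader(raw))

# Transform: clean whitespace and drop rows missing a score.
clean = [
    {"name": r["name"].strip(), "score": int(r["score"])}
    for r in rows
    if r["score"].strip()
]

# Load: insert the cleaned rows into a target table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (name TEXT, score INTEGER)")
db.executemany("INSERT INTO scores VALUES (:name, :score)", clean)
loaded = db.execute("SELECT name, score FROM scores").fetchall()
```

Production ETL adds scheduling, error handling, and incremental loads, but the extract-transform-load shape stays the same.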
We introduce the method of data extraction by means of an example.
If we were only performing the data extraction once, we would now be done.
While the example in the article focused on merely extracting weather information about Seattle, Washington, nearly all of the code presented here is reusable for any data extraction.