This paper presents the design method and features of the pre-processing block in a bipolar IC layout data extractor.
It also encapsulates the data extractors from the previous version and makes them available as different kinds of reports.
The coding should not be so complicated that the abstractor is easily confused or likely to make poor decisions.
I want to extract that information in a flexible manner, i.e. I want to write only a small number of data extractors for all of the pages (ideally, one).
The following sections show how to define Rational Publishing Engine document templates for the DocExpress data extractors and reports.
As on the client, the wrapped interface requires the application code to extract data from received wrapper objects and to construct wrapper objects to be sent.
Typically, your application will have a series of controllers, all of which pull data from models and then display it in views.
The information is extracted from the data carried by inbound events, and is held in metrics, counters, and stopwatches, which represent the business measures that a monitoring context collects.
Client authentication data can be extracted from the server or from the message header.
Data is loaded through standard extract, transform, and load (ETL) connectors, and is then available for queries from business intelligence and analytics applications.
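The extract-transform-load pattern can be sketched in miniature. The following is a minimal, hypothetical example using Python's built-in sqlite3 module for both the source and the target; the table names, columns, and data are all invented for illustration.

```python
import sqlite3

# Hypothetical source and target databases, both in memory for the sketch.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE sales (region TEXT, amount REAL)")
source.executemany("INSERT INTO sales VALUES (?, ?)",
                   [("east", 100.0), ("east", 50.0), ("west", 75.0)])

target.execute("CREATE TABLE sales_by_region (region TEXT, total REAL)")

# Extract rows from the source, transform them (aggregate by region),
# and load the result into the target.
rows = source.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region").fetchall()
target.executemany("INSERT INTO sales_by_region VALUES (?, ?)", rows)

print(target.execute(
    "SELECT region, total FROM sales_by_region ORDER BY region").fetchall())
```

A production ETL connector would add batching, error handling, and incremental loads, but the extract/transform/load phases have this same shape.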
This data-flow specification is then deployed to the consolidation server, where it controls what data is extracted, how it is transformed, and how it is applied to the target.
For example, an XML parser extracts data from an XML data source.
Considering the data growth of the last decade, OLTP servers are also handling massive amounts of data, which demands an optimal tool for performing extract, transform, and load (ETL) operations.
The alternate server information is cached at the client after it first connects to the database server and pulls the information from the server-side parameter.
In the first phase, the consolidation server (the component that implements the data consolidation pattern) gathers, or "extracts," the data from the sources.
So there are several ways to take an XML response from a server and, with fairly standard code, pull the data out and use it in a client.
You'll create an RSS-to-spreadsheet converter, which retrieves all the news stories from a remote RSS feed, extracts story metadata from the feed, and places it in spreadsheet rows and columns.
The actual purpose of the application is to retrieve information from a database of earthquakes on the server.
The energy required for this exchange is manageable when the task is small: a processor needs to fetch less data from memory.
In the case of SPSS PASW, you pull data out of the database into a server to be analyzed.
Data extraction accelerators can be reused for baseline assessment and for development.
Organizations can use mashup editors to extract data from companies through external or internal feeds and then create new applications or information.
Figure 19 shows the setup of the ODBC connector to extract the contact information from the Oracle database.
Let's consider a WebSphere Commerce customer who wants to extract catalog asset data from one WebSphere Commerce database server and load it into another.
If you were setting up a typical PHP application, you'd create a mysql_query wrapper to extract a listing from the database using an SQL query.
It introduces the data loading, data extraction, and lookup features of the Teradata connector.
You can, for instance, use XMLReader, a stream parser, to get an element, import it into the DOM, and extract data using XPath.
This type of rule authoring is achieved by pulling the desired data from WebSphere Commerce and presenting it in the JRules editors in an intuitive manner.
Servlet filters are small Web components that intercept requests and responses in order to view, extract, or otherwise manipulate the data being exchanged between client and server.
The very neat Dublin Core metadata editor tool (see Resources) can extract Dublin Core metadata from arbitrary Web pages and convert the result to LOM or IMS metadata.