An abstract class that represents the location where index files are stored.
表示索引文件存储位置的抽象类。
Optimization results in the merging of many of the Lucene index files into a single file.
优化会导致多个Lucene索引文件合并成一个单一文件。
This directory contains the search index files produced by the Lucene search engine as shown in Figure 10.
此目录包含Lucene搜索引擎产生的搜索索引文件,如图10中所示。
Run the Index Data service under a user who has exclusive access to both the service process and the index files.
在对服务进程和索引文件都拥有独占访问权的用户下运行Index Data服务。
Each IBM Cognos environment can have multiple index data services, multiple index update services, and multiple index files.
每一个IBM Cognos环境都可以有多个索引数据服务、多个索引更新服务和多个索引文件。
I've introduced you to the main index files in Lucene, hopefully allowing you to understand the physical storage structure of Lucene.
到目前为止我们介绍了Lucene中主要的索引文件,希望能帮助你理解Lucene的物理存储结构。
The main advantage of sitemap index files is, of course, a means to partition sitemap files of very large Web sites into smaller chunks.
当然,站点地图索引文件的主要优势就是提供了一种方法,能把非常大的Web站点的站点地图文件分割成较小的块。
In the paper, storage models based on relational databases, object-oriented databases, index files, and compressed files are discussed in detail.
文中对基于关系数据库、面向对象数据库、索引文件和压缩文件的存储模型进行了详细论述。
Lucene stores the input data in a data structure called an inverted index, which is stored on the file system or memory as a set of index files.
Lucene将输入数据存储在名为倒排索引(inverted index)的数据结构中,该数据结构以索引文件集的形式存储在文件系统或内存中。
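The inverted-index idea above can be sketched in a few lines: each term maps to the set of documents that contain it. This is a toy illustration under the simplest possible tokenization, not Lucene's actual on-disk format.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it.

    `docs` is a dict of {doc_id: text}. This mirrors, in miniature,
    the structure that Lucene persists as its index files.
    """
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():  # naive whitespace tokenizer
            index[term].add(doc_id)
    return index

docs = {1: "Lucene stores index files", 2: "index files on disk"}
index = build_inverted_index(docs)
print(sorted(index["index"]))   # [1, 2]
print(sorted(index["lucene"]))  # [1]
```

A real Lucene index adds term frequencies, positions, and compressed posting lists on top of this basic term-to-documents mapping.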
In Lucene 2.3 there are substantial optimizations for Documents that use stored fields and term vectors, to speed up merging of these very large index files.
在Lucene 2.3中,对使用存储字段和Term向量的文档进行了大量优化,以加快这些非常大的索引文件的合并。
We started with a blind placement strategy of putting the database index files on SSD storage, but that generated only a modest improvement (less than 2x).
我们首先采取了盲目放置的策略,将数据库索引文件放到SSD存储上,但这只带来了有限的性能提升(不到2倍)。
This section describes how to serve static content, how to use different ways of setting up the paths to look for files, and how to set up index files.
这一节讨论如何提供静态内容,如何使用不同的方式设置查找文件的路径,以及如何设置索引文件。
In the back-end process, a spider or robot fetches the Web pages from the Internet, and then the indexing subsystem parses the Web pages and stores them into the index files.
在后端流程中,网络爬虫或机器人从因特网上获取Web页面,然后索引子系统解析这些Web页面并将其存入索引文件中。
When you need to index a large number of documents, you'll notice that the bottleneck of the indexing is the process of writing the documents into the index files on the disk.
当你需要索引大量的文件时,你会注意到索引过程的瓶颈是在往磁盘上写索引文件的过程中。
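One common remedy for that disk-write bottleneck is to buffer documents in memory and flush them in batches, which is essentially what Lucene's in-memory indexing buffer does internally. The `BatchedIndexWriter` class and its one-JSON-line-per-document file format below are invented for this sketch; Lucene's real writer and file formats differ.

```python
import json
import os
import tempfile

class BatchedIndexWriter:
    """Buffer documents in RAM and write them to disk in batches."""

    def __init__(self, path, batch_size=1000):
        self.path = path
        self.batch_size = batch_size
        self.buffer = []
        self.flushes = 0  # how many times we touched the disk

    def add_document(self, doc):
        self.buffer.append(doc)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        with open(self.path, "a", encoding="utf-8") as f:
            for doc in self.buffer:
                f.write(json.dumps(doc) + "\n")
        self.buffer.clear()
        self.flushes += 1

path = os.path.join(tempfile.mkdtemp(), "index.jsonl")
writer = BatchedIndexWriter(path, batch_size=100)
for i in range(250):
    writer.add_document({"id": i})
writer.flush()  # flush the final partial batch
print(writer.flushes)  # 3: two full batches plus the final partial one
```

Raising the batch size trades memory for fewer, larger disk writes, which is the same trade-off as enlarging Lucene's RAM buffer.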
This starts the index process, loading the files.
这会启动索引过程并加载文件。
Retrieves index updates from another repository and merges them into the files and directories in the current branch.
从另一个存储库检索索引更新,并将其合并到当前分支中的文件和目录。
This instructs DB2 to pass the index keys to the sort program in memory, rather than having the keys written to and read once again from sort work files on DASD.
这将指示DB2将索引键传递给内存中的排序程序,而不是先将这些键写到DASD上的排序工作文件中,然后再从中读取。
It includes new heuristics to help the indexer find header files in projects, and has added index support for implicit references and overloaded operators.
它包括新的启发式方法,以帮助索引器找到项目中的头文件,并增加了对隐式引用和重载操作符的索引支持。
This is helpful for index pages — those pages that consist mostly of links to files, such as download sections — or other pages such as a table of contents or a glossary.
这对索引页面(即主要由文件链接组成的页面,譬如下载区)或者其他页面(譬如目录或词汇表)很有帮助。
Usually on a site, there are pages that are not useful for a search engine to index, such as files in the scripts directory, administrative pages, error pages, and so on.
站点上通常都含有那些搜索引擎没必要建立索引的页面,例如脚本目录中的文件、管理页面和错误页面等等。
Creating an index page, of all articles in a directory or in total, requires scanning the file system for available files.
创建一个索引页(列出某个目录中或全部目录中的所有文章)需要扫描文件系统以查找可用的文件。
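A minimal sketch of such a scan, assuming for illustration that articles are Markdown files under a single root directory:

```python
import os
import tempfile

def build_index_page(root, ext=".md"):
    """Scan `root` recursively and return an HTML list linking each article."""
    links = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(ext):
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                links.append(f'<li><a href="{rel}">{name}</a></li>')
    return "<ul>\n" + "\n".join(links) + "\n</ul>"

# Demo: two articles plus one non-article file
root = tempfile.mkdtemp()
for name in ("a.md", "b.md", "notes.txt"):
    open(os.path.join(root, name), "w").close()
page = build_index_page(root)
print(page.count("<li>"))  # 2 (notes.txt is skipped)
```

Because the whole tree is walked on every call, sites with many articles typically cache the result rather than rescanning per request.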
The views are simple files: the index and read views simply output the data assigned to them, applying minimal formatting, while the write view displays a form to be used when Posting to Blahg.
视图都是十分简单的文件:索引视图和读取视图只是应用最少的格式输出分配给它们的数据,而写入视图将显示发布到Blahg中时要使用的表单。
OmniFind provides several approaches for identifying which SCS spool files to index that are based on the selection criteria used by other system APIs and CL commands.
OmniFind提供了几种方法来确定要索引哪些SCS spool文件,这些方法基于其他系统API和CL命令所使用的选取准则。
The slave servers then use rsync to copy only those files in the Lucene index that have been changed.
之后,从服务器使用rsync来只复制Lucene索引中的那些已被更改的文件。
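The incremental-copy idea behind that rsync step can be illustrated with a small sketch: copy an index file to the replica only when it is missing or its size or modification time differs. The `sync_changed` helper and the Lucene-style file names are assumptions for illustration; a real deployment would use rsync itself.

```python
import os
import shutil
import tempfile

def sync_changed(src, dst):
    """Copy files from src to dst only when missing or changed; return their names."""
    copied = []
    os.makedirs(dst, exist_ok=True)
    for name in os.listdir(src):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if (not os.path.exists(d)
                or os.path.getsize(s) != os.path.getsize(d)
                or os.path.getmtime(s) > os.path.getmtime(d)):
            shutil.copy2(s, d)  # copy2 preserves the modification time
            copied.append(name)
    return sorted(copied)

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
for name, data in (("segments_1", b"v1"), ("_0.cfs", b"data")):
    with open(os.path.join(src, name), "wb") as f:
        f.write(data)
print(sync_changed(src, dst))  # first sync copies everything
print(sync_changed(src, dst))  # nothing changed, nothing copied: []
```

This works well for Lucene replicas because merged segments are written as new files, so unchanged segments never need to be re-copied.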
For a real world example, this article steps through the creation of a simple search engine written in PHP that can index and search files using Apache Lucene.
有关真实示例,本文逐步说明了使用PHP编写的简单搜索引擎的创建工作,此引擎可使用Apache Lucene建立文件索引和进行搜索。
Eventually when the form is submitted, the PHP script will create a Lucene search index and populate it with all the files in the directory that have a matching extension.
最后,提交表单时,PHP脚本将创建Lucene搜索索引,并使用目录中具有匹配扩展名的所有文件对其进行填充。
The example creates a simple search engine written in PHP that can index and search files using Apache Lucene.
此示例创建了使用PHP编写的简单搜索引擎,可以使用Apache Lucene建立文件索引和进行搜索。
Records changes to files and directories in Git's index.
记录Git索引中的文件和目录更改。
To demonstrate xinetd and how easy it is to turn a vanilla application into a daemon, let's write a Ruby script to return an index of the text files it has access to.
为了演示xinetd以及把普通应用程序转换为守护进程是多么容易,我们来编写一个Ruby脚本,它返回它能够访问的文本文件的索引。
Git uses an internal index to track the state of the files and directories in a repository.
Git使用内部索引来跟踪存储库中文件和目录的状态。
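The staging idea can be modeled in a few lines: the index records a content hash per tracked file, and a status check compares the working tree against that record. This toy model does use Git's actual blob-hash formula (SHA-1 over `blob <length>\0<content>`), but it omits everything else the real index stores (file modes, stat data, merge stages).

```python
import hashlib

def blob_hash(data: bytes) -> str:
    """Hash file content the way Git hashes a blob object."""
    return hashlib.sha1(b"blob %d\0" % len(data) + data).hexdigest()

working_tree = {"README.md": b"hello"}
index = {}

def add(path):
    """Like `git add`: record the current content hash in the index."""
    index[path] = blob_hash(working_tree[path])

def status():
    """Paths whose working-tree content differs from the index."""
    return sorted(p for p, data in working_tree.items()
                  if index.get(p) != blob_hash(data))

add("README.md")
print(status())                          # [] -- clean
working_tree["README.md"] = b"hello, world"
print(status())                          # ['README.md'] -- modified since last add
```

Comparing hashes on every status check is what this sketch does; real Git first compares the cached stat data in the index and only rehashes files that look changed, which is why `git status` is fast on large trees.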