MMFM maps a large data file into virtual memory so that it can be streamed as if by an in-core method, and organizes the data into spatial blocks to reduce page faults.
Every time you move to a row, you probably do not want to retrieve the column containing the large data file, because doing so would slow down your application's performance.
While your personal computer only keeps one copy of a file, large-scale services like Facebook rely on what are called content delivery networks to manage data and distribution.
You have a keyed physical file, but only want the index on SSD and not the data itself because the amount of data in the physical file is too large.
So you can download the first few megabytes of a large file and start processing that data while you request the next chunk.
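The chunk-at-a-time pattern described above can be sketched as follows; a local file stands in for the remote download, and the chunk size and function name are illustrative assumptions:

```python
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB per request (illustrative)

def process_in_chunks(path, chunk_size=CHUNK_SIZE):
    """Read a large file piece by piece, processing each chunk as it arrives."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)  # stands in for one ranged download
            if not chunk:
                break
            total += len(chunk)         # "process" the chunk here
    return total
```

In a real download, the read would be replaced by a ranged HTTP request for the next byte range while the current chunk is being processed.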
A file can be treated as a single large block of data or, alternatively, as a collection of many distinct records delimited from one another by separators.
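The two views of a file mentioned above can be sketched in a few lines; the separator and the sample data are illustrative assumptions:

```python
def as_records(data: bytes, sep: bytes = b"\n"):
    """View a block of bytes as a list of records split on a separator."""
    # Drop the trailing empty record produced by a final separator.
    return [r for r in data.split(sep) if r]

# The same bytes, seen either as one opaque block or as two records.
block = b"alice,30\nbob,25\n"
records = as_records(block)  # [b"alice,30", b"bob,25"]
```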
Listing 4 illustrates loading a large file into the DOM simply to extract the data from a single attribute with DOMXPath.
Using the DB2 IMPORT command, we then populate this new table with the data contained in the large DEL file: import from staffdata.del of del modified by chardel"" coldel, decpt. insert into newstaff.
The technique in this article has been successfully applied to a file transfer portlet that makes use of the GridFTP protocol for managing large datasets between two third-party data grid nodes.
Transferring large files using the File nodes can be achieved by splitting the data into smaller sized chunks and transferring the data 'as is' chunk by chunk.
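Splitting a payload into fixed-size chunks and transferring them "as is" can be sketched as below; `send` is a hypothetical stand-in for whatever transport the File nodes use:

```python
def split_chunks(data: bytes, chunk_size: int):
    """Split a payload into fixed-size chunks (the last may be shorter)."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def transfer(data: bytes, chunk_size: int, send):
    """Send each chunk unmodified, in order."""
    for chunk in split_chunks(data, chunk_size):
        send(chunk)  # the chunk is transferred "as is"

# Reassembling the received chunks reproduces the original payload.
received = []
transfer(b"0123456789", 4, received.append)
assert b"".join(received) == b"0123456789"
```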
And, if you do need to do an urgent system recovery, you don't want to lose precious time finding disks large enough to restore to and then untangling data file systems from your mksysb backup.
Due to the large amount of data in the audit log table, a single INSERT statement will usually fail because the data per transaction exceeds the log file size of the database system.
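The usual workaround for that log-size limit is to commit in batches, so no single transaction exceeds the log. A minimal sketch, using SQLite for illustration and an assumed one-column schema and batch size:

```python
import sqlite3

def batched_insert(conn, rows, batch_size=1000):
    """Insert rows in batches, committing after each batch so that no
    single transaction can overflow the database's log."""
    cur = conn.cursor()
    for i in range(0, len(rows), batch_size):
        cur.executemany(
            "INSERT INTO audit_log (event) VALUES (?)",
            rows[i:i + batch_size],
        )
        conn.commit()  # end the transaction before the log fills up
```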
Reading data streams and writing results to streams is convenient, because the application does not have to read a large portion of the file, much less the whole file, into memory.
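Stream-style processing as described above can be sketched as follows; the uppercase transform and buffer size are illustrative assumptions:

```python
def stream_transform(src, dst, bufsize=64 * 1024):
    """Read from a stream and write results to a stream one buffer at a
    time, so the whole file is never held in memory."""
    while True:
        buf = src.read(bufsize)
        if not buf:
            break
        dst.write(buf.upper())  # process and emit this buffer immediately
```

The same function works unchanged on files, sockets wrapped as file objects, or in-memory streams.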
By the way, I'm saying "files" here, but the data source could really be anything - chunks of a very large file, rows returned from an SQL query, individual email messages from a mailbox file.
If you have a large data model, or you have data that needs to be hidden from users, you can load an instance from an external XML file (see Listing 5).
To quantify things, it is not at all unusual for XML documents that represent table-oriented data to be three times as large as a CSV or database representation, or even a flat file.
If you're processing large amounts of data, fscanf will probably prove valuable and more efficient than, say, using file followed by a split and sprintf command.
Memory-mapped files and multi-threaded dispatching are used to manage and schedule large volumes of terrain data and texture data.
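Memory-mapping lets the operating system page blocks of a large data file in on demand, which is the core of the technique above. A minimal sketch; the file path and slice offsets are illustrative:

```python
import mmap

def read_block(path, offset, length):
    """Map a large file into memory and slice out one block; only the
    pages actually touched are loaded from disk."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm[offset:offset + length]
```

A terrain manager would typically keep the mapping open and hand out slices per tile, rather than remapping on every access as this sketch does.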
In large-scale file systems based on cluster technology, efficient management of metadata is critical.
To process large volumes of data effectively, this paper introduces a method that handles large data sets through file buffering.
Generating the data exchange file using the EXPORT utility is often a lengthy process in the case of large amounts of data.
But the STL file format has many disadvantages, such as frequent errors, limited precision, and large data volume.
This paper introduces the principle and structure of a large-format three-dimensional laser engraving machine, and discusses the data file format and the hardware and software design.
The NDA file contains data from three large pivotal phase 3 studies, all of which met their co-primary efficacy endpoints.
This avoids file system operations for reading and writing session data, which improves performance when a large amount of data is stored in the PHP session.
Instead of extensive software programming, the method writes image files directly to the hard disk, recording the image data to disk and storing it directly in file format.
This Easy Language source-code example routine for importing text files demonstrates a method for importing large amounts of text data into an Easy Language database.
Given a large volume of log data, the log file contents are classified according to data preprocessing rules, in preparation for the later data analysis.
The storage server maintains data blocks as objects and allocates disk space in extents, which makes the distribution of data blocks on disk more continuous and improves the efficiency of large-file access.