Composing XML documents from large datasets.
从大型数据集编写XML文档。
It would be especially slow if large datasets were exchanged.
如果有大量的数据交换更会特别的慢。
BIRCH algorithm is a clustering algorithm for very large datasets.
BIRCH算法是针对大规模数据集的聚类算法。
Data mining is the task of finding useful information in large datasets.
数据挖掘是一项从大型数据集中发现有用信息的任务。
Attribute reduction addresses the need for data processing on large datasets.
属性约简是对大数据集进行数据处理的需要。
Deviation detection is defined as the task of finding unusual data records in large datasets.
偏差检测是一项在大型数据集中发现异常数据记录的任务。
Horizontal partitioning is an important tool for developers working with extremely large datasets.
对于处理超大数据集的开发者来说,水平分区是一种重要工具。
This will work well for filtering an initial set of perhaps 100 items, but will fail on large datasets.
这对于筛选100个项左右的初始集能够正常工作,但对于大数据集将会失败。
Myth: XQuery will not scale to handle large datasets; XQuery will never be as fast as relational databases
神话:XQuery不能处理大型数据集,永远赶不上关系数据库的运行速度
Rules that fire frequently can be placed at the beginning, resulting in better performance on large datasets.
经常触发的规则可放在开头部分,在大型数据集中实现更高性能。
From this article, it's easy to see how Hadoop makes distributed computing simple for processing large datasets.
通过本文很容易看出Hadoop显著简化了处理大型数据集的分布式计算。
XQuery implementations that focus on XML database functions are best used for handling large datasets efficiently.
以XML数据库功能为中心的XQuery实现最适合高效地处理大型数据集。
Not only are parallel programs faster, they can also be used to solve problems on large datasets using non-local resources.
并行式程序不仅运行更快,还能利用非本地资源解决大数据集上的问题。
The use of these systems for routine analysis requires scalable and robust software for data management of large datasets.
使用这些系统进行日常分析需要可扩展且鲁棒的软件进行大量数据集的数据管理。
This technique parses extremely large datasets slightly faster than the JSON-P technique, and generally has a smaller file size.
此技术在解析超大数据集时比JSON-P技术略快,而且通常文件尺寸更小。
Data generators are used to populate tables with random test data, which is especially helpful when very large datasets are needed.
数据生成器使用随机的测试数据来填充数据表,当需要大量数据集的时候这个功能特别有用。
For very large datasets, it's hands down the fastest format, beating out even natively executed JSON in parse speed and overall load time.
对于超大数据集,它无疑是最快的格式,在解析速度和整体加载时间上甚至胜过本机执行的JSON。
However, when composing XML documents, it is common that large datasets need to be joined and restructured to match the desired document structure.
但是在编写XML文档时,通常要联接和重构大型数据集,以匹配所需的文档结构。
The primary usage of Xdmx is in large-scale display systems at universities and research institutions focusing on visualization of large datasets.
Xdmx的主要用途,是在专门研究大型数据集的可视化的大学和研究机构中,用作大型显示系统。
In contrast to previous prediction tools, our new software is especially useful for the analysis of large datasets in real time with high accuracy.
与以前的预测工具相比,我们的新软件尤其适用于对大数据集进行实时、高精度的分析。
But as you build intricate regexps and have large datasets or input files, it can become considerably more difficult to know which string or strings a regexp might match.
但在构建复杂的regexp并处理大型数据集或输入文件时,要知道regexp可能匹配哪些字符串就会困难得多。
This is, as you've seen, a somewhat cumbersome method, particularly with large datasets where the repetitive nature of the formatting can serve to confuse the process.
如您所见,这种方法有些繁琐,尤其是对于大型数据集,其中格式的重复性可能会使处理过程变得混乱。
The quantification of the security shows obvious superiority over the qualitative ways, especially when it is used to process large datasets and mine the security relations.
量化的安全度量技术在挖掘数据潜在的安全关系和处理大数据量方面比定性方法有明显优势。
The tools are extremely efficient and allow the user to compare large datasets (e. g., next-generation sequencing data) with both public and custom genome annotation tracks.
该工具极为高效,允许用户将大型数据集(例如,下一代测序数据)与公开的和定制的基因组注释轨道进行比较。
Until you've tried looking for them, you won't realize just how hard it is to find large datasets, especially ones that fit the particular analytic scenarios you're trying to run.
除非您已经尝试过,否则您不会意识到寻找大型数据集是多么地困难,特别是适合您正在运行的特定分析场景的数据集。
While Mnesia excels at scalability and low latency in transactions on horizontally fragmented data, one remaining challenge may be how it will scale in terms of very large datasets.
对于横向分片数据,Mnesia在伸缩性和低延迟事务上表现突出,接下来的一个挑战可能是对于超大规模数据集它如何伸展。
That being said, Google and Yahoo, in particular, have been pretty good about releasing various large datasets, usually textual data useful for training natural-language processing models.
话虽如此,特别是谷歌和雅虎,在发布各种大型数据集方面一直做得相当不错,这些数据集通常是对训练自然语言处理模型很有用的文本数据。
The technique in this article has been successfully applied to a file transfer portlet that makes use of the GridFTP protocol for managing large datasets between two third-party data grid nodes.
本文所探讨的技术已经被成功应用到一个文件传输portlet,该portlet采用GridFTP协议管理两个第三方数据网格节点之间的大型数据集。
A second large category of storage patterns was satisfied by access to a simple query interface over structured datasets.
第二大类存储模式是通过简单的查询接口访问结构化的数据集。
It replaced the original indexing algorithms and heuristics in 2004, given its proven efficiency in processing very large, unstructured datasets.
鉴于它在处理超大型非结构化数据集方面已被证明的高效性,它在2004年取代了最初的索引算法和启发式方法。