A new method is proposed for training-sample selection in large data sets.
What data structure should be used for near-real-time lookups on a large data set?
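A hash table is the usual answer for near-real-time exact-key lookups; a sorted structure with binary search is a common alternative when range queries are also needed. A minimal Python sketch (the record layout and keys are invented for illustration):

```python
import bisect

# Hash table (dict): O(1) average-case lookup by exact key.
records_by_id = {1001: "alice", 1002: "bob", 1003: "carol"}
print(records_by_id.get(1002))  # "bob"

# Sorted list + binary search: O(log n) lookup, but it also
# supports range queries, which a plain hash table cannot do.
sorted_ids = sorted(records_by_id)

def ids_in_range(lo, hi):
    """Return all keys in [lo, hi] via bisect on the sorted list."""
    start = bisect.bisect_left(sorted_ids, lo)
    end = bisect.bisect_right(sorted_ids, hi)
    return sorted_ids[start:end]

print(ids_in_range(1001, 1002))  # [1001, 1002]
```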
But if you have a large data set, TreeGrid's performance is extremely slow.
This may significantly impact the execution time of the rule when it is executed on a large data set.
The HMM parameters are further trained by iterative learning on a large data set.
These applications typically have a very large data set and display a high information density per screen.
Finally, Pig is a platform for analyzing large data sets that includes a high-level language for Hadoop programming.
The standard FCM algorithm is not only extremely time-consuming when clustering a large data set, but it is also sensitive to noise.
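To make the cost concrete: every FCM iteration recomputes memberships for all n points against all c centers, and because every point keeps a nonzero membership in every cluster, outliers keep pulling on the centers. A minimal NumPy sketch of the standard algorithm (not any particular paper's implementation):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on X (n_samples, n_features).
    Each iteration touches all n*c distances, which is why the
    standard algorithm gets expensive on large data sets."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                    # avoid division by zero
        U = 1.0 / (d ** (2 / (m - 1)))           # standard update rule
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
centers, U = fcm(X)
print(centers)
```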
We are trying to give people a way to drive their testing scenarios with a large data set. Say, for example, you've written the following test in Twist.
Data mining, a technology for discovering information and knowledge in large data sets, is used in many fields, including anomaly detection.
And it offers a high transfer rate (high throughput) for accessing application data, which suits applications with very large data sets.
The second query uses an XML predicate to ensure that only the rows for "95141" are generated, resulting in a shorter runtime, especially on a large data set.
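The two SQL queries themselves are not reproduced here; the same filter-early idea can be illustrated with the XPath predicates in Python's standard library (the element names below are invented):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<customers>
  <customer><zip>95141</zip><name>Ann</name></customer>
  <customer><zip>10001</zip><name>Bob</name></customer>
  <customer><zip>95141</zip><name>Cho</name></customer>
</customers>
""")

# Without a predicate: every <customer> row is materialized first,
# then filtered afterwards.
slow = [c for c in doc.findall("customer") if c.findtext("zip") == "95141"]

# With an XPath predicate: only matching rows are selected up front,
# the same idea that shortens the runtime of the SQL query.
fast = doc.findall("customer[zip='95141']")

print([c.findtext("name") for c in fast])  # ['Ann', 'Cho']
```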
Knowledge discovery in databases is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in large data sets.
For example, one may need to generate samples drawn from a normally distributed population, compute the standard deviation of a large data set, or plot a histogram.
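All three tasks are covered by Python's standard library; a small self-contained sketch (the mean, standard deviation, and bucket width are arbitrary):

```python
import random
import statistics
from collections import Counter

random.seed(42)

# Samples drawn from a normally distributed population (mean 10, sd 2).
data = [random.gauss(10, 2) for _ in range(10_000)]

# Standard deviation of the data set.
print(f"sample std dev: {statistics.stdev(data):.3f}")

# A crude text histogram: bucket each value by its integer part.
for value, count in sorted(Counter(int(x) for x in data).items()):
    print(f"{value:3d} | {'#' * (count // 50)}")
```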
While loading, it is possible to reorganize the data to better suit the algorithms; for example, loading it into a hash table or a database may help when working with a large data set.
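For instance, grouping rows by a key while reading them in turns later per-key processing into a single hash-table lookup instead of a full scan. A sketch with invented column names:

```python
import csv
import io

# Stand-in for a large CSV file (columns are invented for illustration).
raw = io.StringIO("id,country,amount\n1,DE,10\n2,FR,7\n3,DE,4\n")

# Reorganize while loading: index rows by country in a hash table so
# later per-country work is O(1) lookup rather than a full scan.
by_country = {}
for row in csv.DictReader(raw):
    by_country.setdefault(row["country"], []).append(row)

print(len(by_country["DE"]))  # 2
```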
Currently, at least 80% of the main protein sequence databases can be classified using these tools, thus providing a large data set from which to analyze protein domain architectures.
How difficult is it to divide a large data set into smaller pieces, and how much information will you have to add in order to reassemble the smaller pieces correctly on the other side?
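One common answer is that each piece needs at least a sequence number and the total chunk count, so the receiver can detect missing pieces and restore order. A toy sketch of that minimum metadata:

```python
def split(data: bytes, chunk_size: int):
    """Split data into chunks, tagging each with the metadata needed
    to reassemble it: a sequence number and the total chunk count."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(seq, len(chunks), chunk) for seq, chunk in enumerate(chunks)]

def reassemble(pieces):
    """Reorder by sequence number and verify nothing is missing."""
    pieces = sorted(pieces)                          # sorts by seq number
    total = pieces[0][1]
    assert [p[0] for p in pieces] == list(range(total)), "missing chunk"
    return b"".join(p[2] for p in pieces)

parts = split(b"a large data set", chunk_size=4)
import random
random.shuffle(parts)                                # out-of-order arrival
assert reassemble(parts) == b"a large data set"
```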
These capabilities include sorting, grouping, aggregating, and a large set of functions for manipulating the different data types.
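As a small illustration of sort-group-aggregate in Python (the data is invented):

```python
from itertools import groupby

orders = [("FR", 7), ("DE", 10), ("DE", 4), ("FR", 2)]

# groupby only groups adjacent items, so sort by the key first.
orders.sort(key=lambda o: o[0])

# Group by country and aggregate the amounts.
for country, group in groupby(orders, key=lambda o: o[0]):
    print(country, sum(amount for _, amount in group))
# DE 14
# FR 9
```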
We can base the performance test on a regular usage scenario and then extend it to a large test data set that imposes some extreme loads.
Merge sort is not an inherently parallel algorithm, as it can be done sequentially, and it is popular when the data set is too large to fit in memory and must be sorted in pieces.
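The "sorted in pieces" case is the classic external merge sort: sort fixed-size chunks, spill each sorted run to disk, then merge the runs. A toy sketch with deliberately tiny chunks (a real implementation would use far larger runs and buffered I/O):

```python
import heapq
import os
import tempfile

def external_sort(values, chunk_size=4):
    """Sort data too large for memory: sort fixed-size chunks, spill
    each run to a temp file, then k-way merge with heapq.merge."""
    paths, chunk = [], []

    def spill():
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "w") as f:
            f.writelines(f"{v}\n" for v in sorted(chunk))
        paths.append(path)
        chunk.clear()

    for v in values:
        chunk.append(v)
        if len(chunk) == chunk_size:
            spill()
    if chunk:
        spill()

    files = [open(p) for p in paths]
    runs = ((int(line) for line in f) for f in files)
    merged = list(heapq.merge(*runs))   # each run is already sorted
    for f in files:
        f.close()
    for p in paths:
        os.remove(p)
    return merged

print(external_sort([9, 1, 7, 3, 8, 2, 6, 4, 5]))
```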
Depending on the sophistication of the search terms and the size of the underlying data store, the set of matches returned for any particular query may be so large as to be unusable.
To present a large set of data in manageable chunks, the requirement is often to present only a subset of the data per page.
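This is the usual offset/limit pagination pattern; a minimal sketch (the function name and page size are arbitrary):

```python
from typing import Sequence

def page(data: Sequence, page_number: int, page_size: int = 20):
    """Return one page of the data set plus the total page count:
    the offset/limit pattern behind most paginated UIs."""
    start = (page_number - 1) * page_size
    total_pages = -(-len(data) // page_size)   # ceiling division
    return data[start:start + page_size], total_pages

rows = list(range(95))
subset, total = page(rows, page_number=5, page_size=20)
print(subset, f"page 5 of {total}")  # rows 80..94, page 5 of 5
```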
This will certainly take some time, as the data set is fairly large. You must also have sufficient disk space to hold the data throughout the import process.
The data set was too large to fit into Informix's memory buffers.
QualityStage tackles the data to be cleansed using a large set of hand-crafted rules written with the help of knowledge experts.