一些IT专业人士认为,重复数据删除和单实例存储(SIS)是一回事,但其实并非如此。
Some IT professionals think that deduplication and Single Instance Store (SIS) are the same thing, but they are not.
不过,这种分割方式会创建重复数据。
However, this partitioning approach creates duplicate data.
锁定或交易,防止重复数据?
Locks or transactions to prevent duplicate data?
另外,读者可能会对书中大量的重复数据感到厌烦。
Moreover, the reader may grow weary of so many repetitive statistics.
数据备份过程通常含有最高级别的重复数据。
Typically, data backup processes contain the highest levels of duplication.
实现方法是创建一个压缩的符号字典,取代行级重复数据模式。
It does this by creating a compression dictionary of symbols which replace repeating data patterns at the row level.
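The sentence above describes dictionary compression of repeating row values. A toy sketch of the idea in Python (the function names and `#n` symbol scheme are illustrative assumptions, not any particular product's format):

```python
# Toy sketch of dictionary compression: values that repeat across rows are
# replaced by short symbols drawn from a compression dictionary.
def build_dictionary(rows):
    counts = {}
    for value in rows:
        counts[value] = counts.get(value, 0) + 1
    # assign a short symbol only to values that actually repeat
    return {v: f"#{i}" for i, (v, n) in enumerate(counts.items()) if n > 1}

def compress(rows, dictionary):
    return [dictionary.get(v, v) for v in rows]

rows = ["alice", "bob", "alice", "alice", "carol", "bob"]
d = build_dictionary(rows)
print(compress(rows, d))  # repeating values appear as short symbols
```

Real engines work on binary patterns rather than whole values, but the principle is the same: store each repeating pattern once and reference it by symbol.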
对数据库中的所有图像重复此过程,结果最小的匹配获胜。
Repeat for all images in the database, and the match with the smallest result wins.
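This describes a nearest-match search: score every candidate and keep the one with the smallest distance. A minimal sketch, with a hypothetical `best_match` helper and 1-D stand-ins for images:

```python
# Distance-based matching: compute a distance for every image in the
# database and return the candidate with the smallest result.
def best_match(query, database, distance):
    return min(database, key=lambda img: distance(query, img))

# Toy example: 1-D "images" with absolute difference as the distance.
db = {"a": 10, "b": 42, "c": 25}
winner = best_match(30, db.values(), lambda q, img: abs(q - img))
print(winner)  # 25
```

With real images, `distance` would typically be a sum of squared pixel differences or a distance between feature vectors, but the loop is identical.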
本文给出了一种等重复数据情形下的非等距正交多项式回归模型。
In this paper, an orthogonal polynomial regression model of non-equidistant points for equal-replicate data is proposed.
能够实时地防止重复数据,避免昂贵的手工清理,从而降低操作成本
The ability to prevent duplicates in real-time, thus reducing operational costs by avoiding pricey, manual cleanup
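Preventing duplicates at insert time, rather than cleaning them up afterwards, can be sketched with a simple seen-key check (the `DedupStore` class is a hypothetical illustration, not a real product API):

```python
# Minimal sketch of real-time duplicate prevention: a record is rejected
# at insert time if its key has already been seen, so no manual cleanup
# is needed later.
class DedupStore:
    def __init__(self):
        self._seen = set()
        self.records = []

    def insert(self, key, record):
        if key in self._seen:
            return False          # duplicate rejected immediately
        self._seen.add(key)
        self.records.append(record)
        return True

store = DedupStore()
print(store.insert("id-1", {"name": "alice"}))  # True
print(store.insert("id-1", {"name": "alice"}))  # False (duplicate)
```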
第二个问题是避免过度使用数据库或计算性资源,防止生成重复数据。
The second problem was avoiding excessive use of database or computational resources in generating duplicate data.
无论是自动精简配置,还是重复数据删除,在未来都会有助于提高存储的效率。
Both thin provisioning and data de-duplication will help drive storage efficiency in the future.
除此之外,最大的问题是,主存储上重复数据所占的比例要比备份数据要低得多。
Beyond that, the big problem is that primary storage has a much lower percentage of duplicate data than backup data.
经过长期的检索实践发现,CALIS书目数据库大致有21种重复数据类型。
After extensive retrieval practice, the CALIS union catalog database has been found to contain roughly 21 types of duplicate data.
重复数据删除在备份市场已经非常流行,但是因为某些因素其在主存储上的价值一直被质疑。
Deduplication has been very popular in the backup market, but its value in primary storage is questionable for several reasons.
第二种容量管理办法可近似地认为是通过重复数据删除或数据压缩来减少主存储磁盘上的数据。
The second capacity management approach to consider is reducing data on primary disk storage via data deduplication or data compression.
到那时,IT管理员和架构师必须设计出能够在效益最明显的地方使用重复数据删除的解决方案。
Until then, IT administrators and architects will have to create solutions that utilize data deduplication where it is most beneficial.
借助重复数据删除,附件实际上只存储一个实例;之后的每个实例都只是引用回已保存的那个副本。
With data deduplication, only one instance of the attachment is actually stored; each subsequent instance is just referenced back to the one saved copy.
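The store-once, reference-many behavior described here is the core of content-addressable deduplication. A minimal sketch, hashing each attachment and keeping only the first copy (the `AttachmentStore` class is an assumption for illustration):

```python
import hashlib

# Sketch of deduplicated attachment storage: content is stored once under
# its hash; every later copy becomes a reference back to the saved instance.
class AttachmentStore:
    def __init__(self):
        self._blobs = {}   # hash -> content, stored at most once
        self.refs = []     # one reference per logical attachment

    def add(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self._blobs.setdefault(digest, content)  # keep only the first instance
        self.refs.append(digest)
        return digest

store = AttachmentStore()
store.add(b"quarterly-report.pdf bytes")
store.add(b"quarterly-report.pdf bytes")   # same attachment sent again
print(len(store.refs), len(store._blobs))  # 2 references, 1 stored copy
```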
更好的方法是创建一个缓存策略,这样可以改善系统性能并最大限度降低重复数据请求对源系统造成的负面影响。
Instead, create a caching strategy to both improve system performance and minimize any negative impact on the performance of source systems caused by repeated requests for data.
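The caching strategy this sentence recommends can be sketched with a memoized fetch function; `fetch` and its call counter are hypothetical stand-ins for an expensive call to the source system:

```python
from functools import lru_cache

# Sketch of a caching layer: repeated requests for the same data are served
# from the cache instead of hitting the source system again.
calls = 0

@lru_cache(maxsize=128)
def fetch(key):
    global calls
    calls += 1               # stands in for an expensive source-system call
    return f"data-for-{key}"

fetch("users")
fetch("users")               # served from cache; source is not touched
fetch("orders")
print(calls)  # 2
```

In a real system the cache would also need an expiry policy, so stale data does not linger, but the performance reasoning is the same.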
在处理拥有大量空白或重复数据的大型文件时,ZIP是一种十分优秀的方法,能够降低带宽负载或存储设备使用量。
ZIP is a great way to reduce bandwidth overhead or storage usage when handling large files that have a lot of white space or repeated data.
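The effect is easy to demonstrate with Python's standard `zlib` module, which implements DEFLATE, the same algorithm ZIP uses:

```python
import zlib

# Highly repetitive data compresses to a tiny fraction of its original size.
repetitive = b"duplicate data " * 1000
compressed = zlib.compress(repetitive)
print(len(repetitive), len(compressed))   # compressed size is far smaller
assert zlib.decompress(compressed) == repetitive  # round-trip is lossless
```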
避免重复数据库服务器的工作:指示DB 2根据您的需要过滤和处理数据,而不是在应用程序中做这项工作。
Avoid duplicating the work of a database server: Instruct DB2 to filter and process data according to your needs rather than doing this work in your application.