Caching frequently viewed data that changes infrequently is a powerful way to decrease wait times for users.
If this data does not change over time, you can often achieve significant performance gains with appropriate use of caching.
With this model, you can simulate the effect that a higher near-cache hit rate will have on the average data access time.
Figure 15. Increasing the probability of a near-cache hit, either through larger near caches or affinity routing, reduces the average data access time.
Form resources such as data instances and images are also stored in the cache to decrease load times and provide a better user experience.
For example, when we write to a file, the new length and the last modification timestamp are updated in the metadata cache.
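As a rough, language-agnostic sketch of that idea (not actual filesystem driver code), a metadata cache that refreshes the length and last-modification time on every write might look like this; MetadataCache and its field names are hypothetical:

```python
import time
from dataclasses import dataclass

@dataclass
class FileMetadata:
    length: int = 0
    mtime: float = 0.0

class MetadataCache:
    """Hypothetical in-memory metadata cache keyed by path."""
    def __init__(self):
        self._entries = {}

    def on_write(self, path, data):
        # Update the cached length and last-modification time
        # alongside the write itself (writes treated as appends here).
        meta = self._entries.setdefault(path, FileMetadata())
        meta.length += len(data)
        meta.mtime = time.time()
        return meta

cache = MetadataCache()
print(cache.on_write("/tmp/example.log", b"hello world"))
```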
Providing a cache for frequently accessed data can save time and reduce contention and overall request volume on the database.
If you use the command cache, all of the data is cached for the same amount of time, as defined in the XML configuration file.
Caching minimizes the amount of time it takes to access the data. Most modern processors have three classes of memory.
By clearing the session cache before the DML update, we cut the time to about 4 minutes, all of which was spent loading data into the session cache.
Also, if you put the cache time data into a database, you can then change the cache time at run time.
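A minimal sketch of that approach, assuming a hypothetical cache_settings table held here in an in-memory SQLite database purely for illustration:

```python
import sqlite3

# Hypothetical settings table; in practice this would live in your
# application database rather than an in-memory SQLite instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache_settings (name TEXT PRIMARY KEY, seconds INTEGER)")
conn.execute("INSERT INTO cache_settings VALUES ('default_ttl', 300)")

def current_cache_ttl(conn):
    """Read the cache time at run time, so it can be changed in the
    database without restarting the application."""
    row = conn.execute(
        "SELECT seconds FROM cache_settings WHERE name = 'default_ttl'"
    ).fetchone()
    return row[0]

print(current_cache_ttl(conn))   # 300
conn.execute("UPDATE cache_settings SET seconds = 60 WHERE name = 'default_ttl'")
print(current_cache_ttl(conn))   # 60, picked up without a redeploy
```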
The network call to get the current time is a perfect item to cache, because your 10-second constraint means the data will not change for up to 10 seconds.
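A minimal sketch of such a time-bounded cache, with a hypothetical get_network_time() standing in for the real network call:

```python
import time

_cached_value = None
_cached_at = 0.0
TTL_SECONDS = 10  # the constraint: the data cannot change faster than this

def get_network_time():
    """Stand-in for the expensive network call."""
    return time.time()

def get_time_cached():
    global _cached_value, _cached_at
    now = time.monotonic()
    if _cached_value is None or now - _cached_at >= TTL_SECONDS:
        _cached_value = get_network_time()   # refresh at most every 10 s
        _cached_at = now
    return _cached_value
```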
If you're familiar with caching, a session is essentially the same concept: storing data to reduce both response time and server workload.
For example, you could use the maximum life and expiration time of each cached item to refresh cached data for specific users who need updated data.
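One possible shape for that, sketched with a hypothetical PerItemCache class in which each entry carries its own expiration time and can be invalidated for a specific user:

```python
import time

class PerItemCache:
    """Sketch of a cache where each entry has its own maximum life."""
    def __init__(self):
        self._items = {}   # key -> (value, expires_at)

    def put(self, key, value, max_life_seconds):
        self._items[key] = (value, time.monotonic() + max_life_seconds)

    def get(self, key, loader, max_life_seconds=60):
        entry = self._items.get(key)
        if entry is not None and time.monotonic() < entry[1]:
            return entry[0]
        value = loader()                    # expired or missing: reload
        self.put(key, value, max_life_seconds)
        return value

    def invalidate(self, key):
        """Force a refresh for a specific user who needs updated data."""
        self._items.pop(key, None)
```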
Unless caching huge amounts of data is critical, the benefits of offloading randomization are twofold: improved software modularity and freed CPU time on the server.
The filesystem driver updated the appropriate metadata, but didn't have time to flush the data from its caches to the new blocks on disk.
Caching the connection reduces setup time by avoiding the cost of connecting to the database (except for the first time).
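A minimal sketch of connection caching, using sqlite3 and a hypothetical get_connection() helper so that only the first call pays the setup cost; a production application would more likely use a connection pool:

```python
import sqlite3

_connection = None

def get_connection():
    """Return a cached database connection; only the first call
    pays the connection setup cost."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect("app.db")   # placeholder database
    return _connection
```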
Whereas the cache API is designed to cache data for a long period or until some condition is met, per-request caching simply means caching the data for the duration of that request.
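A rough sketch of the per-request side of that contrast, with a hypothetical Request object whose cache dictionary is discarded when the request ends:

```python
class Request:
    """Minimal stand-in for a web request object."""
    def __init__(self, path):
        self.path = path
        self.cache = {}        # discarded when the request ends

def get_user_profile(request, user_id, load_from_db):
    # Per-request caching: repeated lookups within one request reuse the
    # dict entry; nothing survives past the request, unlike the cache API.
    if user_id not in request.cache:
        request.cache[user_id] = load_from_db(user_id)
    return request.cache[user_id]
```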
To compute the data access time, you need to know the probabilities of the data existing in the near cache, in the server cache, and in the database.
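That calculation is just a probability-weighted average; the sketch below uses assumed probabilities and latencies (in milliseconds) purely for illustration:

```python
def average_access_time(p_near, p_server, t_near, t_server, t_db):
    """Expected access time as a probability-weighted average:
    hit the near cache, else the server cache, else the database."""
    p_db = 1.0 - p_near - p_server
    return p_near * t_near + p_server * t_server + p_db * t_db

# Illustrative numbers only; the latencies and probabilities are assumptions.
print(average_access_time(p_near=0.3, p_server=0.5, t_near=0.1, t_server=2.0, t_db=20.0))
print(average_access_time(p_near=0.6, p_server=0.3, t_near=0.1, t_server=2.0, t_db=20.0))
```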
Setting this variable to a higher value allows the databases to remain in the cache longer, resulting in less work for the server.
For instance, it makes sense to cache map data to reduce response time, but it's often impossible or impractical to cache the results of search queries or real-time traffic information.
The effect of increasing the probability of a cache hit in the near cache is a 42% improvement in data access time.
This caching layer optimizes access to the physical devices by keeping data around for a short time (or speculatively reading ahead so that the data is available when needed).
You can select Resume to resume the Alphablox Cube Caching service and specify how often and when to refresh the cube cache, as shown in Figure 11.
One solution is to add a UPS, which gives the system enough time when power is lost to flush the drive cache (and potentially the system caches) to disk, solving the problem.
This includes using Bigtable and Memcache together to provide caching of "expensive" data, that is, data that takes a long time to retrieve from a remote resource.
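A generic read-through sketch of that two-tier arrangement, using plain dictionaries as stand-ins for Memcache and Bigtable rather than their real APIs:

```python
import time

fast_cache = {}      # stand-in for Memcache
durable_store = {}   # stand-in for Bigtable

def expensive_fetch(key):
    """Stand-in for data that takes a long time to retrieve remotely."""
    time.sleep(0.5)
    return f"value-for-{key}"

def get(key):
    # 1. Cheap lookup first.
    if key in fast_cache:
        return fast_cache[key]
    # 2. Fall back to the durable store, then the expensive source.
    value = durable_store.get(key)
    if value is None:
        value = expensive_fetch(key)
        durable_store[key] = value
    fast_cache[key] = value     # populate the cache for next time
    return value
```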
Because query results from the [HistoricSales] cube are pre-cached, performance improves for queries that are run against the sales data for the entire time period.
If you used the mechanism above, the portlet would always return the same content during the cache time, and the content would be updated only when the cached data expired.
In the initial example above, the reduction in key data access time from 60 ms to 6 ms was a direct result of this caching benefit, along with the reduced load on the back-end database.
This scenario also has faster response times than the write-through caching scenario, where an update to the cache results in an immediate update to the database.
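A rough sketch of the contrast between the two update policies, with a hypothetical Database class standing in for the slow back end:

```python
class Database:
    """Stand-in for the slow backing database."""
    def __init__(self):
        self.rows = {}
    def write(self, key, value):
        self.rows[key] = value

class WriteThroughCache:
    def __init__(self, db):
        self.db, self.cache = db, {}
    def put(self, key, value):
        self.cache[key] = value
        self.db.write(key, value)        # database updated immediately

class WriteBehindCache:
    def __init__(self, db):
        self.db, self.cache, self.dirty = db, {}, set()
    def put(self, key, value):
        self.cache[key] = value          # fast path: cache only
        self.dirty.add(key)
    def flush(self):
        for key in self.dirty:           # database updated later, in bulk
            self.db.write(key, self.cache[key])
        self.dirty.clear()
```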