30 GB per month of Internet data transfer (15 GB of data transfer "in" and 15 GB of data transfer "out" across all services except Amazon CloudFront).
DB2 Express is limited to 4 GB of memory for the entire database server (the database engine will throttle the amount of memory consumption such that it does not exceed the 4 GB limit).
Here, WebSphere eXtreme Scale is used as a network-attached side cache holding around 8 GB of profiles (4 GB + 4 GB of replicas for high availability).
16 GB of available disk space (32-bit), or 20 GB (64-bit).
I usually allocate somewhere between 40 GB and 60 GB per operating system, and I leave the rest of my disk free to load other distributions.
On SLES 8, only 2 GB were used for central memory because of the 31-bit operating system, and 2 GB of expanded memory were used for swap space.
If each worker node has both DN and TT daemons, and each daemon uses 1 GB of memory by default, the total memory allocated would be around 4.8 GB.
This will make an additional 500 MB of memory available for database shared memory, pushing total utilization slightly above 2 GB, to about 2.2 GB.
With 100 GB and five memory pools, memory is broken into five pools of roughly 20 GB apiece, and each LRUD handles around 20 GB (this is a very simplistic view).
For this experiment, we used a small partition on an IBM POWER7 server with 0.5 processor entitlement, 4 GB of memory and 20 GB of storage.
Some can grow to over 2 GB, even 30 or 40 GB.
On SLES 8, only 2 GB were used for central memory because of the 31-bit operating system, and 2 GB of expanded memory for swap space.
For example, if you had a 2 GB buffer pool, then when the 60% changed-page threshold is reached, 1.2 GB (60% of 2 GB) would be written to disk as the page cleaners are triggered.
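A rough sketch of that arithmetic, using only the 2 GB pool size and 60% threshold from the example (the variable names are purely illustrative):

```python
# How much dirty data the page cleaners flush once the threshold fires.
bufferpool_gb = 2.0        # buffer pool size from the example
changed_page_pct = 0.60    # changed-page threshold (60%)

flushed_gb = bufferpool_gb * changed_page_pct
print(f"{flushed_gb:.1f} GB written to disk")   # -> 1.2 GB, as stated above
```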
The instance also includes 22 GB of internal RAM and 1690 GB of disk storage.
As a result, the overall backup volume of that server (both NSF and NLO data) was also reduced from 64 GB to 25 GB, a savings of about 61 percent overall.
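A quick back-of-the-envelope check of the percentage quoted above, using only the 64 GB and 25 GB figures from the sentence:

```python
# Back-of-the-envelope check of the "about 61 percent" savings figure.
before_gb = 64   # backup volume before (NSF + NLO data)
after_gb = 25    # backup volume after

savings = (before_gb - after_gb) / before_gb
print(f"Savings: {savings:.1%}")   # -> Savings: 60.9%, i.e. about 61 percent
```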
The top two entries used as much memory as possible for buffers, 1.6 GB and 1.5 GB respectively.
You should have at least 3 GB of memory and 80-100 GB of disk space.
As far as system requirements go, 1 GB of RAM is the minimum; I used a machine with 2 GB of RAM for this example.
With a page size of 32 KB, the final storage of 144,000 pages is equivalent to 4.4 GB, which is only 5.3 percent of the original raw data volume of 83 GB.
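The 4.4 GB and 5.3 percent figures can be reproduced from the numbers in the sentence; a minimal sketch, assuming binary units (1 GB = 1024 × 1024 KB):

```python
# Reproduce the 4.4 GB / 5.3 percent figures quoted above.
pages = 144_000
page_size_kb = 32
raw_gb = 83

stored_gb = pages * page_size_kb / 1024**2   # KB -> GB
print(f"Stored: {stored_gb:.1f} GB")         # -> about 4.4 GB
print(f"Ratio:  {stored_gb / raw_gb:.1%}")   # -> about 5.3% of the 83 GB raw volume
```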
For example, the maximum size of a table in DB2 is 64 GB for a 4 KB page size, 128 GB for an 8 KB page size, 256 GB for a 16 KB page size, and 512 GB for a 32 KB page size.
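Those four limits scale linearly with the page size; a small sketch, using only the figures quoted in the sentence, shows that each one corresponds to the same 16 M-page ceiling:

```python
# The per-table limits quoted above all work out to the same page count.
limits_gb = {4: 64, 8: 128, 16: 256, 32: 512}   # page size (KB) -> max table size (GB)

for page_kb, max_gb in limits_gb.items():
    pages = max_gb * 1024**2 // page_kb          # convert GB to KB, divide by page size
    print(f"{page_kb:>2} KB pages: {max_gb:>3} GB max = {pages:,} pages")
# Each line prints 16,777,216 pages, i.e. a 16 M-page ceiling regardless of page size.
```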
They also include 7 GB of inbound data transfer and 14 GB of outbound data transfer.
Currently, the 64-bit platform (processor and operating system) supports 8 GB to 1 TB of RAM, whereas the 32-bit operating system supports only up to 4 GB as indicated in the bullet below.
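The 4 GB ceiling for 32-bit systems follows directly from the size of a 32-bit address space; a minimal illustration:

```python
# A 32-bit pointer can address at most 2**32 distinct bytes.
addressable_bytes = 2 ** 32
print(addressable_bytes / 1024 ** 3, "GB")   # -> 4.0 GB, the 32-bit limit noted above
```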
The current Stack Overflow full backup is about 7 GB compressed, and the other databases are perhaps 2 GB compressed.
The site runs on one Lenovo server (dual CPU, 24 GB of memory, and 6 hard drives) as the DB server, and another Lenovo server (one quad-core CPU, 8 GB of memory, and dual mirrored HDDs) as the web server.
In that situation, the per-box DRAM content of the smaller-sized notebooks, up to 2 GB, was replaced by tablets that housed only 256 MB, or 1/8th the amount. Hence, DRAM oversupply.
Logging scalability - for those who have a penchant for extremely long transactions, DB2 has increased the maximum active log space from 32 GB (version 7.2) to 256 GB.
If you have a 1 TB hard drive partitioned into a 250 GB partition and a 750 GB partition, what you store on one partition will not affect the other, and vice versa.
On SLES 8, only 2 GB were used for central memory because of the 31-bit operating system, and 2 GB of expanded memory for swap space.
I have run large Oracle databases on AIX with 250 GB of memory and three 24 GB page spaces.