A design scheme for a new data logger based on a PDA is proposed.
Continuously monitor, display, totalize, and data log flow through any flume or weir.
Connect up to 32 TFT32 Transmitters in a network and run Greyline TFS Tank Farm Software to display levels, monitor relay status, and data log tank levels.
In addition, you can receive data logs, alarms, and summary reports as email attachments at up to four different email addresses, at user-defined time periods.
DB2 9.7 compresses more than just row data; it can also compress indexes, log files, temporary tables, inline XML data, and large objects.
The Hadoop runtime will split up the data (log files) that needs to be processed and give each node in your cluster a chunk of data.
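A minimal sketch of the same split-and-distribute idea, without Hadoop itself: the input log files are cut into byte-range chunks, and a pool of workers each processes one chunk (here, counting ERROR lines). The file names and chunk size are illustrative assumptions:

```python
import os
from multiprocessing import Pool

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, similar in spirit to an HDFS block

def split(paths, chunk_size=CHUNK_SIZE):
    """Yield (path, offset, length) byte-range chunks covering each file."""
    for path in paths:
        size = os.path.getsize(path)
        for offset in range(0, size, chunk_size):
            yield (path, offset, min(chunk_size, size - offset))

def count_errors(chunk):
    """Count lines containing ERROR inside one chunk of one log file."""
    path, offset, length = chunk
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(length)
    return sum(1 for line in data.splitlines() if b"ERROR" in line)

if __name__ == "__main__":
    chunks = list(split(["app.log", "web.log"]))  # assumed input files
    with Pool() as pool:
        print("ERROR lines:", sum(pool.map(count_errors, chunks)))
```

Note that cutting at fixed byte offsets can split a line across two chunks; Hadoop's input formats handle this by reading past the chunk boundary to the next record delimiter, which this sketch omits for brevity.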
The data in this log contains information about the devices connected to the system and any faults and problems recorded by the system during the boot and operational process.
Due to the large amount of data in the audit log table, a single INSERT statement will usually fail because the data per transaction exceeds the log file size of the database system.
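One common workaround, sketched below, is to copy the audit rows in fixed-size key ranges and commit after each batch, so that no single transaction can exhaust the log. The archive table name, the event_id column, and the PEP 249 connection with "?" parameter markers are assumptions for illustration:

```python
BATCH = 10_000  # rows copied per transaction; tune to your log capacity

def archive_audit_log(conn):
    """Copy AUDIT_LOG_T into an archive table in small transactions."""
    cur = conn.cursor()
    cur.execute("SELECT MIN(event_id), MAX(event_id) FROM AUDIT_LOG_T")
    low, high = cur.fetchone()
    while low is not None and low <= high:
        cur.execute(
            "INSERT INTO AUDIT_LOG_ARCHIVE "
            "SELECT * FROM AUDIT_LOG_T WHERE event_id BETWEEN ? AND ?",
            (low, low + BATCH - 1),
        )
        conn.commit()  # each commit releases the transaction's log space
        low += BATCH
```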
The compressed data is kept both on disk and in memory, and DB2 also compresses user data stored in log files, thereby reducing log file size.
Having multiple copies of data in the log introduces some interesting benefits, which will be covered in more detail below.
Up to 20 GB of disk space is required for data and log files to recreate all test cases described in this article.
This article describes how the audit log is set up, and what data is written to the audit log table.
A file system that spans two volumes is allocated for transaction log data.
Log data objects using the message logger to audit a message flow.
Spread data, indexes, and the log over multiple disks, with the log on different disks from the others.
You can run analytic calculations over this log data and use the results to provide a better service.
For example, specific audit log data, or financial report content and format mandated by a relevant policy.
Data indirectly created as a side effect (e.g., log data) falls outside of this mission, as it isn't particularly relevant to lock-in.
A real-world problem many people have to face is how to process huge quantities of log data.
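One common answer is to stream the log rather than load it: the sketch below reads an arbitrarily large file line by line in constant memory and tallies HTTP status codes. The file name and the Apache-style access log format are assumptions:

```python
from collections import Counter

def status_counts(path):
    """Tally the status-code field of an Apache-style access log."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:              # lazy: one line in memory at a time
            parts = line.split()
            if len(parts) > 8:      # skip malformed lines
                counts[parts[8]] += 1
    return counts

print(status_counts("access.log").most_common(5))  # assumed file name
```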
A file system, which spans two volumes, is allocated for transaction log data.
You may want to consider using a logging system that is non-blocking, so that your end user never waits for you to log data.
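Python's standard library supports this pattern directly: a QueueHandler makes the logging call a cheap enqueue, while a QueueListener thread performs the slow file I/O in the background. A minimal sketch:

```python
import logging
import logging.handlers
import queue

log_queue = queue.Queue(-1)                    # unbounded buffer
file_handler = logging.FileHandler("app.log")  # the slow, blocking sink
listener = logging.handlers.QueueListener(log_queue, file_handler)
listener.start()

logger = logging.getLogger("web")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))

logger.info("request served")  # returns immediately; I/O happens later
listener.stop()                # flushes the queue and joins the thread
```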
A file system that spanned two volumes was allocated for transaction log data.
For example, a login process element includes a series of activities, the login credential data, and the login rules for completing a user login process.
Whenever you access audit log data for BPEL processes, use the AUDIT_LOG_B view to do so.
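A hedged sketch of such an access, assuming a PEP 249 database connection; only the view name AUDIT_LOG_B comes from the text, and the row-limiting syntax shown is DB2-style:

```python
def recent_audit_events(conn, limit=100):
    """Read BPEL audit data through the AUDIT_LOG_B view, newest first."""
    cur = conn.cursor()
    cur.execute(
        "SELECT * FROM AUDIT_LOG_B ORDER BY 1 DESC "
        f"FETCH FIRST {int(limit)} ROWS ONLY"
    )
    return cur.fetchall()
```

Going through the view rather than the underlying table keeps such code independent of the table's physical layout.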
Process choreographer writes audit log data to the AUDIT_LOG_T table of the process choreographer database.
The cache must be nonvolatile to ensure that log data is protected in the event of a catastrophic failure, such as loss of power.
As NILFS is log-structured, new data is written to the head of the log while old data still exists (until it's necessary to garbage-collect it).
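The idea is easy to demonstrate with a toy append-only key-value log; this is a conceptual sketch, not NILFS itself. Every write appends at the head of the log, stale versions linger on disk, and a compaction pass plays the role of the garbage collector:

```python
import json

def put(log_path, key, value):
    """Append-only write: new data always goes to the head of the log."""
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps({"k": key, "v": value}) + "\n")

def get(log_path, key):
    """Scan the log; the newest (last) record for a key wins."""
    value = None
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            rec = json.loads(line)
            if rec["k"] == key:
                value = rec["v"]
    return value

def compact(log_path):
    """Garbage-collect: rewrite the log keeping only live records."""
    live = {}
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            rec = json.loads(line)
            live[rec["k"]] = rec["v"]
    with open(log_path, "w", encoding="utf-8") as log:
        for key, value in live.items():
            log.write(json.dumps({"k": key, "v": value}) + "\n")
```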
You do this using the data collection agents in the data collection infrastructure for capturing trace, monitoring, and log data from production or development environments.
When using circular logging, log file extents are reused once they no longer contain active log data.
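A toy model of that reuse rule, a sketch of the concept rather than any particular database's implementation: a fixed ring of extents is filled in order, and the writer may wrap into an extent only after it no longer holds records of an active transaction:

```python
class CircularLog:
    """Fixed ring of log extents; an extent is reused once inactive."""

    def __init__(self, extents=4, extent_size=1024):
        self.extents = [bytearray() for _ in range(extents)]
        self.extent_size = extent_size
        self.current = 0        # extent currently being written
        self.oldest_active = 0  # oldest extent still holding active log data

    def append(self, record: bytes):
        if len(self.extents[self.current]) + len(record) > self.extent_size:
            nxt = (self.current + 1) % len(self.extents)
            if nxt == self.oldest_active:
                raise RuntimeError("log full: next extent still active")
            self.extents[nxt].clear()  # safe to reuse: no active data left
            self.current = nxt
        self.extents[self.current] += record

    def release_through(self, extent_index):
        """Advance the reuse boundary as transactions complete."""
        self.oldest_active = extent_index
```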