Sitemaps 0.90 is a simple and intuitive way for webmasters to provide the right level of information to Web crawlers so that they can efficiently crawl a Web site.
Also known as the "invisible web", the term "deep web" refers to a vast repository of underlying content, such as documents in online databases, that general-purpose web crawlers cannot reach.
Search engines rely on programs known as crawlers (or spiders) that gather information by following the trails of hyperlinks that tie the Web together.
Yet how these hairy crawlers negotiate steep, slippery surfaces has long been a tangled web for arachnologists.
Used by BICS-aware Web spiders and crawlers to discover support for specific industry schemas.
Using Portal Search, administrators can define content-source crawlers that use the HTTP protocol to crawl and index Web sites or content repositories.
The final step is to make this XML document accessible to crawlers through a Web server.
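The sitemap being published is an XML document in the Sitemap 0.90 format; a minimal sketch looks like the following (the URL, date, and values here are placeholders, not taken from the source):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal Sitemap 0.90 document; the 0.9 namespace is defined by sitemaps.org -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- loc is the only required child element of url -->
    <loc>http://www.example.com/</loc>
    <lastmod>2009-01-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Making it "accessible through a Web server" typically means placing the file (e.g. as sitemap.xml) at the site root and, optionally, advertising it to crawlers with a `Sitemap:` line in robots.txt.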
E-mail harvesting crawlers search Web sites for e-mail addresses, which are then used to generate the mass of spam that we all deal with each day.
Other useful specialized crawlers include Web site checkers.
Providing an alternate set of Web pages helps optimize the entire structure of the site for crawlers, so that they can quickly access and index a large number of embedded pages.
DoGetCachedPage, which retrieves Web pages cached by Google as its Web crawlers encounter them.
Web pages designed for human visitors are not friendly to crawlers. There are a number of site-design techniques you can use to make the search engine's time at your site both easy and meaningful.
Keep in mind that when optimizing a web page, crawlers are basically looking only at your source code.