System and method for associating an extensible set of data with documents downloaded by a web crawler

Data processing: database and file management or data structures – Database design – Data structure types


Details

Classification: C707S793000

Type: Reexamination Certificate

Status: active

Patent Number: 06351755

ABSTRACT:

The present invention relates to a system and method for accessing documents, called web pages, on the world wide web (WWW) and, more particularly, to a method for associating an extensible set of data with each document downloaded by a web crawler.
BACKGROUND OF THE INVENTION
Documents on interconnected computer networks are typically stored on numerous host computers that are connected over the networks. For example, so-called “web pages” are stored on the global computer network known as the Internet, which includes the world wide web. Each web page on the world wide web has a distinct address called its uniform resource locator (URL), which identifies the location of the web page. Most of the documents on the world wide web are written in standard document description languages (e.g., HTML, XML). These languages allow an author of a document to create hypertext links to other documents. Hypertext links allow a reader of a web page to quickly move to other web pages by clicking on their respective links. These links are typically highlighted in the original web page. A web page containing hypertext links to other web pages generally refers to those pages by their URL's. Links in a web page may refer to web pages that are stored in the same or different host computers.
A web crawler is a program that automatically finds and downloads documents from host computers in networks such as the world wide web. When a web crawler is given a set of starting URL's, the web crawler downloads the corresponding documents, extracts any URL's contained in those downloaded documents and downloads more documents using the newly discovered URL's. This process repeats indefinitely or until a predetermined stop condition occurs. As of 1999 there were approximately 500 million web pages on the world wide web and the number is continuously growing; thus, web crawlers need efficient data structures to keep track of downloaded documents and any discovered addresses of documents to be downloaded.
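By way of illustration, the core of such a crawl loop might be sketched as follows in Python. The download_page and extract_urls helpers are assumed stand-ins for the crawler's fetching and link-extraction machinery, not part of any particular crawler:

```python
from collections import deque

def crawl(start_urls, download_page, extract_urls, max_pages=1000):
    """Breadth-first crawl: download each URL, enqueue newly discovered
    URLs, and stop when the queue is empty or a page limit is reached."""
    frontier = deque(start_urls)      # URLs still to be downloaded
    seen = set(start_urls)            # URLs already discovered
    downloaded = []
    while frontier and len(downloaded) < max_pages:
        url = frontier.popleft()
        document = download_page(url)            # fetch over HTTP
        if document is None:                     # download failed
            continue
        downloaded.append((url, document))
        for link in extract_urls(document):      # URLs found in the page
            if link not in seen:                 # only enqueue new ones
                seen.add(link)
                frontier.append(link)
    return downloaded
```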
Collecting Information About Documents Downloaded by a Web Crawler
After a document is downloaded by the web crawler, the web crawler may extract and store information about the downloaded page. For instance, the web crawler may determine if the downloaded page contains any new URL's not previously known to the web crawler, and may enqueue those URL's for later processing. In addition, pages downloaded by the web crawler may be processed by a sequence of processing modules. For instance, one processing module might determine whether the document has already been included in a web page index, and whether the page has changed by more than a predefined amount since its entry in the web page index was last updated. Another processing module might add or update a document's entry in the web page index. Yet another processing module might look for information of a specific type in the downloaded documents, extract the information and store it in a directory or other data structure.
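A pipeline of this kind might be organized as in the following sketch; the module names and the dictionary-based document representation are illustrative assumptions rather than the patent's actual interfaces:

```python
class ProcessingModule:
    """Interface shared by all modules: each inspects or annotates a
    downloaded document before the next module in the sequence runs."""
    def process(self, document):
        raise NotImplementedError

class DuplicateChecker(ProcessingModule):
    """Records whether the document already has an entry in the index."""
    def __init__(self, index):
        self.index = index
    def process(self, document):
        document["is_new"] = document["url"] not in self.index

class Indexer(ProcessingModule):
    """Adds or updates the document's entry in the web page index."""
    def __init__(self, index):
        self.index = index
    def process(self, document):
        self.index[document["url"]] = document["content"]

def run_pipeline(document, modules):
    for module in modules:
        module.process(document)

index = {}
doc = {"url": "http://example.com/", "content": "<html>...</html>"}
run_pipeline(doc, [DuplicateChecker(index), Indexer(index)])
```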
During the course of processing a downloaded document, various data can be collected about it. Examples include the date and time of the download, how long it took to perform the download, whether the download was successful, the document's size, its MIME type, the date and time it was last modified, its expiration date and time, and a checksum of its contents. These data can be used for a variety of purposes, including, but not limited to:
passing information from one processing module to a later processing module in a processing pipeline;
collecting statistics about the downloaded documents; and
in the context of a continuous web crawler, determining when a document should next be downloaded (refreshed).
After a document has been processed, its associated data can be saved to disk and analyzed off line.
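For illustration, the per-download data enumerated above might be gathered into a record such as the following and appended to a log file for off-line analysis; the field and file names here are assumptions:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class DownloadRecord:
    url: str
    download_time: float            # when the download started (epoch seconds)
    duration: float                 # how long the download took, in seconds
    success: bool                   # whether the download succeeded
    size: int                       # document size in bytes
    mime_type: str                  # e.g. "text/html"
    last_modified: Optional[float]  # server-reported last-modification time
    expires: Optional[float]        # server-reported expiration time
    checksum: str                   # checksum of the document's contents

body = b"<html>...</html>"
record = DownloadRecord(
    url="http://example.com/",
    download_time=time.time(),
    duration=0.21,
    success=True,
    size=len(body),
    mime_type="text/html",
    last_modified=None,
    expires=None,
    checksum=hashlib.md5(body).hexdigest(),
)

# Append the record to a log file on disk for later, off-line analysis.
with open("download_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```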
A continuous web crawler is one that automatically refreshes a database of information about the pages it has downloaded. A web page can have an assigned or purported expiration date and time, which indicates when the page should be assumed to be no longer valid. Furthermore, a web crawler can be configured to assume that certain types of pages, such as pages on certain types of web sites, cannot be valid for more than a particular length of time. Thus, pages on a news web site might be assumed to be valid for only a few hours, while pages of an online encyclopedia might be assumed to be valid for a much longer time, such as a month.
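Such validity assumptions might be expressed, for example, as a table of maximum validity periods keyed by site category; the categories and periods below are illustrative only:

```python
# Assumed maximum validity per kind of web site; a page is treated as
# stale, and becomes a candidate for re-download, once its period elapses.
MAX_VALIDITY_SECONDS = {
    "news":         3 * 3600,        # news pages: a few hours
    "encyclopedia": 30 * 24 * 3600,  # encyclopedia pages: about a month
    "default":      7 * 24 * 3600,   # everything else: one week
}

def is_stale(last_download_time, site_kind, now, expires=None):
    """A page is stale if its purported expiration time has passed, or
    if the assumed validity period for its kind of site has elapsed."""
    if expires is not None and now >= expires:
        return True
    limit = MAX_VALIDITY_SECONDS.get(site_kind, MAX_VALIDITY_SECONDS["default"])
    return now - last_download_time > limit
```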
In the context of a continuous web crawler, it may be advantageous to record the data associated not only with a document's most recent download, but also with its previous downloads. How complete a download history to keep for each document may vary depending on the user's requirements.
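One way to keep such a bounded per-document download history, sketched under the assumption that each download is summarized as a dictionary containing a checksum:

```python
from collections import defaultdict, deque

HISTORY_DEPTH = 5   # how many past downloads to remember per document

# Maps each URL to a bounded history of its download records; the
# oldest record is discarded automatically once the bound is exceeded.
history = defaultdict(lambda: deque(maxlen=HISTORY_DEPTH))

def record_download(url, record):
    history[url].append(record)

def changed_since_last_download(url, new_checksum):
    """Compare the most recently stored checksum with the current one."""
    past = history[url]
    return bool(past) and past[-1]["checksum"] != new_checksum
```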
The Scooter (a trademark of AltaVista Company) web crawler saves a fixed set of data for each document it discovers and downloads, namely, the document's URL, the number of attempts that have been made to download it, the date and time of the last download attempt, the HTTP status code of the last download, and the document's last modification date and time.
The Sphinx web crawler developed by Bharat and Miller allows document classifiers to associate name/value pairs with a downloaded page. However, Sphinx discards any name/value pairs associated with a document once the document has been processed. Moreover, the values must be strings, not values of arbitrary types.
It would be desirable to provide a much more flexible mechanism that enables the application programs that process downloaded pages to determine what information to save for each downloaded document. In that way, both the data structure for storing such information and the manner in which that information is used would be determined dynamically, without having to customize the code of the web crawler for each application.
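The kind of flexibility contemplated here might be approximated by attaching an open-ended set of name/value pairs, with values of arbitrary types, to each downloaded document, the set of names being chosen by the processing modules rather than fixed by the crawler. The following is a sketch of that idea, not the patented implementation:

```python
class Document:
    """A downloaded document plus an extensible set of attributes.
    Unlike a fixed schema, any processing module may attach values
    of any type under names of its own choosing."""
    def __init__(self, url, content):
        self.url = url
        self.content = content
        self.attributes = {}             # name -> arbitrarily typed value

    def set_attr(self, name, value):
        self.attributes[name] = value

    def get_attr(self, name, default=None):
        return self.attributes.get(name, default)

doc = Document("http://example.com/", b"<html>...</html>")
doc.set_attr("download.duration", 0.21)                   # a float
doc.set_attr("index.changed", True)                       # a boolean
doc.set_attr("links.outbound", ["http://example.com/a"])  # a list
# Unlike Sphinx, the attribute set can persist with the document and
# be written to disk after processing, rather than being discarded.
```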
Prioritizing Document Downloads
Every web crawler must maintain a data structure or set of data structures reflecting the set of URL's that still must be downloaded. In this document, that set of data structures is called “the Frontier.” The crawler repeatedly selects a URL from the Frontier, downloads the corresponding document, processes the downloaded document, and then either removes the URL from the Frontier or reschedules it for downloading again at a later time. The latter scheme is used for so-called “continuous” web crawlers.
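For a continuous crawler, this select/download/process cycle might be sketched with a time-ordered heap as the Frontier; the scheduling details and helper names here are assumptions:

```python
import heapq
import time

def continuous_crawl(start_urls, download_page, process, next_due_time):
    """Repeatedly select the URL due soonest, download and process its
    document, then either drop the URL or reschedule it for a later
    download (the "continuous" scheme described above)."""
    frontier = [(0.0, url) for url in start_urls]   # (due_time, url) pairs
    heapq.heapify(frontier)
    while frontier:
        due, url = heapq.heappop(frontier)
        now = time.time()
        if due > now:
            time.sleep(due - now)                   # wait until the URL is due
        document = download_page(url)
        process(document)
        redo_at = next_due_time(url, document)      # None means drop the URL
        if redo_at is not None:
            heapq.heappush(frontier, (redo_at, url))
```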
When selecting a URL from the Frontier, the inventors have determined that it would often be desirable for the crawler to preferentially select certain URL's over others, so as to maximize the quality of the information processed by the other applications to which the web crawler passes downloaded documents. For instance, the web crawler may pass downloaded pages to a document indexer. An index of documents on an intranet or the Internet will be more accurate, and of higher quality, if the documents of most interest to the users of the index have been preferentially updated, ensuring that those documents are accurately represented in the index. To accomplish this, the web crawler might preferentially select URL's on web servers with known high-quality content. Alternatively, heuristics might be used to gauge page quality; for instance, shorter URL's might be considered better candidates than longer URL's.
In the context of a continuous web crawler, it may be desirable to prefer URL's on web servers whose content is known to change rapidly, such as news sites. It may be desirable to prefer newly-discovered URL's over those that have been previously processed. Among the previously processed URL's, it may be advantageous to prefer URL's whose content has changed between the previous two downloads over URL's whose content has not changed, and to prefer URL's with shorter expiration dates over those with longer expiration dates.
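The heuristics described in the last two paragraphs might be combined into a single scoring function along the following lines; the particular signals and weights are illustrative only:

```python
def url_priority(url, info):
    """Combine the selection heuristics into one score; the URL with
    the highest score is selected from the Frontier first."""
    score = 0.0
    if info.get("host_quality_high"):      # server with known high-quality content
        score += 4.0
    if info.get("changes_rapidly"):        # e.g. a page on a news site
        score += 3.0
    if info.get("never_downloaded"):       # prefer newly discovered URLs
        score += 2.0
    if info.get("changed_last_two"):       # content changed between the
        score += 2.0                       # previous two downloads
    ttl = info.get("seconds_until_expiry")
    if ttl is not None:
        score += 1.0 / (1.0 + ttl / 3600)  # sooner expiration, higher score
    score += 1.0 / len(url)               # shorter URLs score slightly higher
    return score

candidates = {
    "http://news.example.com/": {"changes_rapidly": True},
    "http://example.com/a/very/deep/page.html": {"never_downloaded": True},
}
best = max(candidates, key=lambda u: url_priority(u, candidates[u]))
```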
Maintaining Freshness of Documents Downloaded by a Continuous Web Crawler
As alluded to earlier, web crawlers ar
