Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer session/connection establishing – Network resources access controlling
Type: Reexamination Certificate
Date filed: 1999-10-20
Date issued: 2003-12-09
Examiner: Wiley, David (Department: 2143)
US classes: C709S224000, C709S223000, C709S225000, C709S226000, C709S217000, C709S219000
Status: active
Patent number: 06662230
FIELD OF THE INVENTION
This invention relates generally to methods for limiting access of client computers over a computer network to data accessed through a server machine. More particularly, it relates to methods for monitoring client requests and denying access to clients whose requests significantly reduce server performance, or who are attempting to obtain excessively large portions of server resources.
BACKGROUND ART
The popularization of the Internet is changing the ways in which information is typically distributed. Rather than using a limited number of print publications, such as books or magazines, or gaining access to libraries, a person can obtain a great deal of information by accessing a Web server using a browser on a client computer.
Specialized Web sites exist that share large databases with the general public. For example, the U.S. Patent and Trademark Office (www.uspto.gov) provides a searchable full-text patent database containing all U.S. patents issued since 1976. Similarly, IBM hosts a Java™ Web site (www.ibm.com/java) through which developers access technical articles and case studies, and download code segments and other tools. Gourmet® and Bon Appetit® magazines jointly produce the Epicurious® Web site (www.epicurious.com), which contains an enormous recipe database. Each of these sites allows users at client browsers to enter particular search queries, for example, patent classifications, code segment titles, or recipe ingredients. In response, the Web server provides the user with a set of matching Web pages. Each individual Web page result can also be accessed directly using its Uniform Resource Locator (URL).
Most Web servers track the number of times their sites are accessed, termed “hits”; popular Web sites receive thousands of hits in a single day. When a request is made to a server (a GET message), the request is logged in a log file. Log files are not standardized, but generally contain a timestamp, an identifier for the client, and a request string. Web sites can then use the number of hits to attract advertisers to their site, offsetting their maintenance costs and allowing them to continue to provide unlimited and free access.
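By way of illustration, the short Python sketch below splits one access-log entry into the timestamp, client identifier, and request string just mentioned. It assumes a Common Log Format-style layout and an invented sample entry; as noted above, log files are not standardized, so a real server's layout may differ.

```python
import re
from datetime import datetime

# Invented sample entry in a Common Log Format-style layout (one of many
# possible layouts, since log files are not standardized).
LOG_LINE = '203.0.113.7 - - [20/Oct/1999:14:31:02 -0700] "GET /patents/search?q=robots HTTP/1.0" 200 5120'

LOG_PATTERN = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] "(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line):
    """Split one access-log entry into the pieces mentioned above:
    a timestamp, an identifier for the client, and the request string."""
    match = LOG_PATTERN.match(line)
    if match is None:
        return None
    fields = match.groupdict()
    fields["timestamp"] = datetime.strptime(fields["timestamp"], "%d/%b/%Y:%H:%M:%S %z")
    fields["size"] = 0 if fields["size"] == "-" else int(fields["size"])
    return fields

print(parse_log_line(LOG_LINE))
```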
In addition to individual users, Web servers are also accessed heavily by robots: programs that automatically traverse the Web to create an index. Robots, also known as spiders or webcrawlers, retrieve a document and then retrieve all the linked documents contained within the initially retrieved document, rapidly spreading throughout the Web. They may also systematically march through every document on a server. Robots are most commonly, but not exclusively, used by search engines. One robot (ImageLock) records every image it encounters to identify possible copyright infringers. Robots are not inherently destructive, but they can cause two significant problems for a Web server, both of which are referred to as “overcrawling.” First, if they request documents too frequently, they may significantly reduce a server's performance. Second, it is possible (although often a violation of copyright law) to systematically download an entire Web site information repository using a robot, and then publish the information elsewhere.
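To make the traversal concrete, the following sketch shows, under simplifying assumptions, how such a robot spreads: it retrieves a document, extracts the links it contains, and queues the linked documents for retrieval. The starting URL, page limit, and absence of any politeness delay are illustrative choices, not a description of any particular robot.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect the href targets of every anchor tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Breadth-first traversal: retrieve a document, then retrieve the
    documents it links to, until max_pages documents have been fetched."""
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip unreachable or non-HTTP links
        collector = LinkCollector()
        collector.feed(html)
        queue.extend(urljoin(url, link) for link in collector.links)
    return seen

# Example with a placeholder starting point:
# crawl("http://www.example.com/")
```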
Currently, these problems are addressed manually. If a system administrator notices a significant performance decrease, he or she can examine the log files to determine the source of the problem. If one robot is causing the problem, it can be excluded using the Robot Exclusion Standard: the system administrator creates a structured text file called /robots.txt that indicates parts of the server that are off-limits to specific robots. In general, robots read the file before making a request and do not request files from which they are excluded. However, even if a robot does not follow the standard, it is possible to exclude it if its Internet Protocol (IP) address is known.
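As an illustration, the sketch below combines a hypothetical exclusion file (the robot name and paths are invented) with the check a well-behaved robot performs before requesting a document, here using Python's standard urllib.robotparser rather than any particular robot's own implementation.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical /robots.txt written to the Robot Exclusion Standard;
# on a real server this file lives at the document root.
ROBOTS_TXT = """\
User-agent: ExampleBot
Disallow: /database/

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant robot checks each URL first and skips excluded paths.
print(parser.can_fetch("ExampleBot", "/database/patents.html"))  # False: excluded
print(parser.can_fetch("ExampleBot", "/index.html"))             # True: allowed
```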
Manual patrolling of log files is quite time-consuming for the system administrator, especially as a Web site's hit count grows. Because it cannot be done in real time, a crawler is blocked only after it has slowed down site performance dramatically, or after it has downloaded significant amounts of server resources.
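The kind of after-the-fact analysis an administrator performs by hand can be approximated by a short script such as the sketch below, which simply tallies requests per client identifier. The log path is a placeholder, and the script assumes the client identifier is the first field of each entry, as in the hypothetical format shown earlier.

```python
from collections import Counter

def heaviest_clients(log_path, top=5):
    """Tally requests per client identifier, the same check a system
    administrator performs by hand when the server slows down."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if fields:
                hits[fields[0]] += 1  # first field assumed to identify the client
    return hits.most_common(top)

# Example with a placeholder path:
# print(heaviest_clients("/var/log/httpd/access_log"))
```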
A standard method for automatically limiting access to data is through the use of a firewall. A firewall is a set of related programs that protect the resources of a private network by regulating outsiders' access to the network (and often also by regulating insiders' access to the Internet). Firewalls may allow outside access only to users with specific IP addresses or passwords, or may provide alarms when network security is being breached. However, they are generally not designed to protect the resources of servers that provide information to the general public.
A variety of systems have been developed to monitor access of clients to server data. Two broad categories are found: systems for clients who have previously registered to access a server and who must provide an authorized identification; and systems for analyzing client activity to develop statistical data and client profiles, which can be used for marketing or advertising purposes. Both types of monitoring systems may also include features to determine whether there is excessive traffic that will crash the server. Examples of the first category include U.S. Pat. No. 5,553,239, issued to Heath et al., which discloses a system and apparatus for monitoring a client's activity level during connection to a server, and U.S. Pat. No. 5,708,780 to Levergood et al., which provides a system for monitoring the requests an authorized client makes to a server. These systems cannot be used to address the current problem, which occurs in publicly accessible servers.
In the second category is U.S. Pat. No. 5,796,952, issued to Davis et al. In this system, a client profile is developed based on client requests and time spent using each requested file. A server stores information on the amount of data downloaded and the choices the client has made. Based on the data analysis, specific advertising can be sent to the client. This system does not address the problems detailed above, and is mainly concerned with the user's behavior after the requested file is sent to the client machine.
Real-time log file analysis is commonly performed; commercial software packages are available and can be tailored to suit a Web server's specific needs. These packages maintain and analyze log files to create reports of demographics, purchasing habits, average time per visitor, and other information. U.S. Pat. No. 5,787,253 to McCreery et al. discloses an Internet activity analyzer that provides source and destination information and indications of Internet usage, and also detects potential server problems so that users may be notified. A real-time log file analyzer is also provided by U.S. Pat. No. 5,892,917, issued to Myerson. This analyzer creates supplemental log records for cached files that were likely used to satisfy user requests, in order to build a more accurate profile of user activity. None of these prior art log file analyzers uses the gathered information to dynamically determine whether crawlers are abusing their access, either through excessively frequent requests or by downloading excessive portions of the server database, and none can dynamically decide to refuse access.
An additional problem, not addressed by the prior art, is that there is not always a one-to-one correlation between robots and IP addresses or other client identifiers. For example, in many corporations, users access the Internet through a gateway server. All of the users then have the same IP address, and may appear in a log file as a single user. Conversely, a robot might deceptively use multiple IP addresses to systematically download Web site information without being detected.
There is a need, therefore, for a method for dynamically limiting robot access to server data as requests are being made.
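A minimal sketch of that general idea follows; it is not the patented method, and the thresholds, names, and sliding-window bookkeeping are illustrative assumptions. The sketch tracks each client's recent request rate and cumulative bytes served as requests arrive, and refuses further requests once either limit is exceeded.

```python
import time
from collections import defaultdict, deque

class AccessLimiter:
    """Track per-client activity as requests arrive and decide, in real time,
    whether to serve or refuse each one.  Thresholds are illustrative."""

    def __init__(self, max_requests=60, window_seconds=60.0, max_bytes=50_000_000):
        self.max_requests = max_requests      # requests allowed per window
        self.window = window_seconds          # sliding-window length in seconds
        self.max_bytes = max_bytes            # total bytes before cutoff
        self.recent = defaultdict(deque)      # client -> timestamps of recent requests
        self.bytes_served = defaultdict(int)  # client -> cumulative bytes downloaded

    def allow(self, client_id, response_size, now=None):
        """Return True to serve the request, False to deny it."""
        now = time.monotonic() if now is None else now
        timestamps = self.recent[client_id]
        # Drop requests that fell out of the sliding window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        # Deny excessively frequent requests (server-performance protection).
        if len(timestamps) >= self.max_requests:
            return False
        # Deny clients that have downloaded too large a share of the repository.
        if self.bytes_served[client_id] + response_size > self.max_bytes:
            return False
        timestamps.append(now)
        self.bytes_served[client_id] += response_size
        return True

# A server would call limiter.allow(client_id, len(page)) before replying.
limiter = AccessLimiter(max_requests=3, window_seconds=1.0)
print([limiter.allow("203.0.113.7", 1024, now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# -> [True, True, True, False]
```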
OBJECTS AND ADVANTAGES
Accordingly, it is a prim
Inventors: Eichstaedt, Matthias; Emens, Michael Lawrence; Kraft, Reiner; Boutah, Alina
Assignee: International Business Machines Corporation
Agent: Lumen Intellectual Property Services, Inc.
Examiner: Wiley, David