Systems and methods for continuously accumulating research...

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate

Details

active

06594654

ABSTRACT:

TECHNICAL FIELD
The present invention relates generally to computerized research, and more particularly relates to a system and methods for conducting research on networked computer systems, particularly Internet-based data archives, and accumulating the research information to obtain useful computerized research results.
BACKGROUND
The Internet is an immense network. As of early 2000, there were more than 100 million users accessing over 5 million active sites containing over 800 million pages of information, and the network continues to grow daily. That is an astounding mountain of raw data to sift through.
The Internet's greatest strength—the immense volume of information—is also the root of one of its weaknesses. Extracting specific knowledge from this vast repository of information can be frustrating and extremely time-consuming. Additionally, sites are published by thousands of people; there is no organization to this mass of information. Web pages are constantly added, deleted, updated, and moved. Finding relevant information on the Internet can be challenging in such a chaotic environment. Conventional search engines, such as ALTAVISTA.COM and YAHOO.COM, seldom find a desired answer without numerous irrelevant distractions.
There are several reasons why Internet searches are not effective. First, conventional search engines and directory services on the Internet are designed to provide instant, cursory reviews of the enormous numbers of pre-cataloged topics on the Internet. This method produces a tremendous quantity of raw and unrelated information. Generally, conventional search engines do not identify what is new or changed since the searcher last asked about the topic. Conventional search engines often return unmanageably large numbers of answers to a single question. Further, conventional engines rely on stale information—sometimes weeks to months old. Conventional search engines do not retain search results; the searcher must restart each time a search is conducted. Conventional engines work only while the searcher is online; cover a mere 20% of the available content on the Internet; only show preestablished or “canned” summaries that are frequently unrelated to the question; cannot report information that has frequently changing content; and do little or nothing to teach a searcher how to construct an effective query.
In order to perform a search with most search engines, a user typically submits a query containing one or more query terms. A query server program of the search engine then processes the query to identify any items that match the terms of the query. The result of the query is a set of web sites or documents, which is typically presented to the user as a hypertext listing of the located items. If the scope of the search is large, the query result may contain hundreds, thousands, or even millions of items.
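As a minimal illustration of the query-processing step described above, the following sketch (the document names and contents are hypothetical) returns the items whose text contains every term of the query:

```python
def match_query(query, documents):
    """Return the ids of documents whose text contains every query term."""
    terms = query.lower().split()
    return [doc_id for doc_id, text in documents.items()
            if all(term in text.lower() for term in terms)]

# Hypothetical mini-corpus standing in for pre-cataloged web pages.
docs = {
    "Page A": "continuously accumulating research on the Internet",
    "Page B": "search engines index millions of web pages",
}
print(match_query("research internet", docs))  # ['Page A']
```

A real query server would additionally rank the matches and render them as a hypertext listing; this sketch performs only the matching step.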
Due to the enormous and rapidly growing quantity and diversity of information accessible through the Internet, search engines generally maintain a tremendous amount of Internet content and pre-index the information to facilitate rapid searching. Therefore, when an Internet user enters a search, the search engine quickly looks into its index and tries to provide the user with a response within a few seconds. The accuracy of the information provided in the response, however, depends on the current state of the index, which may be incomplete and/or outdated.
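The pre-indexing described above is conventionally implemented with an inverted index, which maps each term to the set of pages containing it so that a query can be answered by intersecting those sets rather than scanning the content. A toy version, with hypothetical page contents, might look like:

```python
from collections import defaultdict

def build_index(pages):
    """Build a simple inverted index: term -> set of page ids."""
    index = defaultdict(set)
    for page_id, text in pages.items():
        for term in text.lower().split():
            index[term].add(page_id)
    return index

def lookup(index, query):
    """Answer a query by intersecting the posting sets of its terms."""
    postings = [index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

pages = {1: "internet search engine", 2: "internet research archive"}
idx = build_index(pages)
print(lookup(idx, "internet search"))  # {1}
```

The accuracy caveat in the paragraph above applies here as well: the index answers from whatever was crawled at build time, so results are only as fresh as the last call to `build_index`.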
Another class of search solution is the “meta-search engine,” as implemented on sites such as DOGPILE.COM and METACRAWLER.COM. These meta-search services collect the search request from the user, then farm out the request to two or more pre-selected search engines or directories. The results returned are then rapidly repackaged and presented to the user. Various implementations perform differing levels of compiling the results before presentation. The simplest merely report the results from each search engine or directory separately. More advanced ones merge the results into a single report, eliminating duplicates.
However, meta-search engines are wholly dependent on traditional search engines and directories for their results. Meta-search engines use a similar model of providing the results as rapidly as possible to the user. Accordingly, such engines have the same search engine limitations cited above—except that by accessing the indexes of more than one search engine, they increase the potential coverage of the Internet beyond the typical 20% of a single search engine. Further, many implementations of the meta-search concept fail to adapt or optimize the user-entered search syntax to the various search engines used. Meta-search engines also use a preset collection of search engines (some provide user selection of the specific choices) which are used for all search requests.
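The merging step performed by the more advanced meta-search implementations (combining several engines' results into a single report and eliminating duplicates) can be sketched as follows; the URLs are illustrative:

```python
def merge_results(*result_lists):
    """Merge result lists from several engines, dropping duplicate URLs
    while preserving first-seen order (one simple merging policy)."""
    seen, merged = set(), []
    for results in result_lists:
        for url in results:
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

engine_a = ["http://a.example", "http://b.example"]
engine_b = ["http://b.example", "http://c.example"]
print(merge_results(engine_a, engine_b))
# ['http://a.example', 'http://b.example', 'http://c.example']
```

The simplest implementations described above would skip this step and report each engine's list separately.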
Although existing search engines are generally useful, users interested in acquiring and compiling focused information are often inundated with too many results. Moreover, prior art search engines are ill-equipped to handle the formidable task of indexing the vast amounts of developing Internet content. Indeed, because existing search engines are tailored to giving users immediate responses, those responses are often inaccurate, irrelevant, and/or antiquated. The user ultimately takes the brunt of any errors, inaccuracies, and outdated information. Specifically, users are often presented with duplicative search results (i.e., the same found item may appear on one or more different web sites), or dead links (which generate the dreaded “Error 404”, meaning that the information, although indexed, is no longer available at the site that generated the index entry). In addition, search engines provide only one tool for actually conducting a research project.
Human beings traditionally conduct research in a manner that is not facilitated by present Internet search engines. Just as in conventional library research, people typically conduct research by (1) attempting to identify one or more authoritative sources of information, (2) locating and querying those sources, (3) inspecting manageable collections of information provided by the sources, (4) taking notes on the information (e.g., by writing on an index card), (5) “filtering” the information by categorizing the cards as a function of quality, currency, completeness, etc., (6) selecting and retaining those items of information that satisfy the researcher's goals, and (7) repeating the previous steps as necessary to obtain sufficient information to meet the initial research goals and to receive informative updates over the duration that the topic continues to be of interest or importance. The final research product is the result of selecting the most relevant items of information from the various sources.
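One pass of the seven-step research loop above can be sketched in code; the function and source names are hypothetical, and the relevance test stands in for the manual filtering of steps (4) through (6):

```python
def accumulate_research(sources, query, is_relevant):
    """One pass of the research loop: query each source, inspect the
    items it returns, filter them, and retain the relevant notes."""
    notes = []
    for source in sources:                    # step (2): query the sources
        for item in source(query):            # step (3): inspect the results
            note = {"source": source.__name__, "item": item}  # step (4): note
            if is_relevant(note):             # step (5): filter
                notes.append(note)            # step (6): retain
    return notes  # step (7) would repeat with refined queries or sources

# Hypothetical sources of information.
def library(query):
    return ["catalog entry about " + query]

def newsgroup(query):
    return ["unrelated posting"]

found = accumulate_research([library, newsgroup], "widgets",
                            lambda n: "widgets" in n["item"])
print(found)  # [{'source': 'library', 'item': 'catalog entry about widgets'}]
```

The point of the contrast drawn here is that this loop is stateful and iterative, whereas a conventional search engine performs only a single stateless lookup.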
As described above, existing Internet search engines only provide the user with a list of possible sources of information (i.e., a list of static items that may have been indexed a day, a week, or even a month ago). The list provided by an existing search engine is much like handing a library user a listing from a card catalog. For example, like a card catalog, a list of sources only informs the user that there may be information available on a particular research topic. It does not provide the user with any additional assistance. The user is left to determine whether the information is still available and, if so, whether it is truly relevant to the research query.
Furthermore, while existing Internet search engines provide instantaneous responses, they do not provide users with any continuity of use, adequate means for filtering out irrelevant information, adequate means for determining the quality of seemingly relevant search results, or adequate means for retaining relevant findings. In other words, prior art search engines do not maintain a relationship with any one user and are therefore unable to identify one user from another. Accordingly, when a user enters a follow-up search request to obtain updated information from a past search, existing search

Profile ID: LFUS-PAI-O-3004219
