System and method of performing profile matching with a...

Data processing: database and file management or data structures – Database design – Data structure types

Reexamination Certificate


Details

C707S793000, C715S252000


active

06681223

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to the field of data processing, and particularly to a software system and associated method for use with a search engine, to search data maintained in systems that are linked together over an associated network such as the Internet. More specifically, this invention pertains to a computer software product for generating profile matches between a structured document and web documents.
BACKGROUND OF THE INVENTION
The World Wide Web (WWW) comprises an expansive network of interconnected computers on which businesses, governments, groups, and individuals throughout the world maintain inter-linked computer files known as web pages. Users navigate these pages by means of computer software programs commonly known as Internet browsers. Because of the vast number of WWW sites, many web pages contain redundant information or strongly resemble one another in function or title. The vastness of the WWW causes users to rely primarily on Internet search engines to retrieve information or to locate businesses. These search engines use various means to determine the relevance of a user-defined search to the information retrieved.
A typical search engine has an interface with a search window where the user enters an alphanumeric search expression or keywords. The search engine sifts through its index of web pages to locate the pages that match the user's search terms. The search engine then returns the search results in the form of HTML pages. Each set of search results includes a list of individual entries that have been identified by the search engine as satisfying the user's search expression. Each entry or “hit” includes a hyperlink that points to a Uniform Resource Locator (URL) location or web page.
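As a toy illustration of the index lookup described above (not taken from the patent itself), keyword matching can be sketched as an inverted index; a real search engine adds crawling, ranking, and scale, all omitted here, and the page names are invented for the example:

```python
def build_index(pages: dict) -> dict:
    """Build an inverted index mapping each keyword to the set of
    page URLs whose text contains it."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

def search(index: dict, query: str) -> set:
    """Return the pages containing every term of the query, i.e. the
    'hits' that satisfy the user's search expression."""
    terms = query.lower().split()
    results = [index.get(t, set()) for t in terms]
    return set.intersection(*results) if results else set()

# Hypothetical miniature web of three pages.
pages = {
    "http://a.example": "Harvard University news",
    "http://b.example": "Harvard Yard map",
    "http://c.example": "MIT news",
}
index = build_index(pages)
hits = search(index, "Harvard news")  # → {"http://a.example"}
```

In a real engine, each hit would be rendered as an HTML entry hyperlinking to the page's URL, as the text describes.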
A significant portion of the WWW documents today are authored in HTML, which is a mark-up language that describes how to display page information through a web browser and how to link documents to each other. HTML is an instance of SGML (Standard Generalized Markup Language) and is defined by a single document schema or Document Type Definition (DTD). The document schema puts forth a set of grammatical rules that define the allowed syntactical structure of an HTML document. The schema, or structure of HTML pages, is typically consistent from page to page.
Currently, Extensible Markup Language (XML) is gaining popularity. XML, which is a subset of SGML, provides a framework for WWW authors to define schemas for customized mark-up languages to suit their specific needs. For example, a shoe manufacturer might create a “shoe” schema to define an XML language to be used to describe shoes. The schema might define mark-up tags that include “color”, “size”, “price”, “material”, etc. Hence, XML documents written in this shoe language will embed semantic, as well as structural, information in the document. For example, a shoe XML document uses the mark-up tag “color” to indicate that the shoe is “blue”.
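A minimal sketch of this idea, using Python's standard xml.etree.ElementTree library and the hypothetical tag names from the shoe example above (the document itself is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical document in the "shoe" language described above; the
# tags (color, size, price, material) follow the text's example, not
# any real published schema.
shoe_xml = """
<shoe>
  <color>blue</color>
  <size>10</size>
  <price>49.95</price>
  <material>leather</material>
</shoe>
"""

root = ET.fromstring(shoe_xml)
# Because the mark-up embeds semantic information, a program can query
# the document by meaning (tag name) rather than by position.
color = root.findtext("color")
price = float(root.findtext("price"))
```

This is exactly the property the text highlights: the "color" tag tells a processing program that "blue" is the shoe's color, something plain HTML mark-up cannot convey.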
One advantage of XML is that it allows the efficient interchange of data from one business to another (or within the business itself). A business may send XML data that conforms to a predefined schema to another business. If the second business is aware of the first business's schema, it may use a computer program to efficiently process the data. To enable this efficient data interchange and processing, XML requires that standard and high-quality schemas be developed and conformed to, by XML documents.
As noted, the XML framework allows for the definition of document schemas, which give the grammars of particular sets of XML documents (e.g. shoe schema for shoe-type XML documents, resume schema for resume-type XML documents, etc.). The XML framework also puts forth a set of structural rules that all XML documents must follow (e.g. open and close tags, etc.). Moreover, it is possible for an XML document to have no associated schema. If a document has an associated schema, the schema must be specified within the document itself or linked to by the document.
Information about the quality of an XML document may be inferred by its conformance with the rules put forth by this XML framework. An XML document is said to be “valid” if it has an associated schema and conforms to the rules of the schema. An XML document is said to be “well-formed” if it follows the general structural rules for all XML documents. Ultimately, a high quality document has a higher probability of being both “valid” and “well-formed” than a low-quality document.
In addition, like HTML documents, XML documents form a hyperlinked environment in which each XML document that has an associated schema provides a link to the schema (if the schema is not defined within the document itself). Moreover, each XML document, using various mark-up structures, such as XLink or XPointer, may link up to other XML structures and XML documents. Unlike the HTML environment, however, the schemas of each hyperlinked document may vary from document to document. A document that satisfies one particular schema can point to a document that satisfies a different schema. Further, two documents with different schemas can point to a document with a third schema. The quality of each schema may vary significantly.
A search of web pages using keywords, in most cases, returns an over-abundance of search results. For example, a search for “Harvard” might result in an excessive number of web pages. Search engines face the challenge of matching these results to a profile provided by the user. Text-based matching alone will often miss some pages that are relevant to the search.
Harvest is a program that tries to solve the robotic copying problem by indexing each site rather than copying its entire contents. Using Harvest, a web site can automatically produce a concise representation of the information on the site. This informational snapshot is then provided to interested crawlers, avoiding congestion on the server and slowdown of the Internet. The automatically generated snapshot index that Harvest produces is a form of metadata, written in a language known as the Summary Object Interchange Format (SOIF). SOIF captures such details as the title, author's name, data type, and, if one is available, the abstract from a web site. In the case of text files, the entire text is included.
Webcasting, or Internet push, automatically delivers information to users based on their profiles. Frequently updated information of regular interest to users, such as headline news and stock quotes, is a prime target for webcasting delivery.
One of the main problems facing webcasting is the lack of sufficient support for personalization in that a subscribed channel often contains a significant amount of information irrelevant to the users' interest. For example, users cannot customize their subscription to receive only information about their favorite teams when subscribing to a sports channel. Moreover, the bandwidth wasted by delivering irrelevant content exacerbates the burden on network infrastructure, preventing widespread deployment.
Therefore, there still remains a need for a solution that enables users to filter subscribed channels according to their needs in an individualized profile and, more importantly, that matches those profiles against available content on the server side. Then only information pertaining to the user's personal interest needs to be displayed and delivered over the network, significantly enhancing usability while reducing network traffic.
The Grand Central Station (GCS) project is more than a search engine. GCS combines both information discovery and webcasting-based information dissemination into a single system. GCS builds a profile of the user and keeps him or her informed whenever something new and relevant appears on the digital horizon. The GCS system generally includes two main components. The first component constantly gathers and summarizes new information in the manner of a robotic crawler. The second component matches this information against the profiles of in


Profile ID: LFUS-PAI-O-3263801
