Method and apparatus for implementing a workgroup server array

Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06715100

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a server cluster, and more particularly to a method and apparatus for implementing a workgroup server array and its architecture for building various server clusters to accommodate scalable web-based Intranet, Extranet and Internet mission-critical applications.
The inventive server array comprises team/workgroup computers equipped with workgroup-based direct-access servers and controlling devices, as described in applicant's U.S. Pat. No. 5,802,391 entitled “DIRECT-ACCESS TEAM/WORKGROUP SERVER SHARED BY TEAM/WORKGROUPED COMPUTERS WITHOUT USING A NETWORK OPERATING SYSTEM”. Furthermore, this inventive server array gives rise to a workgroup-server-array-based architecture, which can be employed to construct various highly available, scalable and mission-critical server clusters.
2. Prior Art
The explosion of innovative Internet technology is significantly influencing the way applications are written and deployed. The hundreds of thousands of Internet web sites that were once static “brochure-ware” are quickly becoming highly interactive Internet applications with transactional capabilities. Inside large corporations, developers are using Web technology to integrate enterprise applications into large-scale Intranets. Between corporations, business partners are building secure Extranets to streamline their supply chains and improve communication.
As web-based applications expand on the Internet, and on enterprise Intranets and Extranets, the functions they perform are becoming increasingly mission critical. Moreover, as businesses continue to apply web-based technologies to mission critical tasks, they will require sophisticated approaches for making their applications highly available and scalable.
In order to meet high scalability and availability requirements, the trend is toward systems that involve many servers working together, i.e., server clusters, to deliver the applications that end users request. Furthermore, a large-scale web-based service requires an architecture for building server clusters, so that availability, scalability, reliability, performance, management and security issues can be accommodated.
However, current technologies for building a highly scalable, highly available and mission-critical web-application-based server cluster from a plurality of individual servers tend to create a single-server-based 3-tier architecture, hereinafter referred to as the SS-3 architecture. This SS-3 architecture generally requires first-tier components, which are load balancers; second-tier components, which are application servers; and third-tier components, which are database and file servers. Each individual server, which can be PC-based, super-micro-based or mini-computer-based, comprises multiple CPUs with parallel processing capabilities, running an operating system such as WinNT, Solaris, Linux or Unix.
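Purely for illustration, the following minimal Python sketch traces a request through such an SS-3 arrangement: a first-tier load balancer hands each request to a second-tier application server, and every application server queries the same third-tier database/file server. The class names (LoadBalancer, AppServer, DatabaseServer) and all other details are hypothetical and are not taken from the patent.

    # Hypothetical, simplified sketch of the SS-3 request path (not from the patent).
    from itertools import cycle

    class DatabaseServer:
        """Third tier: the central database/file server shared by all application servers."""
        def query(self, statement):
            return f"rows for: {statement}"

    class AppServer:
        """Second tier: runs the web application and fetches data from the third tier."""
        def __init__(self, name, db):
            self.name = name
            self.db = db

        def handle(self, request):
            data = self.db.query(f"lookup {request}")
            return f"{self.name} rendered a page using [{data}]"

    class LoadBalancer:
        """First tier: redirects each incoming request to one of the attached app servers."""
        def __init__(self, app_servers):
            self._next = cycle(app_servers)      # simple round-robin distribution

        def dispatch(self, request):
            return next(self._next).handle(request)

    shared_db = DatabaseServer()
    balancer = LoadBalancer([AppServer(f"app{i}", shared_db) for i in range(1, 4)])
    for req in ("/catalog/42", "/cart", "/checkout"):
        print(balancer.dispatch(req))

Note that every second-tier server funnels its data retrieval through the single shared third tier, which is the structural property the disadvantages discussed below revolve around.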
Based on SS-3 architecture, a highly available and scalable server cluster for web-based applications can thus be built. However, the architecture also creates the following disadvantages:
1. Pertaining to each tiered component
a) Load balancers-Analyze all the incoming traffic and redirect each individual web-based query/request to one of the attached second-tier application servers. The load balancer distributes requests to specific second-tier web-based application servers based on the nature of the request and on the availability and capability of each load-balanced web application server (a hypothetical sketch of this selection step appears after this list of disadvantages). There are three basic types of load balancers: switches, software balancers and appliance balancers. However, the Internet connection will likely become clogged if any of these load balancers is stressed.
b) Application servers-Receive the assignment from the first-tier load balancer, carry out the web-based applications and interface with the third-tier database and file servers for application-oriented data retrieval. However, each application server may differ from the others in hardware and software configuration, creating management complexity for the load balancer. In addition, each application server handles both loyalty-based and non-loyalty-based queries, creating non-coherent program groups with different levels of security measures. Furthermore, an application server has no remote boot capability unless a network-access-based secondary processor is included, so that if the primary processor of the server fails, the secondary processor, accessed by other network-based management servers, can be triggered to reboot the primary processor (a sketch of such an arrangement appears after this list of disadvantages).
c) Database/File servers—Are client-server-based servers that process database/file queries from all the second-tier application servers, which act as clients. Since there is no differentiation between loyalty-based and non-loyalty-based traffic, application-oriented data for both is stored in one central file server and one database server, creating potential database/file retrieval bottlenecks if too many concurrent queries occur. Furthermore, if these file and database servers are implemented as part of a data center that contains multiple distributed database and file servers linked to a plurality of SAN-enabled (storage-area-network) storage devices, the complexity of managing such a data center is high, because complicated database software programs are required on both the client-centric servers and the server-centric servers. Moreover, it is not ideal to lump application-oriented data and business-sensitive data together in one data center, because extra security measures, such as firewall filtering, have to be put in place to guard against the risk of sabotage by web-based browsing activities.
d) The inter-tier communication switches—Are required between the first tiered load balancer and the second-tiered application servers and between the application servers and the third-tiered file and database servers. Since every component is network-based, all the communication between servers is handled through these two switches, creating unnecessary inter-tiered traffic bottlenecks and management overhead.
e) More tiers mean more components, which create more single points of failure—Based on the SS-3 architecture, all the load balancers, application servers, file and database servers, routers and switches should have a fail-over scheme, so that mission-critical applications can be maintained without failure. Even though an overall fail-over scheme can be developed, it is neither efficient nor cost-effective, because too many hardware configurations and software programs are involved.
2. Server cluster management
a) The monitoring and management of single-server-based server clusters become complicated because of the complexity of each component with regard to inter-tier communication. A single software upgrade tends to create software incompatibilities, because too many interdependent software programs from various vendors may also need to be upgraded.
b) The overall performance is not easily optimized. Once a server cluster is built on the SS-3 architecture, it has to meet the criteria of at least handling steady-state operation smoothly and accommodating peak-time operation without glitches. However, there are no distributed, small-scale optimal points that can be gauged, which adds uncertainty to controlling steady-state operation and restricts the measures available for dealing with peak-time operation.
c) High availability and cost-effective linear scalability are difficult to maintain if too many database-centric requests must be serviced concurrently once high-speed web access becomes prevalent. Currently, web-based queries are based on a 56 kbps narrow-band transfer rate, and the related services are centered on web-page delivery (a rough worked comparison of these rates follows this list). However, if the prevalent data transfer rate jumps to 1 Mbps or higher by using cable modem or ADSL and the prevalent s
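The load-balancer selection step mentioned in item 1(a) above can be illustrated with a small hypothetical sketch: the balancer keeps only the attached application servers that are up and able to handle the kind of request at hand, then prefers the one with the most spare capacity. The server table, its fields and the pick_server function are illustrative assumptions, not details from the patent.

    # Hypothetical sketch of a first-tier load balancer's selection step.
    servers = [
        {"name": "app1", "alive": True,  "capacity": 4, "active": 1, "handles": {"static", "dynamic"}},
        {"name": "app2", "alive": True,  "capacity": 8, "active": 6, "handles": {"dynamic"}},
        {"name": "app3", "alive": False, "capacity": 4, "active": 0, "handles": {"static"}},
    ]

    def pick_server(request_kind):
        # keep only servers that are up and can handle this kind of request
        candidates = [s for s in servers if s["alive"] and request_kind in s["handles"]]
        if not candidates:
            raise RuntimeError("no application server available")
        # prefer the server with the most spare capacity (capability minus current load)
        return max(candidates, key=lambda s: s["capacity"] - s["active"])

    print(pick_server("dynamic")["name"])   # -> app1 (largest spare capacity among live candidates)

Whatever policy is used, all incoming traffic still passes through the balancer itself, which is why a stressed balancer can clog the Internet connection as noted in item 1(a).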
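Item 1(b) above notes that an application server lacks remote boot capability unless a network-access-based secondary processor is included. The sketch below shows one hypothetical way such an arrangement could operate: a management server health-checks the primary processor over TCP and, when it stops responding, sends a reboot command to the secondary processor. The host names, port numbers and command string are invented for illustration and do not come from the patent.

    # Hypothetical sketch: a management server triggering a reboot through a
    # network-accessible secondary processor when the primary stops responding.
    import socket

    def primary_is_alive(host, port=80, timeout=2.0):
        """Crude health check: can a TCP connection to the primary processor be opened?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def request_reboot(secondary_host, port=9000):
        """Send a one-line reboot command to the secondary (out-of-band) processor."""
        with socket.create_connection((secondary_host, port), timeout=5.0) as conn:
            conn.sendall(b"REBOOT PRIMARY\n")

    if not primary_is_alive("appserver1.example.internal"):
        try:
            request_reboot("appserver1-mgmt.example.internal")
        except OSError as exc:
            print(f"could not reach secondary processor: {exc}")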
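As a rough worked comparison of the two transfer rates quoted in item 2(c) above (56 kbps narrow-band versus 1 Mbps broadband), the per-client data rate grows by roughly a factor of eighteen; the two rates are from the text, everything else is illustrative.

    # Rough comparison of the per-client transfer rates quoted above.
    narrowband_bps = 56_000        # 56 kbps dial-up
    broadband_bps  = 1_000_000     # 1 Mbps cable modem / ADSL

    factor = broadband_bps / narrowband_bps
    print(f"per-client rate grows by about {factor:.1f}x")   # prints ~17.9x

With the same number of concurrent clients, the back-end tiers would therefore have to absorb very roughly eighteen times the data traffic, which is the kind of load under which the single central database and file servers become the limiting factor.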
