Network server device and file management system using cache...

Electrical computers and digital processing systems: multicomputer data transferring – Multicomputer data transferring via shared memory

Reexamination Certificate


Details

C711S112000

Reexamination Certificate

active

06327614

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a server device such as a Web server and a file management method to be carried out by the server device.
2. Description of the Background Art
As the speed of information distribution has drastically increased with the progress of the Internet, more and more people are now accessing the Internet. Usually, the means for providing information is the World Wide Web (referred to hereafter as the Web), and the protocols used are HTTP (Hyper Text Transfer Protocol) and TCP/IP (Transmission Control Protocol/Internet Protocol). In order to provide information to many people, it is necessary to enhance the bandwidth of the network as well as the performance of the server that stores the information and transmits it to the network.
The main required function of the server is to transmit information stored in a storage device to the network, but because the format of information stored in the storage device differs from the format of information to be transmitted to the network, there is a need to carry out a format conversion. In addition, there is a need to execute protocols in order to carry out communications using HTTP and TCP/IP.
Consequently, if all these processings are to be carried out by a single processor, the processing load on that processor becomes excessively large and causes a bottleneck in the performance improvement. Moreover, it would require a large memory bandwidth, because the processor would be required to copy information within a memory very frequently for the sake of the format conversion mentioned above.
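As a rough illustration (not taken from the patent) of the copying this implies, the sketch below assumes a conventional single-processor server that already holds the stored data in memory and must rewrite it into an HTTP message before transmission; the helper name and buffer handling are illustrative only.

/* Minimal sketch of per-request work on a single-processor server: the
 * stored data is copied again while an HTTP header is prepended, so each
 * response costs CPU time and memory bandwidth proportional to its size. */
#include <stdio.h>
#include <string.h>

static size_t build_http_response(const char *body, size_t body_len,
                                  char *out, size_t out_cap)
{
    /* Format conversion: stored data is rewritten as an HTTP message. */
    int hdr = snprintf(out, out_cap,
                       "HTTP/1.0 200 OK\r\n"
                       "Content-Length: %zu\r\n\r\n", body_len);
    if (hdr < 0 || (size_t)hdr + body_len > out_cap)
        return 0;                       /* response does not fit */
    memcpy(out + hdr, body, body_len);  /* second copy of the same data */
    return (size_t)hdr + body_len;
}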
One known way of easing these bottlenecks is to use servers in a multiprocessor scheme in which a plurality of processors and memories are connected together. For example, in the distributed PC scheme shown in FIG. 1, a plurality of Web servers 100 in the form of PCs are provided on a network 101, to which a plurality of terminals 102 are also connected, in order to improve the performance. However, in this distributed PC scheme, the file management becomes tedious because files are divided over a plurality of Web servers 100, and a complicated management is also required to control which Web server 100 an access is to be made to. In addition, there is also a problem that if accesses are concentrated on files stored in one Web server 100, the processing load on that Web server 100 will be increased.
Also, the shared memory multiprocessor scheme presupposes that a certain level of hit rate can be expected for the cache memory of each CPU, and the bus and memory bandwidth are provided under this presupposition. However, when the shared memory multiprocessor scheme is used in a Web server, there is hardly any hit in the CPU cache, and this causes a bottleneck, so that it has been impossible to take full advantage of the expensive high-speed CPU processing power.
In addition, in the front-end+file server scheme shown in FIG. 2, HTTP is executed at the front-ends 110 connected to the network 111, to which a plurality of terminals 112 are also connected, while necessary files are requested from a file server 113 having storage devices 114, via a back-end network 115 (see "Application of NFS Servers to Strategic Internet/Intranet Website Design", Technical Report 13, Version 1.0, July 1996, Auspex Systems, Inc., Santa Clara, Calif.).
However, in this scheme, communications between the front-ends 110 and the file server 113 are handled by the back-end network 115. Here, the back-end network 115 is generally operated with a standard communication specification such as NFS, and there is a problem that the processing load on the CPU will increase in both the front-ends 110 and the file server 113 in order to carry out such communications.
Also, the function distributed multiprocessor scheme uses a structure as shown in FIG. 3, in which system memories 121 are shared by network controllers 122, file controllers 123, and storage controllers 124, on a shared bus 125 to which a local host 126 is also connected (see U.S. Pat. No. 5,355,453).
However, in this scheme, the reading operations by a plurality of network controllers 122 are all concentrated on the system memories 121, so that there has been a problem that bottlenecks tend to occur at the shared bus 125 and the system memories 121.
Also, in the sparsely-connected multi-computer scheme shown in FIG. 4, a plurality of processor nodes 131, which are connected with each other by an inter-processor connection network 132 such as a hyper-cube or the like and which are connected to the network 133 to which a plurality of terminals 134 are also connected, are used as a Web server (see "A Scalable and Highly Available Web Server", Proceedings of the IEEE Computer Conference (COMPCON), Santa Clara, March 1996).
However, in this scheme, there is a problem that the very high cost required for the inter-processor connection network 132 raises the cost of the system as a whole.
Thus, the conventional server configurations are associated with problems such as tedious management and poor cost performance.
Now, TCP/IP is the protocol widely used for networks, and in the case of transmitting a TCP/IP packet to a network there is a need to calculate a parity (i.e., a checksum) over the packet and attach it to the header portion. Conventionally, this calculation has been realized by having the CPU read out the data to be transmitted, but this is not desirable as it places a load on both the memory bandwidth and the CPU.
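The quantity described here corresponds to the one's-complement Internet checksum used by the IP, TCP, and UDP headers (RFC 1071). The sketch below is illustrative only (the helper name is not from the patent), but it shows why the calculation is costly: the CPU must read every byte of the data to be transmitted just to produce the 16-bit value placed in the header.

#include <stddef.h>
#include <stdint.h>

/* One's-complement Internet checksum (RFC 1071 style) over a buffer. */
static uint16_t internet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    /* Sum the data as a sequence of 16-bit big-endian words. */
    while (len > 1) {
        sum += (uint32_t)((data[0] << 8) | data[1]);
        data += 2;
        len  -= 2;
    }
    if (len == 1)                  /* pad an odd trailing byte with zero */
        sum += (uint32_t)(data[0] << 8);

    /* Fold the carries back into 16 bits and take the one's complement. */
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;
}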
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide a server device capable of realizing a simple management and a superior cost performance.
It is another object of the present invention to provide a server device with an efficient mechanism for transmission of information to a network.
It is another object of the present invention to provide a file management method which is efficient in a hierarchical cache system.
According to one aspect of the present invention there is provided a server device, comprising: a plurality of network interface processors connected to a network, each network interface processor having a network interface local memory functioning as a cache memory for storing a part of server data and a network interface local processor; at least one storage interface processor connected with a storage device for storing the server data; and a connection network for connecting the network interface processors and the storage interface processor; wherein the network interface local processor of one network interface processor carries out a control processing such that a requested data stored in the network interface local memory of the one network interface processor is transmitted to the network when a request received from the network at the one network interface processor satisfies a prescribed first condition, and that the request is given to the storage interface processor via the connection network when the request satisfies a prescribed second condition and a requested data returned from the storage interface processor to the one network interface processor via the connection network in response to the request is transmitted to the network.
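The control processing described in this aspect can be pictured as a dispatch performed on each incoming request by the network interface local processor. The following sketch is a hypothetical illustration only; the helper routines (cache_lookup, fetch_via_connection_network, send_to_network) and data types are assumptions standing in for the "prescribed first condition" (the requested data is held in the network interface local memory) and the "prescribed second condition" (the request is forwarded over the connection network to the storage interface processor).

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    const char *path;        /* name of the requested server data */
} request_t;

typedef struct {
    const void *data;        /* copy of server data held in NIC-local memory */
    size_t      len;
} cache_entry_t;

/* Assumed helpers, not part of the patent text. */
extern cache_entry_t *cache_lookup(const request_t *req);
extern bool fetch_via_connection_network(const request_t *req, cache_entry_t *out);
extern void send_to_network(const void *data, size_t len);

void handle_request(const request_t *req)
{
    cache_entry_t fetched, *hit = cache_lookup(req);

    if (hit != NULL) {
        /* First condition: requested data already resides in the
         * network interface local memory, so transmit it directly. */
        send_to_network(hit->data, hit->len);
    } else if (fetch_via_connection_network(req, &fetched)) {
        /* Second condition: the request is given to the storage interface
         * processor via the connection network, and the returned data is
         * then transmitted to the network. */
        send_to_network(fetched.data, fetched.len);
    }
}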
According to another aspect of the present invention there is provided a file management method at a server device having a plurality of network interface processors connected to a network, each network interface processor having a network interface local memory functioning as a cache memory for storing a part of server data and a network interface local processor, at least one storage interface processor connected with a storage device for storing the server data, and a connection network for connecting the network interface processors and the storage interface processor, the method comprising the steps of: transmitting a requested data stored in the network interface local memory of one network interface processor to the network wh
