Electrical computers and digital processing systems: multicomputer data transferring – Distributed data processing
Reexamination Certificate
1998-06-12
2001-01-30
Sheikh, Ayaz R. (Department: 2781)
Electrical computers and digital processing systems: multicomputer data transferring
Distributed data processing
C709S202000, C709S203000, C709S231000, C710S107000, C710S031000, C710S120000
Reexamination Certificate
active
06182112
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to high speed data transmission within a general purpose digital computer and more particularly relates to control mechanisms for bi-directional transfer control.
2. Description of the Prior Art
Originally, digital computers were highly non-modular in construction, usually having a single hardware element for each function. Even though the required data transfer rates were relatively low, such systems tended to employ direct point-to-point interfaces between the various components.
As computer systems became more highly modularized, greater flexibility was afforded in providing the capability for configuring and reconfiguring systems to meet specific and changing requirements. Additional memory, for example, could be added as storage requirements increased. Unfortunately, as the number of modules increases, the number of interfaces (and hence the cost for interface hardware) increases geometrically in order for each additional module to communicate with each existing module.
The current and most prevalent solution to the high cost of point-to-point interfaces for highly modular systems is through the use of common busing. In its simplest form, each module of the system is coupled to a single internal bus. Each module transfers data to other modules using that same common bus. Thus, each module within the system has a single interface (i.e., the interface to the common internal bus). As modules are added to the system, no changes and/or additions are required to the existing modules.
The major disadvantage of common busing is that the band pass of the system is limited by the band pass of the internal common bus, because all intermodule data is transferred over that shared bus. As the speed and throughput of the individual modules increase, this limitation becomes extremely severe. Eventually, a system having many high performance modules may actually decrease in performance as additional modules are added.
Some higher cost modern systems employ a hybrid of point-to-point and common busing to enhance performance without unduly increasing cost. This approach, however, sacrifices configuration flexibility, since it is a partial regression to the original point-to-point approach; the system thus becomes less flexible to configure and reconfigure.
Thus, the current state-of-the-art in system design tends to employ this hybrid approach. For a typical application, the system designer maximizes performance by utilizing point-to-point interfaces for the highest demand data paths. Similarly, a common bus is used for interconnecting a larger number of lower band pass modules. In this way, maximum flexibility is retained for the lower band pass module types, while encouraging higher performance from the higher throughput modules.
One method for increasing the overall band pass of a shared resource design is to utilize priority schemes. For example, in a typical system, a number of processors may communicate with one another across a shared bi-directional bus. However, only one of the processors may use the shared bus at any given time. Therefore, the computer system must employ a mechanism for ensuring that only one processor has access to the shared bus at any given time while blocking access of the remaining processors. Often, one or more of the processors may have a greater need to access the shared bus. One reason for this may be that one or more of the processors may be in the critical path of the computer system. If a processor is in the critical path of a computer system and it is not allowed to access the shared resource, the band pass of the entire computer system may suffer. A concrete example of this may be that a first of the processors connected to a shared bus may contain a memory therein for storing instructions which must be accessed by a main processor. A second of the processors connected to the shared bus may be responsible for controlling the I/O ports connected to a printer. It is clear that the first processor should be given priority to use the shared bus over the second processor. If this is not the case, the “band pass” of the computer system may be reduced because the second processor may have control of the bus thereby prohibiting the main processor from fetching instructions from the first processor. This is just an example of where priority schemes are essential to proper operation of modern computer systems.
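By way of illustration only, the fragment below models the mechanism just described as a cycle-based arbiter that grants the shared bus to at most one requester per cycle, with the priority policy left pluggable. The function names, the state dictionary, and the two example requesters are assumptions made for the sketch, not part of the original disclosure.

```python
# Illustrative cycle-based model: at most one requester is granted the
# shared bus per cycle; the policy callback embodies the priority scheme.

def arbitrate_cycle(requests, policy, state):
    """requests: list of booleans, one bus-request line per processor."""
    pending = [i for i, wants_bus in enumerate(requests) if wants_bus]
    if not pending:
        return None                      # bus idles this cycle
    winner = policy(pending, state)      # the priority scheme plugs in here
    state["last_owner"] = winner
    return winner

# Hypothetical example: processor 0 (instruction memory) and processor 1
# (printer I/O) both assert their request lines in the same cycle.
state = {"last_owner": None}
grant = arbitrate_cycle([True, True], lambda pending, st: min(pending), state)
print("bus granted to processor", grant)   # -> processor 0 under this policy
```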
One scheme advanced for solving this problem is a pure “first-in-time” priority scheme. In a pure first-in-time priority scheme, each of the processors that are coupled to the shared bus may assert a bus request signal when the corresponding processor wants to use the shared bus. The first processor that asserts the corresponding bus request signal is given priority and control over the shared bus. If a second processor asserts its corresponding bus request signal after the first processor has control over the bus, the second processor is denied access to the shared bus. After the first processor releases control of the bus, each processor is given another opportunity to obtain control of the bus by asserting its corresponding bus request signal. This process is repeated during normal operation of the computer system.
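A minimal sketch of the pure first-in-time idea, assuming each processor's request is timestamped with the cycle in which it was asserted; the names and data structure are illustrative rather than taken from the patent.

```python
# Pure first-in-time: the earliest asserted request wins; anyone who asks
# while the bus is held is simply denied and must re-assert later.

def first_in_time(request_times):
    """request_times: {processor: cycle its bus request was asserted}."""
    if not request_times:
        return None
    # Earliest request wins; ties broken arbitrarily by processor name.
    return min(request_times, key=lambda p: (request_times[p], p))

# Processor B asserted its request at cycle 3, processor A at cycle 5,
# so B gets the bus; A must request again once B releases it.
print(first_in_time({"A": 5, "B": 3}))   # -> "B"
```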
It is evident that one or more of the processors coupled to the shared resource may be effectively blocked from using the shared resource for an extended period of time. If one of these processors is in the critical path of the computer system, the band pass of the computer system may suffer. In addition, all of the processors that are coupled to the shared resource are given an equal opportunity to access the shared resource every time the shared resource is released by a processor. That is, even the processor that previously had control of the shared resource has an equal opportunity to gain control of the shared resource during the next cycle. Because of the inherent disadvantages of the pure first-in-time scheme described hereinabove, only applications that are non-bandpass limited typically use the pure first-in-time scheme. However, in these applications, the pure first-in-time scheme has the advantage of being simple to implement thereby not requiring much overhead circuitry.
A modified first-in-time scheme has been developed to reduce some of the disadvantages inherent in the pure first-in-time scheme. The modified first-in-time scheme does not allow the processor that previously had control of the shared resource to gain control of the shared resource during the next succeeding bus cycle. This modification prohibits one processor from dominating a shared resource over an extended period of time. One disadvantage of the modified first-in-time scheme is that two or more processors may still dominate a shared resource thereby effectively blocking other processors from accessing the shared resource. For this to occur, however, the two or more processors must alternate in controlling the shared resource thereby giving access to at least two of the processors coupled thereto.
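The modification can be sketched by excluding the previous owner from the very next arbitration cycle; again, the names and data structures below are assumptions for illustration, not the patent's implementation.

```python
# Modified first-in-time: the processor that held the bus last cycle may
# not win the immediately following cycle, so no single processor can
# monopolize the bus.

def modified_first_in_time(request_times, last_owner):
    eligible = {p: t for p, t in request_times.items() if p != last_owner}
    if not eligible:
        return None          # only the previous owner is asking; it must wait
    return min(eligible, key=lambda p: (eligible[p], p))

# A held the bus last cycle, so B is granted even though A asked earlier.
print(modified_first_in_time({"A": 1, "B": 4}, last_owner="A"))   # -> "B"
```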
In some applications, it is important that each of the users that are coupled to a shared resource be given an opportunity to access the shared resource on a periodic basis. The modified first-in-time scheme may include circuitry to prohibit a user that previously had control of the shared resource from gaining control of the shared resource during the next “N” succeeding bus cycles, where N equals the number of users connected to the shared resource. In this configuration, the modified first-in-time scheme may allow all users access to the shared resource on a periodic basis, as sketched below.
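One way to sketch that periodic-access variant is a per-user lockout counter that bars a winner until the other users have each had a chance. The counter arithmetic and the names below are assumptions chosen to make the demonstration cycle cleanly, not the patent's circuitry.

```python
# Periodic-access variant: after winning, a user is locked out for the
# following grants so that, with N users, every user gets a turn.

def lockout_arbiter(requests, lockouts, n_users):
    """requests: {user: cycle its request was asserted}; lockouts is mutated."""
    for u in list(lockouts):                       # age lockouts each bus cycle
        lockouts[u] = max(0, lockouts[u] - 1)
    eligible = {u: t for u, t in requests.items() if lockouts.get(u, 0) == 0}
    if not eligible:
        return None
    winner = min(eligible, key=lambda u: (eligible[u], u))
    lockouts[winner] = n_users                     # barred while the others catch up
    return winner

lockouts = {}
for cycle in range(5):                             # A, B and C request every cycle
    print(cycle, lockout_arbiter({"A": cycle, "B": cycle, "C": cycle}, lockouts, 3))
# grants rotate: A, B, C, A, B
```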
Another priority scheme is termed the “first-in-place” scheme. The first-in-place scheme assigns a priority to each of the users connected to a shared resource. Each time an access to the shared resource is requested, the user having the highest priority assigned thereto is given access to the shared resource. For example, if a user having a priority of “2” and a user having a priority of “5” both request access to the shared resource, the user assigned the higher priority is granted access.
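A minimal sketch of a first-in-place arbiter follows, with the assumed convention that a numerically smaller value means higher priority; the user names and priority values are invented for the example.

```python
# First-in-place: each user has a fixed, preassigned priority, and the
# highest-priority pending requester always wins, regardless of request order.

PRIORITY = {"mem_ctrl": 2, "printer_io": 5}   # smaller number = higher priority (assumed)

def first_in_place(pending):
    """pending: set of users currently asserting a bus request."""
    if not pending:
        return None
    return min(pending, key=lambda u: PRIORITY[u])

# Both users request access in the same cycle; the priority-2 user wins.
print(first_in_place({"mem_ctrl", "printer_io"}))   # -> "mem_ctrl"
```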
Bauman Mitchell Anthony
Gilbertson Roger L.
Malek Robert Marion
Johnson Charles A.
Nawrocki, Rooney & Sivertson P.A.
Phan Raymond N
Sheikh Ayaz R.
Starr Mark T.
Method of and apparatus for bandwidth control of transfers...