Management of the information flow within a computer system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

US Classifications: C711S143000, C711S141000, C345S519000, C345S504000
Type: Reexamination Certificate
Status: active
Patent Number: 06209063

ABSTRACT:

BACKGROUND OF THE INVENTION
The invention relates to the control of information flow among a memory storage area, a cache, and a raster display screen, and in particular to coordinating that flow of information with the blanking interval of the raster display screen.
Computer systems utilize a microprocessor that controls the operation of the computer. The microprocessor typically contains a core processor that, at its most basic level, performs read, write, and binary arithmetic operations. The core operates at an astonishing rate; it is not uncommon for microprocessor cores to operate at rates of between 30 MHz and 200 MHz. This means it takes the core roughly 5 to 33 nanoseconds to perform a single operation.
Core processors execute a defined instruction set and operate on data. The instruction set and the data are stored in memory locations external to the core processor. Furthermore, the core processor is not the only component of the microprocessor that requires access to the external memory areas, and all of the traffic between the microprocessor and the external memory typically takes place over a single external bus (e-bus). For example, data transfer can take place over the e-bus between the display screen controller and the external memory area dedicated to storage of the display screen raster bitmap. A typical display screen with a pixel area of 640 columns by 480 rows, where each pixel requires 8 bits of memory, requires over 2.4 million bits of dedicated storage. Additionally, a typical display screen requires refreshing at a rate of at least 45 Hz. This means roughly 110 million bits of information flow through the e-bus every second merely to keep the display screen refreshed. In some systems, updating the display screen comprises one third of all e-bus traffic. Competition for access to memory via the e-bus creates significant data traffic problems in all computer systems. Accordingly, the core processor must compete with the rest of the system for e-bus bandwidth when accessing external memory. This slows the operation of the core in at least two ways. First, because the core processor is capable of operating at several times the speed of the external memory, any trip to external memory delays the core processor. Second, on top of this delay, the core processor must compete for e-bus bandwidth with the rest of the system, so the core is often idle while waiting to communicate with external memory due to heavy system traffic on the e-bus.
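As a rough check of the figures above, the following sketch (not part of the patent; it simply uses the example values from the preceding paragraph) computes the frame buffer size and the refresh bandwidth for a 640-by-480 display at 8 bits per pixel refreshed 45 times per second:

    #include <stdio.h>

    int main(void) {
        const long cols = 640, rows = 480, bits_per_pixel = 8;
        const long refresh_hz = 45;

        long framebuffer_bits = cols * rows * bits_per_pixel;         /* 2,457,600 bits (over 2.4 million)   */
        long refresh_bits_per_sec = framebuffer_bits * refresh_hz;    /* 110,592,000 bits/s (~110 million)   */

        printf("frame buffer size: %ld bits\n", framebuffer_bits);
        printf("refresh bandwidth: %ld bits per second\n", refresh_bits_per_sec);
        return 0;
    }
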
As mentioned above, memory access is significantly slower than core processing. For example, reading or writing a single memory location can take 50 nanoseconds for a memory operating at 20 MHz. Therefore, while the fastest cores might operate at 200 MHz, the fastest memories operate at one-tenth that speed.
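To make the speed mismatch concrete, the short sketch below (illustrative only) converts the clock rates quoted above into per-operation times, assuming one operation per clock cycle:

    #include <stdio.h>

    int main(void) {
        const double core_mhz_low = 30.0, core_mhz_high = 200.0;   /* core clock range cited above  */
        const double memory_mhz = 20.0;                            /* example external memory clock */

        /* time per operation in nanoseconds = 1000 / frequency in MHz */
        printf("core at %.0f MHz:  %.1f ns per operation\n", core_mhz_high, 1000.0 / core_mhz_high);
        printf("core at %.0f MHz:   %.1f ns per operation\n", core_mhz_low, 1000.0 / core_mhz_low);
        printf("memory at %.0f MHz: %.1f ns per access\n", memory_mhz, 1000.0 / memory_mhz);
        return 0;
    }
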
Maximizing the benefit of fast microprocessor cores requires minimizing the frequency of time-consuming read/write trips over the e-bus to external memory. To solve this problem, microprocessors include on-chip caches, which can store data or instructions needed by the core. In general, microprocessor designs utilize one of two cache arrangements: the first uses one cache for storing data and another cache for storing instructions; the second uses a single cache for storing both instructions and data. In either case, by creating a dedicated direct connection between the cache and the core, the core can very quickly perform read/write operations on the cache. If the information needed by the core resides in the cache, expensive and time-consuming trips to external memory are eliminated. Thus, on-chip caches, together with operating system techniques to keep the caches updated with the information the core is most likely to use, comprise a principal method of speeding up overall computer system performance.
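The on-chip cache described above can be modeled, in greatly simplified form, as a small array of tagged lines with a fast lookup. The direct-mapped structure below is a generic illustration under that assumption, not the specific cache design of any particular microprocessor:

    #include <stdint.h>

    #define NUM_LINES 256                   /* illustrative cache size: 256 one-word lines */

    struct cache_line {
        uint32_t tag;                       /* upper address bits identifying the cached block */
        uint32_t data;                      /* cached copy of the word at that address         */
        int      valid;                     /* nonzero once the line holds real data           */
    };

    static struct cache_line cache[NUM_LINES];

    /* Returns 1 on a hit (data served directly to the core, no e-bus trip);
       returns 0 on a miss (the core would have to read external memory).    */
    int cache_lookup(uint32_t address, uint32_t *out)
    {
        uint32_t index = (address >> 2) % NUM_LINES;   /* word address selects a line */
        uint32_t tag   = (address >> 2) / NUM_LINES;   /* remaining bits form the tag */

        if (cache[index].valid && cache[index].tag == tag) {
            *out = cache[index].data;
            return 1;
        }
        return 0;
    }
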
Use of on-chip caches, however, creates another systemic problem in computer systems. Core processing of data transferred from external memory to the cache changes the information residing in the cache. Thus, the original external memory locations require updating to reflect the operational changes taking place in the cache. Illustrating with the screen display example, the contents of external memory locations that correspond to display screen data locations require transfer into the cache for processing by the core. This occurs whenever the display on the display screen requires manipulation, which is nearly constant in most computer systems. For a display screen that refreshes 45 times per second, the system must transfer information from external memory to the display screen controller at a similar rate or faster. However, updated screen display information may still reside in the cache. If that information does not transfer from the cache to the external memory in time to be transferred to the display screen, the display screen will display incorrect or incomplete information.
One prior art solution for this problem, common to larger PC-level processors, comprises designing a bus snooper into the microprocessor hardware. Bus snoopers require a specialized hardware connection between the cache and the screen display controller, and enable direct transfer of screen display data from the cache to the screen display controller by forcing the controller to use the recent data in the cache rather than the stale data in memory. This solution, however, proves impracticable for all but the largest and most powerful microprocessors. The premium on microprocessor die size eliminates the possibility of such dedicated hardware for most microprocessors, especially those associated with highly compact and portable computer-driven devices. At the same time, the computing demands placed on these compact and portable devices continue to accelerate. Thus, these devices are called on to perform graphically like larger PCs, but due to size and power consumption concerns, they do not contain the facilities that powerful PC microprocessors require to implement the PC solution.
Microprocessors without snoopers, or other similar dedicated hardware, utilize a different approach to solve this problem. Most caches can operate in one of two modes: (1) copy-back, where the cache is constantly accessed by the core and external memory is updated later upon the occurrence of certain events; and (2) write-through, where reads from memory are placed in the cache and writes to the cache are written synchronously to external memory. In situations where the computer system must force data from the cache to external memory, the solution comprises operating in write-through mode. However, this essentially cripples the operating speed of the core. In write-through mode the core is reduced to operating at the speed of the slowest memory component, e-bus traffic is maximized, and the core must compete with all the other systems for e-bus access to external memory. Accordingly, the present invention substantially eliminates the difficulties heretofore encountered in the prior art as discussed hereinabove.
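The two write policies described above can be contrasted in a few lines of code. The following is a generic software model of the behavior, with invented names, rather than the mechanism of any particular microprocessor:

    #include <stdint.h>

    #define MEM_WORDS 1024

    static uint32_t external_memory[MEM_WORDS];   /* stand-in for slow external memory        */
    static uint32_t cached_copy[MEM_WORDS];       /* stand-in for the fast on-chip cache      */
    static int      dirty[MEM_WORDS];             /* copy-back only: lines newer than memory  */

    /* Write-through: every write goes to the cache AND to external memory, so
       memory is never stale, but every write pays the slow e-bus trip.         */
    void write_through(uint32_t index, uint32_t value)
    {
        cached_copy[index]     = value;
        external_memory[index] = value;
    }

    /* Copy-back: the write stays in the fast cache and the line is marked dirty;
       external memory is only updated later, when the line is flushed.           */
    void write_copy_back(uint32_t index, uint32_t value)
    {
        cached_copy[index] = value;
        dirty[index] = 1;
    }

    /* Flush: copy every dirty line back to external memory, for example before
       the display screen controller re-reads the frame buffer.                  */
    void flush_dirty_lines(void)
    {
        for (uint32_t i = 0; i < MEM_WORDS; i++) {
            if (dirty[i]) {
                external_memory[i] = cached_copy[i];
                dirty[i] = 0;
            }
        }
    }
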
SUMMARY OF THE INVENTION
An object of the present invention comprises providing a computer system which can transfer screen display data between a microprocessor cache and an external memory.
Another object of the present invention comprises providing a computer system which can timely transfer data between a cache and an external memory area dedicated to storage of screen display data to ensure high quality screen displays.
These and other objects will become apparent upon reference to the following specification, drawings, and claims.
The present invention intends to overcome the difficulties encountered heretofore. To that end, the present invention comprises a computer system that manages the flow of information between a memory storage area, a display screen, and a cache. The computer system monitors a system signal that in
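As described in the Background, the flow of information between the cache, the memory storage area, and the display screen is coordinated with the blanking interval of the raster display screen. A minimal sketch of that idea follows, assuming a readable blanking-status signal and the flush_dirty_lines routine sketched earlier; both names are illustrative, not taken from the patent:

    extern int  in_vertical_blanking(void);   /* assumed: samples the display's blanking signal   */
    extern void flush_dirty_lines(void);      /* copies dirty cache lines back to external memory */

    /* Illustrative only: wait for the blanking interval, then flush the cache so
       the display screen controller reads up-to-date frame buffer data from
       external memory while the screen is not being drawn.                       */
    void synchronize_display_memory(void)
    {
        while (!in_vertical_blanking())
            ;                                 /* busy-wait until the blanking interval begins */
        flush_dirty_lines();
    }
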
