US Classification: Electrical computers and digital data processing systems: input/output – Input/output data processing – Input/output data buffering
Type: Reexamination Certificate
Filed: 2000-06-28
Issued: 2004-04-27
Examiner: Elamin, Abdelmoniem (Department: 2182)
Other Classes: C710S044000, C710S052000, C711S113000, C711S133000
Status: Active
Patent Number: 06728800
FIELD OF THE INVENTION
The present invention relates generally to the field of computers and computer systems. More particularly, the present invention relates to an efficient, performance-based mechanism for handling multiple translation look-aside buffer (TLB) operations.
BACKGROUND OF THE INVENTION
Since the beginning of electronic computing, software applications have been placing greater and greater demands on memory. Computer systems have also evolved to include memory hierarchies comprising various types of long-term storage, main memory, and caches. However, as one moves down the memory hierarchy from caches to long-term storage, device access times increase dramatically. An ideal solution is to have enough cache memory or fast main memory available to service the currently executing program. Furthermore, at any instant in time, most computers are running multiple processes, each with its own address space, yet in most systems physical memory is present in only limited amounts, or programs demand more memory than is available.
One means of sharing a limited amount of physical memory among many processes is virtual memory: the physical memory is divided into blocks, and the blocks are allocated dynamically to different processes. The use of virtual memory also allows a programmer to design programs that access more memory than is physically present in the system. Generally, each program is given its own address space. This address space is also divided into blocks, called pages. During program execution, pages that are needed for current execution are stored in main memory, whereas pages that are not currently being accessed are stored in slower secondary storage such as a hard disk drive. As the program executes, pages are swapped in and out between main memory and the hard disk drive as specific code is required.
With virtual memory, programs use virtual addresses to access their code. The processor takes each virtual address and translates it to a physical address, which is used to access main memory. This process is called memory mapping or address translation. Current computer systems are capable of handling very large virtual memory spaces. Depending on the page size of the system, the number of pages that need to be addressed can also be very large. Hence, virtual-to-physical address translations can be complicated and time consuming.
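As an illustration (not drawn from the patent itself), assume a 32-bit virtual address space and 4 KB pages: the low 12 bits of an address form the page offset and the upper 20 bits form the virtual page number. A minimal C sketch of this split follows; the names and constants are illustrative assumptions only.

    #include <stdint.h>

    #define PAGE_SHIFT  12u                       /* 4 KB pages: 2^12 bytes        */
    #define PAGE_SIZE   (1u << PAGE_SHIFT)        /* 4096 bytes per page           */
    #define OFFSET_MASK (PAGE_SIZE - 1u)          /* low 12 bits: offset in page   */

    /* Split a 32-bit virtual address into virtual page number and page offset. */
    static inline uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> PAGE_SHIFT; }
    static inline uint32_t offset_of(uint32_t vaddr) { return vaddr & OFFSET_MASK; }

    /* Rebuild a physical address from a physical frame number and the offset. */
    static inline uint32_t phys_addr(uint32_t pfn, uint32_t offset)
    {
        return (pfn << PAGE_SHIFT) | offset;
    }

Under these assumptions, virtual address 0x00403A10 has virtual page number 0x00403 and offset 0xA10; if that page were mapped to physical frame 0x2B, the resulting physical address would be 0x0002BA10.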
In many systems, a data structure referred to as a page table is employed to maintain the mappings between virtual and physical addresses. A virtual page number, which represents the position of a virtual page within the virtual memory space, is used to index an entry in the page table; that entry contains the physical address translation for the corresponding virtual page. These tables can be very large.
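Conceptually, a single-level page table can be pictured as an array indexed by virtual page number, each entry holding a valid bit and a physical frame number. The following C sketch is an assumed, simplified model for illustration only; real page tables are typically multi-level and carry additional protection and status bits.

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_PAGES (1u << 20)   /* 32-bit space with 4 KB pages -> 2^20 entries */

    struct pte {
        unsigned pfn   : 20;       /* physical frame number                        */
        unsigned valid : 1;        /* 1 if the page is resident in main memory     */
    };

    static struct pte page_table[NUM_PAGES];

    /* Translate a virtual page number; returns false on a page fault
     * (the page is not resident and must be brought in from disk).     */
    static bool translate(uint32_t vpn, uint32_t *pfn_out)
    {
        if (!page_table[vpn].valid)
            return false;          /* page fault: fetch the page from disk */
        *pfn_out = page_table[vpn].pfn;
        return true;
    }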
Therefore, to save physical memory space, the page tables themselves are often only partially stored in main memory while the bulk of the table entries are stored on a hard disk and swapped in and out of memory on an as-needed basis. To reduce translation time, computers often use a translation look-aside buffer (TLB) to cache frequently used virtual to physical address translations.
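In effect, a TLB is a small, fast cache of recent virtual-page-number to physical-frame-number translations that is consulted before any page table walk. The fully associative lookup below is a hypothetical sketch; the entry count, replacement policy, and field names are assumptions rather than details taken from the patent.

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        uint32_t vpn;
        uint32_t pfn;
        bool     valid;
    };

    static struct tlb_entry tlb[TLB_ENTRIES];
    static unsigned next_victim;              /* trivial round-robin replacement   */

    /* Look up a virtual page number in the TLB; returns true on a hit. */
    static bool tlb_lookup(uint32_t vpn, uint32_t *pfn_out)
    {
        for (unsigned i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                *pfn_out = tlb[i].pfn;
                return true;                  /* TLB hit: skip the page table walk */
            }
        }
        return false;                         /* TLB miss: walk the page table     */
    }

    /* Install a translation after a miss, evicting an older entry if needed. */
    static void tlb_fill(uint32_t vpn, uint32_t pfn)
    {
        tlb[next_victim] = (struct tlb_entry){ .vpn = vpn, .pfn = pfn, .valid = true };
        next_victim = (next_victim + 1) % TLB_ENTRIES;
    }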
Existing processors maintain TLB entries through a software scheme, a hardware scheme, or a combination of the two; the tradeoff is speed versus flexibility. As consumers demand faster and better system performance, the ability to quickly manage large memory spaces becomes increasingly important.
REFERENCES:
patent: 4695950 (1987-09-01), Brandt et al.
patent: 4733348 (1988-03-01), Hiraoka et al.
patent: 4980816 (1990-12-01), Fukuzawa et al.
patent: 6012134 (2000-01-01), McInerney et al.
patent: 6263403 (2001-07-01), Traynor
patent: 6538650 (2003-03-01), Prasoonkumar et al.
patent: 6560664 (2003-05-01), Calson
Inventors: Lee, Allisa Chiao-Er; Mathews, Greg S.
Attorney, Agent or Firm: Blakely, Sokoloff, Taylor & Zafman LLP
Primary Examiner: Elamin, Abdelmoniem
Assignee: Intel Corporation