High-speed address translation system

Electrical computers and digital processing systems: memory – Address formation – Address mapping


Details

C711S170000, C711S171000, C711S172000, C711S173000, C711S202000, C711S205000, C711S206000

Reexamination Certificate

active

06275917

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a high-speed address translation system. In particular, it relates to memory allocation for a translation lookaside buffer (hereinafter, TLB) provided in a memory management unit (hereinafter, MMU) of a computer system. The high-speed address translation system according to the present invention can be advantageously utilized in an electronic switching system, i.e., an online real-time system requiring high reliability.
2. Description of the Related Art
In general, a TLB is provided in order to translate dynamically between a virtual address space (or logical address space) and a physical address space. In this case, a given program is executed with reference to the virtual address space, while the actual contents of the program are arranged in the physical address space. The TLB is formed of a plurality of entries and is usually provided, as hardware, within an MMU in a typical computer system.
For example, a system utilizing address translation is generally formed of a logical address space, an MMU connected to the logical address space, and a physical address space connected to the MMU for executing the actual program (see FIG. 11).
In this structure, the address space is managed in terms of a minimum unit of memory management called a “page”. The TLB stores the correspondence between virtual addresses and physical addresses on a per-page basis, and translates a virtual address to a physical address in response to an instruction access or a data access.
Since the TLB is implemented in hardware as mentioned above, it offers only a finite number of entries as a resource. Accordingly, the contents of the TLB must be updated according to their frequency of use. For example, when an address to be translated from a virtual address to a physical address is missed (i.e., not hit) in the TLB, the corresponding address information is supplied from main memory to the TLB.
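By way of illustration only (this sketch is not part of the patent disclosure; the entry layout, page size, and the stub helpers standing in for the page table and the replacement policy are assumptions), the per-page lookup and the refill on a miss described above can be expressed in C as follows:

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12                 /* assumed 4 KB minimum page size   */
#define TLB_ENTRIES 64                 /* assumed number of TLB entries    */

/* One TLB entry: a cached virtual-to-physical page mapping. */
struct tlb_entry {
    uint32_t vpn;                      /* virtual page number              */
    uint32_t pfn;                      /* physical page frame number       */
    bool     valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Assumed helpers standing in for the page table held in main memory
 * and for the replacement policy (e.g. least frequently used). */
static uint32_t page_table_lookup(uint32_t vpn) { return vpn; /* identity stub */ }
static int      pick_victim(void)               { return 0;   /* trivial stub  */ }

/* Translate a virtual address; on a miss, refill the TLB from main memory. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].vpn == vpn)
            return (tlb[i].pfn << PAGE_SHIFT) | offset;   /* TLB hit */

    /* Miss: fetch the mapping from the page table in main memory and
     * replace an existing entry according to its frequency of use. */
    int v = pick_victim();
    tlb[v].vpn   = vpn;
    tlb[v].pfn   = page_table_lookup(vpn);
    tlb[v].valid = true;
    return (tlb[v].pfn << PAGE_SHIFT) | offset;
}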
On the other hand, in a typical computer system, a program is loaded into memory in accordance with a request from an operator, and the memory is released after execution of that program. Accordingly, in the conventional art, the page size is fixed to a single size, used as the minimum unit of management, in order to improve the efficiency of memory use.
However, in an online real-time system such as a switching system, a program requiring time-critical processing is loaded into a previously allocated address space. Further, in the time-critical process, when an address is missed in the TLB (hereinafter, a TLB miss-hit), the TLB miss-hit is handled dynamically either by dedicated hardware or by the operating system (OS) as a trap operation (i.e., an interrupt to the OS when the TLB miss-hit occurs). In this case, however, since this handling of the TLB miss-hit is not recognized by the application program (i.e., the TLB miss-hit is “invisible” to the operator), an unexpected drop in performance occurs in the system.
The present invention aims to solve the above-mentioned problems in the conventional art. That is, in an online real-time system requiring high reliability, such as a switching system, the present invention aims to provide a high-speed address translation system that can eliminate the overhead due to the TLB miss-hit in a very important process, such as a basic call process in the exchange (in other words, the TLB miss-hit can be recognized by the operator), when executing the address translation. According to the present invention, it is possible to considerably improve the performance of the system and to raise the precision of the expected performance, since the TLB miss-hit need not be considered in the real-time process.
SUMMARY OF THE INVENTION
The object of the present invention is to provide a high-speed address translation system which can eliminate the overhead due to the TLB miss-hit in a very important process, such as a basic call process in a switching system, when executing the address translation.
In accordance with the present invention, there is provided a high-speed address translation system in a computer system that includes a logical address space storing logical addresses, a physical address space storing physical addresses, and a microprocessor unit connected to both address spaces, the microprocessor unit including a memory management unit. The system includes: a translation lookaside buffer (TLB), provided in the memory management unit, for translating the logical address to the physical address; and a unit for adjusting the size of each section of a file to a predetermined page size in an offline process, in accordance with the memory allocation designed in the offline process.
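As a purely illustrative sketch (the structure and helper names are assumptions, not the patent's implementation), the offline adjustment of each section's size to a predetermined page size might look like this in C:

#include <stddef.h>

/* Round a section size up to a multiple of the predetermined page size,
 * so that every section begins and ends on a page boundary when the
 * memory allocation is designed in the offline process. */
static size_t round_up_to_page(size_t size, size_t page_size)
{
    return (size + page_size - 1) / page_size * page_size;
}

struct section {
    const char *name;
    size_t      file_size;   /* size of the section in the file      */
    size_t      alloc_size;  /* size after adjustment to a page size */
};

static void adjust_sections(struct section *sec, size_t n, size_t page_size)
{
    for (size_t i = 0; i < n; i++)
        sec[i].alloc_size = round_up_to_page(sec[i].file_size, page_size);
}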
In a preferred embodiment, sections each having the same memory protection attribute are allocated to a contiguous address space.
In another preferred embodiment, sections each having the same memory protection attribute and allocated to the contiguous address space are merged into one section.
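The merging of contiguously allocated sections that share a memory protection attribute could be sketched as follows (illustrative only; the structure layout and the assumption that sections are sorted by base address are not taken from the patent):

#include <stddef.h>
#include <stdint.h>

struct laid_out_section {
    uint32_t base;    /* start address assigned in the offline layout   */
    uint32_t size;
    uint32_t prot;    /* memory protection attribute (assumed encoding) */
};

/* Merge adjacent sections in place when they share the same protection
 * attribute and are contiguous in the address space; returns the new
 * number of sections. Assumes the array is sorted by base address. */
static size_t merge_adjacent(struct laid_out_section *s, size_t n)
{
    size_t out = 0;
    for (size_t i = 0; i < n; i++) {
        if (out > 0 &&
            s[out - 1].prot == s[i].prot &&
            s[out - 1].base + s[out - 1].size == s[i].base) {
            s[out - 1].size += s[i].size;   /* extend the previous section */
        } else {
            s[out++] = s[i];
        }
    }
    return out;
}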
In still another preferred embodiment, the predetermined page size is a large page size, and when the sum of the sizes of the sections does not reach the large page size, leaving a fractional remainder (hereinafter, fraction size), a dummy section of this fraction size is provided in the offline process and merged with the sections in order to form the large page size, so that whether the page is a large page can be determined easily and at high speed in an online program.
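A minimal sketch of the fraction-size calculation (the large page size constant is an assumption chosen for illustration):

#include <stddef.h>

#define LARGE_PAGE_SIZE (4u * 1024u * 1024u)   /* assumed 4 MB large page */

/* If the merged sections do not fill a whole large page, the remainder
 * (the fraction size) is covered by a dummy section so that the region
 * occupies exactly a multiple of the large page size. */
static size_t dummy_section_size(size_t section_total)
{
    size_t fraction = section_total % LARGE_PAGE_SIZE;
    return fraction == 0 ? 0 : LARGE_PAGE_SIZE - fraction;
}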
In still another preferred embodiment, an interface to the memory management unit is provided so that a program performing memory allocation in an online process can allocate memory that occupies the large page size.
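Such an interface might be sketched as follows (illustrative only; aligned_alloc merely stands in for whatever MMU-specific call the real system would provide, and the size constant is assumed):

#include <stdlib.h>

#define LARGE_PAGE_SIZE (4u * 1024u * 1024u)   /* assumed large page size */

/* Hypothetical interface for the online process: request memory whose
 * size and alignment are rounded up to a whole large page, so that the
 * MMU can map it with a single large-page TLB entry. */
static void *alloc_large_page_region(size_t size)
{
    size_t rounded = (size + LARGE_PAGE_SIZE - 1) & ~(size_t)(LARGE_PAGE_SIZE - 1);
    return aligned_alloc(LARGE_PAGE_SIZE, rounded);
}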
In still another preferred embodiment, the memory management unit provides a lock instruction for the TLB, so that a processor having a TLB lock function can be easily utilized for the part of a program that performs a time-critical process.
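A purely illustrative sketch of how the time-critical part of a program might use such a lock function (the tlb_lock_region stub stands in for the processor-specific lock instruction, which is not specified here):

#include <stddef.h>

/* Placeholder for the processor-specific TLB lock instruction or the
 * corresponding MMU call; the real mechanism is hardware dependent. */
static void tlb_lock_region(const void *start, size_t len)
{
    (void)start;
    (void)len;
    /* the processor-specific lock sequence would be issued here */
}

/* Before entering the time-critical part of the program, lock its code
 * and data pages into the TLB so that no TLB miss-hit can occur there. */
static void lock_time_critical(const void *code, size_t code_len,
                               const void *data, size_t data_len)
{
    tlb_lock_region(code, code_len);
    tlb_lock_region(data, data_len);
}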
In still another preferred embodiment, whether the TLB architecture is formed separately of an instruction TLB and a data TLB, with the TLB lock performed separately for the instruction TLB and the data TLB, or the TLB architecture is formed of an instruction TLB mixed with the data TLB, the TLB lock control is performed by the same process in both architectures.
In still another preferred embodiment, each entry of the TLB is formed of the logical address, the physical address, the data size, the memory protection attribute, and the cache attribute.
In still another preferred embodiment, the memory protection attribute is formed of a user mode and a privileged mode, and both modes include read, write, and execute operations.
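The fields listed in the two preceding embodiments might be represented as follows (illustrative only; the field widths and flag layout are assumptions, not the patent's definition):

#include <stdint.h>
#include <stdbool.h>

struct protection_attr {
    bool user_read,  user_write,  user_exec;   /* user mode       */
    bool priv_read,  priv_write,  priv_exec;   /* privileged mode */
};

struct tlb_entry_fields {
    uint64_t logical_addr;         /* logical (virtual) address         */
    uint64_t physical_addr;        /* physical address                  */
    uint32_t data_size;            /* size of the mapped data (page)    */
    struct protection_attr prot;   /* memory protection attribute       */
    uint8_t  cache_attr;           /* cache attribute (e.g. cacheable)  */
};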
In still another preferred embodiment, the offline process is executed by an offline program module which is formed of at least two user programs each including user code and user data, and at least two privileged programs each including privileged code and privileged data.
In still another preferred embodiment, the online process is executed by an online program module which is formed of sections including user code, user data, privileged code and privileged data.
In still another preferred embodiment, the offline program module is translated into the online program module; in this translation, the user codes in the offline program module are merged into one user code in the online program module, the user data are merged into one user data, the privileged codes are merged into one privileged code, and the privileged data are merged into one privileged data.
In still another preferred embodiment, the online program module is formed of a user code section and a user data section; the user code section is formed of the two user codes and a dummy section, and the user data section is formed of the two user data and a dummy section.
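As a rough illustration of this layout (the names and the size calculation are assumptions for the example):

#include <stddef.h>

/* The online user code section concatenates the user code of the two
 * offline user programs and pads the remainder with a dummy section so
 * that the section fills whole large pages; the user data section is
 * built in the same way. */
struct online_code_section {
    size_t user_code_a;   /* user code taken from the first offline program  */
    size_t user_code_b;   /* user code taken from the second offline program */
    size_t dummy;         /* dummy section padding the fraction size         */
};

static size_t online_code_section_size(const struct online_code_section *s)
{
    return s->user_code_a + s->user_code_b + s->dummy;
}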
In still another preferred embodiment, in the online program module, the size of the dummy section is determined
