Electrical computers and digital processing systems: memory – Address formation – Address mapping
Reexamination Certificate
2000-05-01
2003-03-11
Thai, Tuan V. (Department: 2186)
C711S200000, C711S203000, C711S206000
Reexamination Certificate
active
06532528
ABSTRACT:
BACKGROUND OF THE INVENTION
The invention relates to a data processor having a translation lookaside buffer and, more particularly, to a data processing system using such a data processor. For example, the invention relates to a technique that is effective for increasing data processing speed.
In a virtual storage system, a virtual memory space that is sufficiently larger than the physical memory is prepared, and a process is mapped into the virtual memory space. Here, "process" means a program being executed under the management of an OS (Operating System). For the process, it is therefore sufficient to consider only operations on the virtual memory. An MMU (Memory Management Unit) is used for mapping from the virtual memory to the physical memory. The MMU is usually managed by the OS and exchanges the contents of the physical memory so that the virtual memory needed by the process can be mapped onto the physical memory. This exchange is performed between the physical memory and a secondary storage or the like. The MMU generally also has a storage protection function so that one process does not erroneously access the physical memory of another process.
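As a simple illustration only, not part of the patent disclosure, the following C sketch shows one way a virtual address can be translated into a physical address through a page table; the names pte_t, page_table and PAGE_SHIFT, as well as the assumption of 4 KB pages and a flat table, are hypothetical.

    #include <stdint.h>
    #include <stddef.h>

    #define PAGE_SHIFT 12u                         /* assume 4 KB pages */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

    typedef struct {
        uint32_t ppn;    /* physical page number */
        int      valid;  /* mapping currently present in physical memory? */
    } pte_t;

    /* Translate vaddr; returns 0 on success, -1 when the OS must map the
     * page in from secondary storage (a page fault). */
    static int translate(const pte_t *page_table, size_t num_pages,
                         uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t vpn = vaddr >> PAGE_SHIFT;        /* virtual page number */
        if (vpn >= num_pages || !page_table[vpn].valid)
            return -1;
        *paddr = (page_table[vpn].ppn << PAGE_SHIFT) | (vaddr & PAGE_MASK);
        return 0;
    }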
When an address translation from an address in the virtual memory (virtual address) to an address in the physical memory (physical address) is performed by the MMU, there are cases where the address translation information is not registered in the MMU, or where the virtual memory of another process is erroneously accessed. In such a case, the MMU generates an exception, the mapping of the physical memory is changed, and new address translation information is registered.
Although the function of the MMU can be realized by software alone, performing the translation by software every time the process accesses the physical memory is inefficient. To avoid this, a translation lookaside buffer for address translation is provided in hardware, and address translation information that is used frequently is stored in the translation lookaside buffer. That is, the translation lookaside buffer is constructed as a cache memory for address translation information. A point of difference from an ordinary cache memory is that, when the address translation fails, the exchange of the address translation information is performed mainly by software.
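Purely as an illustrative sketch of this idea, and not taken from the patent (the entry format, the number of entries and the function names below are assumptions), a translation lookaside buffer can be modeled as a small fully associative cache of page translations whose misses are resolved by a software handler:

    #include <stdint.h>

    #define TLB_ENTRIES 8

    typedef struct {
        uint32_t vpn;    /* virtual page number (the tag) */
        uint32_t ppn;    /* physical page number (the data) */
        uint8_t  asid;   /* address space id, keeps processes apart */
        int      valid;
    } tlb_entry_t;

    static tlb_entry_t tlb[TLB_ENTRIES];

    /* Associative lookup: returns 1 on a hit, 0 on a miss.  On a miss the
     * processor raises a TLB-miss exception, and software walks the page
     * table and writes a new entry into the buffer. */
    static int tlb_lookup(uint32_t vpn, uint8_t asid, uint32_t *ppn)
    {
        for (int i = 0; i < TLB_ENTRIES; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn && tlb[i].asid == asid) {
                *ppn = tlb[i].ppn;
                return 1;
            }
        }
        return 0;
    }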
Cache memories are also widely used to realize high-speed access to data and instructions.
SUMMARY OF THE INVENTION
The present inventors have examined the translation lookaside buffer and the cache memory from the viewpoint of realizing a high memory access speed. One processor that divides the translation lookaside buffer into a buffer for instructions and a buffer for data is disclosed in the PowerPC 603 RISC Microprocessor User's Manual (MOTOROLA, 1994). That processor also has separate data and instruction cache memories. At pages 7 to 15 of that manual, it will be understood that an instruction TLB miss and a data TLB miss are treated separately in the PowerPC. According to the examination of the present inventors, even if separate translation lookaside buffers are provided, since there is no interrelation between them, necessary address translation information must be obtained from an external memory whenever an address translation fails, and it has been found that this limits the memory access speed that can be realized.
As for the cache memory, when a cache miss occurs, one entry's worth of data is newly read out of the external memory. In this instance, if there is no invalid cache entry, a valid cache entry is swept out of the cache memory in accordance with a policy such as LRU (Least Recently Used). The cache entry swept out in this way may contain data or an instruction that will be used again soon. It is therefore desirable that an instruction belonging to a processing routine requiring high speed is always held in the cache memory. For such a case, it is also conceivable to allow the cache memory to be used as a random access memory. However, if the entire cache memory were constructed in this way, all of its functions as a cache memory would be lost, so it is presumed that this would cause inconvenience for some applications.
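As an illustration of the replacement decision described above (the way count, field names and the lock bit used to keep a critical routine resident are assumptions made for this sketch, not the patented mechanism), victim selection with an LRU policy that skips locked ways might look like this:

    #include <stdint.h>

    #define NUM_WAYS 4

    typedef struct {
        uint32_t tag;
        uint32_t last_used;  /* updated on every access; smaller = older */
        int      valid;
        int      locked;     /* 1: treat this way like RAM, never evict */
    } cache_way_t;

    /* Pick the way of a set to evict on a miss.  Prefers an invalid way,
     * otherwise takes the least recently used unlocked way; returns -1 when
     * every way is locked and no refill is possible. */
    static int pick_victim(const cache_way_t set[NUM_WAYS])
    {
        int victim = -1;
        uint32_t oldest = UINT32_MAX;
        for (int i = 0; i < NUM_WAYS; i++) {
            if (set[i].locked)
                continue;
            if (!set[i].valid)
                return i;
            if (set[i].last_used < oldest) {
                oldest = set[i].last_used;
                victim = i;
            }
        }
        return victim;
    }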
It is an object of the invention to provide a data processor that can realize a high memory access speed. More specifically, it is an object to provide a technique for realizing a high memory access speed from the viewpoint of address translation, and a technique for realizing a high memory access speed from the viewpoint of the cache memory.
The above and other objects and novel features of the present invention will become apparent from the description in this specification and the accompanying drawings.
An outline of a representative one of the inventions disclosed in the present application is briefly described as follows.
That is, according to a first aspect of the invention, separate translation lookaside buffers are used for data and for instructions, address translation information for instructions is also stored in the translation lookaside buffer for data, and when a translation miss occurs in the translation lookaside buffer for instructions, new address translation information is fetched from the translation lookaside buffer for data.
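An illustrative sketch of this refill path follows (the buffer sizes, the lookup routine and the choice of refill slot are assumptions made only for this example): an instruction-side miss is first served from the data-side buffer, and only when that also misses must the translation be fetched from external memory by a software handler.

    #include <stdint.h>

    typedef struct { uint32_t vpn, ppn; int valid; } tlb_entry_t;

    /* Fully associative search over a small table of translations. */
    static int lookup(const tlb_entry_t *t, int n, uint32_t vpn, uint32_t *ppn)
    {
        for (int i = 0; i < n; i++)
            if (t[i].valid && t[i].vpn == vpn) { *ppn = t[i].ppn; return 1; }
        return 0;
    }

    /* Translate an instruction-fetch address.  Returns 1 on success; returns
     * 0 only when the data-side buffer also misses, in which case a software
     * TLB-miss handler must fetch the entry from external memory. */
    static int itlb_translate(tlb_entry_t *itlb, int ni,
                              const tlb_entry_t *dtlb, int nd,
                              uint32_t vpn, int refill_slot, uint32_t *ppn)
    {
        if (lookup(itlb, ni, vpn, ppn))
            return 1;                          /* hit in the instruction TLB */
        if (lookup(dtlb, nd, vpn, ppn)) {
            itlb[refill_slot].vpn   = vpn;     /* refill the instruction TLB */
            itlb[refill_slot].ppn   = *ppn;    /* from the data TLB, with no */
            itlb[refill_slot].valid = 1;       /* external memory access     */
            return 1;
        }
        return 0;
    }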
In detail, a data processor (1) comprises: a central processing unit (2); a first translation lookaside buffer (4) in which a part of the address translation information for translating a virtual address handled by the central processing unit into a physical address is stored, and which associatively retrieves, from that address translation information, a physical address corresponding to the virtual address output by the central processing unit; and a second translation lookaside buffer (3) in which address translation information regarding instruction addresses, among the address translation information held by the first translation lookaside buffer, is stored, and which associatively retrieves, from that address translation information, a physical address corresponding to the virtual address output by the central processing unit upon instruction fetching, and which, when the result of the associative retrieval indicates a retrieval miss, associatively retrieves the first translation lookaside buffer with the virtual address that caused the miss and obtains the address translation information found by that retrieval.
Another data processor according to this aspect comprises: a central processing unit; a first translation lookaside buffer in which a part of the address translation information for translating a virtual address handled by the central processing unit into a physical address is stored, and which associatively retrieves, from that address translation information, a physical page number corresponding to the virtual page number output by the central processing unit; a second translation lookaside buffer in which address translation information regarding instruction addresses, among the address translation information held by the first translation lookaside buffer, is stored, and which associatively retrieves, from that address translation information, a physical page number corresponding to the virtual page number output by the central processing unit upon instruction fetching; and a buffer control circuit (320) which, when the result of the associative retrieval by the second translation lookaside buffer indicates a retrieval miss, associatively retrieves the first translation lookaside buffer with the virtual page number that caused the miss, and supplies the address translation information found by that retrieval to the second translation lookaside buffer.
According to the above means, when a translation miss occurs in the translation lookaside buffer for instructions, the necessary address translation information can be obtained from the translation lookaside buffer for data without accessing an external memory.
Arakawa Fumio
Ito Masayuki
Narita Susumu
Nishii Osamu
Nishimoto Junichi