Secure cache for instruction and data protection

Electrical computers and digital processing systems: support – Data processing protection using cryptography


Details

US classification: C713S187000, C713S190000, C713S192000, C713S193000, C713S194000, C380S028000

Type: Reexamination Certificate

Status: active

Patent number: 06523118

ABSTRACT:

BACKGROUND
The present invention concerns memory management in computer system design and pertains particularly to a secure cache for instruction and data protection.
In order to protect against theft or misuse, secure information within a computing system can be encrypted before being stored in the computing system's memory. When a secure integrated circuit uses the secure information, the information is transferred to the integrated circuit and decrypted before use. Secure information returned to the computing system's memory is encrypted before being stored.
Typically, decryption and encryption are handled by a secure memory management unit (SMMU) on the integrated circuit. When a processor requires a page of secure information, the SMMU obtains the page, decrypts it, and places the data in a cache memory for access by the processor. The cache is typically managed by the SMMU and is implemented using static random access memory (SRAM).
If, in order to bring in the page of secure information, a “dirty” page of information needs to be swapped out to memory, the SMMU performs the swap before the new page is placed in the cache. A “dirty” page of information is a page which has been written to while in the cache and whose changes have not yet been written out to system memory. If the “dirty” page contains secure information, the SMMU first encrypts the page before swapping it out to system memory. While pages are being swapped to and from the processor cache, the SMMU holds off the processor.
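As an illustration of this prior-art flow (not taken from the patent itself), the following C sketch shows a page-miss handler that writes back a dirty victim page before loading the new one. All identifiers (smmu_page_slot, encrypt_page, mem_read_page, and so on) are hypothetical, and a fixed page size with a single dirty flag is assumed:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical cache-page slot managed by the SMMU. */
typedef struct {
    uint32_t tag;              /* which memory page occupies this slot */
    bool     valid;
    bool     dirty;            /* written while cached, not yet flushed */
    uint8_t  data[PAGE_SIZE];  /* plaintext page held in cache SRAM */
} smmu_page_slot;

/* Assumed helpers: bulk page encrypt/decrypt and system-memory I/O. */
extern void encrypt_page(uint8_t *buf);
extern void decrypt_page(uint8_t *buf);
extern void mem_read_page(uint32_t page_no, uint8_t *buf);
extern void mem_write_page(uint32_t page_no, const uint8_t *buf);

/* On a page miss the processor is held off for the entire swap: a
 * dirty victim page is encrypted and written back first, then the
 * whole requested page is fetched and decrypted before any access. */
void smmu_handle_page_miss(smmu_page_slot *slot, uint32_t page_no)
{
    if (slot->valid && slot->dirty) {
        encrypt_page(slot->data);               /* secure page re-encrypted */
        mem_write_page(slot->tag, slot->data);  /* swap out the dirty page */
    }
    mem_read_page(page_no, slot->data);         /* fetch encrypted page */
    decrypt_page(slot->data);                   /* full decrypt before use */
    slot->tag   = page_no;
    slot->valid = true;
    slot->dirty = false;
}

Note that the processor stalls for the whole body of this handler; that full-page stall is the performance problem the invention addresses.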
In order to lessen the amount of hardware used to implement a hardware SMMU, a hardware direct memory access (DMA) device can be added to the integrated circuit to detect a page miss by the processor. After detecting a page miss, the DMA device holds off the processor until it has loaded and decrypted the next page of information. This requires the DMA device to sit in line with the processor and the memory subsystem. The DMA hardware also has to move the data through the encryption core and into the cache memory space. Such an implementation requires special care to meet timing and memory bus requirements. See, for example, VLSI Part Number VMS 310 and VLSI Part Number VMS 320, both available from VLSI Technology, Inc., 1109 McKay Drive, San Jose, Calif. 95131.
One problem with the prior art SMMUs described above is that they do not account for processor blocks which already include a cache circuit. Additionally, the cache implementation can result in poor performance: on every cache miss, an entire page of information must first be decrypted and placed in the cache before the processor can use it.
SUMMARY OF THE INVENTION
In accordance with the preferred embodiment of the present invention, a computing system includes a processor, a cache, a memory system, and a secure cache controller system. The cache stores a plurality of cache lines. The memory system stores a plurality of blocks of encrypted data. The secure cache controller system is situated between the memory system and the cache. When there is a miss of a first cache line of data in the cache and the first cache line of data resides in a first block of encrypted data within the memory system, the secure cache controller system fetches the first block of encrypted data, decrypts it, and forwards the first cache line to the cache.
The secure cache controller system includes, for example, a secure cache controller and an encryption and buffering block. In addition to the plurality of blocks of encrypted data, the memory system can also store clear data.
In the preferred embodiment, the secure cache controller system forwards the first cache line to the cache when the first cache line is decrypted, even though the secure cache controller has not completed decrypting all of the first block of encrypted data. Once the secure cache controller system has completed decrypting all of the first block of encrypted data, the secure cache controller system stores the first block of encrypted data in a buffer in case additional accesses are made to cache lines of data within the first block. In the preferred embodiment, before the secure cache controller system fetches the first block of encrypted data, the secure cache controller system checks to see whether the first block has already been decrypted and is buffered within the secure cache controller system.
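The fill path just described can be sketched in C as follows. This is an illustrative fragment under assumed details (a 256-byte encryption block holding 32-byte cache lines, a one-entry block buffer, and a cipher decrypted in line-sized units); all identifiers, such as scc_handle_line_miss and decrypt_line, are hypothetical, not taken from the patent:

#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE       32u                  /* cache line, in bytes */
#define BLOCK_SIZE      256u                 /* encryption block > cache line */
#define LINES_PER_BLOCK (BLOCK_SIZE / LINE_SIZE)

/* Hypothetical one-entry buffer holding the last decrypted block. */
typedef struct {
    uint32_t block_no;
    bool     valid;                /* block fully decrypted and buffered */
    uint8_t  data[BLOCK_SIZE];
} block_buffer;

/* Assumed helpers: block fetch, line-sized decrypt unit, cache fill. */
extern void mem_read_block(uint32_t block_no, uint8_t *buf);
extern void decrypt_line(uint8_t *line);
extern void cache_fill_line(uint32_t line_addr, const uint8_t *line);

/* On a cache-line miss: serve the line from the buffer when its block
 * is already decrypted; otherwise fetch the block, decrypt it line by
 * line, and forward the missed line as soon as it is ready. */
void scc_handle_line_miss(block_buffer *buf, uint32_t block_no, uint32_t line_idx)
{
    uint32_t line_addr = block_no * BLOCK_SIZE + line_idx * LINE_SIZE;

    if (buf->valid && buf->block_no == block_no) {   /* already buffered */
        cache_fill_line(line_addr, &buf->data[line_idx * LINE_SIZE]);
        return;
    }

    mem_read_block(block_no, buf->data);             /* fetch encrypted block */
    buf->block_no = block_no;
    buf->valid    = false;                           /* not fully decrypted yet */

    for (uint32_t i = 0; i < LINES_PER_BLOCK; i++) {
        decrypt_line(&buf->data[i * LINE_SIZE]);
        if (i == line_idx)                           /* sought-after line ready: */
            cache_fill_line(line_addr,               /* forward it immediately   */
                            &buf->data[i * LINE_SIZE]);
    }
    buf->valid = true;             /* whole block buffered for later misses */
}

The key point is that cache_fill_line runs as soon as the sought-after line is decrypted; the tail of the loop corresponds to background decryption that, in hardware, overlaps processor execution.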
When a second cache line of data is written back from the cache, the secure cache controller system fetches a second block of encrypted data from the memory system, decrypts it, and places the second cache line of data into the decrypted block. The secure cache controller system then encrypts the block and returns it to the memory system. In the preferred embodiment, before fetching the second block of encrypted data, the secure cache controller system checks whether the second block has already been decrypted and is buffered within the secure cache controller system.
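Continuing the sketch above (and reusing its hypothetical block_buffer type, LINE_SIZE and BLOCK_SIZE constants, and mem_read_block helper), the write-back path might look as follows; the whole-block encrypt_block and decrypt_block helpers are likewise assumptions, not taken from the patent:

#include <stdint.h>
#include <string.h>

/* Assumed helpers: whole-block encrypt/decrypt and block write-back. */
extern void decrypt_block(uint8_t *buf);
extern void encrypt_block(uint8_t *buf);
extern void mem_write_block(uint32_t block_no, const uint8_t *buf);

/* On a cache-line write-back: merge the evicted line into a plaintext
 * image of its block (reusing the buffered copy when present), then
 * re-encrypt the block and return it to system memory.  With a posted
 * write buffer this can proceed without halting the processor. */
void scc_handle_line_writeback(block_buffer *buf, uint32_t block_no,
                               uint32_t line_idx, const uint8_t *line)
{
    if (!(buf->valid && buf->block_no == block_no)) {
        mem_read_block(block_no, buf->data);   /* fetch encrypted block */
        decrypt_block(buf->data);              /* decrypt before merging */
        buf->block_no = block_no;
        buf->valid    = true;
    }
    memcpy(&buf->data[line_idx * LINE_SIZE], line, LINE_SIZE);

    uint8_t out[BLOCK_SIZE];                   /* keep plaintext copy buffered */
    memcpy(out, buf->data, BLOCK_SIZE);
    encrypt_block(out);                        /* re-encrypt whole block */
    mem_write_block(block_no, out);            /* return to system memory */
}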
The secure cache architecture described herein has a distinct speed advantage over a conventional block decryption design. The secure cache controller need only decrypt a fetched encrypted block up to the sought-after cache line before forwarding that cache line to the cache. Without delaying the processor, the remaining portion of the encrypted block can be decrypted and stored locally, ready for sequential cache line misses. The secure cache can also take advantage of a posted write buffer, since encryption and decryption of modified encryption block data can take place without halting processor operation.
Encryption blocks of data can be much larger than a cache line without affecting the performance of the processor system, which makes the encryption of the external data and instructions stronger.


REFERENCES:
patent: 4847902 (1989-07-01), Hampson
patent: 5224166 (1993-06-01), Hartman, Jr.
patent: 5386469 (1995-01-01), Yearsley et al.
patent: 5568552 (1996-10-01), Davis
patent: 5757919 (1998-05-01), Herbert et al.
patent: 5825878 (1998-10-01), Takahashi et al.
patent: 6061449 (2000-05-01), Candelore et al.
Hamacher, V. Computer Organization. McGraw-Hill, 1978, pp. 245-254.
