Parallel erase operations in memory systems
Static information storage and retrieval – Floating gate – Particular biasing
Reexamination Certificate
2001-03-27
2003-03-04
Zarabian, A. (Department: 2824)
365/185.29
active
06529416
FIELD OF THE INVENTION
The present invention relates generally to memory storage systems, and more particularly to flash memory systems.
BACKGROUND OF THE INVENTION
Computer systems have traditionally used hard disk systems with rotating magnetic disks as data storage media. However, disk drives are disadvantageous in that they are bulky and they require high precision moving mechanical parts. They are also not rugged and are prone to reliability problems, as well as consuming significant amounts of power.
More recently these hard disk systems are being replaced by semiconductor systems. These semiconductor systems use electrically erasable programmable read-only-memory (EEPROM) technology as memory storage cells as a substitute for the hard-disk magnetic media. The EEPROMs have the capability of electrically erasing data stored on the memory and replacing it with other data. However, programming the EEPROM is relatively slow since input/output of data and addressing is in a serial format. Additionally, special “high” voltages are required when programming the EEPROM. Even further, EEPROMs are typically only available in relatively small memory sizes such as 8 Kbyte or 16 Kbyte sizes. As more and more non-volatile memory space is required at lower power consumption for portable electronic apparatus, alternatives to EEPROM are required.
“Flash” EEPROM, also known as “flash memory”, has been the answer. Large regions of flash memory can be erased at one time which makes reprogramming flash memory faster than reprogramming EEPROM and which is the origin of the term “flash”. Additionally, it has lower stand-by power consumption than EEPROM. Also, in replacing hard disk systems, these flash memory systems are sometimes referred to as flash “disk” systems and similar descriptive terminology is used, even though no rotating magnetic disks are used.
In the flash memory system, a plurality of flash memory chips are arranged in banks that share some of the control signals from a buffer chip. The flash memory chips are nonvolatile semiconductor-memory chips that retain data when power is no longer applied.
The flash memory chips are divided into pages and blocks. A 64 Mbit flash chip typically has 512-byte pages, which happens to match the sector size for IDE and small-computer system interface (SCSI) hard disks. Rather than writing to or reading from just one word in the page, the entire page must be read or written at the same time; individual bytes cannot be written. Thus flash memory operations are inherently slow since an entire page must be read or written.
Flash memory is also not truly random-access. While reads can be to random pages, writes require that memory cells must first be erased before information is placed in them; i.e., a write (or program) operation is always preceded by an erase operation.
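For illustration, the page and block constraints described above can be sketched as follows. The geometry, type names, and functions below are hypothetical stand-ins, chosen to match the 512-byte pages and 16-page blocks mentioned in this description; they do not represent the interface of any particular flash part:

```c
/* Hypothetical single-chip model of the page/block organization described
 * above: reads and writes move whole 512-byte pages, and a page can only be
 * programmed after its 16-page block has been erased. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE        512   /* bytes per page, matching a disk sector     */
#define PAGES_PER_BLOCK  16    /* pages erased together as one block         */
#define BLOCKS_PER_CHIP  1024  /* 1024 * 16 * 512 bytes = 64 Mbit            */

typedef struct {
    uint8_t data[BLOCKS_PER_CHIP][PAGES_PER_BLOCK][PAGE_SIZE];
    bool    writable[BLOCKS_PER_CHIP][PAGES_PER_BLOCK]; /* erased and not yet
                                                            reprogrammed     */
} flash_chip;

/* Reads always transfer a whole page. */
void flash_read_page(const flash_chip *c, int block, int page,
                     uint8_t out[PAGE_SIZE])
{
    memcpy(out, c->data[block][page], PAGE_SIZE);
}

/* Writes (programs) also transfer a whole page, and succeed only if the
 * page has been erased since it was last programmed. */
bool flash_write_page(flash_chip *c, int block, int page,
                      const uint8_t in[PAGE_SIZE])
{
    if (!c->writable[block][page])
        return false;                       /* must erase before writing    */
    memcpy(c->data[block][page], in, PAGE_SIZE);
    c->writable[block][page] = false;
    return true;
}

/* Erase clears every page in a block at once; this is the slow operation. */
void flash_erase_block(flash_chip *c, int block)
{
    memset(c->data[block], 0xFF, sizeof c->data[block]);
    for (int p = 0; p < PAGES_PER_BLOCK; p++)
        c->writable[block][p] = true;
}
```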
The erase operation is done in one of several ways. For example, in some flash memories, the entire chip is erased at one time. If not all the information in the chip is to be erased, the information must first be temporarily saved, and is usually written into another memory (typically a RAM). The information is then restored into the nonvolatile flash memory by programming back into the chip.
In other flash memories, the memory is divided into blocks that are each separately erasable, but only one at a time. By selecting the desired block and going through the erase sequence, the designated area is erased. While the need for temporary memory is reduced, erasing various areas of the memory still requires a time-consuming sequential approach.
In still other flash memories, the memory is divided into sectors where all cells within each sector are erasable together. Each sector can be addressed separately and selected for erase.
In even other flash memories, certain numbers of blocks are reserved to be pre-erased and a logical block address (LBA) to physical block address (PBA) translation must be performed.
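A minimal sketch of that translation scheme, assuming a simple table-based mapping and a pool of pre-erased physical blocks; the names and sizes below are illustrative only:

```c
/* Hypothetical logical-to-physical block translation with a reserve of
 * pre-erased blocks; sizes are illustrative. */
#include <stdint.h>

#define LOGICAL_BLOCKS   1000
#define PHYSICAL_BLOCKS  1024   /* the extra blocks are kept pre-erased      */

static uint16_t lba_to_pba[LOGICAL_BLOCKS];   /* LBA -> PBA translation table */
static uint16_t erased_pool[PHYSICAL_BLOCKS]; /* PBAs known to be pre-erased  */
static int      pool_count;                   /* entries currently in pool    */

/* A logical write does not erase in place: it takes a pre-erased physical
 * block from the pool, remaps the LBA to it, and hands back the old physical
 * block so it can be erased later, off the critical write path. */
uint16_t remap_for_write(uint16_t lba)
{
    uint16_t old_pba = lba_to_pba[lba];
    uint16_t new_pba = erased_pool[--pool_count];  /* assumes pool not empty */

    lba_to_pba[lba] = new_pba;
    return old_pba;   /* caller erases this block in the background          */
}
```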
While flash reads can be to random pages, flash writes require that larger regions, such as a sector, block, or chip be erased in a flash erase operation before a flash write can be performed. For example, in block erases, a block of 16 pages must be erased together, while all 512 bytes on a page must be written together.
In all these flash memories, flash erase operations are significantly slower than flash read or write operations. Further, only one erase operation per flash memory chip can be active at a time.
Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, a way of speeding up these operations has long been sought, but has just as long eluded those skilled in the art.
Working from another direction, those skilled in the art have developed cache memories to speed up the performance of computer systems having slower access devices, such as flash memory. Typically, a part of system RAM is used as a cache for temporarily holding the most recently accessed data from the flash memory system. The next time the data is needed, it may be obtained from the fast cache instead of the slow flash memory system. This technique works well in situations where the same data is repeatedly operated on. This is the case in most structures and programs since the computer tends to work within a small area of memory at a time in running a program.
Most conventional cache designs are read caches for speeding up reads from flash memory. In some cases, write caches are used for speeding up writes to flash memory. However, in the case of writes to flash memory systems, data is written directly to flash memory every time a write occurs, while being written into the cache at the same time. This is done because of concern for loss of updated data files in case of power loss. If the write data is stored only in the cache memory, which is a volatile memory, a loss of power will result in the new updated files being lost from the cache before the old data in non-volatile flash memory has been updated. The system will then be operating on the old data when these files are used in further processing. The need to write to flash memory every time is considered by those skilled in the art to defeat the benefits of the caching mechanism for writes. Read caching does not have this concern, since data that could be lost from the cache has a backup in flash memory.
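The write-through behavior described above can be sketched roughly as follows; the controller function `flash_program_page` and the cache layout are assumptions made for the example, not part of any actual design:

```c
/* Write-through path: every host write goes to flash immediately and is
 * mirrored in a RAM cache, so a power loss never leaves the only up-to-date
 * copy in volatile memory. Names and layout are placeholders. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE    512
#define CACHE_LINES  64

typedef struct {
    uint32_t page_addr;                 /* flash page this line mirrors      */
    uint8_t  data[PAGE_SIZE];
    int      valid;
} cache_line;

static cache_line cache[CACHE_LINES];

/* Stand-in for the slow flash program operation. */
static void flash_program_page(uint32_t page_addr, const uint8_t *buf)
{
    (void)page_addr;
    (void)buf;                          /* real hardware would program here  */
}

void host_write(uint32_t page_addr, const uint8_t buf[PAGE_SIZE])
{
    /* 1. Write through to non-volatile flash on every host write. */
    flash_program_page(page_addr, buf);

    /* 2. Mirror the data in the cache so later reads of it are fast. */
    cache_line *line = &cache[page_addr % CACHE_LINES];
    line->page_addr = page_addr;
    memcpy(line->data, buf, PAGE_SIZE);
    line->valid = 1;
}
```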
Those skilled in the art have also used direct-memory access (DMA) to facilitate data transfers. While DMA is efficient for transfers of raw data to a memory, flash memory chips also require command and address sequences to set up the relatively long flash operations. Unfortunately, DMA is not well suited to transfer addresses and commands since it is designed to transfer long strings of data beginning at a starting address through an ending address.
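To illustrate the point, a flash program operation on a typical NAND-style interface is framed by command and address cycles around the data burst; only the middle data phase resembles the long contiguous transfer that DMA is built for. The register addresses and command values below are illustrative assumptions, not those of any particular device:

```c
/* A flash program operation framed by command and address cycles; only the
 * data burst in the middle is the kind of transfer DMA handles well.
 * Register addresses and command values are illustrative. */
#include <stdint.h>

#define PAGE_SIZE  512

#define CMD_LATCH  (*(volatile uint8_t *)0x40000000u)   /* command register  */
#define ADDR_LATCH (*(volatile uint8_t *)0x40000004u)   /* address register  */
#define DATA_FIFO  (*(volatile uint8_t *)0x40000008u)   /* data register     */

void program_page(uint32_t page_addr, const uint8_t buf[PAGE_SIZE])
{
    /* Command and address cycles: short, irregular, CPU-driven. */
    CMD_LATCH = 0x80;                          /* "begin program" (example)  */
    for (int i = 0; i < 4; i++)
        ADDR_LATCH = (page_addr >> (8 * i)) & 0xFF;

    /* Data phase: one long contiguous burst, the part DMA is good at
     * (shown here as a CPU loop for clarity). */
    for (int i = 0; i < PAGE_SIZE; i++)
        DATA_FIFO = buf[i];

    CMD_LATCH = 0x10;                          /* "confirm program" (example) */
}
```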
Thus, those skilled in the art working from different directions have encountered what appears to be an insurmountable bottleneck in speeding up flash memory systems to match faster and faster host computer system processors.
SUMMARY OF THE INVENTION
A method of memory operation is provided that includes providing a memory; providing a cache containing a plurality of entries, a plurality of which are to be written to memory; detecting in the cache, with a detector, the plurality of entries to be written to memory; erasing a first portion of the memory to accommodate the plurality of entries to be written to memory; and writing the plurality of entries to the first portion of the memory, whereby an erase operation is followed by a plurality of sequential write operations. Since the time taken by the flash erase and write operations affects the operating speed of the entire flash memory system, the present invention provides a way of substantially speeding up these operations.
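As a rough sketch of this ordering, and not the claimed controller itself, the idea is to detect the group of cache entries destined for one region of flash, erase that region once, and then issue the writes back to back; `flash_erase_block` and `flash_program_page` are assumed driver primitives, and all other names are illustrative:

```c
/* One erase followed by a burst of sequential writes: scan the cache for the
 * dirty entries that fall inside a given erase block, erase that block once,
 * then program all of those pages back to back. */
#include <stdint.h>

#define PAGE_SIZE        512
#define PAGES_PER_BLOCK  16
#define CACHE_ENTRIES    64

typedef struct {
    uint32_t page_addr;                 /* destination flash page            */
    uint8_t  data[PAGE_SIZE];
    int      dirty;                     /* 1 = still needs to reach flash    */
} cache_entry;

/* Stand-ins for the driver primitives. */
static void flash_erase_block(uint32_t block)
{
    (void)block;                        /* slow block erase happens here     */
}
static void flash_program_page(uint32_t page_addr, const uint8_t *buf)
{
    (void)page_addr;
    (void)buf;                          /* page program happens here         */
}

/* Detector plus flush: find the dirty entries bound for one erase block and
 * write them all after a single erase of that block. */
void flush_block(cache_entry cache[CACHE_ENTRIES], uint32_t block)
{
    int found = 0;

    for (int i = 0; i < CACHE_ENTRIES; i++)
        if (cache[i].dirty && cache[i].page_addr / PAGES_PER_BLOCK == block)
            found++;

    if (found == 0)
        return;                         /* nothing destined for this block   */

    flash_erase_block(block);           /* one erase for the whole group     */

    for (int i = 0; i < CACHE_ENTRIES; i++) {
        cache_entry *e = &cache[i];
        if (e->dirty && e->page_addr / PAGES_PER_BLOCK == block) {
            flash_program_page(e->page_addr, e->data);   /* sequential write */
            e->dirty = 0;
        }
    }
}
```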
A memory system having a memory, a cache containing a plurality of entries with a plurality of the entries to be written to memory, a detector for detecting in the cache the plurality of entries to be written to memory, and a processor for erasing a first portion of the memory to accommodate the plurality of entries to be written to memory and for writing the plurality of entries to the first portion of the memory, whereby an erase operation is followed by a plurality of sequential write operations.
Bruce Ricardo H.
Bruce Rolando H.
BiTMICRO Networks, Inc.
Ishimaru Mikio
Uriarte Stephen R.
Zarabian A.