Content addressable memory architecture

Static information storage and retrieval – Associative memories – Ferroelectric cell


Details

Classification: C365S189070, C365S230030

Type: Reexamination Certificate

Status: active

Patent number: 06775166

ABSTRACT:

BACKGROUND OF THE INVENTION
A Content Addressable Memory (“CAM”) includes a plurality of CAM cells arranged in rows and columns. As is well-known in the art, a CAM cell can be dynamic memory based or static memory based, and can be a binary cell or a ternary cell. A binary CAM cell has two possible logic states, ‘1’ and ‘0’. A ternary CAM cell has three possible logic states, ‘0’, ‘1’, and don't care (‘X’), encoded in two bits.
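As an illustration only (the patent text does not specify a particular encoding), the following behavioral sketch models one common two-bit ternary encoding and its compare behavior; the encoding and the names used here are assumptions, not taken from the patent:

# Hypothetical two-bit encoding of a ternary CAM cell: '0' -> (1, 0),
# '1' -> (0, 1), 'X' (don't care) -> (0, 0). An 'X' cell matches either
# search bit; a stored '0' or '1' matches only the equal search bit.
ENCODE = {'0': (1, 0), '1': (0, 1), 'X': (0, 0)}

def cell_matches(stored, search_bit):
    """Return True if a cell storing `stored` matches `search_bit` ('0' or '1')."""
    d0, d1 = ENCODE[stored]
    # The search bit selects one of the two stored bits as a mismatch indicator.
    return not (d0 if search_bit == '1' else d1)

assert cell_matches('X', '0') and cell_matches('X', '1')      # don't care matches both
assert cell_matches('1', '1') and not cell_matches('1', '0')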
A search and compare feature allows all of the CAM cells in the CAM to be searched for an entry with data that matches a search key. An entry can include a plurality of CAM cells; for example, a 72-bit ternary entry includes 72 ternary CAM cells. If an entry matching the search key is stored in the CAM, the CAM typically provides the address of the matching entry, that is, the match address, a match flag indicating whether there is a match, and a multiple match flag indicating whether there is more than one match. The match address may be used to find data associated with the search key stored in a separate memory at a location specified by the match address.
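A minimal software sketch of this search and compare behavior (entries, data values, and function names are illustrative; 'X' denotes a don't care cell), including the match flag, multiple match flag, match address, and a lookup into a separate associated-data memory:

def entry_matches(entry, key):
    # An entry matches when every cell equals the search bit or is a don't care.
    return all(c == 'X' or c == k for c, k in zip(entry, key))

def search_and_compare(entries, key):
    """Return (match_flag, multiple_match_flag, match_address) for the search key."""
    hits = [addr for addr, entry in enumerate(entries) if entry_matches(entry, key)]
    match_flag = len(hits) >= 1
    multiple_match_flag = len(hits) > 1
    match_address = hits[0] if hits else None   # lowest address assumed highest priority
    return match_flag, multiple_match_flag, match_address

entries = ["01X1", "0111", "1XXX"]              # three 4-bit ternary entries
print(search_and_compare(entries, "0111"))      # (True, True, 0)

associated_data = {0: "data A", 1: "data B", 2: "data C"}   # separate memory
_, _, addr = search_and_compare(entries, "0111")
print(associated_data[addr])                    # "data A": data found via the match address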
Each entry in the CAM has an associated match line coupled to each CAM cell in the entry. Upon completion of a search, the state of the match line for an entry indicates whether the entry matches the search key. The match lines from all entries in the CAM are provided to a match line detection circuit to determine whether there is a matching entry for the search key in the CAM, and the result of the match line detection circuit is provided to a priority encoder. The priority encoder selects the matching entry with the highest priority when there is more than one matching entry for the search key in the CAM. The priority encoder also provides the match address and a match flag; the match flag is enabled when there is at least one match (hit).
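The priority encoding step can be sketched as follows; the convention that the lowest address has the highest priority is an assumption for illustration, since the text does not fix which end of the array wins:

def priority_encode(match_lines):
    """match_lines: one boolean per entry. Return (match_flag, match_address)."""
    for address, hit in enumerate(match_lines):
        if hit:
            return True, address     # first asserted match line wins
    return False, None               # no hit: match flag deasserted

print(priority_encode([False, True, False, True]))   # (True, 1)
print(priority_encode([False, False, False]))        # (False, None)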
Typically, a CAM with a large number of CAM cells is subdivided into a plurality of banks.
FIG. 1 illustrates a simplified prior art CAM 100 subdivided into a plurality of banks 102A-D, with each bank including entries comprising a plurality of CAM cells (not shown) and a supporting circuit (not shown) for the bank. Search data 104 for a search and compare operation is received at external pins of the CAM 100, routed to the center of the CAM, and then routed from the center to each of the banks. The path from the external search data pin to bank 0 102a is shown as trace 106. A search for a matching entry for the search data is performed in parallel in each bank 102A-D. Upon completing a search operation, each bank performs operations including priority encoding to select the match address for the highest priority matching entry stored in the respective bank. The result of the search in each bank is collected by the CAM output logic circuit 108, which is located in the center of the CAM 100. A priority encoder in the CAM output logic circuit 108 selects the highest priority matching entry from the results of the searches in the banks, adds a bank identifier to the matching entry, and outputs the match address 110 for the highest priority matching entry for the search word together with a match flag. Only the operation of the priority encoder has been described here; circuits for the other outputs typically provided by a CAM, such as a match flag and a multiple match flag, operate as known by those skilled in the art.
SUMMARY OF THE INVENTION
As described above, search data is routed from the external pins to the center of the CAM and then to each bank. After an operation is performed, the result data from each bank is returned to the center. To support the result data and search data paths, all connecting traces are concentrated in the center, enlarging the center area and widening the distances between banks. Silicon area efficiency is therefore decreased, because the center area must be reserved for the connecting traces and the supporting circuit.
Routing congestion is avoided by replacing the plurality of banks with an array of sub-blocks. All of the data is input on one side of the array of sub-blocks and routed across each row of the array. Results are output on the side of the array opposite the data inputs. The issue of latency is addressed with an optional pipeline stage in each sub-block. When operating at a high clock speed, all of the pipeline stages are enabled, resulting in higher latency. When the array is operated at a lower clock speed, some or all of the pipeline stages can be bypassed to reduce latency.
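A minimal sketch of the optional pipeline stage's enable/bypass behavior; the class and method names are illustrative, not taken from the patent:

class PipelineStage:
    """Optional per-sub-block pipeline register: latches when enabled, passes through when bypassed."""
    def __init__(self, enabled):
        self.enabled = enabled
        self._latch = None

    def clock(self, data_in):
        # One clock cycle: a bypassed stage adds no latency, an enabled stage adds one cycle.
        if not self.enabled:
            return data_in
        data_out, self._latch = self._latch, data_in
        return data_out

stage = PipelineStage(enabled=True)
print(stage.clock("search 0"))   # None: nothing latched yet, one cycle of added latency
print(stage.clock("search 1"))   # "search 0"
print(PipelineStage(enabled=False).clock("search 0"))   # "search 0": bypassed, no added latency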
Instead of routing all data lines and result lines to the center of the CAM, the present invention arranges data lines across each row of sub-blocks and forwards the result of a search in each sub-block to the next sub-block in the row. The results of the search in each row of sub-blocks are coupled to a priority encoder, which selects the match address for the highest priority matching entry stored in a sub-block in the array.
A Content Addressable Memory includes a plurality of data inputs for receiving data, an array of content addressable sub-blocks and a plurality of outputs for the results of operations in rows of sub-blocks in the array. The plurality of outputs are located on the side of the array opposite to the data inputs. Each sub-block in a first column of the array is coupled to the plurality of data inputs. Data received by a sub-block in a row in the first column of the array is propagated across the array to each subsequent sub-block in the row of the array. The Content Addressable Memory also includes priority encoder logic coupled to each sub-block in a last column in the array for selecting a highest priority row match output for the result of a search and compare operation. The priority encoder logic also provides a match flag and a match address corresponding to the selected highest priority matching entry.
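An end-to-end behavioral sketch of this organization follows, under the assumptions that lower-numbered rows and columns have higher priority and that each sub-block simply holds a list of ternary entries; the 2x2 array and the address format are for illustration only (the text later mentions four columns by 16 or 32 rows as one possibility):

def search_array(array, key):
    """array: rows x columns of sub-blocks, each a list of 'X'/'0'/'1' entry strings."""
    row_results = []
    for row in array:
        flag, addr = False, None
        for col, sub_block in enumerate(row):    # data enters column 0 and ripples rightward
            if not flag:                         # a match in an earlier column keeps priority
                for local, entry in enumerate(sub_block):
                    if all(c == 'X' or c == k for c, k in zip(entry, key)):
                        flag, addr = True, (col, local)
                        break
        row_results.append((flag, addr))         # row output appears on the far side of the array
    for row_id, (flag, addr) in enumerate(row_results):   # priority encoder over row outputs
        if flag:
            return True, (row_id,) + addr        # (row, column, local entry) as the match address
    return False, None

array = [[["10X1"], ["0000"]],
         [["1111"], ["0X00"]]]
print(search_array(array, "0000"))   # (True, (0, 1, 0))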
Each sub-block comprises a plurality of cell arrays. The received data includes search data, and each sub-block searches its plurality of cell arrays for an entry matching the search data. Each sub-block in a row forwards to the next sub-block in the row a match flag and a sub-block match address that depend on the result of the search in that sub-block and on the results of the searches in all previous sub-blocks in the row.
Each sub-block in a row is coupled to the next sub-block in the subsequent column for forwarding received data and results to that sub-block. Entries with the highest priority may be stored in sub-blocks in the first column. A match in a sub-block in a previous column overrides a match in a sub-block in a subsequent column. Each sub-block may include a pipeline stage for latching the received data and the operation results before forwarding them to the next sub-block in the row. The pipeline stage may be enabled (increasing latency) or bypassed (decreasing latency).
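The forwarding and priority rule for a single sub-block can be sketched as below; the function and variable names are illustrative:

def sub_block_forward(prev_flag, prev_addr, local_hit, local_addr):
    """Combine the result arriving from the previous column with this sub-block's own search."""
    if prev_flag:
        return True, prev_addr     # a match in a previous (higher-priority) column overrides
    if local_hit:
        return True, local_addr    # otherwise report this sub-block's match, if any
    return False, None             # no match so far in this row

# The optional pipeline stage would latch this (flag, address) pair, together with the
# propagating search data, for one cycle before the next column sees it.
print(sub_block_forward(True, 5, True, 9))      # (True, 5)
print(sub_block_forward(False, None, True, 9))  # (True, 9)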
The number of columns in the array may be four, and the number of rows may be 32 or 16. Each cell array may include a plurality of dynamic random access memory based cells or static random access memory based cells. The cells may be ternary or binary. The operation may be a read, a write, or a search and compare.


REFERENCES:
patent: 5930359 (1999-07-01), Kempke et al.
patent: 6249449 (2001-06-01), Yoneda et al.
patent: 6324087 (2001-11-01), Pereira
patent: 6470418 (2002-10-01), Lien et al.
patent: 6584003 (2003-06-01), Kim et al.
patent: 6591331 (2003-07-01), Khanna
patent: 2002/0073073 (2002-06-01), Cheng
patent: 2002/0080665 (2002-06-01), Hata
patent: 0 227 348 (1987-07-01), None
patent: 2001236790 (2001-08-01), None
Clark, L. T. and Grondin, R. O., “A Pipelined Associative Memory Implemented in VLSI,” IEEE Journal of Solid-State Circuits 24(1):28-34 (1989).
Ghose, Kanad, “The architecture of response-pipelined content addressable memories,” Microprocessing and Microprogramming 40(6):387-410 (1994).
