Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2001-12-20
2004-07-06
Padmanabhan, Mano (Department: 2188)
C711S106000, C711S113000, C711S118000, C711S135000, C711S151000, C711S158000, C711S159000, C711S167000, C707S793000, C709S240000, C709S242000, C710S244000
active
06760813
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of communications via networks, and more specifically to devices, software and methods for preemptively updating expiring entries in a cache memory, to anticipate future inquiries for those entries.
2. Description of the Related Art
In network communications, data is propagated from a source device to a destination device along a path in the network. The path is a series of nodes of the network. Each node has an electronic address that the data is forwarded to, as it is moving along the path. The electronic addresses of the nodes are stored in special devices, such as network servers.
Paths are regularly reconfigured to be through different nodes. This occurs for various reasons, such as to deal with changing network conditions. With reconfiguring, no path is expected to remain recommended indefinitely. In fact, each electronic address is typically provided with its own expiration time (T0). After the expiration time (T0), also called the timeout time, the address is not necessarily reliable, and is frequently deleted.
When an inquiry is received for an electronic address, it is first determined whether the address is in the cache. If so, its expiration time (T0) is compared against the current time. If it has not yet expired, it is furnished as a reply. If it has expired, the inquiry is placed on hold, and then an updated entry is sought.
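The lookup just described can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class and method names are hypothetical.

```python
import time

class TtlCache:
    """Cache of address entries, each stored with its expiration time T0."""

    def __init__(self):
        self._entries = {}  # name -> (value, expiration time T0)

    def put(self, name, value, ttl):
        # T0 is the time of storage plus the entry's individual timeout.
        self._entries[name] = (value, time.time() + ttl)

    def lookup(self, name):
        """Return the cached value, or None if absent or expired."""
        entry = self._entries.get(name)
        if entry is None:
            return None            # not in the cache: an update must be sought
        value, t0 = entry
        if t0 <= time.time():
            return None            # expired: inquiry would be placed on hold
        return value               # still valid: furnished as the reply
```

A miss (absent or expired) is what forces the prior-art process to place the inquiry on hold while a request goes out.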
This process has a problem: seeking the updated entry takes a long time while a transmission waits. The problem is described in more detail below, after a more analytical description of the prior-art process.
Referring now to FIG. 1, a flowchart 100 is used to illustrate the practice of the prior art. It will be appreciated that flowchart 100 helps describe the process from an event-driven point of view.
According to a box 110, it is inquired whether an item is received. If not, then according to a next box 120, it is inquired whether any entries in a cache memory have expired. This would be known if their expiration time T0 is less than the current time. If not, then execution reverts to box 110. If yes, then according to a next box 125, the expired entries are deleted, and execution again reverts to box 110.
If, in box 110, an item has indeed been received, then according to a next box 140, it is inquired whether the item is an inquiry for an address. If not, then according to a next box 150, it is inquired whether instead the item is a reply to a request. If not, then according to a next box 160, the item is processed otherwise, and execution reverts to box 110.
If, in box 140, the item is indeed an inquiry for an address, then the inquiry is processed. Processing the inquiry is described in more detail later, with reference to flowchart 200 of FIG. 2. Execution then reverts to box 110.
If, in box 150, the item is indeed a reply to a request, then the reply is processed. Processing the reply is described in more detail later, with reference to flowchart 300 of FIG. 3. Execution then reverts to box 110.
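The event loop of flowchart 100 can be sketched as two helpers, one for the expiry sweep (boxes 120 and 125) and one for routing a received item (boxes 140, 150 and 160). The dictionary shapes and handler names are assumed for illustration only.

```python
import time

def purge_expired(cache, now=None):
    """Boxes 120/125: delete every entry whose expiration time T0 has passed."""
    now = time.time() if now is None else now
    expired = [name for name, (_, t0) in cache.items() if t0 < now]
    for name in expired:
        del cache[name]
    return expired

def dispatch(item, handle_inquiry, handle_reply, handle_other):
    """Boxes 140/150/160: route a received item to the proper handler."""
    kind = item.get("kind")
    if kind == "inquiry":
        return handle_inquiry(item)   # processed per flowchart 200
    if kind == "reply":
        return handle_reply(item)     # processed per flowchart 300
    return handle_other(item)         # box 160: processed otherwise
```

After either helper returns, control conceptually reverts to box 110 to wait for the next item.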
Referring now to FIG. 2, a flowchart 200 is used to illustrate how the inquiry is processed after box 140 of FIG. 1 in the prior art.
According to a box 210, it is inquired whether the inquired-for entry is already in the cache memory. It may have been stored there while fulfilling a prior inquiry, and it might not yet have been discarded at box 125 of FIG. 1.
If the entry is already in the cache memory, then according to a next box 240, it is inquired whether it is still valid, i.e. not expired. In other words, it is inquired whether its expiration time T0 is larger than the current time. If yes, then according to a next box 250, the inquiry is fulfilled by responding to it with the entry.
If the entry is not in the cache memory (NO from box 210), or it has timed out (NO from box 240), then according to a next box 260, a request is formed based on the inquiry, for an updated value of the item. According to a next box 270, the formed request is transmitted, and according to a next box 280, the inquiry is placed on hold.
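The inquiry path of flowchart 200 can be sketched as a single function. The parameter names and the request's dictionary shape are assumptions for illustration; the box numbers in the comments map back to FIG. 2.

```python
import time

def process_inquiry(name, cache, transmit, hold):
    """Fulfill an inquiry from the cache if possible, else request an update."""
    entry = cache.get(name)                       # box 210: already cached?
    if entry is not None:
        value, t0 = entry
        if t0 > time.time():                      # box 240: still valid?
            return value                          # box 250: fulfill the inquiry
    transmit({"kind": "request", "name": name})   # boxes 260/270: form and send
    hold(name)                                    # box 280: inquiry on hold
    return None
```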
Referring now to FIG. 3, a flowchart 300 is used to illustrate how the reply is processed after box 150 of FIG. 1 in the prior art.
According to a box 310, the received item is stored in the cache memory.
According to a next box 320, its timeout value T0 is determined.
According to a next box 360, the inquiry that generated the request is identified. The request would have been generated at a prior time, at box 260 of FIG. 2.
According to a next box 370, the item is forwarded to the maker of the inquiry.
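The reply path of flowchart 300 can be sketched in the same style. The `pending` map from address name to waiting inquirer is an assumed bookkeeping structure, not something the patent specifies.

```python
import time

def process_reply(reply, cache, pending, forward):
    """Store a received reply, set its T0, and answer the waiting inquiry."""
    name = reply["name"]
    # Boxes 310/320: store the item and determine its timeout value T0.
    cache[name] = (reply["value"], time.time() + reply["ttl"])
    # Box 360: identify the inquiry that generated the request.
    inquirer = pending.pop(name, None)
    if inquirer is not None:
        forward(inquirer, reply["value"])   # box 370: forward to the inquirer
```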
The problem in the prior art is that it often takes a long time to receive, at box 310, a reply to the request generated at box 260 of FIG. 2. During that time, the inquiry is on hold, which slows down the network.
BRIEF SUMMARY OF THE INVENTION
The present invention overcomes these problems and limitations of the prior art.
Generally, the present invention provides devices, software and methods for preemptively updating entries in a cache memory that are expiring. This anticipates future inquiries for the entries, even after they expire.
In an embodiment of the invention, preemptive updating takes place at a scheduled refresh time after expiration. Refresh times are preferably scheduled on an individual basis, in accordance with when the future inquiry is expected to happen. Additional data is preferably maintained for each entry, such as a usage history and a change history. The additional data are used in computing an individualized refresh time for updating.
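The patent does not give a formula for the individualized refresh time, so the following is only an assumed illustration of scheduling from a usage history and a change history: entries that are inquired about often, or whose values change often, are refreshed sooner after their expiration time T0.

```python
def schedule_refresh(t0, usage_count, change_count, base_delay=60.0):
    """Pick a refresh time after expiration T0 (illustrative heuristic).

    More frequent inquiries and a more volatile value both argue for an
    earlier preemptive refresh; a never-used, never-changing entry waits
    the full base delay. The weights here are assumptions, not the patent's.
    """
    urgency = 1.0 + usage_count + change_count
    return t0 + base_delay / urgency
```

Whatever the exact formula, the point is that the refresh time is computed per entry, so popular entries are already fresh when the anticipated inquiry arrives.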
The invention offers the advantage that a frequently made inquiry will be fulfilled with an entry that has been refreshed preemptively, even after its original expiration time. There is no need to go out and find the updated answer while keeping the inquiry on hold, so no time is wasted waiting for an update after an inquiry has been made.
In addition, more popular inquiries will be updated more quickly than less popular ones. This further streamlines the performance of the network.
The invention will become more readily apparent from the following Detailed Description, which proceeds with reference to the drawings, in which:
REFERENCES:
patent: 6167436 (2000-12-01), Yamane et al.
patent: 6298343 (2001-10-01), Chang et al.
patent: 6532490 (2003-03-01), Lewis et al.
patent: 2002/0116583 (2002-08-01), Copeland et al.
Cisco Technology Inc.
Padmanabhan Mano
Song Jasmine