Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
2001-02-23
2004-08-24
Vanderpuye, Kenneth (Department: 2661)
370/412, 370/474
Reexamination Certificate
active
06781992
ABSTRACT:
TECHNICAL FIELD OF THE INVENTION
The present invention relates to data networking equipment and network processors. Specifically, the present invention relates to a network engine that reorders and reassembles data packets and fragments in a network data stream at wire speeds.
BACKGROUND OF THE INVENTION
The character and requirements of networks and networking hardware are changing dramatically as the demands on networks change. Not only is there an ever-increasing demand for more bandwidth, but the nature of the traffic flowing on the networks is also changing. With the demand for video and voice over the network in addition to data, end users and network providers alike are demanding that the network provide services such as quality-of-service (QoS), traffic metering, and enhanced security. However, existing Internet Protocol (IP) networks were not designed to provide such services because of the limited information they carry about the nature of the data passing over them.
Existing network equipment that makes up the infrastructure was designed only to forward data through the network's maze of switches and routers without any regard for the nature of the traffic. The equipment used in existing networks, such as routers, switches, and remote access servers (RAS), is not able to process any information in the network data stream beyond the packet headers, and usually only the headers associated with a particular layer of the network or with a particular set of protocols. Inferences can be made about the type of traffic from the particular protocol, or from other information in the packet header such as addresses or port numbers, but higher-level information about the nature and content of the traffic is impossible to discern at wire speeds.
The ability to look beyond the header information and into the packet contents while still in the fast path would allow a network device to identify the nature of the information carried in the packet, thereby allowing much more detailed packet classification. Knowledge of the content would also allow specific contents to be identified and scanned to provide security features such as virus detection and denial-of-service (DoS) prevention. Further, looking deeper into the data packets and maintaining an awareness of content over an entire traffic flow would allow network traffic flows to be validated and network protocols to be verified, aiding the processing of packets downstream.
One major problem with looking into the contents of data packets at wire speeds is that data packets often arrive out of sequence and fragmented. Data packets can end up out of sequence in many ways. For example, one or more later data packets in a sequence may be routed through a different, faster path than earlier data packets, causing the sequence to arrive out of order. Alternatively, a data packet may be held at a network device for additional processing, or may get stuck in a slower queue in a network device, causing later-sequenced data packets to be sent ahead of earlier ones.
Similarly, data packets can become fragmented. Fragmentation can occur when a data packet passes through a device, such as a router or switch, that enforces a maximum packet size. If the data packet is larger than this maximum, it is broken into two or more fragments for transmission. Data packets sent across some ATM networks can likewise end up fragmented, due in part to ATM's fixed 53-byte cell size.
Out-of-sequence and fragmented packets make it difficult to scan past the header information into the payload contents of packets, and make it impossible to maintain any kind of intelligence or state across data packets, since such intelligence or state requires scanning the contents of the packets in order. To scan the entire contents of data packets, including the payloads, fragmented packets must be reassembled and out-of-sequence packets reordered.
Accordingly, what is needed is a queue engine that is able to reorder and reassemble data packets at wire speeds beyond 1 gigabit per second, thereby allowing the scanning of the entire contents of data packets including header and payload information so that state information or awareness can be maintained throughout an entire data traffic flow.
SUMMARY OF THE INVENTION
The present invention provides a network engine that is operable to reorder and reassemble IP data packets in a network; such a network engine is hereinafter referred to as a queue engine for its ability to place packets into an ordered data stream for applications such as deep packet classification. The queue engine includes an input interface that accepts the data packets into the queue engine, where they are stored in a packet memory. A link list control unit and link list memory keep track of the location of each data packet in memory. The data packets can be broken into smaller blocks for ease of storage and efficient memory consumption, in which case the link list controller keeps track of the location of each block and its relationship to the whole packet.
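By way of illustration only, the following Python sketch shows one way a block-based packet memory with link-list bookkeeping could work; the 64-byte block size, class name, and method names are assumptions made for this example and are not taken from the patent.

# Hypothetical sketch, for illustration only: block-based packet storage
# with link-list bookkeeping. Block size and names are assumptions.

BLOCK_SIZE = 64  # assumed block granularity in bytes

class PacketMemory:
    """Stores each packet as a chain of fixed-size blocks."""

    def __init__(self):
        self.blocks = {}       # block id -> bytes held in that block
        self.next_block = {}   # block id -> id of the following block (the link list)
        self._next_id = 0

    def store_packet(self, data):
        """Break a packet into blocks and return the id of its first block."""
        head = prev = None
        for offset in range(0, len(data), BLOCK_SIZE):
            block_id = self._next_id
            self._next_id += 1
            self.blocks[block_id] = data[offset:offset + BLOCK_SIZE]
            self.next_block[block_id] = None
            if prev is None:
                head = block_id
            else:
                self.next_block[prev] = block_id
            prev = block_id
        return head

    def read_packet(self, head):
        """Follow the links to reconstruct the stored packet."""
        out, block_id = [], head
        while block_id is not None:
            out.append(self.blocks[block_id])
            block_id = self.next_block[block_id]
        return b"".join(out)

Under such a scheme, reordering or reassembling a packet amounts to rewriting entries in the next-block table rather than copying packet data.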
A packet assembler extracts information from the data packets, usually from their headers, and determines whether each data packet is a fragment or is out of sequence. The packet assembler uses unique fields in the data packet to look up a session ID, which is used to associate the data packet with a particular traffic flow over the network. The session ID allows each data packet to be assigned to a traffic flow, so that sequence numbers can be used to anticipate the next data packet and out-of-order packets can be identified. Out-of-order packets are sent to a reordering unit, which reorders the data packets by modifying the links to the packet memory.
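As a rough illustration of the flow-tracking and reordering idea, the sketch below keys a session on the conventional IP/port 5-tuple and releases packets once the expected sequence number arrives; the in-memory buffering strategy and all names are illustrative assumptions, not the patented hardware design.

# Hypothetical sketch, for illustration only: per-flow reordering keyed on a
# session ID. The 5-tuple key and in-memory buffering are assumptions.

from dataclasses import dataclass, field

@dataclass
class Flow:
    expected_seq: int                              # next sequence number expected
    pending: dict = field(default_factory=dict)    # seq -> (packet, length) held out of order

class Reorderer:
    def __init__(self):
        self.flows = {}   # session ID, e.g. (src, dst, proto, sport, dport) -> Flow

    def submit(self, session_id, seq, packet, length):
        """Buffer the packet; return any packets that are now in sequence."""
        # For simplicity, the first packet seen for a flow sets the expected sequence.
        flow = self.flows.setdefault(session_id, Flow(expected_seq=seq))
        flow.pending[seq] = (packet, length)
        released = []
        # Release consecutive packets starting at the expected sequence number.
        while flow.expected_seq in flow.pending:
            pkt, length = flow.pending.pop(flow.expected_seq)
            released.append(pkt)
            flow.expected_seq += length
        return released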
The queue engine can further include a fragment reassembly unit, which is operable to take fragmented packets identified by the packet assembler and reassemble the fragments into complete data packets. Much like the reordering unit, the fragment reassembly unit collects the fragments and then places them into the proper order and modifies the links to the packet memory to reflect the complete data packet.
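In a similar spirit, a simplified sketch of fragment reassembly might collect fragments keyed on the IP (source, destination, protocol, identification) fields and emit the packet once the payload is contiguous; the byte-based offsets, omitted timeouts, and ignored fragment overlaps are simplifying assumptions, not details from the patent.

# Hypothetical sketch, for illustration only: IP fragment reassembly keyed on
# (source, destination, protocol, identification). Real IP fragment offsets are
# expressed in 8-byte units; byte offsets are used here, and timeouts and
# overlapping fragments are ignored for brevity.

class FragmentReassembler:
    def __init__(self):
        self.fragments = {}   # key -> {byte offset: payload}
        self.total_len = {}   # key -> total length, known once the last fragment arrives

    def add_fragment(self, key, offset, more_fragments, payload):
        """Store a fragment; return the reassembled payload when complete, else None."""
        frags = self.fragments.setdefault(key, {})
        frags[offset] = payload
        if not more_fragments:
            self.total_len[key] = offset + len(payload)
        if key not in self.total_len:
            return None
        # Check that contiguous data covers the whole datagram.
        covered = 0
        for off in sorted(frags):
            if off > covered:
                return None          # a gap remains
            covered = max(covered, off + len(frags[off]))
        if covered < self.total_len[key]:
            return None
        data = b"".join(frags[off] for off in sorted(frags))
        del self.fragments[key]
        del self.total_len[key]
        return data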
The foregoing has outlined, rather broadly, preferred and alternative features of the present invention so that those skilled in the art may better understand the detailed description of the invention that follows. Additional features of the invention will be described hereinafter that form the subject of the claims of the invention. Those skilled in the art will appreciate that they can readily use the disclosed conception and specific embodiment as a basis for designing or modifying other structures for carrying out the same purposes of the present invention. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the invention in its broadest form.
REFERENCES:
patent: 5619497 (1997-04-01), Gallagher et al.
patent: 5629927 (1997-05-01), Waclawsky et al.
patent: 5926475 (1999-07-01), Saldinger et al.
patent: 6246684 (2001-06-01), Chapman et al.
patent: 6665794 (2003-12-01), Koker et al.
Garrow Corey Alan
Rana Aswinkumar Vishanji
Cox Craig J.
Netrake Corporation