Prioritized-buffer management for fixed sized packets in...

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate

C370S395700, C711S147000

active

06529519

BACKGROUND OF THE INVENTION
The present invention relates to stream data applications where a sequentially accessed buffer is used, and in particular to buffer systems employing memory management to prevent loss of still required or “leftover” buffer contents as a result of overwriting. The present invention also pertains to management of a prioritized buffer for stream data organized in fixed size packets or cells, as may be employed in wireless multimedia applications.
Wireless voice communications such as provided today by cellular systems have already become indispensable, and it is clear that future wireless communications will carry multimedia traffic, rather than merely voice. ATM (asynchronous transfer mode) technology has been developed over wired networks to carry high-speed data traffic with different data rates, different quality-of-service (QoS) requirements (for example, data reliability, delay considerations, etc.), different connection or connectionless paradigms, etc. for multimedia communications. It is then natural to assume that in the future wireless ATM-based (WATM) service will be provided at the consumer end of a wired network.
Existing efforts toward building a wireless local-area network (LAN) are focused on the emerging IEEE 802.11 standard in the United States and HIPERLAN in Europe. While these standards are nearly mature, their development did not take into consideration the ATM-based service requirements of QoS guarantees for both real-time and data traffic. Essentially, these requirements arise from multiplexing video, audio, and data services (multimedia) in the same medium. Audio data does not require the packet-error reliability required of data services, but cannot tolerate excessive delay. Video data can in general suffer more delay than audio; however, it is intolerant of delay jitter. These delay and packet-error-rate considerations forced ATM to adopt a connection-oriented service. They also forced error control to be done end-to-end, instead of implementing error control between every two nodes within the specified connection (error control is a method of ensuring the reliability of packets at a node, whereby a packet error is detected and a packet retransmission request is then sent to the transmitting node). Such a strategy was feasible with wired fiber-optic networks, which have very small packet error rates. Wireless networks do not in general provide such low error rates.
Delay considerations are also important for ATM service. A wired ATM network will simply block any service for which it cannot guarantee the required QoS. Wireless networks typically do not provide such a feature; in an overloaded network, delay can in fact grow exponentially. The channel-access protocols specified in IEEE 802.11 and HIPERLAN behave in exactly this way.
The services supported over ATM have one of the following characteristics with regard to how their data rates vary over time; also listed are the QoS parameters the network is expected to sustain:
Constant Bit Rate (CBR): Specify Bit Rate
Variable Bit Rate (VBR)—RT: Specify Sustained Cell Rate, Max Burst Size, Bounded Delay
Variable Bit Rate (VBR)—NRT: Specify Sustained Cell Rate, Max Burst Size
Available Bit Rate (ABR): Best Effort Service—No Bandwidth Guarantees Except for a Minimum Rate Negotiation
Unspecified Bit Rate (UBR): ABR without any Guaranteed Rate
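As an illustration, the traffic classes above could be captured in a per-connection QoS descriptor. The structure below is a hypothetical sketch; the field names and units are assumptions for illustration, not taken from the ATM specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosDescriptor:
    """Hypothetical per-connection QoS descriptor covering the five
    ATM service classes listed above; only the parameters relevant
    to a given class are populated."""
    service_class: str                       # "CBR", "VBR-RT", "VBR-NRT", "ABR", "UBR"
    bit_rate: Optional[int] = None           # CBR: constant bit rate (cells/s)
    sustained_rate: Optional[int] = None     # VBR: sustained cell rate (cells/s)
    max_burst: Optional[int] = None          # VBR: maximum burst size (cells)
    delay_bound_ms: Optional[float] = None   # VBR-RT: bounded delay
    min_rate: Optional[int] = None           # ABR: negotiated minimum rate

# Example: a real-time variable-bit-rate video connection
vbr_rt = QosDescriptor("VBR-RT", sustained_rate=4000, max_burst=64,
                       delay_bound_ms=20.0)
```

A UBR connection would leave every rate field unset, matching its "no guarantees" definition.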
Clearly, an important issue in designing a WATM system is that the Medium Access Control (MAC) protocol, which specifies the method of access to the wireless channel among multiple users, must satisfy the basic requirements of ATM.
One method of implementing a MAC protocol is Time-Division Multiple Access (TDMA), wherein TDMA frames are divided into slots, each of which is assigned to a unique user. In general, this assignment can either be fixed, resulting in classical TDMA, or variable, resulting in reservation-based TDMA (R-TDMA). In R-TDMA, each TDMA frame is sub-framed into different “phases”: typically a “control” phase, where reservations are requested and granted, and a “data” phase, where the reserved transmission slots are used. To accommodate ATM QoS, the MAC protocol could implement R-TDMA using a sequence of Control-Data Frames (CDFs), each CDF consisting of a control phase followed by a data phase. During the control phase, multiple wireless terminals each specify the number of ATM slots they require. Once such a request succeeds, a certain number of ATM slots is reserved for the requesting wireless terminal, which can then transmit its designated packets in a specified sequence during the data phase.
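The reservation step of a CDF can be sketched as follows. The first-come, first-served grant policy and the function name are illustrative assumptions, not behavior specified by the protocol:

```python
def assign_slots(requests, data_slots):
    """Sketch of the control phase of a Control-Data Frame (CDF):
    each terminal requests a number of ATM slots, and data-phase
    slots are granted in request order until the frame is full
    (an assumed policy for illustration)."""
    assignments = {}
    next_slot = 0
    for terminal, wanted in requests:
        granted = min(wanted, data_slots - next_slot)
        if granted > 0:
            assignments[terminal] = list(range(next_slot, next_slot + granted))
            next_slot += granted
    return assignments

# Two terminals share a 10-slot data phase:
slots = assign_slots([("A", 4), ("B", 8)], 10)
assert slots["A"] == [0, 1, 2, 3]            # A's request is granted in full
assert slots["B"] == [4, 5, 6, 7, 8, 9]      # B gets only the remaining 6 slots
```

Each terminal then transmits in its granted slots during the data phase, in the specified sequence.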
To implement R-TDMA, the MAC layer needs a single prioritized buffer. Two issues are important to MAC-layer buffer control. First, incoming cells from the upper layer have to be sorted according to their ATM QoS specifications, i.e., ATM cells with more immediate delay requirements must be sent earlier. Second, the MAC layer must support power saving, i.e., the MAC layer should be active only when required.
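A minimal sketch of such prioritized insertion, assuming (for illustration only) that each cell carries a per-cell transmission deadline derived from its QoS class:

```python
import heapq

def push_cell(buffer, deadline, seq, cell):
    """Insert an incoming ATM cell keyed by an assumed per-cell
    deadline; earlier deadlines are transmitted first, and the
    sequence number breaks ties so ordering stays stable."""
    heapq.heappush(buffer, (deadline, seq, cell))

def pop_cell(buffer):
    """Remove and return the most urgent cell."""
    return heapq.heappop(buffer)[2]

buf = []
push_cell(buf, 30, 0, "video cell")
push_cell(buf, 10, 1, "audio cell")   # tighter delay bound, so it is sent first
push_cell(buf, 90, 2, "data cell")
assert pop_cell(buf) == "audio cell"
```

A heap keeps insertion and extraction at O(log n), but note that a logical priority queue does not by itself solve the physical-memory fragmentation problem discussed next.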
The prioritized buffer implementation presents a problem in buffer management, as memory fragmentation can result. For example, assume first that the buffer is empty, and that 5 ATM cells then occupy sequential addresses in memory. Because of QoS considerations, assume that ATM cells 2 and 4 were transmitted during the current CDF, leaving gaps in the buffer and resulting in a memory fragmentation problem. Since the buffer size cannot be infinite, a method must be found to reuse these gaps.
Generally, the fragmentation problem could be solved in software executed by the processor, i.e., a processor-based embedded system is used to manage defragmentation of the prioritized buffer. A simple technique would recopy all the “leftover” packets within the buffer to the head of the buffer. However, such a solution, although programmable, can be quite expensive with respect to the processor's resources. For bursty sources, there may be a significant number of leftover packets within the buffer, and moving all of those packets is a significant overhead. Note that this creates two problems: a significant amount of processor time can be consumed by memory defragmentation, and the upper bound on the time the processor needs for defragmentation is large, which complicates the scheduling of processor tasks.
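The recopy-to-head technique can be sketched as follows, assuming 53-byte ATM cells and a simple list of occupied cell slots (both assumptions for illustration). Every leftover cell is copied on every frame, which is exactly the processor overhead described above:

```python
CELL = 53  # ATM cell size in bytes

def compact(buffer, occupied):
    """Software defragmentation by compaction: recopy every leftover
    cell toward the head of the buffer. `occupied` lists the cell
    indices still holding leftover data; the cost is O(n) byte copies
    per frame, borne entirely by the processor."""
    for dst, src in enumerate(sorted(occupied)):
        if dst != src:
            buffer[dst * CELL:(dst + 1) * CELL] = \
                buffer[src * CELL:(src + 1) * CELL]
    return list(range(len(occupied)))  # leftovers are now contiguous at the head

# 5 cells were queued; cells 2 and 4 were transmitted, leaving gaps:
buf = bytearray(b"".join(bytes([i]) * CELL for i in range(5)))
assert compact(buf, [0, 1, 3]) == [0, 1, 2]
assert buf[2 * CELL] == 3   # leftover cell 3 was moved into the gap at slot 2
```

The worst case (a nearly full buffer of leftovers) drives the large upper bound on defragmentation time that makes task scheduling difficult.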
Another solution, with reference to the above architecture, is to copy all the leftover ATM cells from the “input” buffer to another place, for example an additional buffer, and to implement memory defragmentation in a controlled way using processor software, i.e., copy the leftover packets in the prioritized buffer to appropriate spaces within the additional buffer. This alleviates the problem significantly compared to the method described above, as only the leftover packets from the current CDF must be moved each time. However, this technique suffers from memory duplication, and the processor essentially executes two memory-copy operations for every byte transmitted: one from the prioritized buffer to the additional buffer, and another from the additional buffer to the physical-layer FIFO.
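The two-buffer variant can be sketched as follows (the function and variable names are illustrative); it makes explicit the duplicate buffer and the two copies per transmitted cell that the text identifies as the cost:

```python
def defragment_via_copy(prioritized, leftovers):
    """Sketch of the two-buffer approach: leftover cells are first
    copied from the prioritized ('input') buffer into an additional
    buffer, and later copied again toward the physical-layer FIFO --
    two memory-copy operations per transmitted byte, plus a duplicate
    buffer, in exchange for simpler defragmentation."""
    additional = [prioritized[i] for i in leftovers]  # copy #1: input -> additional
    fifo = []
    for cell in additional:                           # copy #2: additional -> PHY FIFO
        fifo.append(cell)
    return fifo

cells = ["c0", "c1", "c2", "c3", "c4"]
assert defragment_via_copy(cells, [0, 1, 3]) == ["c0", "c1", "c3"]
```

The invention described below aims to avoid both this duplication and the compaction overhead of the previous technique.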
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a sequentially accessible buffer including memory management means arranged to defragment the buffer, where defragmentation is managed in such a manner that the processor is not burdened with either relocating or “writing around” leftover packets.
It is a further object of the present invention that such defragmentation be managed in a simple yet tightly controlled way that maximizes buffer utilization and minimizes processor interaction with the defragmentation. In the case of a WATM terminal, minimizing processor interaction with defragmentation enables better power saving.
These and o
