Electrical computers and digital processing systems: memory – Address formation – Address mapping
Reexamination Certificate
1996-12-31
2001-10-23
Peikari, B. James (Department: 2186)
709/241, 711/153, 711/170, 711/202, 713/1
active
06308248
ABSTRACT:
BACKGROUND OF THE INVENTION
The invention relates to memory management in a computer system.
A computer system includes a main memory (typically implemented with dynamic random access memories or DRAMs) used to store code and data information. Generally, a computer system includes several layers of buses, including a host bus, a Peripheral Component Interconnect (PCI) bus, and an expansion bus such as the Industry Standard Architecture (ISA) or Extended Industry Standard Architecture (EISA) bus. The main memory is used as a common resource accessible to system devices on the buses. Typically, the main memory is located on the host bus along with the central processing unit (CPU) for optimal performance of the computer system because the CPU requires fast access to the main memory.
Certain other system devices may also require relatively fast access to a memory device. One such system device is the video card, which typically includes a video memory (or frame buffer) for storing video data. Local storage of the video data allows a video controller in the video card to refresh or update a video display within the required amount of time to avoid flickering problems.
To avoid the need for a dedicated frame buffer on the video card, the VESA (Video Electronics Standards Association) Unified Memory Architecture (VUMA) was developed, which is described in “VESA Unified Memory Architecture (VUMA) Standard Hardware Specifications,” Version 1.0 (Mar. 8, 1996), and which is hereby incorporated by reference. VUMA is a computer system memory architecture in which the main memory is “physically” shared between the system (including the CPU and other devices) and a peripheral device (a “VUMA device”). Typically, a system device other than the CPU accesses the main memory through a direct memory access (DMA) controller over the ISA or EISA expansion bus, and a PCI bus device accesses the main memory in a memory transaction over the PCI bus. A VUMA device, on the other hand, is directly coupled to the main memory bus, as is the CPU, which improves memory access times for the VUMA device. One such VUMA device is a video card.
Referring to FIG. 1, a VUMA device 18 is connected to a Peripheral Component Interconnect (PCI) bus 12 in a computer system. The VUMA device 18 is also connected to a main memory 16 (having a storage capacity of, for example, 8 megabytes) over a memory bus 20, which is further connected to a core logic circuit 14 (including a bridge to the PCI bus 12 and a memory controller). The core logic circuit 14 acts as the interface between a CPU 10 and the PCI bus 12 and the main memory 16. Other devices (not shown) are also connected to the PCI bus 12.
The main memory 16 is shared between the system (including the CPU 10 and other PCI bus devices) and the VUMA device 18, which can directly access the main memory 16 over the memory bus 20. A portion 22 of the main memory 16 is allocated as the memory for the VUMA device 18 (“the VUMA memory”), and the remainder of the main memory 16 is allocated to the rest of the computer system.
The VUMA memory 22 is mapped to the top of the main memory 16 as a single, contiguous region. Thus, for example, if the VUMA device 18 requires 2 megabytes (MB) of memory space, a 2-MB contiguous region of the main memory 16 is mapped to the VUMA memory 22, leaving the remaining 6-MB region for system use.
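The arithmetic of this conventional, contiguous scheme can be sketched in a few lines of C; the sizes mirror the 8-MB/2-MB example above, and the names are illustrative rather than taken from the specification.

#include <stdint.h>

/* Conventional VUMA allocation as described above: the VUMA memory is a
 * single contiguous block carved from the top of main memory.  The sizes
 * follow the 8-MB / 2-MB example in the text; the names are illustrative. */
#define MAIN_MEMORY_BYTES  (8u * 1024u * 1024u)
#define VUMA_MEMORY_BYTES  (2u * 1024u * 1024u)

static uint32_t vuma_base_address(void)
{
    /* 0x800000 - 0x200000 = 0x600000: the top 2 MB hold the VUMA memory,
     * leaving the lower 6 MB for the rest of the system. */
    return MAIN_MEMORY_BYTES - VUMA_MEMORY_BYTES;
}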
SUMMARY OF THE INVENTION
In general, in one aspect, the invention features a method of allocating memory space in a main memory of a computer system to a unified memory architecture device. The main memory is associated with a physical address space. A required linear address range is determined for the device, and the linear address range is mapped to scattered portions in the physical address space.
Implementations of the invention may include one or more of the following features. A page table is created containing page frame numbers corresponding to page frames in the main memory, the page frames being allocated to the device. The page frames are in non-contiguous blocks of the main memory. The frame numbers are loaded into a translation look-aside buffer for converting a linear address in the linear address space to a physical address in the physical address space. The device includes a video card.
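As a rough illustration of this page-table approach, the C sketch below collects non-adjacent 4-KB page frames and records their page frame numbers in a table; frame_is_free and claim_frame are hypothetical helpers standing in for whatever frame bookkeeping the system actually performs.

#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12

/* Hypothetical frame bookkeeping: test and claim individual 4-KB page
 * frames of main memory. */
extern int  frame_is_free(uint32_t pfn);
extern void claim_frame(uint32_t pfn);

/* Build a page table for a device that needs `bytes` of linear address
 * space.  Each entry holds the page frame number (PFN) of one 4-KB frame;
 * the frames need not be adjacent in physical memory. */
size_t build_device_page_table(uint32_t *page_table, size_t bytes,
                               uint32_t total_frames)
{
    size_t needed = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    size_t found  = 0;

    for (uint32_t pfn = 0; pfn < total_frames && found < needed; pfn++) {
        if (frame_is_free(pfn)) {
            claim_frame(pfn);
            page_table[found++] = pfn;  /* scattered, non-contiguous frames */
        }
    }
    return found;  /* number of page-table entries actually filled */
}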
In general, in another aspect, the invention features a method of storing video data of a video card in a main memory of a computer system. Non-adjacent portions of the main memory are allocated, and the allocated portions are used to store the video data.
Implementations of the invention may include one or more of the following features. A page table is created containing page frame numbers corresponding to page frames in the main memory, the page frames being allocated to the video card. The page frames are in non-contiguous blocks of the main memory. The video card is associated with a linear address space. The frame numbers are loaded into a translation look-aside buffer for converting a linear address in the linear address space to a physical address in the physical address space.
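A brief sketch of how video data might be stored through such a mapping: the pixel's linear offset selects a page-table entry, and the low-order bits give the offset within that (possibly non-adjacent) page frame. phys_write8 is a placeholder, not part of the specification.

#include <stdint.h>

#define PAGE_SHIFT  12
#define PAGE_MASK   0xFFFu

/* Placeholder for the actual write performed over the memory bus. */
extern void phys_write8(uint32_t phys_addr, uint8_t value);

/* Store one byte of video data at linear offset `off` in the frame buffer.
 * The page table maps each 4-KB linear page to a scattered page frame. */
void store_pixel(const uint32_t *page_table, uint32_t off, uint8_t value)
{
    uint32_t phys = (page_table[off >> PAGE_SHIFT] << PAGE_SHIFT)
                  | (off & PAGE_MASK);
    phys_write8(phys, value);
}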
In general, in another aspect, the invention features a method of mapping a linear address of a unified memory architecture device to a physical address in a main memory in a computer system. Page frame numbers are stored in map registers, wherein the page frame numbers correspond to non-adjacent page frames in the main memory. One of the map registers is accessed with the linear address to retrieve a portion of the physical address.
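A minimal sketch of this translation, assuming 4-KB pages and a bank of hypothetical map registers indexed by the linear page number:

#include <stdint.h>

#define PAGE_SHIFT   12
#define PAGE_MASK    0xFFFu
#define NUM_MAP_REGS 512u            /* enough for a 2-MB linear range */

/* Hypothetical map registers, one per linear page, each holding the page
 * frame number of a (possibly non-adjacent) frame in main memory. */
extern uint32_t map_reg[NUM_MAP_REGS];

/* The upper bits of the linear address select a map register; the lower
 * bits are the offset within the selected page frame. */
static inline uint32_t linear_to_physical(uint32_t linear)
{
    uint32_t index  = linear >> PAGE_SHIFT;
    uint32_t offset = linear & PAGE_MASK;
    return (map_reg[index] << PAGE_SHIFT) | offset;
}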
In general, in another aspect, the invention features a computer system including a main memory having a physical address space. A device is coupled to the main memory and allocated a linear address space. A controller is adapted to map the linear address space of the device to scattered portions in the physical address space.
Implementations of the invention may include one or more of the following features. A page table accessible by the controller contains page frame numbers representing page frames in the main memory allocated to the device. The page frames are in non-contiguous blocks of the main memory. The device includes a translation look-aside buffer. The controller is adapted to load the frame numbers of the page table into the translation look-aside buffer for converting a linear address to a physical address. The device includes a video card. The device is allocated to multiple regions in the main memory, the multiple regions being accessible by the device. Multiple page tables are accessible by the controller, with each page table containing page frame numbers representing page frames in the main memory corresponding to an allocated region.
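One way such a TLB fill might look, assuming a small direct-mapped TLB in the device and a page table (indexed by linear page number) that the controller consults on a miss; all names here are hypothetical:

#include <stdint.h>

#define TLB_ENTRIES 16u

/* A TLB entry held by the device: a linear page number tagged with the
 * page frame number it maps to. */
struct tlb_entry {
    uint32_t linear_page;
    uint32_t pfn;
    int      valid;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* On a miss, the controller reads the frame number from the page table in
 * main memory and loads it into one TLB entry (direct-mapped for brevity). */
void tlb_fill(const uint32_t *page_table, uint32_t linear_page)
{
    uint32_t slot = linear_page % TLB_ENTRIES;
    tlb[slot].linear_page = linear_page;
    tlb[slot].pfn         = page_table[linear_page];
    tlb[slot].valid       = 1;
}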
Implementations of the invention may include one or more of the following advantages. Memory space in a main memory allocated to a computer system device can be scattered throughout the main memory without the need to allocate a contiguous region of the memory. Main memory space can be more effectively utilized by avoiding the requirement of allocating a contiguous portion of the main memory for any one particular device. By scattering the memory allocated to the device in multiple banks of the main memory, concurrent access to the different memory banks by the device and by other system devices is enabled.
Other advantages and features will be apparent from the following description and claims.
REFERENCES:
patent: 3913070 (1975-10-01), Malcolm et al.
patent: 4197579 (1980-04-01), Otis, Jr. et al.
patent: 5113180 (1992-05-01), Gupta et al.
patent: 5129060 (1992-07-01), Pfeiffer et al.
patent: 5159678 (1992-10-01), Wengelski et al.
patent: 5640332 (1997-06-01), Baker et al.
patent: 5802605 (1998-09-01), Alpert et al.
patent: 5854638 (1998-12-01), Tung
patent: 5907863 (1999-05-01), Bolyn
patent: 5920898 (1999-07-01), Bolyn et al.
Video Electronics Standards Association, “VESA Unified Memory Architecture (VUMA) Standard Hardware Specification, Version 1.0”, San Jose, CA, pp. 1-45 (Mar. 8, 1996).
Video Electronics Standards Association, “VE
Bonola Thomas J.
Moriarty Michael P.
Welker Mark W.
Compaq Computer Corporation
Peikari B. James
Sharp Comfort & Merrett P.C.
Method and system for allocating memory space using mapping...