Assignment of dual port memory banks for a CPU and a host...

Electrical computers and digital processing systems: multicomputer data transferring – Multicomputer data transferring via shared memory

Reexamination Certificate


Details

U.S. Classification: C711S214000, C711S215000, C711S216000, C711S217000

Status: active

Patent number: 06816889

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an InfiniBand™ computing node configured for communication with remote computing nodes in an InfiniBand™ server system.
2. Background Art
Networking technology has encountered improvements in server architectures and design with a goal toward providing servers that are more robust and reliable in mission critical networking applications. In particular, the use of servers for responding to client requests has resulted in a necessity that servers have an extremely high reliability to ensure that the network remains operable. Hence, there has been a substantial concern about server reliability, accessibility, and serviceability.
In addition, processors used in servers have encountered substantial improvements, where the microprocessor speed and bandwidth have exceeded the capacity of the connected input/output (I/O) buses, limiting the server throughput to the bus capacity. Accordingly, different server standards have been proposed in an attempt to improve server performance in terms of addressing, processor clustering, and high-speed I/O.
These different proposed server standards led to the development of the InfiniBand™ Architecture Specification, (Release 1.0), adopted by the InfiniBand™ Trade Association. The InfiniBand™ Architecture Specification specifies a high-speed networking connection between central processing units, peripherals, and switches inside a server system. Hence, the term “InfiniBand™ network” refers to a network within a server system. The InfiniBand™ Architecture Specification specifies both I/O operations and interprocessor communications (IPC).
A particular feature of InfiniBand™ Architecture Specification is the proposed implementation in hardware of the transport layer services present in existing networking protocols, such as TCP/IP based protocols. The hardware-based implementation of transport layer services provides the advantage of reducing processing requirements of the central processing unit (i.e., “offloading”), hence offloading the operating system of the server system.
The InfiniBand™ Architecture Specification describes a network architecture, illustrated in FIG. 1. The network 10 includes nodes 11, each having an associated channel adapter 12 or 14. For example, the computing node 11a includes processors 16 and a host channel adapter (HCA) 12; the destination target nodes 11b and 11c include target channel adapters 14a and 14b, and target devices (e.g., peripherals such as Ethernet bridges or storage devices) 18a and 18b, respectively. The network 10 also includes routers 20, and InfiniBand™ switches 22.
Channel adapters operate as interface devices for respective server subsystems (i.e., nodes). For example, host channel adapters (HCAs) 12 are used to provide the computing node 11a with an interface connection to the InfiniBand™ network 10, and target channel adapters (TCAs) 14 are used to provide the destination target nodes 11b and 11c with an interface connection to the InfiniBand™ network. Host channel adapters 12 may be connected to a memory controller 24 as illustrated in FIG. 1. Host channel adapters 12 implement the transport layer using a virtual interface referred to as the “verbs” layer that defines the manner in which the processor 16 and the operating system communicate with the associated HCA 12: verbs are data structures (e.g., commands) used by application software to communicate with the HCA. Target channel adapters 14, however, lack the verbs layer, and hence communicate with their respective devices 18 according to the respective device protocol (e.g., PCI, SCSI, etc.).
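As a rough illustration of the verbs concept, a command can be modeled as a plain data structure that application software posts to a work queue for the HCA to drain. The field names, queue layout, and depth below are assumptions for illustration only, not taken from the InfiniBand™ Architecture Specification:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical "verb": a command data structure handed to the HCA. */
typedef enum { VERB_SEND, VERB_RECV } verb_opcode_t;

typedef struct {
    verb_opcode_t opcode;   /* requested transport operation */
    uint64_t      addr;     /* address of the data buffer in memory */
    uint32_t      length;   /* buffer length in bytes */
    uint32_t      qp_num;   /* destination queue pair number */
} verb_t;

/* A tiny software-side work queue the HCA would drain via DMA. */
#define WQ_DEPTH 16
typedef struct {
    verb_t   entries[WQ_DEPTH];
    unsigned head, tail;    /* free-running indices, masked on use */
} work_queue_t;

/* Post a verb; returns 0 on success, -1 if the queue is full. */
static int post_verb(work_queue_t *wq, const verb_t *v) {
    if (wq->tail - wq->head == WQ_DEPTH)
        return -1;
    wq->entries[wq->tail++ % WQ_DEPTH] = *v;
    return 0;
}
```

The point of the sketch is only that the CPU and HCA communicate through shared data structures in memory rather than through direct register-level interaction for every transfer.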
However, arbitrary hardware implementations may result in substantially costly hardware designs. In particular, implementation of the computing node 11a as illustrated in FIG. 1 creates throughput and latency issues due to contention for access of the single port memory 26 by the CPU 16, the HCA 12, or any other I/O device (e.g., the memory controller 24) having DMA capability.
SUMMARY OF THE INVENTION
There is a need for an arrangement that enables an InfiniBand™ computing node to be implemented in a manner that minimizes latency and optimizes throughput.
There also is a need for an arrangement that optimizes memory resources within an InfiniBand™ computing node by eliminating memory access contention between memory resource consumers such as a CPU or an HCA.
These and other needs are attained by the present invention, where an InfiniBand™ computing node includes a dual port memory configured for storing data for a CPU and a host channel adapter in a manner that eliminates contention for access to the dual port memory. The dual port memory includes first and second memory ports, memory banks for storing data, and addressing logic configured for assigning first and second groups of the memory banks to the respective memory ports based on prescribed assignment information. The host channel adapter is configured for accessing the dual port memory via the first memory port, and the CPU is configured for accessing the dual port memory via the second memory port. The CPU also is configured for providing the prescribed assignment information to the addressing logic, enabling the host channel adapter to access the first group of memory banks via the first memory port as the CPU concurrently accesses the second group of memory banks via the second memory port. Following access of the first group of memory banks by the host channel adapter, the CPU dynamically reassigns the memory banks, enabling the host channel adapter to continue accessing the second group of memory banks via the first memory port, concurrent with the CPU accessing the first group of memory banks via the second memory port. Hence, the host channel adapter can perform continuous memory access for transmission or reception of data without the CPU needing to access the host channel adapter directly, and host channel adapter throughput may be optimized by eliminating contention for memory access between the host channel adapter and the CPU.
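The bank-assignment scheme described above can be sketched in software, assuming the addressing logic exposes a simple bitmask register in which each bit assigns one bank to one of the two ports. The register name, encoding, and bank count are all illustrative assumptions; the patent does not specify them:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the bank-assignment register: bit i = 1 assigns
 * memory bank i to the HCA-facing port, bit i = 0 to the CPU-facing port.
 * Eight banks are modeled. */
typedef struct {
    uint8_t bank_assign;
} addr_logic_t;

enum { PORT_HCA = 1, PORT_CPU = 2 };

/* CPU writes new prescribed assignment information into the addressing logic. */
static void set_assignment(addr_logic_t *al, uint8_t assign_bits) {
    al->bank_assign = assign_bits;
}

/* Which port may access a given bank under the current assignment? */
static int port_for_bank(const addr_logic_t *al, int bank) {
    return ((al->bank_assign >> bank) & 1) ? PORT_HCA : PORT_CPU;
}

/* Overwrite the assignment so every bank moves to the other port: the
 * HCA continues with the banks the CPU just finished, and vice versa. */
static void swap_groups(addr_logic_t *al) {
    al->bank_assign = (uint8_t)~al->bank_assign;
}
```

Because the two ports never address the same bank under a given assignment, neither consumer ever stalls waiting for the other.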
One aspect of the present invention provides a computing node configured for sending and receiving data packets on an InfiniBand™ network. The computing node includes a memory, a host channel adapter, and a processing unit. The memory has first and second memory ports, a plurality of memory banks for storing data, and addressing logic configured for assigning first and second groups of the memory banks to the respective first and second memory ports based on first prescribed assignment information. The host channel adapter is configured for accessing the memory via the first memory port for at least one of transmission and reception of a data packet according to InfiniBand™ protocol. The processing unit is configured for accessing the memory via the second memory port and providing the first prescribed assignment information to the addressing logic. The processing unit also is configured for overwriting the first prescribed assignment information in the addressing logic, following access of the first group of the memory banks by the host channel adapter, with second prescribed assignment information specifying assignment of the second group of the memory banks to the first memory port. Hence, the processing unit can switch memory banks accessible by the host channel adapter and the processing unit, enabling the continuous transfer of data between the processing unit and the host channel adapter via the memory.
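The continuous transfer this aspect describes, where overwriting the assignment information exchanges the two bank groups between the ports, might be simulated as a ping-pong buffer. The two-group layout, group size, and function names are hypothetical, chosen only to make the alternation concrete:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical ping-pong transfer: the CPU fills one group of banks while
 * the HCA drains the other; overwriting the assignment exchanges roles. */
#define BANK_BYTES 64
static char bank_group[2][BANK_BYTES];  /* group 0 and group 1 */

static int hca_group = 0;   /* group currently assigned to the HCA port */

/* CPU side: write the next payload into the CPU-assigned group. */
static void cpu_fill(const char *payload) {
    strncpy(bank_group[1 - hca_group], payload, BANK_BYTES - 1);
}

/* Overwrite the assignment information: the HCA now gets the freshly
 * filled group, and the CPU gets the drained one. */
static void reassign(void) {
    hca_group = 1 - hca_group;
}

/* HCA side: read from its currently assigned group. */
static const char *hca_read(void) {
    return bank_group[hca_group];
}
```

Each reassignment hands the HCA a full group to transmit while the CPU prepares the next one, so the data stream never pauses for a shared-memory handoff.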
Another aspect of the present invention provides a method in a computing node. The method includes coupling a processing unit and a host channel adapter to first and second memory ports of a memory, respectively. The memory has memory banks for storing data, and addressing logic configured for assigning first and second groups of the memory banks to the respective first and second memory ports based on first prescribed assignment information. The method also includes providin
