Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed
Type: Reexamination Certificate
Filed: 1998-10-30
Issued: 2001-06-05
Examiner: Ngo, Chuong Dinh (Department: 2121)
Status: active
Patent number: 06243734
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to computers, and more particularly, to computer program products and methods for causing a computer to function in a particular efficient fashion.
2. Description of the Related Art
Modern computers contain microprocessors, which are essentially the brains of the computer. In operation, the computer uses the microprocessor to run a computer program.
The computer program might be written in a high-level computer language, such as C or C++, using statements similar to English, which are then translated by another program, called a compiler, into numerous machine-language instructions. Or the program might be written in assembly language and then translated by another program, called an assembler, into machine-language instructions. In practice, every computer language above assembly language is a high-level language.
Each computer program contains numerous instructions, which tell the computer what precisely it must do, to achieve the desired goal of the program. The computer runs a particular computer program by executing the instructions contained in that program.
Frequently the goal of the program is to solve complicated real world problems which can be described in mathematical terms. Modern microprocessors permit such programs to be rapidly executed using techniques such as pipelining and speculative execution.
Modern microprocessors use a design technique called a pipeline, in which the output of one process serves as input to a second, the output of the second process serves as input to a third, and so on, often with more than one process occurring during a particular computer clock cycle.
Pipelining is a method used in some microprocessors of fetching and decoding instructions in which, at any given time, several program instructions are in various stages of being fetched or decoded. Ideally, pipelining speeds execution time by ensuring that the microprocessor does not have to wait for instructions; when it completes execution of one instruction, the next is ready and waiting. In order to have the next instruction that is to be executed ready and waiting in the pipeline, the microprocessor must somehow predict what that instruction will be.
Branch prediction is a technique used in some microprocessors to guess whether or not a particular path in a program—called a branch—will be taken during program execution, and to fetch instructions from the appropriate location. When a branch instruction is executed, it and the next instruction executed are stored in a buffer. This information is used to predict which way the instruction will branch the next time it is executed. When the prediction is correct, executing a branch does not cause a pipeline break, so the system is not slowed down by the need to retrieve the next instruction. When the prediction is incorrect, a pipeline break does occur, and the system is slowed down because it then needs to locate and retrieve the next instruction. Such incorrect predictions are sometimes called branch mispredictions.
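One classic prediction scheme of the kind described above is the two-bit saturating counter, in which recent branch outcomes are remembered and used to guess the next one. The following is an illustrative software model of that scheme under common textbook assumptions, not a description of any particular microprocessor's mechanism:

```python
class TwoBitPredictor:
    """Simplified model of a 2-bit saturating-counter branch predictor.

    States 0-1 predict "not taken", states 2-3 predict "taken"; each
    actual outcome moves the counter one step toward that outcome.
    """

    def __init__(self):
        self.state = 2  # start in the weakly "taken" state

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)


def misprediction_rate(outcomes):
    """Run the predictor over a sequence of actual branch outcomes
    (True = taken) and return the fraction of wrong guesses."""
    predictor = TwoBitPredictor()
    wrong = 0
    for taken in outcomes:
        if predictor.predict() != taken:
            wrong += 1
        predictor.update(taken)
    return wrong / len(outcomes)


# A loop branch taken 9 times and then falling through once is
# mispredicted only on the fall-through, so the rate stays low.
print(misprediction_rate(([True] * 9 + [False]) * 10))
```

Because the counter must be wrong twice in a row before its prediction flips, a single anomalous outcome (such as a loop exit) does not disturb an otherwise stable prediction.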
Speculative execution is a technique used in some microprocessors in which certain instructions are executed and results made available before the results are actually needed by the program, so that the results are ready and waiting when the program needs them. Which instructions are to be executed speculatively is based on the guesses made about which branches in the program will be taken. In general, when a branch is mispredicted and instructions speculatively executed based on that incorrect branch prediction, the results of the speculatively executed instructions must be discarded, and consequently the computer time and resources used to obtain the now discarded results are wasted.
Real-world problems frequently can be expressed mathematically using a group of equations generally referred to as a system of simultaneous equations. Those equations, in turn, can be expressed in what is sometimes called matrix form, described more fully below. A computer can then be used to manipulate and perform calculations with the matrices, and solve the problem.
A matrix is a set of numbers arranged in rows and columns so as to form a rectangular array. The numbers are called the elements of the matrix. If there are m rows and n columns, the matrix is said to be an “m by n” matrix, written “m×n”. For example,
    [ 1   3  8 ]
    [ 2  -4  5 ]
is a 2×3 matrix; it has two rows, and three columns. A matrix with m rows and m columns is called a square matrix of order m. An ordinary number can be regarded as a 1×1 matrix; thus, the number 3 can be thought of as the matrix [3].
In a common notation, a capital letter denotes a matrix, and the corresponding small letter with a double subscript denotes an element of that matrix. Thus, a_ij is the element in the ith row and the jth column of the matrix A. If A is the 2×3 matrix shown above, then a_11 equals 1, a_12 equals 3, a_13 equals 8, a_21 equals 2, a_22 equals −4, and a_23 equals 5. Under certain conditions described more fully below, matrices can be added and multiplied as individual entities.
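In a programming language with zero-based indexing, the mathematical a_ij maps to a row-and-column lookup at positions i−1 and j−1. A brief Python sketch of this convention, using the 2×3 matrix above:

```python
# The 2x3 matrix from the text, stored as a list of rows.
A = [[1, 3, 8],
     [2, -4, 5]]

def elem(M, i, j):
    """Return a_ij using the 1-based row/column convention of the text."""
    return M[i - 1][j - 1]

print(elem(A, 1, 1))  # a_11
print(elem(A, 2, 2))  # a_22
```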
Matrices occur naturally in systems of simultaneous equations. In the following system for the unknowns x and y,
    2x + 3y = 7
    3x + 4y = 10

the array of numbers

    [ 2  3   7 ]
    [ 3  4  10 ]
is a matrix whose elements are the coefficients of the unknowns together with the constants on the right-hand sides. The solution of the equations depends entirely on these numbers and on their particular arrangement. If 7 and 10 were interchanged, the solution would not be the same.
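For a 2×2 system, the dependence of the solution on these numbers can be made explicit with Cramer's rule. A small sketch solving the system above (Cramer's rule is standard background, not something the text above prescribes):

```python
def solve2(a, b, c, d, e, f):
    """Solve the system  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule.

    Assumes the coefficient determinant a*d - b*c is non-zero.
    """
    det = a * d - b * c
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

# 2x + 3y = 7, 3x + 4y = 10
print(solve2(2, 3, 3, 4, 7, 10))   # the unique solution (x, y)

# Interchanging the constants 7 and 10 changes the solution entirely.
print(solve2(2, 3, 3, 4, 10, 7))
```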
A matrix A can be multiplied by an ordinary number c, which is called a scalar. The product is denoted by cA or Ac, and is the matrix whose elements are c·a_ij.
The multiplication of a matrix A by a matrix B to yield a matrix C is defined only when the number of columns of the matrix A equals the number of rows of the matrix B. To determine the element c_ij, which is in the ith row and the jth column of the product, the first element in the ith row of A is multiplied by the first element in the jth column of B, the second element in the row by the second element in the column, and so on until the last element in the row is multiplied by the last element of the column; the sum of all these products gives the element c_ij. In symbols, for the situation where A has n columns and B has n rows,

    c_ij = a_i1*b_1j + a_i2*b_2j + . . . + a_in*b_nj.
The matrix C has as many rows as A, and as many columns as B. Thus if A has m rows and n columns, and B has n rows and p columns, then C has m rows and p columns.
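The definition translates directly into a triple loop over rows of A, columns of B, and the summation index. A plain Python sketch, with no libraries assumed:

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (both lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    # The product is defined only when columns of A equal rows of B.
    assert all(len(row) == n for row in A)
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            # c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj
            C[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return C

A = [[1, 3, 8],
     [2, -4, 5]]
B = [[1, 0],
     [0, 1],
     [2, 1]]
print(matmul(A, B))  # a 2x2 result: 2 rows from A, 2 columns from B
```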
When B has only one column, that is, p=1, B is sometimes referred to as a column vector, or simply a vector. In a common notation, a single subscript is used to denote elements of a vector. Thus, v_i is the ith element of the vector V.
The multiplication of a matrix A by a vector V to yield a vector D is defined only when the number of columns of the matrix A equals the number of elements of the vector V. Thus, multiplying an m×n matrix A by an n-element vector V yields an m-element vector D, the elements of which are indicated below, where the symbol “*” denotes multiplication.
    D = A * V =

    [ a_11  a_12  a_13  ...  a_1n ]   [ v_1 ]   [ a_11*v_1 + a_12*v_2 + ... + a_1n*v_n ]
    [ a_21  a_22  a_23  ...  a_2n ]   [ v_2 ]   [ a_21*v_1 + a_22*v_2 + ... + a_2n*v_n ]
    [  ...                   ...  ] * [ ... ] = [                  ...                 ]
    [ a_m1  a_m2  a_m3  ...  a_mn ]   [ v_n ]   [ a_m1*v_1 + a_m2*v_2 + ... + a_mn*v_n ]
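The same element-by-element recipe, specialized to a one-column B, gives a compact sketch in Python:

```python
def matvec(A, V):
    """Multiply an m x n matrix A (list of rows) by an n-element vector V,
    yielding the m-element vector D described in the text."""
    # Defined only when columns of A equal the number of elements of V.
    assert all(len(row) == len(V) for row in A)
    # Each d_i is the dot product of the ith row of A with V.
    return [sum(a * v for a, v in zip(row, V)) for row in A]

A = [[1, 3, 8],
     [2, -4, 5]]
V = [1, 0, 2]
print(matvec(A, V))  # a 2-element result vector
```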
The individual elements of a matrix may be zero or non-zero. A matrix in which the non-zero elements amount to a very small percentage of the total number of elements is sometimes referred to as a sparse matrix. Sparse matrices occur frequently in practice; problems such as structural analysis and network flow analysis commonly give rise to them.
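One common way to exploit sparsity is the compressed sparse row (CSR) layout, which stores only the non-zero elements and their column positions, so that a matrix-vector product skips the zero elements entirely. The following is a minimal sketch of that standard layout; it is general background, not the particular method of this patent:

```python
def to_csr(A):
    """Build CSR arrays (values, column indices, row pointers) from a
    dense matrix given as a list of rows."""
    values, cols, rowptr = [], [], [0]
    for row in A:
        for j, x in enumerate(row):
            if x != 0:
                values.append(x)
                cols.append(j)
        rowptr.append(len(values))  # end of this row's entries
    return values, cols, rowptr


def csr_matvec(values, cols, rowptr, V):
    """Multiply a CSR-stored matrix by vector V, touching only non-zeros."""
    D = []
    for i in range(len(rowptr) - 1):
        s = 0
        for k in range(rowptr[i], rowptr[i + 1]):
            s += values[k] * V[cols[k]]
        D.append(s)
    return D


A = [[0, 0, 3, 0],
     [0, 5, 0, 0],
     [0, 0, 0, 0],
     [2, 0, 0, 4]]
vals, cols, ptr = to_csr(A)
print(vals)  # only the 4 non-zero elements are stored
print(csr_matvec(vals, cols, ptr, [1, 1, 1, 1]))
```

For this 4×4 example only 4 of the 16 elements are stored and multiplied, and the saving grows with the proportion of zeros.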
Title: Computer product and method for sparse matrices
Inventors: Doshi, Gautam; Golliver, Roger; Norin, Bob
Attorneys: Blakely, Sokoloff, Taylor & Zafman LLP
Assignee: Intel Corporation