High level synthesis method, thread generated using the...

Computer-aided design and analysis of circuits and semiconductor – Nanotechnology related integrated circuit design

Reexamination Certificate


Details

C716S030000

Reexamination Certificate

active

06704914

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a high level synthesis method for automatically generating a logic circuit of a register transfer level (RTL) from an operation description, to a thread produced using the high level synthesis method, and to a method for producing a circuit including the thread. The present invention is especially effective for the design of an ASIC (Application Specific Integrated Circuit) or other circuits which must be designed in a short period of time.
2. Description of the Related Art
A high level synthesis method automatically generates an RTL logic circuit, including a hardware structure (registers, calculators and the like), the data flow between registers in each operation cycle, and the associated processing, from an operation description that specifies only the processing operations and contains no information on the hardware structure. Such a high level synthesis method is disclosed in, for example, Japanese Laid-Open Publication No. 6-101141. Hereinafter, the flow of such a conventional high level synthesis method is described.
(i) Conversion of an operation description to a control data flowgraph (CDFG)
In high level synthesis, an operation description is first analyzed and converted to a model, referred to as a CDFG, which represents the dependency relationships among calculations, the inputs to and outputs from external devices, and the memory access execution order.
FIG. 1 shows an exemplary operation description, which is performed as follows. In a description 101, the memory content corresponding to address (adr) is substituted for a variable a. In a description 102, the memory content corresponding to address (adr+1) is substituted for a variable b. In a description 103, the memory content corresponding to address (adr−1) is substituted for a variable c. In a description 104, the value of (a+b+c) is substituted for a variable d.
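For reference, the four descriptions above can be sketched in ordinary software form (Python here, purely for illustration; the patent itself assumes a hardware operation description language):

```python
def operation_description(mem, adr):
    """Software restatement of descriptions 101-104 of FIG. 1."""
    a = mem[adr]        # description 101: a <- memory[adr]
    b = mem[adr + 1]    # description 102: b <- memory[adr+1]
    c = mem[adr - 1]    # description 103: c <- memory[adr-1]
    d = a + b + c       # description 104: d <- a + b + c
    return d
```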
FIG. 2 shows an exemplary CDFG obtained by converting the operation description shown in FIG. 1. In the CDFG shown in FIG. 2, a node 105 represents an input to the circuit from an external device, and a node 106 represents an output from the circuit to an external device. Nodes 107 through 109 each represent a read request to the memory (memory read request or memory access request), and nodes 110 through 112 each represent data read from the memory. A node 133 represents an increment, and a node 134 represents a decrement. Nodes 135 and 136 each represent an addition.
Branches 113 and 114, each represented by a chain line in FIG. 2, are data dependency edges (or control dependency edges). The data dependency edge 113 connects the node 107 to the node 108, and the data dependency edge 114 connects the node 108 to the node 109. A node to which another node is connected by such an edge must be scheduled to a step later than the step to which that other node is scheduled. For example, in the scheduling stage described below, the node 108 is scheduled to a step later than the step to which the node 107 is scheduled. When a pipeline-accessible memory is used, the memory read requests 107 through 109 are executed in the same order as in the operation description and are scheduled to different steps from each other. Herein, the term “pipeline-accessible memory” is defined to refer to a memory which can accept an access request in each clock cycle. In the operation description shown in FIG. 1, memory read is performed three times; the data dependency edges 113 and 114 are provided so that these three memory read operations are performed in different steps, in the order described.
Branches 117 through 119, each represented by a dashed line in FIG. 2, are data dependency edges. The data dependency edge 117 connects the node 107 to the node 110, the data dependency edge 118 connects the node 108 to the node 111, and the data dependency edge 119 connects the node 109 to the node 112. A node to which another node is connected by such an edge must be scheduled to a step later by n steps than the step to which that other node is scheduled, where “n” is the relative step number 120, 121 or 122 associated with each data dependency edge. For example, in the scheduling stage described below, the node 110 is scheduled to a step two steps later than the step to which the node 107 is scheduled.
FIG. 3 is a timing diagram illustrating the read timing of a pipeline-accessible memory. When, as shown in FIG. 3, a memory is used which causes the read data RDATA to become valid two clock cycles after a rise of the memory read request signal RREQ, the data dependency edges 117 through 119 are each given a relative step number of 2. In this specification, a memory which causes read data to become valid n clock cycles after a rise of the memory read request signal, and which is also pipeline-accessible, is referred to as a “pipeline memory having a latency of n”.
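The behavior of such a pipeline memory can be modeled with a short sketch: a request may be issued every clock cycle, and the corresponding data emerges a fixed number of cycles later. This is a minimal illustrative model, not the patent's own implementation; the class and method names are hypothetical.

```python
from collections import deque

class PipelineMemory:
    """Toy model of a pipeline memory having a latency of n: a read
    request may be issued in every clock cycle, and the data for a
    request issued in cycle t becomes valid in cycle t + n."""
    def __init__(self, contents, latency=2):
        self.contents = contents
        # One slot per in-flight cycle; None means "no data this cycle".
        self.pipe = deque([None] * latency)

    def clock(self, rreq_addr=None):
        """Advance one clock cycle. Pass an address to assert RREQ,
        or None for an idle cycle. Returns RDATA (or None)."""
        self.pipe.append(
            self.contents[rreq_addr] if rreq_addr is not None else None)
        return self.pipe.popleft()
```

Issuing requests for adr, adr+1 and adr−1 in three consecutive cycles, as the CDFG of FIG. 2 requires, yields the three data words in cycles 2, 3 and 4 when the latency is 2.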
Branches 123 through 132, each represented by a solid line in FIG. 2, are also data dependency edges. The data dependency edge 123 connects the node 107, representing a read request to the memory, to the node 105, representing a memory address input from an external device. The data dependency edge 126 connects the node 108 (a memory read request) to the node 133 (an increment), and the data dependency edge 124 connects the node 133 to the node 105. Likewise, the data dependency edge 127 connects the node 109 (a memory read request) to the node 134 (a decrement), and the data dependency edge 125 connects the node 134 to the node 105. The data dependency edge 128 connects the node 110 (data read from the memory) to the node 135 (an addition), and the data dependency edge 129 connects the node 111 (data read from the memory) to the node 135. The data dependency edge 130 connects the node 135 to the node 136 (another addition), and the data dependency edge 131 connects the node 112 (data read from the memory) to the node 136. Finally, the data dependency edge 132 connects the node 136 to the node 106, representing an output to an external device, through which the processing result is output.
(ii) Scheduling
In the scheduling stage, each node of the CDFG is assigned to a time slot, referred to as a step, which corresponds to a state of a controller (a finite state machine).
FIG. 4 shows an exemplary result obtained by scheduling the CDFG shown in FIG. 2. In this example, each node is scheduled to one of five steps, steps 0 through 4. Calculations scheduled to different steps can share one calculator. For example, in FIG. 4, the node 135, representing an addition, and the node 136, also representing an addition, are scheduled to different steps from each other and thus can share one calculator. In the scheduling stage, the nodes are assigned to steps such that the number of hardware devices is as small as possible, so as to reduce the cost.
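The scheduling constraints described above can be sketched as a longest-path (ASAP, as-soon-as-possible) computation over the CDFG. The encoding and function names below are illustrative, not the patent's own: each edge (src, dst, n) says that dst must be scheduled at least n steps after src, with n = 2 modeling the pipeline memory latency of FIG. 3 and n = 1 the required ordering between memory read requests.

```python
# Node numbers follow FIG. 2.
EDGES = [
    (105, 107, 0),                                # address input -> first read request
    (105, 133, 0), (133, 108, 0),                 # adr -> increment -> second request
    (105, 134, 0), (134, 109, 0),                 # adr -> decrement -> third request
    (107, 108, 1), (108, 109, 1),                 # request ordering (edges 113, 114)
    (107, 110, 2), (108, 111, 2), (109, 112, 2),  # memory latency 2 (edges 117-119)
    (110, 135, 0), (111, 135, 0),                 # a + b
    (135, 136, 1), (112, 136, 0),                 # (a+b) + c, in a later step
    (136, 106, 0),                                # result to output
]

def asap_schedule(edges):
    """Assign each node the earliest step satisfying every edge
    constraint, by relaxing until a fixed point is reached."""
    step = {n: 0 for e in edges for n in e[:2]}
    changed = True
    while changed:
        changed = False
        for src, dst, n in edges:
            if step[dst] < step[src] + n:
                step[dst] = step[src] + n
                changed = True
    return step
```

With these edges the schedule occupies five steps, 0 through 4, matching FIG. 4; the constraint of 1 step between the two additions (edge 130) places them in different steps so that they can share one adder.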
(iii) Allocation
In the allocation stage, the calculators, registers, and input and output pins required to execute the scheduled CDFG are generated. Nodes representing calculations in the CDFG are allocated to the calculators; data dependency edges crossing step boundaries are allocated to the registers; and external inputs and outputs and memory accesses are allocated to the input and output pins.
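The sharing rule of the allocation stage can be sketched as a greedy binding of operations to functional units: an existing unit of the right kind is reused whenever its previously bound operations all occupy other steps. This is a hypothetical sketch, not the patent's own algorithm; names are illustrative.

```python
def allocate(ops):
    """ops: list of (node, kind, step) tuples.
    Returns {node: unit_name}, reusing a unit of the same kind
    when none of its bound operations occupies the same step."""
    units = []                       # (unit_name, kind, set_of_busy_steps)
    binding = {}
    for node, kind, step in sorted(ops, key=lambda o: o[2]):
        for name, ukind, busy in units:
            if ukind == kind and step not in busy:
                busy.add(step)       # share this existing calculator
                binding[node] = name
                break
        else:                        # no free unit of this kind: make one
            name = f"{kind}_{sum(u[1] == kind for u in units)}"
            units.append((name, kind, {step}))
            binding[node] = name
    return binding
```

For the schedule of FIG. 4, the two additions (nodes 135 and 136, steps 3 and 4) bind to a single adder, while the increment and decrement each get their own unit, mirroring the single adder 139 generated in FIG. 5.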
FIG. 5 shows an exemplary manner of allocation. In this example, an incrementor 137, a decrementor 138 and an adder 139 are generated. As represented by a dashed lin
