Keio University
2007 Academic Year, Fall Semester
Computer Architecture
Lecture 3, October 22: Processors: Arithmetic
Outline of This Lecture
- Integer Arithmetic
- Two's-Complement Representation
- Addition: Half Adders, Full Adders, Carry-Ripple
- Multiplication: Carry-Save
- Division
- Handling Overflow
- Floating Point Arithmetic
- Representation: Significand, Exponent and Base (and Sign)
- Advantages and disadvantages
- Operations
- Exceptions
- Final Thoughts
- Homework
Integer Arithmetic
Let's look at some simple eight-bit, two's-complement binary numbers:
Decimal | Binary
0 | 00000000
1 | 00000001
2 | 00000010
3 | 00000011
4 | 00000100
8 | 00001000
16 | 00010000
32 | 00100000
64 | 01000000
65 | 01000001
127 | 01111111
-1 | 11111111
-2 | 11111110
-3 | 11111101
-128 | 10000000
The leftmost bit is called the sign bit. To negate a
number, flip every bit, and add one. This representation of signed
numbers is called two's complement. Two's complement has a
couple of advantages: a) the addition circuit for signed and unsigned
numbers is exactly the same, and b) there is no redundant
representation of zero, as in one's complement
or sign-magnitude representation.
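In C, the negation rule looks like this; a minimal sketch, using an
unsigned eight-bit type so the bit pattern is easy to read (the values
come from the table above):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t x = 3;                   /* 00000011 */
        uint8_t neg = (uint8_t)(~x + 1); /* flip every bit, add one */
        printf("%d -> 0x%02X\n", x, neg); /* 3 -> 0xFD, i.e. 11111101 = -3 */
        return 0;
    }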
Integers can also be represented using a biased representation,
in which the stored bit pattern is the value plus a fixed bias;
this allows negative numbers to be represented without a sign bit.
Although the bias can be any number, in practice it is generally a
power of two (or a power of two minus one). Here are some eight-bit
numbers with a bias of 127:
Decimal | Binary
-127 | 00000000
-126 | 00000001
-125 | 00000010
-124 | 00000011
-123 | 00000100
-119 | 00001000
-111 | 00010000
-95 | 00100000
-63 | 01000000
-62 | 01000001
0 | 01111111
1 | 10000000
2 | 10000001
3 | 10000010
128 | 11111111
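As a concrete sketch of the encoding in C (the function names here are
invented for illustration), with a bias of 127:

    #include <stdio.h>
    #include <stdint.h>

    #define BIAS 127

    /* Store a value in [-127, 128] by adding the bias. */
    static uint8_t encode_biased(int value) { return (uint8_t)(value + BIAS); }

    /* Recover the value by subtracting the bias. */
    static int decode_biased(uint8_t bits) { return (int)bits - BIAS; }

    int main(void) {
        printf("%d\n", decode_biased(0x00)); /* -127: bit pattern 00000000 */
        printf("%d\n", decode_biased(0x7F)); /*    0: bit pattern 01111111 */
        printf("%d\n", encode_biased(128));  /*  255: stored as 11111111 */
        return 0;
    }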
We will see biased numbers again when we discuss floating point, below.
Addition
Let's look at an addition:
      19 | 00010011
(+)   14 | 00001110
(=)   33 | 00100001
Notice how the carry moves up the word, the same as in decimal
arithmetic. The simplest form of adder is known as a ripple
carry adder. Arithmetic circuits are usually formed from two
simple types of blocks: the half adder and the full
adder. The half adder takes in two inputs and generates two
outputs: the modulo two sum of the input bits, and
the carry.
Half-Adder Logic Table
A | B | C (carry) | S (sum)
0 | 0 | 0 | 0 |
0 | 1 | 0 | 1 |
1 | 0 | 0 | 1 |
1 | 1 | 1 | 0 |
And the full adder:
Full-Adder Logic Table
A | B | Cin | Cout | S
0 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 0 | 1 |
1 | 0 | 0 | 0 | 1 |
1 | 1 | 0 | 1 | 0 |
0 | 0 | 1 | 0 | 1 |
0 | 1 | 1 | 1 | 0 |
1 | 0 | 1 | 1 | 0 |
1 | 1 | 1 | 1 | 1 |
From these two types of blocks, we can easily construct a ripple carry
adder, by connecting the carry out of one full adder to the carry in
of the next. The circuit is very simple. The problem is that it is
slow: O(n) gate times are required to add two n-bit
numbers.
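The ripple-carry adder is easy to simulate in software. This C sketch
simply mirrors the truth tables above, chaining eight full adders, one
bit per loop iteration:

    #include <stdio.h>
    #include <stdint.h>

    /* One full adder: sum is the modulo-two sum of a, b, and cin;
       cout is one when at least two of the inputs are one. */
    static void full_adder(int a, int b, int cin, int *sum, int *cout) {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (a & cin) | (b & cin);
    }

    /* Ripple-carry addition of two 8-bit numbers. */
    static uint8_t ripple_add(uint8_t x, uint8_t y) {
        uint8_t result = 0;
        int carry = 0;
        for (int i = 0; i < 8; i++) {  /* O(n) stages for n bits */
            int sum;
            full_adder((x >> i) & 1, (y >> i) & 1, carry, &sum, &carry);
            result |= (uint8_t)(sum << i);
        }
        return result;
    }

    int main(void) {
        printf("%d\n", ripple_add(19, 14)); /* 33, as in the example above */
        return 0;
    }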
There are a number of other types of adders:
- Carry lookahead
- Carry select
- Conditional sum
And still others; we will not go into them in detail here.
Multiplication
The most obvious way to do multiplication is simply to repeatedly
shift and add. This approach also happens to be the slowest way to
multiply. CPUs generally contain a specialized circuit for
multiplication, known as a carry-save multiplier, which
operates by overlapping the propagation of carries with the next stage
of the addition. Using a carry-save multiplier, the latency of a
multiplication is roughly twice that of an addition.
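For reference, here is a C sketch of the naive shift-and-add algorithm.
This illustrates the algorithm only; a carry-save multiplier reorganizes
these additions in hardware rather than performing them one at a time:

    #include <stdio.h>
    #include <stdint.h>

    /* Shift-and-add multiplication: for each set bit of the multiplier,
       add the appropriately shifted multiplicand to a running product. */
    static uint16_t shift_add_multiply(uint8_t multiplicand, uint8_t multiplier) {
        uint16_t product = 0;
        uint16_t shifted = multiplicand;
        while (multiplier != 0) {
            if (multiplier & 1)
                product += shifted;  /* one addition per set bit */
            shifted <<= 1;
            multiplier >>= 1;
        }
        return product;
    }

    int main(void) {
        printf("%d\n", shift_add_multiply(19, 14)); /* prints 266 */
        return 0;
    }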
Floating Point Arithmetic
Representation
A floating point number consists of three parts:
- The sign bit
- The significand
- The exponent
We will assume that the base is two. The significand is
the fraction; because floating-point numbers are stored in
a normalized representation, the leading bit of the
significand is always one, and we do not have to store it in the
register. In IEEE 32-bit floating-point numbers,
the exponent is 8 bits, with a bias of 127.
Consider, for example, the 32-bit word
0 10000000 01011011111100001010100 (sign, exponent, significand).
The sign bit is zero, so the number is positive. The
significand is 01011011111100001010100, but that
doesn't include the assumed leading one, known as the hidden
bit. With the binary point and the hidden bit, our number is
1.01011011111100001010100. The exponent is 10000000, which, after
removing the bias of 127, represents 1. So, we need to move the binary
point one digit to the right: 10.1011011111100001010100. This number
now represents
2^1 + 2^-1 + 2^-3 + 2^-4 + ...
= 2 + 0.5 + 0.125 + 0.0625 + ...
= 2.71828175 (the single-precision approximation of e)
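These fields can be pulled apart in C. The sketch below extracts the
sign, exponent, and significand of a float, assuming the IEEE 32-bit
layout described above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float f = 2.71828182845904523536f; /* e, rounded to single precision */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);    /* reinterpret the float's bits */

        unsigned sign        = bits >> 31;
        unsigned exponent    = (bits >> 23) & 0xFF; /* biased by 127 */
        unsigned significand = bits & 0x7FFFFF;     /* hidden bit not stored */

        printf("sign=%u exponent=%u (unbiased %d) significand=0x%06X\n",
               sign, exponent, (int)exponent - 127, significand);
        printf("value=%.8f\n", f); /* 2.71828175, the worked example above */
        return 0;
    }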
Advantages and Disadvantages
- Advantage: dynamic range
- Disadvantages: less precision, roundoff error,
complex implementation
Operating on Floating-Point Numbers
Addition of floating-point numbers requires that both numbers have the
same exponent; usually, the larger one is left normalized and
the smaller one is aligned to match. Then standard integer addition
can be performed, after which the result must be renormalized
(have its exponent adjusted so that the high-order bit is one). The
steps are as follows:
- Subtract exponents
- Align significands
- Add (or subtract) significands, produce sign of result
- Normalize result
- Round
- Determine exception flags and special values
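Here is a much-simplified sketch of these steps in C: positive numbers
only, the hidden bit made explicit in bit 23, and rounding, exceptions,
and special values all omitted:

    #include <stdio.h>
    #include <stdint.h>

    static void fp_add(uint32_t sig_a, int exp_a,
                       uint32_t sig_b, int exp_b,
                       uint32_t *sig_r, int *exp_r) {
        /* Steps 1-2: subtract exponents, align the smaller significand
           (assumes the exponent difference is less than 32). */
        if (exp_a < exp_b) {
            uint32_t ts = sig_a; int te = exp_a;
            sig_a = sig_b; exp_a = exp_b;
            sig_b = ts;    exp_b = te;
        }
        sig_b >>= (exp_a - exp_b);

        /* Step 3: add significands. */
        uint32_t sum = sig_a + sig_b;
        int exp = exp_a;

        /* Step 4: renormalize if the sum carried out past bit 23. */
        if (sum >> 24) { sum >>= 1; exp++; }

        *sig_r = sum;
        *exp_r = exp;
    }

    int main(void) {
        uint32_t sig; int exp;
        /* 1.5 * 2^1 (= 3.0) plus 1.0 * 2^0 (= 1.0); expect 1.0 * 2^2 (= 4.0). */
        fp_add(0xC00000, 1, 0x800000, 0, &sig, &exp);
        printf("significand=0x%06X exponent=%d\n", (unsigned)sig, exp);
        return 0;
    }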
Multiplication of floating-point numbers does not require the initial
alignment step. The steps are as follows (again sketched after the list):
- Initial multiplication
  - Multiply magnitudes (the result is 2m bits, for two m-bit significands)
  - Add exponents
  - Generate sign
- Normalization
- Rounding
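A matching sketch of the multiplication steps, under the same
simplifications (positive inputs, explicit hidden bit, and truncation in
place of true rounding):

    #include <stdio.h>
    #include <stdint.h>

    static void fp_mul(uint32_t sig_a, int exp_a,
                       uint32_t sig_b, int exp_b,
                       uint32_t *sig_r, int *exp_r) {
        uint64_t prod = (uint64_t)sig_a * sig_b; /* 2m bits for m-bit inputs */
        int exp = exp_a + exp_b;

        /* The product of two significands in [1, 2) lies in [1, 4),
           so at most one normalization shift is needed. */
        if (prod >> 47) { prod >>= 1; exp++; }

        *sig_r = (uint32_t)(prod >> 23); /* truncate back to 24 bits */
        *exp_r = exp;
    }

    int main(void) {
        uint32_t sig; int exp;
        /* 1.5 * 2^1 (= 3.0) times 1.5 * 2^0 (= 1.5); expect 1.125 * 2^2 (= 4.5). */
        fp_mul(0xC00000, 1, 0xC00000, 0, &sig, &exp);
        printf("significand=0x%06X exponent=%d\n", (unsigned)sig, exp);
        return 0;
    }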
Exceptions
So far, we have ignored errors, but there are several important cases
that must be tracked in arithmetic:
- Integer or floating point overflow
- Floating point underflow
- Divide by zero
These exceptions behave differently in different processors. In some
processors, a flag is set; in others, an exception is thrown. Some
processors always throw an exception; others do so under program
control, either via a processor control flag that is set by the
programmer, or by using different instructions that do or do not throw
exceptions.
Final Thoughts
- Arithmetic is an important area of research and development in its
own right. It is still possible to build an entire career around
studying and improving computer arithmetic.
- Writing any large-scale scientific program that does simulations
requires an understanding of how floating-point precision behaves.
- It is often both faster and more accurate to use integers
as fixed-point numbers, rather than bothering
with floating point (see the sketch after this list).
- Some programming languages (such as Lisp and Mathematica) offer
"bignum" (arbitrary-precision integer) or "rational" (fraction) number
types. However, these are constructs of the language and runtime
environment, not supported in the hardware.
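To illustrate the fixed-point remark above, here is a minimal 16.16
fixed-point sketch in C; the format and the names are just one common
convention, not a standard API:

    #include <stdio.h>
    #include <stdint.h>

    /* 16.16 fixed point: the low 16 bits hold the fraction. */
    typedef int32_t fixed_t;
    #define FRAC_BITS 16
    #define TO_FIXED(x)  ((fixed_t)((x) * (1 << FRAC_BITS)))
    #define TO_DOUBLE(x) ((double)(x) / (1 << FRAC_BITS))

    /* Addition is ordinary integer addition; multiplication needs a
       widening multiply and a shift to discard the extra fraction bits. */
    static fixed_t fixed_mul(fixed_t a, fixed_t b) {
        return (fixed_t)(((int64_t)a * b) >> FRAC_BITS);
    }

    int main(void) {
        fixed_t a = TO_FIXED(2.5), b = TO_FIXED(1.25);
        printf("%f\n", TO_DOUBLE(a + b));           /* 3.750000 */
        printf("%f\n", TO_DOUBLE(fixed_mul(a, b))); /* 3.125000 */
        return 0;
    }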
Homework
This week's homework (submit via email):
- Tell me what decimal values the following floating point numbers represent:
- 00000000000000000000000000000000
- 00111111100000000000000000000000
- 10111111100000000000000000000000
- 00111111000000000000000000000000
- 01000000000000000000000000000000
- 01000000010000000000000000000000
- 01000000010000000000000000000001
- 01000000010010010000111111011010
- Write a program to test the speed of your processor on integer
arithmetic. Test the four basic functions: add, subtract, multiply,
divide. If you are using a Unix-like machine (or Cygwin), you may
use the "time" command.
- Write a program to test the speed of your processor on
floating-point arithmetic. Test the four basic functions: add,
subtract, multiply, divide.
- Test the accuracy and precision of floating point arithmetic on
your processor.
Next Lecture
Next lecture:
Lecture 4, October 29: Processors: Basics of Pipelining
Readings for next time:
- Follow-up from this lecture: Appendix I (on the CD)
- For next time: Appendix A.1 and A.2
Additional Information