Keio University
2015 Academic Year, Fall Semester

Computer Architecture

Fall Semester 2015, Monday 3rd Period
Course code: 35010 / 2 credits
Location: SFC
Format: Lecture
Instructor: Rodney Van Meter
E-mail: rdv@sfc.keio.ac.jp

Lecture 7, November 12: Processors: Basics of Pipelining

Hennessy and Patterson Appendix A slides!

Stages of Instruction Execution

This model of how an instruction is executed is tilted slightly toward the MIPS architecture, of which Hennessy was one of the instigators. However, the actions in any CPU would be similar.

  1. Instruction Fetch cycle (IF)
    Fetch the current instruction from memory, using the program counter (PC) as the address; add 4 to the PC and store the result (in MIPS, the tentative new PC goes into an internal register called NPC, Next PC).
  2. Instruction Decode/register fetch cycle (ID)
    Determine which instruction we are holding, fetch the register values (always two in this instruction set), and compare the two registers, setting the EQUAL flag if they are equal.
  3. Execution/effective address cycle (EX)
    Depending on the instruction type: for a memory reference (LOAD or STORE), compute the effective address as base register plus offset; for an ALU operation, operate on the two register values (or a register and an immediate); for a branch, compute the branch target address.
  4. Memory access (MEM)
    If the instruction is a LOAD, read memory at the effective address; if it is a STORE, write the register value to that address; otherwise do nothing. (In MIPS, update the PC here using either NPC or the output of the ALU operation.)
  5. Write-Back cycle (WB)
    If the instruction was LOAD, write the value fetched from memory into the matching register; if it was an ALU operation, write the result to the register.
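
The division of work is easier to see in code. Below is a minimal C sketch (my own illustration; the opcodes and struct fields are hypothetical, not from the text) that carries one instruction of a toy MIPS-like subset through the five steps in order. A pipelined machine would instead overlap these five steps across five different instructions.

    #include <stdint.h>
    #include <stdio.h>

    enum op { ADD, LOAD, STORE, BEQ };              /* toy instruction subset */

    struct instr { enum op op; int rd, rs, rt; int32_t imm; };

    static int32_t reg[32];                         /* register file           */
    static int32_t mem[256];                        /* toy word-addressed RAM  */
    static uint32_t pc;                             /* program counter         */

    static void execute_one(struct instr ins)
    {
        /* IF: the fetched instruction is passed in; compute the tentative NPC. */
        uint32_t npc = pc + 4;

        /* ID: read both source registers and set the EQUAL flag. */
        int32_t a = reg[ins.rs];
        int32_t b = reg[ins.rt];
        int equal = (a == b);

        /* EX: ALU operation, effective address, or branch target. */
        int32_t alu = 0;
        switch (ins.op) {
        case ADD:   alu = a + b;                  break;
        case LOAD:
        case STORE: alu = a + ins.imm;            break; /* base + offset  */
        case BEQ:   alu = (int32_t)npc + ins.imm; break; /* branch target  */
        }

        /* MEM: data memory access; in MIPS, also select the next PC here. */
        if (ins.op == LOAD)  b = mem[alu];
        if (ins.op == STORE) mem[alu] = b;
        pc = (ins.op == BEQ && equal) ? (uint32_t)alu : npc;

        /* WB: write back the ALU result or the loaded value. */
        if (ins.op == ADD)  reg[ins.rd] = alu;
        if (ins.op == LOAD) reg[ins.rd] = b;
    }

    int main(void)
    {
        reg[1] = 40; reg[2] = 2;
        struct instr add = { ADD, 3, 1, 2, 0 };     /* add r3, r1, r2 */
        execute_one(add);
        printf("r3 = %d\n", reg[3]);                /* prints r3 = 42 */
        return 0;
    }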

The MIPS Pipeline

The MIPS Pipeline (Fig. A.17 in the text)

Pipeline Hazards

Sometimes conflicts occur between the different stages of the pipeline. Such a condition is called a pipeline hazard. There are three types of hazards:

  1. Structural hazards, when two instructions need the same hardware resource in the same cycle.
  2. Data hazards, when an instruction needs the result of an earlier instruction that is still in the pipeline.
  3. Control hazards, when a branch or jump changes the PC, so the instructions fetched behind it may be the wrong ones.

Hazards result in pipeline stalls or pipeline bubbles.
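
To make data hazards concrete, here is a small C sketch (my own illustration, assuming the Appendix A five-stage pipeline with a split-cycle register file and no forwarding) that computes how many cycles a short sequence of register-register instructions takes, counting the bubbles a read-after-write dependence forces.

    #include <stdio.h>

    /* One register-register instruction: destination, two sources (-1 = unused). */
    struct ins { int dst, src1, src2; };

    /* Cycle count for an ideal 5-stage pipeline with no forwarding, assuming
     * the register file writes in the first half of WB and reads in the
     * second half of ID, so a value is readable during its writer's WB cycle. */
    static int total_cycles(const struct ins *p, int n)
    {
        int ready[32] = { 0 };  /* earliest cycle each register is readable in ID */
        int decode = 1;         /* cycle in which the previous instruction left ID */
        int last_wb = 0;

        for (int i = 0; i < n; i++) {
            int id = decode + 1;                         /* in-order, one ID/cycle */
            if (p[i].src1 >= 0 && ready[p[i].src1] > id) id = ready[p[i].src1];
            if (p[i].src2 >= 0 && ready[p[i].src2] > id) id = ready[p[i].src2];
            decode = id;
            if (p[i].dst >= 0) ready[p[i].dst] = id + 3; /* WB is ID + 3 cycles */
            last_wb = id + 3;
        }
        return last_wb;
    }

    int main(void)
    {
        struct ins prog[] = {
            { 1, 2, 3 },   /* add r1, r2, r3                      */
            { 4, 1, 5 },   /* add r4, r1, r5  -- RAW hazard on r1 */
            { 6, 7, 8 },   /* add r6, r7, r8  -- independent      */
        };
        int n = 3;
        int cycles = total_cycles(prog, n);
        printf("cycles = %d, bubbles = %d\n", cycles, cycles - (n + 4));
        /* prints: cycles = 9, bubbles = 2 */
        return 0;
    }

With forwarding, the two bubbles in this example disappear entirely; only a LOAD followed immediately by a use of its result would still stall one cycle.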

Final Thoughts

The five-stage pipeline we have discussed is far from the only way to divide the work in a pipeline. The Intel Prescott microprocessor (Feb. 2004) had a 31-stage pipeline! Filling such a pipeline takes serious time, so every mispredicted branch is expensive. The most famous pipeline of all:

Ford Model T assembly line, 1913, via Wikipedia

Homework

This week's homework is available on SFS; the version there differs slightly from what follows:

  1. Modify vecadd.s to multiply two four-by-four matrices (matrix multiplication) and print the results. Call it arraymult.s. Include a printout of the output. Use these arrays (same as above); a C reference sketch for checking your output appears after this list:
    array1: .float 3.14159265, 2.71828183, 1.0, -0.10
            .float 1.0, 0.0, 1.0, 0.0
            .float 0.0, 1.0, 0.0, 1.0
            .float -1.0, 1.0, -1.0, 1.0
    array2: .float 2.71828183, 1.0, 3.14159265, 1.0
            .float 1.0, 0.0, 1.0, 0.0
            .float -1.0, 1.0, -1.0, 1.0
            .float 3.0, 2.0, 1.0, 0.0
  2. Take your assembly-language matrix multiplication program and count the following:
    1. Floating-point additions actually executed over all loops
    2. Floating-point multiplications actually executed over all loops
    3. Integer additions/subtractions actually executed over all loops
    4. Branches actually executed over all loops
    5. The number of instructions between branch instructions
  3. Calculate the ideal throughput for your assembly program, assuming one instruction per clock cycle. How many clock cycles will your program take? How many seconds is that?
  4. Find and describe a real-world pipeline. Include:
    1. The number of stages
    2. Functionality of each stage
    3. Interlocking between stages
    4. Any hazards
    5. How balance in execution time is maintained
  5. Pipeline hazards correspond to arrows flowing right to left on the figure above. Identify those arrows on the diagram by type, and indicate the maximum delay each hazard can cause.
  6. The three pipeline programs we "executed" during class today are linked below. Calculate the following for each:
    1. The number of instructions that must be executed. Don't forget to account for the loop in program 3. (n.b.: the #-28 in the branch is decimal!)
    2. The number of clock cycles the entire program takes, accounting for data and control hazards.
    3. The average clock cycles per instruction (CPI) for the program.
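
For item 1, a reference in a higher-level language is useful for checking your assembly output. The following is a minimal C sketch (my own, not a model answer) that multiplies the two arrays above and prints the product; arraymult.s should print the same sixteen values.

    #include <stdio.h>

    int main(void)
    {
        /* Same data as array1 and array2 above, in row-major order. */
        float a[4][4] = {
            {  3.14159265f, 2.71828183f,  1.0f,        -0.10f },
            {  1.0f,        0.0f,         1.0f,         0.0f  },
            {  0.0f,        1.0f,         0.0f,         1.0f  },
            { -1.0f,        1.0f,        -1.0f,         1.0f  },
        };
        float b[4][4] = {
            {  2.71828183f, 1.0f,         3.14159265f,  1.0f  },
            {  1.0f,        0.0f,         1.0f,         0.0f  },
            { -1.0f,        1.0f,        -1.0f,         1.0f  },
            {  3.0f,        2.0f,         1.0f,         0.0f  },
        };
        float c[4][4];

        /* c = a * b: as written, 4*4*4 = 64 multiplies and 64 adds,
         * the kinds of counts item 2 asks about. */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++) {
                float sum = 0.0f;
                for (int k = 0; k < 4; k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }

        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++)
                printf("%12.6f ", c[i][j]);
            printf("\n");
        }
        return 0;
    }

For item 3, note that at one instruction per clock the cycle count equals the dynamic instruction count, and seconds = cycles / clock frequency; for example, at an assumed 1 GHz clock each cycle takes one nanosecond (use whatever frequency was given in class).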

Next Lecture

Next lecture:

Lecture 8: Memory: Caching and Memory Hierarchy

Readings for next time:

Additional Information
