Course Overview
Course Synopsis
This is a graduate-level course that builds on the concepts presented in the undergraduate computer architecture course. The emphasis is on exposing advances in the field through cost-performance-power trade-offs and sound engineering design of computers. The course introduces the quantitative principles of computer design, performance-enhancement methodologies, static and dynamic exploitation of instruction-level parallelism in high-performance processors, and performance enhancement of memory and input/output systems.
Course Learning Outcomes
Upon successful completion of this course, students should be able to:
- Understand the quantitative principles of computer design and the metrics used for performance measurement (see the sketch after this list).
- Use standard benchmarks to analyze the performance of different architectures.
- Exploit instruction-level parallelism using static and dynamic techniques in high-performance processors, including superscalar execution.
- Recognize centralized and distributed shared-memory multiprocessor architectures.
- Design memory hierarchies and storage systems for optimum performance.
- Describe input/output system design and the benchmarks used to evaluate I/O performance.
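As a brief taste of the first outcome, the following sketch applies two of the quantitative principles introduced early in the course, Amdahl's Law and the CPU time equation, to assumed numbers. The function names and figures are illustrative assumptions, not drawn from the course materials.

    # Illustrative sketch (assumed numbers, not course material):
    # Amdahl's Law and the CPU time equation, two quantitative principles
    # used for performance measurement.

    def amdahl_speedup(enhanced_fraction: float, enhancement_speedup: float) -> float:
        """Overall speedup when only a fraction of execution time is enhanced."""
        return 1.0 / ((1.0 - enhanced_fraction) + enhanced_fraction / enhancement_speedup)

    def cpu_time_seconds(instruction_count: float, cpi: float, clock_rate_hz: float) -> float:
        """CPU time = instruction count * cycles per instruction / clock rate."""
        return instruction_count * cpi / clock_rate_hz

    # Hypothetical case: 40% of execution time is sped up by a factor of 10.
    print(round(amdahl_speedup(0.4, 10.0), 2))   # 1.56, not 10
    # Hypothetical machine: 1e9 instructions at CPI 1.5 on a 2 GHz clock.
    print(cpu_time_seconds(1e9, 1.5, 2e9))       # 0.75 (seconds)

The point of the exercise is that speeding up only part of a system bounds the overall gain, which is why the course frames design decisions as cost-performance-power trade-offs across the whole machine.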
Course Calendar
Lecture | Topic
1 | History and Introduction
2 | Quantitative Principles
3 | Quantitative Principles (continued)
4 | Instruction Set Architecture (ISA)
5 | Instruction Set Architecture (ISA) (continued)
6 | Instruction Set Architecture (ISA) (continued)
7 | Computer Hardware Design
8 | Computer Hardware Design (continued)
9 | Computer Hardware Design (continued)
10 | Computer Hardware Design (continued)
11 | Computer Hardware Design (continued)
13 | Instruction Level Parallelism (ILP) (continued)
14 | Instruction Level Parallelism (ILP) (continued)
15 | Instruction Level Parallelism (ILP) (continued)
16 | Instruction Level Parallelism (ILP) (continued)
17 | Instruction Level Parallelism (ILP) (continued)
18 | Instruction Level Parallelism (ILP) (continued)
19 | Instruction Level Parallelism (ILP) (continued)
20 | Instruction Level Parallelism (Static Scheduling)
21 | Instruction Level Parallelism (Static Scheduling: Multiple-Issue Processors)
22 | Instruction Level Parallelism (Software Pipelining and Trace Scheduling)
23 | Instruction Level Parallelism (Hardware Support at Compile Time)
24 | Instruction Level Parallelism (Concluding Instruction Level Parallelism)
25 | Memory Hierarchy Design (Storage Technology Trends and Caching)
26 | Memory Hierarchy Design (Concept of Caching and Principle of Locality)
27 | Memory Hierarchy Design (Cache Design Techniques)
28 | Memory Hierarchy Design (Cache Design and Policies)
29 | Memory Hierarchy Design (Cache Performance Enhancement: Reducing Cache Miss Penalty)
30 | Memory Hierarchy Design (Cache Performance Enhancement: Reducing Miss Rate)
31 | Memory Hierarchy Design (Cache Performance Enhancement: Miss Penalty/Rate Parallelism)
32 | Memory Hierarchy Design (Main and Virtual Memories)
33 | Memory Hierarchy Design (Virtual Memory System)
34 | Multiprocessors (Shared-Memory Architectures)
35 | Multiprocessors (Cache Coherence Problem)
36 | Multiprocessors (Cache Coherence Problem, continued)
37 | Multiprocessors (Performance and Synchronization)
38 | Input/Output Systems (Storage and I/O Systems)
39 | Input/Output Systems (Bus Structures Connecting I/O Devices)
40 | Input/Output Systems (RAID and I/O System Design)
41 | Networks and Clusters (Networks: Interconnection and Topology)
42 | Networks and Clusters (Network Topology and Internetworking, continued)
43 | Networks and Clusters (Internetworks and Clusters)
44 | Putting It All Together (Case Studies)
45 | Putting It All Together (Review: Lectures 1-43)
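Lectures 25 through 33 center on the memory hierarchy, where a recurring quantity is the average memory access time (AMAT). The sketch below evaluates it for assumed cache parameters; the numbers are hypothetical and not taken from the lecture notes.

    # Illustrative sketch (assumed cache parameters, not course material):
    # average memory access time (AMAT) = hit time + miss rate * miss penalty.

    def amat_cycles(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
        """Average memory access time in clock cycles for one cache level."""
        return hit_time + miss_rate * miss_penalty

    # Hypothetical cache: 1-cycle hit time, 5% miss rate, 100-cycle miss penalty.
    print(amat_cycles(1.0, 0.05, 100.0))   # 6.0 cycles on average

Reducing any of the three terms lowers AMAT, which matches how the cache-optimization lectures above are organized: reducing miss penalty, reducing miss rate, and exploiting parallelism in handling misses.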