Basic Architecture Ideas and Linear Algebraic Notation
Only three general concepts are really
required for scientific computing, and this has been true throughout the
history of electronic computers:
- Data locality
- Pipelining
- Parallelism
Those concepts are the basis of all high performance architecture.
The basic computer architectural ideas that address them are
- Memory hierarchies
- Cache modeling
- Pipelining and loops
- Enhancing data locality and pipelining in codes
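As a concrete illustration of the data locality idea, consider the
sketch below (in C, which stores two-dimensional arrays row-major; the
example and its array sizes are illustrative, not taken from these
notes). Both functions compute the same sum, but the first walks
memory consecutively while the second strides across it:

    #include <stdio.h>

    #define M 1024
    #define N 1024

    static double A[M][N];

    /* Row-order traversal: the inner loop visits consecutive
       addresses, so every word of each cache line is used. */
    double sum_by_rows(void) {
        double s = 0.0;
        for (int i = 0; i < M; i++)
            for (int j = 0; j < N; j++)
                s += A[i][j];
        return s;
    }

    /* Column-order traversal: successive accesses are N doubles
       apart, so each access may touch a new cache line. Identical
       arithmetic, worse data locality. */
    double sum_by_cols(void) {
        double s = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < M; i++)
                s += A[i][j];
        return s;
    }

    int main(void) {
        printf("%f %f\n", sum_by_rows(), sum_by_cols());
        return 0;
    }

On a machine with a cache, the row-order version typically runs
several times faster, even though the two loops do the same arithmetic.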
Architecture
A computer's architecture is a high-level framework defining
the components that make up a computer system and their interfaces.
For scientific computing, the important components are
- the memory system
- the interconnect between memory and CPU(s), e.g., caches
- internal CPU capabilities
- I/O systems
For parallel machines this is extended to include
the processor interconnection network and its topology.
You need to understand enough computer architecture to
- Know how to map particular algorithms to a machine.
- Develop implementations of algorithms suitable for the machine.
- Understand why load/store analysis works.
Linear Algebraic Notation and Conventions
Simple linear algebra operations are used throughout to illustrate architecture
ideas, to probe computer systems experimentally, and to demonstrate and test
load/store analysis.
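As a first taste of what load/store analysis counts, here is a sketch
of the standard AXPY operation y = alpha*x + y in C (the operation and
its counts are standard; whether this exact example appears later in
the notes is an assumption):

    /* AXPY: y = alpha*x + y.
       Per iteration: 2 loads (x[i] and y[i]), 1 store (y[i]),
       and 2 flops (one multiply, one add); alpha can stay in a
       register.  Load/store analysis compares the 3n memory
       references against the 2n flops to bound achievable speed. */
    void axpy(int n, double alpha, const double *x, double *y) {
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }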
In general, the Householder convention will be used. For the example
operations this means x and y are vectors (n × 1); A, B, and C are
two-dimensional arrays (m × n, e.g.); and α is a real scalar.
The pseudo-code used will follow some basic conventions:
Because some browsers aren't up to snuff, and to match the strict ASCII character
sets of most programming languages, Greek letters are sometimes
spelled out. E.g., alpha = x(3)*A(2,11).
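To show these conventions in action, here is a small sketch of the
matrix-vector update y = alpha*A*x + y in C (the operation is chosen
for illustration; note the pseudo-code above uses 1-based,
Fortran-style indexing such as A(2,11), while C is 0-based and
row-major):

    /* y = alpha*A*x + y, with A an m x n array stored row-major.
       The Greek letter is spelled out as alpha, matching the ASCII
       convention; A[i*n + j] plays the role of A(i+1,j+1) in the
       notes' notation. */
    void matvec_update(int m, int n, double alpha,
                       const double *A, const double *x, double *y) {
        for (int i = 0; i < m; i++) {
            double t = 0.0;
            for (int j = 0; j < n; j++)
                t += A[i*n + j] * x[j];
            y[i] += alpha * t;
        }
    }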
The first architectural aspect to consider is effective use of a
memory hierarchy.