The optimization manuals at
www.agner.org/optimize/#manuals have now been updated. The most important
additions are:
- AMD Piledriver and Jaguar processors are now described in the
microarchitecture manual and the instruction tables.
- Intel Ivy Bridge and Haswell processors are now described in the
microarchitecture manual and the instruction tables.
- The micro-op cache of Intel processors is analyzed in more detail.
- The assembly manual has more information on the AVX2 instruction set.
- The C++ manual describes the use of my vector classes for writing parallel
code.
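To illustrate the last point, here is a minimal sketch of the kind of parallel code the C++ manual describes, assuming the vectorclass.h header from my vector class library and a compiler with AVX enabled (the function and array names are made up for the example):

#include "vectorclass.h"

// Add two float arrays using 256-bit vectors (8 floats at a time).
// n is assumed to be a multiple of 8 in this sketch.
void add_arrays(const float * a, const float * b, float * c, int n) {
    for (int i = 0; i < n; i += 8) {
        Vec8f va, vb;            // 256-bit vectors of 8 floats
        va.load(a + i);          // load 8 elements
        vb.load(b + i);
        Vec8f vc = va + vb;      // one vector addition
        vc.store(c + i);         // store 8 results
    }
}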
Some interesting test results for the newly tested processors:
AMD Piledriver
- Similar microarchitecture to Bulldozer
- Supports fused multiply-and-add instructions in both the FMA3 and FMA4
forms. FMA3 is compatible with Intel processors. See
Wikipedia for a discussion of the incompatibility between these
instruction sets. A small example showing the two forms follows after this list.
- The throughput of FMA3 instructions is only half that of FMA4 instructions,
even though they perform exactly the same calculation.
- Memory writes with the 256-bit AVX registers are exceptionally slow. The
measured throughput is 5-6 times slower than on the previous model
(Bulldozer), and 8-9 times slower than two 128-bit writes. No explanation
for this has been found. This design flaw is likely to negate any advantage
of using the AVX instruction set. A sketch of the two-128-bit-writes
workaround follows after this list.
- The problems with cache performance on the Bulldozer seem to have been
fixed in the Piledriver.
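To illustrate the difference between the two FMA forms: an FMA3 instruction has three operands, so the destination register must be one of the source registers, while an FMA4 instruction has a separate destination register. A minimal sketch with compiler intrinsics, assuming GCC or Clang with -mfma and -mfma4 (both compute a*b + c):

#include <immintrin.h>   // FMA3 intrinsics
#include <x86intrin.h>   // FMA4 intrinsics in GCC/Clang

// FMA3 (VFMADD213PS etc.): the destination overwrites one of the sources
__m256 fma3_example(__m256 a, __m256 b, __m256 c) {
    return _mm256_fmadd_ps(a, b, c);   // a*b + c
}

// FMA4 (VFMADDPS): four operands, non-destructive destination
__m256 fma4_example(__m256 a, __m256 b, __m256 c) {
    return _mm256_macc_ps(a, b, c);    // a*b + c
}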
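The slow 256-bit writes can be worked around by splitting each 256-bit store into two 128-bit stores. A minimal sketch with intrinsics (p is assumed to be at least 16-byte aligned):

#include <immintrin.h>

// Store a 256-bit vector as two 128-bit halves. This avoids the slow
// 256-bit write on Piledriver; on other processors a single
// _mm256_store_ps would do.
void store_as_two_halves(float * p, __m256 v) {
    _mm_store_ps(p,     _mm256_castps256_ps128(v));     // low half
    _mm_store_ps(p + 4, _mm256_extractf128_ps(v, 1));   // high half
}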
AMD Jaguar
- Similar microarchitecture to Bobcat
- Supports the AVX instruction set
- Does not support AMD's 3DNow and XOP instruction sets. This is OK with me
since few programmers would care to make a special version of their code
specifically for AMD processors.
- The vector execution units are doubled in size from 64 bits in Bobcat to
128 bits in Jaguar. The throughput of vector instructions is doubled. Floating
point scalar (non-vector) performance was quite good already on the Bobcat and
is unchanged on the Jaguar.
- Load and store units are also doubled from 64 bits to 128 bits.
- Store-to-load forwarding is much faster than on Bobcat
- The prefetch instruction is particularly slow on Jaguar. The throughput is
much lower than on other AMD processors.
- Integer division is improved
- Moves between vector registers are eliminated if the processor knows the
source register to be zero, but not if the value of the register is unknown.
This seems to indicate that a physical register is not allocated for a
register that is known to be zero.
- The VMASKMOVPS instruction with a memory source operand takes more than 300 clock cycles on the Jaguar when the mask is zero, in which case the instruction should do nothing. This appears to be a design flaw.
This instruction is not very common, though.
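The VMASKMOVPS case corresponds to the _mm256_maskload_ps intrinsic. A minimal sketch of the pattern that hits the penalty on Jaguar:

#include <immintrin.h>

// Masked load of up to 8 floats. On Jaguar the all-zero mask case,
// where nothing is loaded, takes more than 300 clock cycles.
__m256 masked_load(const float * p, __m256i mask) {
    return _mm256_maskload_ps(p, mask);   // VMASKMOVPS with memory source
}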
Intel Ivy Bridge
- Similar microarchitecture to Sandy Bridge
- Can eliminate register-to-register moves by renaming the target register
- Problem with decoding long NOPs in Sandy Bridge has been fixed
- Some execution units have been moved to a different port
- Handling of partial registers is improved
- The prefetch instructions are particularly slow on Ivy Bridge. The
throughput is much lower than on other Intel processors.
- Store-to-load forwarding is generally good, but in some unfortunate cases
of an unaligned 256-bit read after a smaller write, there is an unusually
large delay of more than 200 clock cycles.
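The bad case arises when a small write is followed shortly by a larger, unaligned 256-bit read from overlapping addresses. A sketch of the kind of pattern to avoid (the buffer and offsets are just for illustration):

#include <immintrin.h>

float buf[64];

// A 32-bit write followed by an overlapping, unaligned 256-bit read.
// On Ivy Bridge this store-forwarding case can cost more than 200 clock cycles.
__m256 forwarding_penalty_example(float x) {
    buf[9] = x;                        // small write
    return _mm256_loadu_ps(buf + 5);   // unaligned 256-bit read overlapping the write
}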
Intel Haswell
- Supports the new AVX2 instruction set, which allows integer vectors of 256
bits and gather instructions. A small example is shown at the end of this list.
- Supports fused multiply-and-add instructions of the FMA3 type
- The cache bandwidth is doubled to 256 bits. It can do two reads and one
write per clock cycle.
- Cache bank conflicts have been removed
- The read and write buffers, register files, reorder buffer, and reservation
station are all bigger than in previous processors
- There are more execution units and one more execution port than on
previous processors. This makes a throughput of four instructions per clock
cycle quite realistic in many cases.
- The throughput for not-taken branches is doubled to two not-taken branches
per clock cycle, including fused branch instructions. The throughput for taken
branches is largely unchanged.
- There are two execution units for floating point multiplication and fused
multiply-and-add, but only one execution unit for floating point addition.
This design appears to be suboptimal, since floating point code typically
contains more additions than multiplications. But at least it enables Intel to
boast a floating point performance of 32 FLOPS per clock cycle (2 FMA units x
8 single precision elements x 2 floating point operations per FMA = 32).
- The fused multiply-and-add operation is the first case in the history of
Intel processors of micro-ops having more than two input dependencies. Other
instructions with more than two input dependencies are still split into two
micro-ops, though. AMD processors don't have this limitation.
- The delays for moving data between different execution units are in many
cases smaller than on previous Intel processors.
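To illustrate the AVX2 item at the top of this list, here is a minimal sketch showing a gather and a 256-bit integer addition with intrinsics (the table layout is made up for the example):

#include <immintrin.h>

// AVX2: gather 8 ints from table at the given indexes, then do a
// 256-bit integer addition.
__m256i avx2_example(const int * table, __m256i indexes, __m256i offsets) {
    __m256i gathered = _mm256_i32gather_epi32(table, indexes, 4);  // scale 4 = sizeof(int)
    return _mm256_add_epi32(gathered, offsets);
}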