Agner's CPU blog


 
Test results for AMD Ryzen - Agner - 2017-05-02
  Ryzen analyze - Daniel - 2017-05-02
    Ryzen analyze - Agner - 2017-05-02
  Test results for AMD Ryzen - Peter Cordes - 2017-05-02
    Test results for AMD Ryzen - Agner - 2017-05-03
      Test results for AMD Ryzen - Phenominal - 2017-05-06
        Test results for AMD Ryzen - Agner - 2017-05-06
          Test results for AMD Ryzen - Phenominal - 2017-05-06
            Test results for AMD Ryzen - Agner - 2017-05-06
  Test results for AMD Ryzen - Tacit Murky - 2017-05-05
  Test results for AMD Ryzen--POPCNT - Xing Liu - 2017-05-08
    Test results for AMD Ryzen--POPCNT - Agner - 2017-05-11
 
Test results for AMD Ryzen
Author: Agner Date: 2017-05-02 04:22
The new Ryzen processor from AMD represents a complete redesign of the CPU microarchitecture. This is the first of a series of "Zen" architecture processors. I must say that this redesign is quite a successful one, which puts AMD back in the game after several years of lagging behind Intel in performance.

The Ryzen has a micro-operation cache which can hold 2048 micro-operations or instructions. This is sufficient to hold the critical innermost loop in most programs. There have been discussions of whether the Ryzen would be able to run four instructions per clock cycle or six, because the documents published by AMD were unclear on this point. Well, my testing shows that it was not four, and not six, but five. As long as the code is running from the micro-operations cache, it can execute five instructions per clock, where Intel has only four. Code that doesn't fit into the micro-operations cache runs from the traditional code cache at a maximum rate of four instructions per clock. However, the rate of fetching code from the code cache is not 32 bytes per clock, as some documents seem to indicate, but mostly around 16 bytes per clock. The maximum I have seen is 17.3 bytes per clock. This is a likely bottleneck, since most instructions in vector code are more than four bytes long.

The combination of a compare instruction and a conditional jump can be fused together into a single micro-op. This makes it possible to execute a tiny loop with up to six instructions in one clock cycle per iteration. Except for tiny loops, the throughput for jumps is one jump per two clock cycles if the jump is taken, or two not-taken jumps per clock cycle.
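
A hedged sketch of such a tiny loop (my example, assuming the compiler emits the obvious instruction sequence; check the generated assembly):

```c
#include <stdint.h>
#include <stddef.h>

/* The inner loop below typically compiles to two load-and-add
   instructions, an index increment, and a cmp + jne pair at the
   bottom. If the cmp/jne pair is fused into one micro-op, an
   iteration of up to six instructions can retire in one clock cycle.
   (Assumption: the compiler emits this shape without unrolling.) */
int64_t sum_pairs(const int64_t *a, const int64_t *b, size_t n) {
    int64_t s0 = 0, s1 = 0;
    for (size_t i = 0; i < n; i++) {
        s0 += a[i];
        s1 += b[i];
    }
    return s0 + s1;
}
```

Note that compilers may unroll or vectorize such a loop, so the six-instruction shape is only what you get when unrolling does not kick in.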

256-bit vector instructions (AVX instructions) are split into two micro-ops handling 128 bits each. Such instructions take only one entry in the micro-operation cache. A few other instructions also generate two micro-ops. The maximum throughput of the micro-op queue after the decoders is six micro-ops per clock. The stream of micro-ops from this queue is distributed between ten pipelines: four pipes for integer operations on general purpose registers, four pipes for floating point and vector operations, and two for address calculation. This means that a throughput of six micro-ops per clock cycle can be obtained if there is a mixture of integer and vector instructions.

Let us compare the execution units of AMD's Ryzen with current Intel processors. AMD has four 128-bit units for floating point and vector operations. Two of these can do addition and two can do multiplication. Intel has two 256-bit units, both of which can do addition as well as multiplication. This means that floating point code with scalars or vectors of up to 128 bits will execute on the AMD processor at a maximum rate of four instructions per clock (two additions and two multiplications), while the Intel processor can do only two. With 256-bit vectors, AMD and Intel can both do two instructions per clock. Intel beats AMD on 256-bit fused multiply-and-add instructions, where AMD can do one while Intel can do two per clock. Intel is also better than AMD on 256-bit memory writes, where Intel has one 256-bit write port while the AMD processor has one 128-bit write port.

We will soon see Intel processors with 512-bit vector support, while it might take a few more years before AMD supports 512-bit vectors. However, most of the software on the market lags several years behind the hardware. As long as the software uses only 128-bit vectors, the performance of the Ryzen processor is quite competitive. The AMD core can execute six micro-ops per clock while Intel can do only four. But there is a problem with doing so many operations per clock cycle: it is, of course, not possible to execute two instructions simultaneously if the second instruction depends on the result of the first. The high throughput of the processor puts an increased burden on the programmer and the compiler to avoid long dependency chains. The maximum throughput can only be obtained if there are many independent instructions that can be executed simultaneously.
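
One way to apply this advice about dependency chains (a sketch of mine, not Agner's code): a naive floating point summation runs at the latency of the addition because each add depends on the previous one, while several independent accumulators let the additions execute in parallel:

```c
#include <stddef.h>

/* Four independent accumulators break the serial dependency chain of
   a naive sum, so up to four additions can be in flight at once.
   (Note: this changes the order of the floating point additions,
   which can change the rounding slightly.) */
double sum_unrolled(const double *a, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)  /* remaining elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```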

This is where simultaneous multithreading comes in. You can run two threads in the same CPU core (this is what Intel calls hyperthreading). Each thread will then get half of the resources. If the CPU core has a higher capacity than a single thread can utilize then it makes sense to run two threads in the same core. The gain in total performance that you get from running two threads per core is much higher in the Ryzen than in Intel processors because of the higher throughput of the AMD core (except for 256-bit vector code).

The Ryzen saves power quite aggressively. Unused units are clock gated, and the clock frequency varies quite dramatically with the workload and the temperature. In my tests, I often saw a clock frequency as low as 8% of the nominal frequency in cases where disk access was the limiting factor, while the clock frequency could be as high as 114% of the nominal frequency after a very long sequence of CPU-intensive code. Such a high frequency cannot be obtained if all eight cores are active because of the increase in temperature.

The varying clock frequency was a big problem for my performance tests because it was impossible to get precise and reproducible measurements of computation times. It helps to warm up the processor with a long sequence of dummy calculations, but the clock counts were still somewhat inaccurate. The Time Stamp Counter (TSC), which is used for measuring the execution time of small pieces of code, is counting at the nominal frequency. The Ryzen processor has another counter called Actual Performance Frequency Clock Counter (APERF) which is similar to the Core Clock Counter in Intel processors. Unfortunately, the APERF counter can only be read in kernel mode, unlike the TSC which is accessible to the test program running in user mode. I had to calculate the actual clock counts in the following way: The TSC and APERF counters are both read in a device driver immediately before and after a run of the test sequence. The ratio between the TSC count and the APERF count obtained in this way is then used as a correction factor which is applied to all TSC counts obtained during the running of the test sequence. This method is awkward, but the results appear to be quite precise, except in the cases where the frequency is varying considerably during the test sequence. My test program is available at www.agner.org/optimize/#testp
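
The correction can be sketched like this (a simplified illustration with my own names, not Agner's actual test program):

```c
#include <stdint.h>

/* TSC counts at the nominal frequency; APERF counts actual core
   clocks. Both are read in the driver immediately before and after
   the whole test run; their ratio converts a TSC count taken inside
   the run into an estimated actual core clock count. */
double corrected_clocks(uint64_t tsc_before, uint64_t tsc_after,
                        uint64_t aperf_before, uint64_t aperf_after,
                        uint64_t tsc_measured) {
    double ratio = (double)(aperf_after - aperf_before)
                 / (double)(tsc_after - tsc_before);
    return (double)tsc_measured * ratio;
}
```

For example, if APERF advanced by 800 while TSC advanced by 1000 over the whole run, a measured TSC count of 500 corresponds to about 400 actual core clocks. The estimate degrades when the frequency varies considerably within the run, as noted above.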

AMD has a different way of dealing with instruction set extensions than Intel. AMD keeps adding new instructions and removing them again if they fail to gain popularity, while Intel keeps supporting even the most obscure and useless undocumented instructions dating back to the first 8086. AMD introduced the FMA4 and XOP instruction set extensions with Bulldozer, and some not very useful extensions called TBM with Piledriver. Now they are dropping all of these again. XOP and TBM are no longer supported in Ryzen. FMA4 is not officially supported on Ryzen, but I found that the FMA4 instructions actually work correctly on Ryzen, even though the CPUID instruction says that FMA4 is not supported.
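
For reference, a minimal sketch of the feature test (my code; it reads the XOP and FMA4 bits, ECX bits 11 and 16 of CPUID extended leaf 0x80000001, via the GCC/Clang cpuid.h helper; non-x86 builds simply report 0):

```c
/* Query the XOP and FMA4 CPUID feature bits. On Ryzen the FMA4 bit
   reads as 0, even though the FMA4 instructions still execute. */
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>
static int ecx_80000001_bit(int bit) {
    unsigned int a, b, c, d;
    if (!__get_cpuid(0x80000001u, &a, &b, &c, &d))
        return 0;                  /* leaf not supported */
    return (int)((c >> bit) & 1u);
}
int has_xop(void)  { return ecx_80000001_bit(11); }
int has_fma4(void) { return ecx_80000001_bit(16); }
#else
int has_xop(void)  { return 0; }
int has_fma4(void) { return 0; }
#endif
```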

Detailed results and list of instruction timings are in my manuals: www.agner.org/optimize/#manuals.

   
Ryzen analyze
Author: Daniel Date: 2017-05-02 04:23
Hello Agner, I have recently read your analysis and tests of the new Ryzen microprocessor. Good work! I would like to ask if you would be so kind as to do your testing on an Excavator CPU (Bristol Ridge on an AM4 motherboard with DDR4), so that the analysis of the BD family will be complete.

regards Daniel

   
Ryzen analyze
Author: Agner Date: 2017-05-02 07:46
Daniel wrote:
I would like to ask if you would be so kind as to do your testing on an Excavator CPU.
I have not been able to get my hands on an Excavator and find a motherboard that fits it. If you have one, and you give me remote access, I can test it.
   
Test results for AMD Ryzen
Author: Peter Cordes Date: 2017-05-02 14:16
On Ryzen, Bulldozer, and/or Jaguar, does vxorps-zeroing of a ymm register still take only one micro-op, unlike the non-special case vxorps ymm1,ymm2,ymm3, which is split into two?

I'm worried that the special-case of xor-zeroing might not be identified until after the decoder has already split it in two. Or that if it still needs an execution port, ymm zeroing might still use two instead of taking advantage of AVX implicit zero-extension to high lanes. (Previously posted at stackoverflow.com/questions/43713273/is-vxorps-zeroing-on-amd-jaguar-bulldozer-zen-faster-with-xmm-registers-than-ymm, but this is probably a better place to ask.)

If ymm-zeroing is slower on any CPUs, then compilers should use vxorps xmm0,xmm0,xmm0 even for _mm256_setzero_ps.

---

For _mm512_setzero_ps, using a VEX-encoded instruction saves a byte vs. EVEX. (reported to clang as bug 32862).
No existing AVX512 hardware has a problem with mixing VEX and EVEX vector instructions, or vector widths, AFAIK. And there's no reason to expect problems on future CPUs because of AVX's zero-extending to VLMAX.

----

On Intel CPUs, the choice affects whether it warms up the 256b execution units (and throttles the max turbo on Xeon CPUs). So calling a noinline function that returns _mm256_setzero_ps wouldn't be a reliable way to warm up the execution units. But it wasn't portably reliable anyway, because MSVC always uses 128b vxorps for zeroing ymm/zmm regs. Returning 256b all-ones would work, but only clang and icc avoid loading a constant when AVX2 isn't available. See all 4 compilers on godbolt.

   
Test results for AMD Ryzen
Author: Agner Date: 2017-05-03 00:12
Peter Cordes wrote:
does vxorps-zeroing of a ymm register still take only one micro-op, unlike the non-special case vxorps ymm1,ymm2,ymm3, which is split into two?

I'm worried that the special-case of xor-zeroing might not be identified until after the decoder has already split it in two. Or that if it still needs an execution port, ymm zeroing might still use two instead of taking advantage of AVX implicit zero-extension to high lanes.

xor'ing a ymm register with itself generates two micro-ops, while xor'ing an xmm register with itself generates only one micro-op. So you have a point: zeroing a register as 128 bits and relying on implicit zero extension to 256 bits is faster. The throughput of these is four micro-ops per clock, i.e. four 128-bit xors or two 256-bit xors per clock. These micro-ops can go to any of the four floating point units in the Ryzen. There is no dependence on the previous value of the register.
   
Test results for AMD Ryzen
Author: Phenominal Date: 2017-05-06 00:29
Agner wrote:
The maximum I have seen is 17.3 bytes per clock. This is a likely bottleneck since most instructions in vector code are more than four bytes long.
Does SMT have any effect on this? If SMT is disabled in the BIOS, it may fetch 32 bytes. Also, in general, how is IPC affected in single-threaded tests depending on the SMT BIOS setting?

Thanks for the great documents.

   
Test results for AMD Ryzen
Author: Agner Date: 2017-05-06 03:08
Phenominal wrote:
Does SMT have any effect on this?
These results are for a single thread. The maximum throughput is only slightly more than half of this if two threads are running in the same core.
   
Test results for AMD Ryzen
Author: Phenominal Date: 2017-05-06 04:33
Thanks for the reply. What I was wondering is: since some parts of the core are statically partitioned, does enabling or disabling SMT in the BIOS make a difference to a single-threaded test running on a single core? Some reviewers have mentioned that disabling SMT in the BIOS improved some benchmarks. This could also impact instruction fetch. Is there any truth to these statements?
   
Test results for AMD Ryzen
Author: Agner Date: 2017-05-06 09:41
Phenominal wrote:
Some reviewers have mentioned that disabling SMT in the BIOS improved some benchmarks. This could also impact instruction fetch. Is there any truth to these statements?
The benchmarks are probably using as many threads as possible. The cost of SMT is that all the threads run at half speed, and you get additional cache evictions and branch mispredictions. The processor core has no problem allocating all resources to a single thread when it is only running one thread.

I don't have physical access to the machine I tested, so I haven't been able to experiment with the BIOS setting, but I haven't had any reason to do so either.

   
Test results for AMD Ryzen
Author: Tacit Murky Date: 2017-05-05 12:24
Hello, Agner.
Here are the latest results from the AIDA64 HW bench for Zen: users.atw.hu/instlatx64/AuthenticAMD0800F11_K17_Zen_InstLatX64.txt . We can see it takes 0.23 cl to execute "2299 LNOP :LNOP8" (an 8-byte long NOP), which makes it ~35 B/cl. However, it is not clear whether this runs from L1I or from L0m (the mop cache). It is also unclear whether one decoder lane can generate 2 mops: is that possible for all 2-mop instructions, or just AVX-256? So, can 2 mops fit in 1 mop entry of the L0m cache even if the instruction is not AVX-256? We only know that microcoded instructions are not cacheable and are read directly from the mROM. Where do "fused" 2-mop instructions dissolve into 2 distinct mops?

It is important to know the topology and restrictions of L0m. A code portion from L1I (32 B, probably) generates a certain number of mops to be cached. An Intel CPU requires a cached 32 B portion to have 1-18 mops to fit into 1-3 L0m lines (6 mops each), all located in a common set. And it is not possible to break a multi-mop instruction between lines. A cached portion can have only 1 entry point (at its start); jumping into the middle will cause a miss and a refill, so there can be copies of the same portion with different entry points. And there is a maximum of 2 jumps per line. Zen must have similar rules, which must be tested. Remember: some 4 years ago you did a test of Sandy Bridge's L0m and sent me an xls file with some interesting results. I hope you still have that code.

The eviction policy is also important. Is L0m inclusive with L1I? Will L0m flush on a context switch? But we do know that L0m is statically divided between 2 threads. Also, L0m decreases the branch mispredict penalty (if the target address is cached) by a yet unknown amount. Is it possible to read a line for one thread and write one for another in the same cycle?

It would also be good to know the details of the 6 OoO queues (14 mops each) for the 6 GPR execution ports. How are mops distributed among them at allocation? Then, knowing that FMAs use 3 inputs, borrowing 1 read port "aside", how many vector reads can be made per clock with both FMAs loaded?

   
Test results for AMD Ryzen--POPCNT
Author: Xing Liu Date: 2017-05-08 11:06
Hi Agner,

Nice work! I see you mentioned that the throughput of the POPCNT instruction on Ryzen is 0.25 cycles. Does that mean a single Ryzen core is able to execute 4 POPCNTs (with 64-bit register operands) at the same time? Thank you.

   
Test results for AMD Ryzen--POPCNT
Author: Agner Date: 2017-05-11 10:15
Xing Liu wrote:
Does that mean a single Ryzen core is able to execute 4 POPCNTs (with 64-bit register operands) at the same time?
Yes
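
A sketch of how code might exploit this throughput (my example, not from the thread; __builtin_popcountll is the GCC/Clang builtin that compiles to POPCNT when the target supports it):

```c
#include <stdint.h>
#include <stddef.h>

/* Four independent counters avoid a serial dependency chain, so up
   to four POPCNTs can execute per clock on a core with 0.25-cycle
   POPCNT throughput. */
uint64_t count_bits(const uint64_t *v, size_t n) {
    uint64_t c0 = 0, c1 = 0, c2 = 0, c3 = 0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        c0 += (uint64_t)__builtin_popcountll(v[i]);
        c1 += (uint64_t)__builtin_popcountll(v[i + 1]);
        c2 += (uint64_t)__builtin_popcountll(v[i + 2]);
        c3 += (uint64_t)__builtin_popcountll(v[i + 3]);
    }
    for (; i < n; i++)  /* remaining elements */
        c0 += (uint64_t)__builtin_popcountll(v[i]);
    return (c0 + c1) + (c2 + c3);
}
```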