Hey everyone!
Recently I ran a benchmark on two different PCs to see the difference between DX9 and DX11. All tests were run at 1080p with 8x FSAA and VSync off.
[Screenshot: sample configuration. Note: only FSAA and the render system differ between runs.]
[Screenshot: an example of how the test was run.]
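For clarity, I'm assuming the AVG FPS values below are just the mean over the capture window. A minimal sketch of that math in Python (the frame times here are made-up numbers, only to show the calculation):

```python
# Sketch: average FPS from a list of frame times (in seconds).
# The frame_times values are hypothetical, roughly in ~6 FPS territory.
frame_times = [0.160, 0.172, 0.168, 0.165]

avg_frame_time = sum(frame_times) / len(frame_times)
avg_fps = 1.0 / avg_frame_time
print(f"Average FPS: {avg_fps:.1f}")
```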
PC 1 specs:
OS: Windows 10 64-bit (19045);
Motherboard: GA-78LMT-USB3 6.0;
CPU: AMD FX-8320E (8 cores), 3.2GHz;
RAM: 16GB;
GPU: NVIDIA GeForce GTX 960 (2GB).
Results (average FPS / GPU usage / CPU usage):
DX11: 6.0 FPS - 10.9% GPU - 15.0% CPU
DX9 Ex: 5.7 FPS - 22.7% GPU - 16.2% CPU
DX9: 5.9 FPS - 23.3% GPU - 16.9% CPU
PC 2 specs:
OS: Windows 10 Pro 64-bit (19045);
Motherboard: B760M D2H DDR4;
CPU: Intel Core i5-12400F (6 cores / 12 threads), 2.5GHz;
RAM: 32GB DDR4;
GPU: NVIDIA GeForce RTX 4070 SUPER (12GB).
Results (average FPS / GPU usage / CPU usage):
DX11: 21.4 FPS - 5.1% GPU - 15.3% CPU
DX9 Ex: 21.9 FPS - 16.5% GPU - 17.1% CPU
DX9: 21.8 FPS - 16.9% GPU - 16.9% CPU
I really don't understand much about how 3D graphics work, and I'm curious why the GPU and CPU usage are so low. Going by those numbers, it seems like I should be getting more FPS, since my GPU is running at less than half its capacity, but in practice that's not what happens.
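One thing worth checking (this is my guess, not something the benchmark shows): a total CPU reading like 15% can hide a single core sitting at 100%, which would bottleneck the render thread while the GPU waits. A quick Python sketch using psutil to compare total vs per-core usage while the game runs:

```python
# Sketch: compare overall CPU usage with per-core usage.
# Requires: pip install psutil
import psutil

# Sample CPU usage over a 5-second window, per logical core.
per_core = psutil.cpu_percent(interval=5, percpu=True)
total = sum(per_core) / len(per_core)

print(f"Total CPU usage: {total:.1f}%")
for i, pct in enumerate(per_core):
    print(f"Core {i}: {pct:.1f}%")

# On an 8-core CPU, one core pegged at 100% still reads as only ~12.5%
# total, which is one possible explanation for low overall usage
# combined with low FPS.
```

If one core shows much higher usage than the rest during the test, that would suggest a single-thread bottleneck rather than spare GPU capacity you could tap into.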