Geekbench 6 Benchmark Internals [pdf] (www.geekbench.com)

🤖 AI Summary
Primate Labs’ Geekbench 6 Benchmark Internals document lays out the workloads and implementation details behind the CPU and GPU suites, clarifying how performance is measured for modern compute- and ML-heavy applications. It lists supported platforms (Android, iOS, Linux, macOS, Windows), minimum OS versions, and compiler toolchains (Clang 14–16 across releases), and explains runtime behavior such as the gap between workloads (2 s in 6.0, 5 s in 6.1 and later) and the switch to a “shared task” multi-threading model, which more closely mirrors real applications but can reduce idealized scaling compared with the previous “separate task” approach.

For the AI/ML community the report matters because Geekbench 6 explicitly includes numerous ML- and compute-focused workloads (object detection, background blur, image synthesis, feature and stereo matching, structure-from-motion, and quantized inference in Photo Library using MobileNet 1.0) and documents its use of hardware accelerators and instruction-set extensions. Builds target SSE2/AVX2 on x86 and ARMv8 on ARM as baselines, with optional runtime-guarded intrinsics for AVX-VNNI, AVX512-VNNI, and AMX on x86 and DOTPROD, I8MM, and SME on ARM to accelerate quantized ML.

Scoring is calibrated to a baseline of 2,500 on a Core i7-12700, with the composite single-core and multi-core scores weighting Integer workloads at 65% and Floating Point at 35%. Overall, the internals give hardware vendors, compiler engineers, and ML practitioners the specifics needed to interpret Geekbench numbers for real-world ML, inference, and GPU-compute workloads.