A Configurable Mixed-Precision Fused Dot Product Unit for GPGPU Tensor Computation

Presented at the Vortex Workshop and Tutorials, MICRO 2025

Paper | Slides

Abstract

Efficient mixed-precision matrix multiply-accumulate (MMA) operations are critical for accelerating Deep Learning workloads on GPGPUs. However, existing open-source RTL implementations of inner dot products rely on discrete arithmetic units, leading to suboptimal throughput and poor resource utilization. To address these challenges, we propose a scalable mixed-precision dot product unit that integrates floating-point and integer arithmetic pipelines within a single fused architecture, implemented as part of the Tensor Core Unit extension of the open-source RISC-V-based Vortex GPGPU. Our design supports low-precision multiplication in FP16, BF16, FP8, BF8, INT8, and UINT4 formats and higher-precision accumulation in FP32/INT32, with an extensible framework for adding and evaluating other custom representations in the future. Experimental results demonstrate a 4-cycle operation latency at a 306.6 MHz clock frequency on the AMD Xilinx Alveo U55C FPGA, delivering an ideal filled-pipeline throughput of 9.812 GFLOPS in a 4-thread-per-warp configuration.
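To make the multiply/accumulate semantics concrete, below is a minimal behavioral sketch in C++ of the low-precision-multiply, higher-precision-accumulate operation on the floating-point and integer paths. It is not the RTL itself: the 4-element dot width, the function names, and plain FP32 rounding on every accumulation step are assumptions for illustration only.

#include <cstddef>
#include <cstdint>

// Hypothetical reference model of a 4-element fused dot product:
// d = c + sum_i a[i] * b[i], with low-precision multiplicands and a
// higher-precision accumulator. _Float16 assumes a recent GCC/Clang.
float fused_dot4_fp16(const _Float16 a[4], const _Float16 b[4], float c) {
    float acc = c;
    for (std::size_t i = 0; i < 4; ++i) {
        // Each product is widened to FP32 before accumulation.
        acc += static_cast<float>(a[i]) * static_cast<float>(b[i]);
    }
    return acc;
}

// Same structure on the integer path: INT8 multiplicands, INT32 accumulator.
int32_t fused_dot4_int8(const int8_t a[4], const int8_t b[4], int32_t c) {
    int32_t acc = c;
    for (std::size_t i = 0; i < 4; ++i) {
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    }
    return acc;
}

In the proposed unit, these floating-point and integer paths share a single fused pipeline rather than discrete arithmetic units; the sketch only captures the numerical intent of the operation.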

@misc{rout2025configurablemixedprecisionfuseddot,
  title={A Configurable Mixed-Precision Fused Dot Product Unit for GPGPU Tensor Computation}, 
  author={Nikhil Rout and Blaise Tine},
  year={2025},
  eprint={2512.00053},
  archivePrefix={arXiv},
  primaryClass={cs.AR},
  url={https://arxiv.org/abs/2512.00053}, 
}