In this paper we propose an architecture for the computation of the double-precision floating-point multiply-add fused (MAF) operation A + (B × C) that allows the floating-point addition to be computed with a lower latency than the floating-point multiplication and the MAF operation. While previous MAF architectures compute the three operations with the same latency, the proposed architecture makes it possible to skip the first pipeline stages, those related to the multiplication B × C, in the case of an addition. For instance, for a MAF unit pipelined into three or five stages, the latency of the floating-point addition is reduced to two or three cycles, respectively. To achieve this latency reduction for floating-point addition, the alignment shifter, which in previous organizations operates in parallel with the multiplication, is moved so that the multiplication can be bypassed. To prevent this modification from increasing the critical path, a double-datapath organization is used, in which the alignment a...
Javier D. Bruguera, Tomás Lang
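To make the latency figures quoted in the abstract concrete, below is a minimal cycle-count sketch in Python. It is not the paper's hardware design: the split of the pipeline into multiplier-related and remaining stages, and the function and parameter names (maf_latency, stages, mul_stages), are illustrative assumptions chosen so that a three-stage and a five-stage MAF unit yield two-cycle and three-cycle additions, as stated above.

# Illustrative cycle-count model of a variable-latency MAF unit (an assumption,
# not the architecture from the paper). The unit is pipelined into `stages`
# stages, of which the first `mul_stages` implement the multiplication B*C.
# A pure floating-point addition enters the pipeline after those stages.

def maf_latency(stages: int, mul_stages: int, op: str) -> int:
    """Return the latency in cycles for 'add', 'mul', or 'maf' on the unit.

    A floating-point addition bypasses the multiplier-related stages,
    so its latency is the number of remaining pipeline stages.
    """
    if op == "add":
        return stages - mul_stages   # skip the B*C stages
    return stages                    # 'mul' and 'maf' use the full pipeline


if __name__ == "__main__":
    # Three-stage MAF with one multiplier stage: addition takes 2 cycles.
    print(maf_latency(3, 1, "add"), maf_latency(3, 1, "maf"))   # 2 3
    # Five-stage MAF with two multiplier stages: addition takes 3 cycles.
    print(maf_latency(5, 2, "add"), maf_latency(5, 2, "maf"))   # 3 5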