
Hard-core floating point optimises FPGAs

23 Apr 2014  | Max Maxfield


Altera made a game-changing move when it introduced hard-core floating-point DSP blocks in its FPGAs and SoCs (in this context, SoCs refers to FPGAs that also contain hard ARM Cortex processor sub-systems).

Until now, designers working with FPGAs have been forced to realise their DSP algorithms using fixed-point arithmetic. There are some advantages to working with fixed-point values, but there are also significant disadvantages, the main one being that fixed-point representations can cover only a limited range of values, which makes fixed-point arithmetic susceptible to overflow and a variety of other computational inaccuracies.
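
To make that range limitation concrete, here is a minimal C sketch (generic Q1.15 arithmetic, nothing Altera-specific) showing how a perfectly ordinary addition wraps around in fixed point, while the equivalent floating-point operation is untroubled:

```c
#include <stdint.h>
#include <stdio.h>

/* Q1.15 fixed point: 1 sign bit, 15 fractional bits, range [-1.0, +1.0) */
typedef int16_t q15_t;

static q15_t float_to_q15(float x)  { return (q15_t)(x * 32768.0f); }
static float  q15_to_float(q15_t x) { return (float)x / 32768.0f; }

int main(void)
{
    q15_t a = float_to_q15(0.9f);
    q15_t b = float_to_q15(0.9f);

    /* 0.9 + 0.9 = 1.8 lies outside the Q1.15 range, so the 16-bit sum
     * wraps around (on typical two's-complement targets) and goes negative. */
    q15_t sum = (q15_t)(a + b);
    printf("fixed-point    0.9 + 0.9 = %f\n", q15_to_float(sum));

    /* The same operation in single-precision floating point works as
     * expected, because its dynamic range is vastly larger. */
    printf("floating-point 0.9 + 0.9 = %f\n", 0.9f + 0.9f);
    return 0;
}
```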

Similarly, there are some disadvantages when it comes to working with floating-point values, but there are also many advantages, including the fact that they have a much larger dynamic range than their fixed-point cousins.

When it comes to implementing DSP algorithms in FPGAs, designers typically start working at a high level of abstraction—perhaps using MATLAB or Simulink from MathWorks—and they also typically start working with floating-point values. Translating these floating-point representations into fixed-point equivalents is a non-trivial task that can bring the strongest amongst us to our knees. It can take a huge amount of time to ensure that the fixed-point signal path can handle the algorithms without overflowing the values or introducing artefacts into the data stream.
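
To give a flavour of what that translation involves, the following C sketch (hypothetical coefficients and data, not taken from any real design) quantises a tiny FIR filter to Q1.15, runs it with a wide accumulator to provide guard bits, and compares the result against the floating-point reference; this is the kind of overflow and error analysis that has to be repeated for every node in the signal path:

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define NTAPS 4
#define NSAMP 8

/* Quantise a float to Q1.15 with saturation. */
static int16_t sat_q15(float x)
{
    long v = lroundf(x * 32768.0f);
    if (v >  32767) v =  32767;
    if (v < -32768) v = -32768;
    return (int16_t)v;
}

int main(void)
{
    const float h[NTAPS] = { 0.25f, 0.50f, 0.50f, 0.25f };   /* filter taps */
    const float x[NSAMP] = { 0.9f, -0.7f, 0.8f, -0.6f, 0.9f, -0.9f, 0.5f, 0.1f };

    int16_t hq[NTAPS], xq[NSAMP];
    for (int i = 0; i < NTAPS; i++) hq[i] = sat_q15(h[i]);
    for (int i = 0; i < NSAMP; i++) xq[i] = sat_q15(x[i]);

    for (int n = NTAPS - 1; n < NSAMP; n++) {
        float   yf = 0.0f;
        int32_t ya = 0;                 /* wide accumulator provides guard bits */
        for (int k = 0; k < NTAPS; k++) {
            yf += h[k] * x[n - k];                          /* float reference */
            ya += (int32_t)hq[k] * (int32_t)xq[n - k];      /* Q2.30 products */
        }
        float yq = (float)(ya >> 15) / 32768.0f;            /* rescale to Q1.15 */
        printf("n=%d  float=% .6f  fixed=% .6f  err=% .6f\n", n, yf, yq, yf - yq);
    }
    return 0;
}
```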

To get around this, FPGA designers sometimes implement floating-point data paths using a combination of hardened fixed-point multipliers and soft programmable fabric. Altera has a very nice implementation called Fused Datapath that uses extra bits in the mantissa to reduce the number of normalisation and de-normalisation operations that have to be performed. Like any other "soft" floating-point implementation, however, these consume large amounts of programmable fabric resources, burn a lot of power (relatively speaking), and are limited in performance.
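
The snippet below is not Altera's Fused Datapath; it is just a generic illustration (positive, normal inputs only, no rounding) of the mantissa alignment and re-normalisation steps that any soft floating-point adder has to perform, and which are what end up consuming fabric and clock cycles:

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Simplified soft add of two positive, normal IEEE 754 singles. */
static float soft_add_pos(float fa, float fb)
{
    uint32_t a, b;
    memcpy(&a, &fa, 4);
    memcpy(&b, &fb, 4);
    if (a < b) { uint32_t t = a; a = b; b = t; }   /* ensure exp(a) >= exp(b) */

    uint32_t ea = (a >> 23) & 0xFF, eb = (b >> 23) & 0xFF;
    uint32_t ma = (a & 0x7FFFFF) | 0x800000;       /* restore implicit leading 1 */
    uint32_t mb = (b & 0x7FFFFF) | 0x800000;

    mb >>= (ea - eb);                              /* align mantissas ("de-normalise") */
    uint32_t m = ma + mb;

    if (m & 0x1000000) {                           /* carry out: re-normalise */
        m >>= 1;
        ea += 1;
    }

    uint32_t r = (ea << 23) | (m & 0x7FFFFF);
    float fr;
    memcpy(&fr, &r, 4);
    return fr;
}

int main(void)
{
    printf("soft: 1.5 + 2.25 = %f\n", soft_add_pos(1.5f, 2.25f));
    printf("hard: 1.5 + 2.25 = %f\n", 1.5f + 2.25f);
    return 0;
}
```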

What the folks at Altera have done is really rather clever. They already have a hardened variable-precision fixed-point DSP block that can support standard-precision (18bit) or high-precision (27bit) modes. They've now added a third mode that supports IEEE 754-compliant single-precision floating-point calculations.

Hard-core floating point DSP blocks

It turns out that this capability is already in Altera's high-performance mid-range 20nm Arria 10 FPGAs and SoCs, which are currently shipping (the little scamps at Altera held this nugget of information back until they were ready to announce it). This means Arria 10 FPGAs and SoCs will be able to offer DSP datapaths operating at 400MHz to 450MHz, providing up to 1.5TFLOPS of single-precision floating-point performance.
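
As a rough back-of-the-envelope check (the DSP-block count below is an illustrative assumption, not a figure from Altera's datasheets), peak throughput scales as the number of blocks, times two FLOPs per fused multiply-add per cycle, times the clock frequency:

```c
#include <stdio.h>

int main(void)
{
    double dsp_blocks      = 1500.0;  /* assumed count of hard floating-point DSP blocks */
    double flops_per_cycle = 2.0;     /* one fused multiply-add = 2 FLOPs */
    double clock_hz        = 450e6;   /* upper end of the quoted 400MHz-450MHz range */

    double peak = dsp_blocks * flops_per_cycle * clock_hz;
    printf("peak = %.2f TFLOPS\n", peak / 1e12);  /* ~1.35 TFLOPS, the same order as the 1.5TFLOPS claim */
    return 0;
}
```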

These hardened floating-point DSP blocks are also going to be available in Altera's 14nm Stratix 10 FPGAs and SoCs when they become available in 2015. In this case, Stratix 10 FPGAs and SoCs will be able to offer DSP datapaths providing up to 10TFLOPS of single-precision floating-point performance.

There are two important things to note here. First, this capability is not going to be limited to a subset of devices. These hardened floating-point DSP blocks are going to be in every member of the 20nm Arria 10 and 14nm Stratix 10 families. Second, the floating-point DSP blocks are backwards-compatible with existing designs. Users can configure each block to run in any of its three modes (18bit fixed-point, 27bit fixed-point, or IEEE-compliant single-precision floating-point).

As I mentioned earlier, this is a real game-changer. The higher performance and lower power consumption provided by hardened floating-point functions make them attractive for floating-point applications in all five military domains (air, land, sea, space, and cyber). Similarly, this capability is of tremendous interest in the commercial world for compute and storage applications, including oil and gas (seismic calculations), data centres (search and analytics), security (facial recognition, artificial intelligence), finance (risk analysis, best price algorithms, real-time hedging valuation), research (bioinformatics, quantum chemistry, life sciences), manufacturing and industrial control (mould flow, fluid dynamics, structural mechanics), and the list goes on.

Another huge consideration is the time-to-market advantages that ensue from using hardened floating-point DSP blocks.

Hard-core floating point DSP blocks

The original design flow that involved creating the design and verifying the algorithms in floating-point and then translating them to fixed-point was laborious and time-consuming. Implementing "soft" floating point as a mix of hardened multipliers and programmable fabric improved the situation to some extent, but the results were less than ideal in terms of performance and power consumption.

Now, the ability to create the design using floating-point and then directly implement that design in hardened floating-point DSP blocks inside the FPGA promises to dramatically reduce development time and cost (Altera is saying this new flow can save six to 12 months on a complex design).

Of course, we've only scratched the surface of this topic here. For more information, please bounce on over to Altera's website at Altera.com.



