
Add vision capabilities to embedded systems

04 Dec 2012  | Jeff Bier

This prototype system, operating on 720p resolution video at 60 frames per second, was implemented by BDTI on a combination of a Xilinx Spartan-3A DSP 3400 FPGA and a Texas Instruments OMAP3430 CPU. The total compute load is on the order of hundreds of billions of operations per second.
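To see where that figure comes from, a rough back-of-envelope calculation helps (the operations-per-pixel figure below is an assumption for illustration, not a measured number). A 720p frame is 1280 x 720, or about 0.92 million pixels; at 60 frames per second, that is roughly 55 million pixels per second. Multi-stage vision pipelines commonly spend a few thousand operations on each pixel across all of their stages, so the total comes to one to several hundred billion operations per second, consistent with the figure above.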

Processors
As we've mentioned, vision algorithms typically require high compute performance. And, of course, embedded systems of all kinds are usually required to fit into tight cost and power consumption envelopes. In other digital-signal-processing application domains, such as digital wireless communications, chip designers achieve this challenging combination of high performance, low cost, and low power by using specialised coprocessors and accelerators to implement the most demanding processing tasks in the application. These coprocessors and accelerators are typically not programmable by the chip user, however. This trade-off is often acceptable in wireless applications, where standards mean that there is strong commonality among algorithms used by different equipment designers.

In vision applications, however, there are no standards constraining the choice of algorithms. On the contrary, there are often many approaches to choose from to solve a particular vision problem. Therefore, vision algorithms are very diverse, and tend to change fairly rapidly over time. As a result, the use of non-programmable accelerators and coprocessors is less attractive for vision applications compared to applications like digital wireless and compression-centric consumer video equipment. Achieving the combination of high performance, low cost, low power, and programmability is challenging. Special-purpose hardware typically achieves high performance at low cost, but with little programmability. General-purpose CPUs provide programmability, but with weak performance or poor cost- and energy-efficiency.

Demanding embedded vision applications most often use a combination of processing elements, which might include, for example:

 • A general-purpose CPU for heuristics, complex decision-making, network access, user interface, storage management, and overall control
 • A high-performance DSP-oriented processor for real-time, moderate-rate processing with moderately complex algorithms
 • One or more highly parallel engines for pixel-rate processing with simple algorithms
While any processor can in theory be used for embedded vision, the most promising types today are:

 • High-performance embedded CPU
 • Application-specific standard product (ASSP) in combination with a CPU
 • Graphics processing unit (GPU) with a CPU
 • DSP processor with accelerator(s) and a CPU
 • Mobile "application processor"
 • Field programmable gate array (FPGA) with a CPU
In this section, we'll briefly introduce each of these processor types and some of their key strengths and weaknesses for embedded vision applications.

High-performance embedded CPU
In many cases, embedded CPUs cannot provide enough performance—or cannot do so at acceptable price or power consumption levels—to implement demanding vision algorithms. Often, memory bandwidth is a key performance bottleneck, since vision algorithms typically use large amounts of memory bandwidth, and don't tend to repeatedly access the same data. The memory systems of embedded CPUs are not designed for these kinds of data flows. However, like most types of processors, embedded CPUs become more powerful over time, and in some cases can provide adequate performance.
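To make the memory-bandwidth point concrete, consider a simple RGB-to-grayscale conversion, sketched below in C (an illustrative example, not drawn from any particular product). Each byte is read or written exactly once and never revisited, so the caches that embedded CPUs rely on, which are built around data reuse, provide little benefit; throughput is set largely by how fast pixels can be streamed through memory.

    /* Illustrative sketch: RGB888-to-grayscale conversion. Each byte is
       touched exactly once -- a streaming access pattern with no data
       reuse, so caches help little and memory bandwidth dominates.
       At 720p/60, the source alone streams 1280*720*3*60 bytes/s,
       roughly 166 Mbytes/s, before any further processing. */
    #include <stdint.h>
    #include <stddef.h>

    void rgb_to_gray(const uint8_t *src, uint8_t *dst, size_t n_pixels)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            /* integer approximation of 0.299R + 0.587G + 0.114B */
            uint32_t r = src[3 * i + 0];
            uint32_t g = src[3 * i + 1];
            uint32_t b = src[3 * i + 2];
            dst[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
        }
    }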

And there are some compelling reasons to run vision algorithms on a CPU when possible. First, most embedded systems need a CPU for a variety of functions. If the required vision functionality can be implemented using that CPU, then the complexity of the system is reduced relative to a multi-processor solution.

In addition, most vision algorithms are initially developed on PCs using general-purpose CPUs and their associated software development tools. Similarities between PC CPUs and embedded CPUs (and their associated tools) mean that it is typically easier to create embedded implementations of vision algorithms on embedded CPUs compared to other kinds of embedded vision processors.

Embedded CPUs are also typically the easiest type of embedded vision processor to use, thanks to their relatively straightforward architectures, sophisticated tools, and other application development infrastructure, such as operating systems.

An example of an embedded CPU is the Intel Atom E660T.

ASSP in combination with a CPU
Application-specific standard products (ASSPs) are specialised, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip.

By virtue of specialisation, ASSPs typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialised coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive application software.

The specialisation that enables ASSPs to achieve strong efficiency, however, also leads to their key limitation: lack of flexibility. An ASSP designed for one application is typically not suitable for another application, even one that is related to the target application. ASSPs use unique architectures, and this can make programming them more difficult than with other kinds of processors. Indeed, some ASSPs are not user-programmable.

Another consideration is risk. ASSPs are often delivered by small suppliers, which can increase the risk of supply problems, or of the supplier failing to deliver successor products that enable system designers to upgrade their designs without starting from scratch.

An example of a vision-oriented ASSP is the PrimeSense PS1080-A2, used in the Microsoft Kinect.

GPU with a CPU
Graphics processing units (GPUs), intended mainly for 3-d graphics, are increasingly being used for other functions, including vision applications. The GPUs used in personal computers today are explicitly intended to be programmable to perform functions other than 3-d graphics. Such GPUs are termed "general-purpose GPUs" or "GPGPUs."

GPUs have massive parallel processing horsepower. They are ubiquitous in personal computers. GPU software development tools are readily and freely available, and getting started with GPGPU programming is not terribly complex. For these reasons, GPUs are often the parallel processing engine of first resort for computer vision algorithm developers who develop their algorithms on PCs, and then may need to accelerate execution of their algorithms for simulation or prototyping purposes.
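The reason vision maps so well onto GPUs is that many of its front-end operations are independent per pixel. The minimal C sketch below (an illustrative example with hypothetical function names) shows a binary threshold in which every loop iteration is independent of the others; GPGPU frameworks such as CUDA or OpenCL exploit exactly this structure by assigning each output pixel to its own thread.

    /* Illustrative sketch: per-pixel binary threshold. Every iteration
       is independent, so on a GPGPU each output pixel can be computed
       by its own hardware thread. */
    #include <stdint.h>
    #include <stddef.h>

    void threshold(const uint8_t *src, uint8_t *dst, size_t n_pixels,
                   uint8_t level)
    {
        for (size_t i = 0; i < n_pixels; i++)
            dst[i] = (src[i] >= level) ? 255 : 0;
    }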

GPUs are tightly integrated with general-purpose CPUs, sometimes on the same chip. One limitation of GPU chips, however, is the small variety of CPUs with which they are currently integrated, and the small number of CPU operating systems that support that integration.

Today there are low-cost, low-power GPUs designed for products like smartphones and tablets. However, these GPUs are generally not GPGPUs, and therefore using them for applications other than 3-d graphics is very challenging.

An example of a GPGPU used in personal computers is the NVIDIA GT240.

DSP with accelerator(s) and a CPU
Digital signal processors ("DSP processors" or "DSPs") are microprocessors specialised for signal processing algorithms and applications. This specialisation typically makes DSPs more efficient than general-purpose CPUs for the kinds of signal processing tasks that are at the heart of vision applications. In addition, DSPs are relatively mature and easy to use compared to other kinds of parallel processors.
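A 3x3 convolution, a building block of many filtering and feature-extraction stages, illustrates the kind of workload DSPs target. The hypothetical fixed-point C version below (an illustrative sketch, not vendor code) is dominated by multiply-accumulate operations, which DSP architectures accelerate with single-cycle MAC units, multiple memory banks, and zero-overhead hardware loops.

    /* Illustrative sketch: 3x3 convolution in fixed point, the kind of
       multiply-accumulate-dominated inner loop DSPs are built for.
       Border pixels are skipped for brevity; coefficients are assumed
       to be in Q8 format (scaled by 256). */
    #include <stdint.h>

    void conv3x3(const uint8_t *src, uint8_t *dst,
                 int width, int height, const int16_t k[9])
    {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                int32_t acc = 0;
                for (int j = -1; j <= 1; j++)
                    for (int i = -1; i <= 1; i++)
                        acc += k[(j + 1) * 3 + (i + 1)] *
                               src[(y + j) * width + (x + i)];
                acc = acc / 256;          /* undo Q8 coefficient scaling */
                if (acc < 0) acc = 0;     /* saturate to the 8-bit range */
                if (acc > 255) acc = 255;
                dst[y * width + x] = (uint8_t)acc;
            }
        }
    }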
