
Add vision capabilities to embedded systems

04 Dec 2012  | Jeff Bier


However, the PC is not an ideal platform for implementing most embedded vision systems. Although some applications can be implemented on an embedded PC (a more compact, lower-power cousin to the standard PC), many cannot, due to cost, size, and power considerations. In addition, PCs lack sufficient performance for many real-time vision applications.

And, unfortunately, many of the same tools and libraries that make it easy to develop vision algorithms and applications on the PC also make it difficult to create efficient embedded implementations. For example, vision libraries intended for algorithm development and prototyping often do not lend themselves to efficient embedded implementation.

OpenCV is a free, open-source computer vision software library for personal computers, comprising over two thousand algorithms [3]. Originally developed by Intel, it is now maintained by Willow Garage. The OpenCV library, used along with Bradski and Kaehler's book, is a great way to begin experimenting with computer vision quickly.

However, OpenCV is not a solution to all vision problems. Some OpenCV functions work better than others. And OpenCV is a library, not a standard, so there is no guarantee that it functions identically on different platforms. In its current form, OpenCV is not particularly well suited to embedded implementation. Ports of OpenCV to non-PC platforms have been made, and more are underway, but there's currently little coherence to these efforts.

Some promising developments
While embedded vision development is challenging, some promising recent industry developments suggest that it is getting easier. For example, the Microsoft Kinect is becoming very popular for vision development. Soon after its release in late 2010, the API for the Kinect was reverse-engineered, enabling engineers to use the Kinect with hosts other than the Xbox 360 game console. The Kinect has been used with PCs and with embedded platforms such as the Beagle Board.

The XIMEA Currera integrates an embedded PC in a camera. It's not suitable for low-cost, low-power applications, but can be a good fit for low-volume applications like manufacturing inspection.

Several embedded processor vendors have begun to recognise the magnitude of the opportunity for embedded vision, and are developing processors specifically targeted at embedded vision applications. In addition, smartphones and tablets have the potential to become effective embedded vision platforms. Application software platforms are emerging for certain embedded vision applications, such as augmented reality and gesture-based UIs. Such software platforms simplify embedded vision application development by providing many of the utility functions these applications commonly require.

With embedded vision, the industry is entering a "virtuous circle" of the sort that has characterised many other digital signal processing application domains. Although there are few chips dedicated to embedded vision applications today, these applications are increasingly adopting high-performance, cost-effective processing chips developed for other applications, including DSPs, CPUs, FPGAs, and GPUs. As these chips continue to deliver more programmable performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products. Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, efficiency, and programmability.

The author gratefully acknowledges the assistance of Shehrzad Qureshi in providing information on lens distortion correction used in this paper.

3. OpenCV: G. Bradski and A. Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library," O'Reilly, 2008.
4. MATLAB/Octave: P.I. Corke, "Machine Vision Toolbox," IEEE Robotics and Automation Magazine, 12(4), pp. 16-25, November 2005; P.D. Kovesi, "MATLAB and Octave Functions for Computer Vision and Image Processing," Centre for Exploration Targeting, School of Earth and Environment, The University of Western Australia.
5. Visym (beta):
6. "Predator" self-learning object tracking algorithm: Z. Kalal, K. Mikolajczyk, and J. Matas, "Forward-Backward Error: Automatic Detection of Tracking Failures," International Conference on Pattern Recognition, 2010, pp. 23-26.
7. Vision on GPUs: GPU4Vision project, TU Graz: http://gpu4vision
8. Lens distortion correction: Luis Alvarez, Luis Gomez and J. Rafael Sendra, "Algebraic Lens Distortion Model Estimation," Image Processing On Line, 2010. DOI: 10.5201/ipol.2010.ags-alde

About the author
Jeff Bier is founder of the Embedded Vision Alliance, an industry partnership formed to enable the market for embedded vision technology by inspiring and empowering design engineers to create more capable and responsive products through integration of vision capabilities. Jeff is also co-founder and president of Berkeley Design Technology, Inc., where he oversees BDTI's benchmarking and analysis of chips, tools, and other technology. Jeff is also a key contributor to BDTI's consulting services, which focus on product-development, marketing, and strategic advice for companies using and developing embedded digital signal processing technologies.

