
Selecting, integrating IP cores

15 Dec 2014  | Ali O. Ors, Daniel Reader


Companies designing new system-on-chip (SoC) products face ongoing market pressure to do more with less and achieve higher returns. The result is shrinking engineering teams, reduced design-tool budgets and shortened timelines for getting new products to market. This has led companies designing complex SoCs to move increasingly towards licensing IP cores for the majority of the building blocks in their designs, rather than building custom versions in-house. Selecting the right IP cores is the fundamental challenge of this developing paradigm, and the means of evaluating and presenting those cores matters as much to the purchaser as to the developer.

The reality is that IP cores are offered with a huge variety of features and options. And even once you've sorted through the catalogue of potential vendors and products, there is still a vast range in IP quality. The trick is to separate IP that is truly robust and capable, proven in real-world use by a wide and active base of successful users, from IP that is buggy and insufficiently tested.

Embedded vision is a field where use cases are still developing, and many teams won't know their real needs until a design project is well underway. When it comes to vision processing, CogniVue focuses not only on offering the highest-quality IP, but also on ensuring it meets the needs of the widest possible range of applications, both today and tomorrow. Such applications include small, smart cameras that see and react to the world around them, cars that see and avoid accidents, cameras on TVs that recognise faces and gestures, and smart phones that see and give an augmented view of the world around them. Enabling this new world of embedded vision technologies requires a new approach to selecting and integrating IP.



Figure 1: Example of vision-enabled SoC architecture with a CogniVue APEX2-642 Core.


CogniVue's APEX image cognition processing core (figure 1) is designed for pipelining embedded image- and vision-processing algorithms. The Image Cognition Processor (ICP) is in production and used in many applications, including automotive cameras such as those built by Zorg Industries for AudioVox, as well as new wearable consumer products like the NEO.1 smart pen from NeoLAB Convergence Inc., shown in figure 2. The advantages of a processor designed specifically for image processing explain the drive to incorporate such IP in these types of consumer applications. For example, the APEX core is touted to deliver 100x better performance per area per power for vision processing compared with conventional processor architectures. In the NEO.1 it provides processing at 120 frames per second while maintaining very low power dissipation, allowing this battery-powered device to last for many days on a single charge.



Figure 2: The CogniVue APEX core powers the NeoLAB Convergence Inc. NEO.1 Smart Pen.


This kind of performance is enabled both by fundamental knowledge of image-processing requirements and by an exhaustive testing and demonstration approach that targets customer needs within their industrial landscape. Before any core is delivered, extensive validation is needed, especially in markets such as automotive, where compliance with industry safety standards (e.g. ISO 26262, "Road vehicles – Functional safety") is required.


Evaluating IP
Although testing is necessitated by such requirements, IP companies have an additional motivation to provide validation and evaluation platforms: beyond demonstrating functionality and compliance, these platforms can perform at levels that highlight the IP's true value to prospective customers.

As an example of this motivation, consider that it is relatively easy to create vision IP that performs well for narrow, currently known target applications. Building vision usefulness and flexibility into the technology from the ground up, however, is what ensures the IP can perform at the highest levels across multiple applications. And talk is cheap: the IP's quality and fit for the application may not be apparent without a real-world, "eyes-on" demonstration to prove out its capabilities.


