Path: EDN Asia >> Design Centre >> Test & Measurement >> The black box approach to silicon validation

The black box approach to silicon validation

19 Sep 2014  | Vera Apoorvaa


Validation is an open-ended problem, and SoC validation is no exception. The innumerable test possibilities and permutations can add up until they spill well beyond the project schedule; in some cases, the time estimated for complete coverage (all known permutations) runs into years.

This is reminiscent of a cryptographic brute-force attack: if a cipher uses a sufficiently long key, the time needed to discover the key by brute force exceeds the period for which the message is useful. We don't want to be there! We want to deliver the best silicon (defect-free) to the market, on time. A bug is more significant if it is something that could come up in real-world applications. But the tricky part is that we can never dismiss the possibility of a bug surfacing in the real world, since the customer can exercise the part in unpredictable ways. Catch-22? The question here is what to do and where to stop.

The challenge is to bring validation to closure and to be able to assure zero defects. Validation closure is accomplished by execution of a preplanned test list. It is therefore imperative that the test list be developed with maximum prudence. Below, I will share my perspective on test list development with a "black box" approach.

I will consider a controller area network (CAN) protocol block as an example to convey my approach to defining the test suite (the receiver logic alone is sufficient for this purpose). It is quite easy to imagine this block as a black box, without having to know how it is designed to do what it is defined to do!
On CAN briefly
The CAN protocol uses a broadcast mechanism, which means the receiver logic includes filters to determine whether a message on the bus should be accepted or rejected. The receiver filters messages based on the identifier field. A standard CAN frame with an 11-bit identifier has the format shown below:


SOF – Start of frame

11-bit identifier – used to filter messages at the receiver end

RTR – Remote transmit request; used when information needs to be requested from a node

IDE – Identifier extension; indicates whether the frame is a standard frame or an extended frame

r0 – Reserved bit

DLC – Data length code; indicates the number of data bytes, ranging from 0 to 8

CRC – Cyclic redundancy check

ACK – Acknowledgement

EOF – End of frame

IFS – Inter frame space
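As a minimal sketch of these fields, a hypothetical `StandardCanFrame` model (my own illustration, not part of any CAN library) can capture the receiver-relevant fields and their constraints; SOF, CRC, ACK, EOF, and IFS are bus-level framing details omitted here:

```python
from dataclasses import dataclass

@dataclass
class StandardCanFrame:
    """Hypothetical model of a standard (11-bit identifier) CAN frame.

    Only the fields relevant to receiver-side filtering and data handling
    are modelled; SOF, CRC, ACK, EOF, and IFS are bus-level details.
    """
    identifier: int      # 11-bit identifier, used for filtering at the receiver
    rtr: bool = False    # remote transmit request flag
    ide: bool = False    # False = standard frame, True = extended frame
    dlc: int = 0         # data length code, 0 to 8
    data: bytes = b""    # payload; length must equal DLC

    def __post_init__(self):
        assert 0 <= self.identifier < 2 ** 11, "identifier must fit in 11 bits"
        assert 0 <= self.dlc <= 8, "DLC must be between 0 and 8"
        assert len(self.data) == self.dlc, "payload length must match DLC"
```

The constraints in `__post_init__` mirror the field descriptions above: an 11-bit identifier and a DLC between 0 and 8.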


CAN error mechanisms: What to test?
The CAN protocol standard defines five error scenarios that can be handled by the protocol – bit error, stuff error, CRC error, form error, and ACK error.

To answer the question 'what to test?', let's consider some test categorisations:
1. Feature testing: Feature tests aim to verify that the device complies with what it claims to support. We can simply list what to test for:

Test if the DUT:

 • Filters messages based on the identifier value.
 • Receives data correctly for all lengths from 0 to 8 bytes.
 • Responds appropriately to a remote transmit request.
 • Detects each of the error types defined by the CAN standard when it occurs.
 • Receives frames correctly at all supported baud rates.
These tests are straightforward and aim to check all features supported by the block.
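As a sketch of the first feature test, assume a hypothetical mask-based acceptance filter `accepts` (the actual filter logic of a real receiver is implementation-specific); the test drives identifiers inside and outside the configured range and checks the pass/reject decision:

```python
def accepts(frame_id: int, filter_id: int, filter_mask: int) -> bool:
    """Hypothetical acceptance filter: a frame passes when the identifier
    bits selected by the mask match the configured filter identifier."""
    return (frame_id & filter_mask) == (filter_id & filter_mask)

# Feature test: filtering based on the identifier value.
FILTER_ID, FILTER_MASK = 0x120, 0x7F0   # accept identifiers 0x120-0x12F

assert accepts(0x123, FILTER_ID, FILTER_MASK)        # inside the range: accepted
assert not accepts(0x200, FILTER_ID, FILTER_MASK)    # outside the range: rejected
```

In a real flow the same check would be run against the DUT rather than a software model, but the stimulus/expectation structure is the same.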
2. Directed testing: We expect every frame on the CAN bus to be received correctly by the DUT. Consider a form error occurring on the bus, with the DUT detecting the error correctly: feature testing stops here. But we can go one step further and check whether the DUT receives a following data packet correctly. Here we are directing the test case to check for expected behaviour after an error scenario has occurred. Several such sequences of events can be identified and exercised on the DUT as part of directed testing.
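The error-then-recovery sequence above can be sketched with a toy stand-in for the DUT (a hypothetical `MockCanReceiver`, my illustration only): the directed test drives a form error followed by a normal frame and checks that the error is flagged and the later frame is still received:

```python
class MockCanReceiver:
    """Toy stand-in for the DUT receiver: counts errors and stores frames."""

    def __init__(self):
        self.error_count = 0
        self.received = []

    def drive(self, event):
        kind, payload = event
        if kind == "form_error":
            self.error_count += 1          # DUT should flag the error...
        elif kind == "frame":
            self.received.append(payload)  # ...and still accept later frames

dut = MockCanReceiver()
# Directed sequence: a form error on the bus, followed by a data frame.
dut.drive(("form_error", None))
dut.drive(("frame", b"\xAA\xBB"))

assert dut.error_count == 1
assert dut.received == [b"\xAA\xBB"]       # recovery after the error
```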
3. Random testing: Once valid transactions are identified, we can randomise the sequence of events. For example:

1. Error types (form, bit, CRC, stuff, and ACK errors);
2. Different frame types (remote and non-remote transmit frames);
3. Variable data length frames, and;
4. Frames with different identifier values.

All of these can be interleaved randomly, and at each step, the functionality of the block is verified.
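A random-test generator interleaving these four dimensions might look like the following sketch (a hypothetical `random_sequence` helper; real flows would drive each event onto the bus and check the DUT after every step). A fixed seed keeps failing sequences reproducible:

```python
import random

ERRORS = ["form", "bit", "crc", "stuff", "ack"]
FRAME_TYPES = ["data", "remote"]

def random_sequence(n, seed=0):
    """Generate n random bus events mixing error injections with frames
    of randomised type, data length, and identifier."""
    rng = random.Random(seed)          # fixed seed -> reproducible failures
    seq = []
    for _ in range(n):
        if rng.random() < 0.3:         # inject an error ~30% of the time
            seq.append(("error", rng.choice(ERRORS)))
        else:
            seq.append(("frame",
                        rng.choice(FRAME_TYPES),
                        rng.randint(0, 8),         # DLC: 0 to 8 bytes
                        rng.randrange(2 ** 11)))   # 11-bit identifier
    return seq
```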
4. Negative testing: Let us consider the basic if-else statement. Say the block we are testing does something like this:
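As a hedged illustration of the idea (a hypothetical `classify_dlc` block, not the article's actual example), a negative test deliberately drives an input outside the specified range to exercise the else branch, rather than only confirming the happy path:

```python
def classify_dlc(dlc: int) -> str:
    """Hypothetical block under test: classify a data length code."""
    if 0 <= dlc <= 8:
        return "valid"
    else:
        return "invalid"     # the branch a negative test must exercise

# Positive input exercises the if branch; the negative test feeds an
# out-of-spec value to confirm the else-branch behaviour.
assert classify_dlc(4) == "valid"
assert classify_dlc(15) == "invalid"   # DLC > 8 is out of spec
```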


