
Prototypes come of age

01 Jun 2012  | Brian Bailey


Over the past few years, interest in prototyping electronic designs has grown. The rising size and complexity of systems, together with the limitations of relying on a single-purpose model (the hardware-design model), have fueled this growth. Engineers have traditionally written this model at the RTL (register-transfer level) and then performed a series of refinement steps until it becomes the implementation model. In the past, this single-purpose model found use only in hardware design, although engineers are now considering it for other purposes. The EDA industry has developed tools, such as equivalence checkers, to ensure that the functionality of the evolving model stays consistent through these transformations. In an ideal world, all modifications would be made to the single model at the start of the chain and would propagate through it. In the real world, however, engineers change the derived models directly, potentially causing the final implementation to diverge from the original high-level model. This divergence has always been regarded as a risk in the development cycle, and the risk increases as additional independent models are generated throughout the flow.
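
To make the contract concrete, here is a toy C++ illustration of what an equivalence check establishes: a behavioral adder and a refined ripple-carry version are compared over the full input space. Real equivalence checkers reason formally rather than by simulation; this sketch only illustrates the property they enforce between a model and its refinement.

```cpp
#include <cstdint>
#include <cstdio>

// Abstract "golden" model: behavioral description of an 8-bit adder.
uint8_t add_behavioral(uint8_t a, uint8_t b) {
    return static_cast<uint8_t>(a + b);
}

// "Refined" model: a ripple-carry structure, the kind of detail a
// refinement step might introduce on the way to an implementation.
uint8_t add_ripple(uint8_t a, uint8_t b) {
    uint8_t sum = 0, carry = 0;
    for (int i = 0; i < 8; ++i) {
        uint8_t ai = (a >> i) & 1, bi = (b >> i) & 1;
        uint8_t s  = ai ^ bi ^ carry;
        carry      = (ai & bi) | (carry & (ai ^ bi));
        sum |= static_cast<uint8_t>(s << i);
    }
    return sum;
}

int main() {
    // Exhaustive comparison is feasible here only because the input
    // space is tiny; production tools prove equivalence formally.
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            if (add_behavioral(a, b) != add_ripple(a, b)) {
                std::printf("Mismatch: %d + %d\n", a, b);
                return 1;
            }
    std::puts("Models are equivalent over the full input space.");
    return 0;
}
```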

The single hardware-design model can no longer provide all of the functions users demand. The rapid growth of software content means that software development, debugging, and integration cannot wait until first silicon comes back from the fab. The RTL model is too slow to perform these tasks effectively because it carries implementation detail that software execution does not require. Engineers have used emulation to speed up the RTL model, but this approach is still too slow, and it is often too expensive for manufacturers to make available to software-development teams. Engineers need faster and cheaper prototypes that are available much earlier in the design flow.

Time for change
Among the changes now taking place in this area is the migration to higher levels of abstraction for hardware design. Driving this change is a desire for greater productivity in developing the hardware that provides a product's unique value. The ability to derive several implementations from a single high-level description is also desirable: once a block has been developed and verified at an abstract level, it can be used to generate several microarchitectures or to target several implementation technologies, each with characteristics such as low power or small size. High-level synthesis is helping in this area. As a by-product of high-level synthesis, verification teams have discovered that their process can become more efficient when they use abstract models. And as systems become more concurrent, it is no longer possible to analyze the throughput, latency, or other aspects of a system architecture using static methods, so the role of verification is expanding to include system-level concepts.
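
As a hedged illustration of the single-source idea, the C++ sketch below describes a four-tap FIR filter once; the commented directives (names vary by tool and are shown only as examples) suggest how a high-level-synthesis tool could map the same loop to a fast, large unrolled microarchitecture or to a small, time-shared one.

```cpp
#include <array>
#include <cstdio>

// A single abstract description of a 4-tap FIR filter. An HLS tool
// could map this one loop to different microarchitectures: fully
// unrolled with four multipliers for throughput, or a single shared
// multiplier for minimal area. Directive spellings are illustrative.
constexpr std::array<int, 4> kTaps = {3, -1, 4, 2};

int fir(const std::array<int, 4>& window) {
    int acc = 0;
    // #pragma HLS unroll        <- one multiplier per tap (fast, large)
    // #pragma HLS pipeline II=4 <- one shared multiplier  (slow, small)
    for (std::size_t i = 0; i < kTaps.size(); ++i)
        acc += kTaps[i] * window[i];
    return acc;
}

int main() {
    std::array<int, 4> window = {1, 2, 3, 4};
    std::printf("fir = %d\n", fir(window));  // 3*1 - 1*2 + 4*3 + 2*4 = 21
    return 0;
}
```

Either mapping is functionally identical to this description, which is what lets one verified source yield several implementations.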

Another change affecting hardware- and software-development teams is that more functionality is being provided through reuse, either of internal blocks or of blocks acquired from third parties. The size, sophistication, and number of these blocks are increasing, and their verification differs from the approach used for new blocks. Many hardware blocks now come with sophisticated software stacks, which must also be integrated into the software flow. These changes have led many development teams to conclude that they must develop additional models so that aspects of a system's design or verification that do not lie on the hardware-development path can still be performed. This new approach in turn signals a radical change in the makeup of a development team. Hardware development is no longer the central hub around which everything else revolves. Instead, the system becomes the focus, and hardware development becomes one of the contributors. The prototype becomes the way in which contributors share information.

Roles of a prototype
The first step in choosing a prototype is to understand the needs and expectations associated with it. Who will create the prototype? When will it be created? For what purpose is it needed? The prototype's costs, responsibilities, maintenance, verification, and value should be determined up front; otherwise, it will not receive the necessary time and attention. Teams that see a prototype as an unnecessary distraction are setting themselves up for failure, often because the developer of the model and its user sit in different groups: one bears the cost, and the other receives the value. This situation requires management to take a systems approach to budgeting and staffing rather than viewing each discipline separately.

Consider prototypes before, during, and after the RTL phase of the hardware implementation. Virtual prototypes are those prototypes that are created before RTL; rapid prototypes are those created during RTL development; and physical, or silicon, prototypes are those systems that emerge after the availability of first working silicon (Figure 1).

The increased use of IP (intellectual property), the need for additional productivity, increasing verification complexity, and the desire for microarchitectural exploration are some of the changes affecting hardware developers. Abstraction is important for productivity and exploration, but cycle accuracy is still important for detailed verification. Meanwhile, software development, debugging, and bring-up were traditionally performed when first silicon came back from the fab, placing software squarely on the critical path. When problems were discovered in the hardware or in the software interface, it was usually the software that was expected to change. This situation is no longer acceptable. The earlier software development can begin, the more likely developers are to succeed. Software requires speed, and silicon and virtual prototypes are the ideal platforms for this task. However, when detailed visibility into both hardware and software is required, such as when working on device drivers or diagnostics, rapid prototyping provides the necessary features.

The inefficiency and growing ineffectiveness of constrained-random verification techniques have added to the pressure on verification, a critical-path function in hardware development. Verification now requires longer and more complex sequences because of both growing system complexity and increasingly large data packets (audio, pictures, low-resolution video, high-definition streaming video over wireless). Abstraction is a key element in speeding the execution of these test cases, but cycle accuracy for sign-off verification cannot be ignored. Further, most hardware developers are becoming system architects: those not working on the implementation of high-value blocks are stitching together IP blocks, making sure that data can move around the system in a timely manner, and looking for ways to reduce power consumption.
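
The following minimal C++ sketch, built around a hypothetical bus-transaction type, shows the flavor of constrained-random stimulus: each field is drawn at random subject to simple constraints, and every generated transaction must then be driven cycle by cycle into the RTL, which is why long sequences translate directly into long simulation runs.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>

// A hypothetical bus transaction for a constrained-random testbench.
struct Txn {
    uint32_t addr;
    uint8_t  len;    // burst length in beats
    bool     write;
};

// Draw a transaction subject to constraints: addresses stay in a 64KB
// window and are word-aligned, bursts run 1..16 beats, writes dominate 3:1.
Txn random_txn(std::mt19937& rng) {
    std::uniform_int_distribution<uint32_t> addr(0x0000, 0xFFFF);
    std::uniform_int_distribution<int>      len(1, 16);
    std::bernoulli_distribution             write(0.75);
    return Txn{addr(rng) & ~3u,
               static_cast<uint8_t>(len(rng)),
               write(rng)};
}

int main() {
    std::mt19937 rng(42);  // fixed seed for reproducible regressions
    // Each transaction is cheap to generate but must be simulated
    // cycle by cycle, so millions of them mean multiweek runs.
    for (int i = 0; i < 5; ++i) {
        Txn t = random_txn(rng);
        std::printf("%s addr=0x%04X len=%u\n",
                    t.write ? "WR" : "RD", t.addr,
                    static_cast<unsigned>(t.len));
    }
    return 0;
}
```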

Prototypes available
Many development teams plan for more than one chip spin because they may want to provide a platform for software bring-up while the hardware is being optimized. Alternatively, they may use a previous generation of the chip with surrounding hardware that enables the addition of new capabilities, perhaps mapping those capabilities into FPGAs. Typically, a custom board accommodates all of the necessary hardware, ensuring the maximum possible execution speed.

The price of developing this type of prototype includes the board cost. If the prototype uses new silicon, it becomes available late in the development cycle, and, in most cases, problems that developers find during software integration cannot influence the hardware. If the prototype is based on earlier silicon, it may be available earlier in the flow, so software can influence the current design. These prototypes generally execute the fastest, but they offer limited visibility and controllability; when problems are found, they can be difficult to diagnose or reproduce. The hardware group is the primary developer, and the work may require unique skills, such as board design. The software group is the primary user. The cost of creating additional instances is relatively small and incremental.

Rapid prototyping is the most mature of these approaches. Hardware, verification, and firmware teams may all use these prototypes, which are cycle-accurate and execute faster than a logic simulator. They divide coarsely into emulators and FPGA prototypes, even though a continuum exists between the categories; confusion arises because some emulators employ FPGAs, and others don't. To decide between them, weigh how much work the user must do against what the tool flow automates, and how much visibility the tool provides into the hardware. Emulators place an RTL design on the hardware with minimal user intervention and provide controllability and visibility similar to a simulator's. FPGA prototypes usually require more help from the user, which may include modifying the RTL, partitioning, mapping, and designing and building a board on which to mount the FPGAs (Figure 2). Emulators are more expensive than FPGA prototypes.

The hardware team usually develops these prototypes, initially for its own benefit. Verification often requires rapid prototypes because of the length of simulation runs: an emulator can turn a multiweek simulation run into minutes, and an FPGA prototype is even faster. However, an emulator still executes at a fraction of real-time speed and is often too slow for software development, except for low-level firmware. Replication cost is high for emulation. FPGA prototypes can be inexpensive if developers use custom boards, but, even after the initial design costs are amortized, they still cost more than silicon prototypes.

Engineers model the emerging family of virtual prototypes at a higher level of abstraction. These prototypes are not cycle-accurate, because cycle-level timing has not yet been defined at this stage of design, and they need not model all of the functions. The applications for virtual prototypes are software development, hardware-architecture exploration, and verification. An architectural prototype is usually the least complete of the prototypes; in some cases, it models only the bus or interconnect infrastructure, the memory subsystem, and generic computation blocks. Traffic generators often feed this prototype to reveal how well data moves through the system, to identify bottlenecks, and to compute throughput requirements for the components. It is rare for this prototype to be used for other purposes.
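
A minimal sketch of this style of analysis, in plain C++ rather than any particular modeling library: two traffic generators inject words into a shared bus that moves one word per cycle, and the measured queueing delay shows whether the interconnect becomes a bottleneck at the offered load.

```cpp
#include <cstdio>
#include <queue>

// Illustrative architectural model: two generators share one bus with
// a capacity of one word per cycle. Queueing delay exposes bottlenecks.
int main() {
    const int cycles = 100000;
    const int period_a = 3, period_b = 4;  // injection intervals, in cycles
    std::queue<int> bus_queue;             // injection times of waiting words
    long long total_wait = 0, words = 0;

    for (int t = 0; t < cycles; ++t) {
        if (t % period_a == 0) bus_queue.push(t);  // generator A injects
        if (t % period_b == 0) bus_queue.push(t);  // generator B injects
        if (!bus_queue.empty()) {                  // bus serves 1 word/cycle
            total_wait += t - bus_queue.front();
            bus_queue.pop();
            ++words;
        }
    }
    // Offered load = 1/3 + 1/4, or roughly 0.58 words/cycle; below the
    // bus capacity of 1.0, so latency stays bounded. Push the combined
    // rate past 1.0 and the queue, i.e. the bottleneck, grows without limit.
    std::printf("avg wait = %.2f cycles over %lld words\n",
                static_cast<double>(total_wait) / words, words);
    return 0;
}
```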

When high-level synthesis debuted, it represented an attempt to increase the productivity of the design team. The secondary benefit was an increase in execution speed for verification; the measured productivity gain for the verification team has, however, proved larger than the gain the design team experiences. In some companies, the verification team develops the entire prototype and uses it as the comparison model for the hardware. Other teams may then use the same model for software development, and it will remain in sync with the hardware.
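
The sketch below suggests how such a comparison model can be used, with both sides reduced to simple C++ functions for illustration: the same random stimulus drives the abstract model and a stand-in for the implementation, and a checker flags any mismatch. In a real flow, the second call would be a simulation of the RTL rather than another function.

```cpp
#include <cstdint>
#include <cstdio>
#include <random>

// Abstract model developed by the verification team.
uint32_t model_abstract(uint32_t x) { return x * 3 + 7; }

// Stand-in for the hardware implementation under test. Here it is just
// a differently written function; in practice it would be the RTL.
uint32_t model_impl(uint32_t x) { return (x << 1) + x + 7; }

int main() {
    std::mt19937 rng(7);
    std::uniform_int_distribution<uint32_t> dist;  // full 32-bit range
    // Drive both models with identical stimulus and compare outputs.
    for (int i = 0; i < 1000000; ++i) {
        uint32_t x = dist(rng);
        if (model_abstract(x) != model_impl(x)) {
            std::printf("Mismatch at x=%u\n", x);
            return 1;
        }
    }
    std::puts("Implementation matches the comparison model on all samples.");
    return 0;
}
```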

Another prototype targets software development, and two variants exist. One can run the same object code as the final target, whereas the second, an SDK (software-development kit), requires recompilation. Developers use SDKs when they do not need to understand anything about the hardware. An example would be an SDK for an iPhone, which allows an application developer to write and debug software without executing it on the device (Figure 3). Within the context of chip development, it is more likely for a developer to construct a binary-compatible model, which may also have built-in hardware- and software-debugging capabilities.

The biggest problem today with the construction of virtual prototypes is model availability. Many suppliers of IP do not yet ship abstract models, and, when blocks are reused internally, abstract models for them must be created. This step adds to the time and effort necessary to create the prototype.

Almost all emerging design flows employ one or more prototypes, some of which may connect to the hardware-development flow; some, to the verification flow; and some, to software development. Development of these prototypes is key to system-level success, and this success requires a change in team dynamics so that everyone is working toward the same goal.



