Electronic control system partitioning in autonomous cars
09 Nov 2015 | James Scobie, Mark Stachew
Advanced autonomous-driving functions present a new challenge to already complex vehicle electronic systems. They drive demand for higher compute performance, more connectivity, and increased levels of safety and security, as well as new system partitioning to deliver a cost-efficient implementation.
There are many opinions on market development, with predictions of up to 75% of all vehicles being fully autonomous by 2040. Consumer acceptance and legislative governance will dictate how quickly autonomous vehicles proliferate in the market. OEMs are starting to introduce forms of autonomous operation, initially with limited applications such as intelligent cruise control, lane change assist, emergency braking, and parking assist, but we already see higher levels of autonomous control in industrial applications such as farming and mining.
For these autonomous applications, the system must monitor the external surroundings and the vehicle's driver in addition to the vehicle systems and dynamics. It has to observe multiple conditions and activities occurring around the vehicle, predict the most probable outcome of the conditions and events, and then choose the best course of action.
Each decision potentially creates a new driving scenario in addition to the dynamic external conditions and will demand a rapid response in microseconds. All of this translates into an extreme increase in workload for the electronic control systems in today's vehicles.
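That monitor, predict, decide cycle has to complete within a tight, repeating deadline. The C++ sketch below is a minimal illustration of such a loop under assumed conditions; the types and functions (SensorFrame, predictOutcomes, chooseAction) and the 10 ms period are hypothetical placeholders, not taken from any production automotive stack.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical data types: placeholders, not from any real automotive system.
struct SensorFrame { double ego_speed_mps; std::vector<double> obstacle_ranges_m; };
struct Prediction  { double collision_risk; };      // likelihood of the most probable hazard
struct Action      { double target_accel_mps2; };   // command handed to lower-level control

SensorFrame acquireSensors() {
    // A real vehicle would fuse camera, radar and lidar data here.
    return SensorFrame{20.0, {45.0, 80.0, 120.0}};
}

Prediction predictOutcomes(const SensorFrame& f) {
    // Toy heuristic: risk grows as the nearest obstacle gets closer.
    double nearest = f.obstacle_ranges_m.empty() ? 1e9 : f.obstacle_ranges_m.front();
    return Prediction{nearest < 50.0 ? 0.8 : 0.1};
}

Action chooseAction(const Prediction& p) {
    // Pick the best course of action (here: brake hard vs. hold speed).
    return Action{p.collision_risk > 0.5 ? -3.0 : 0.0};
}

int main() {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::milliseconds(10);  // assumed control-loop deadline

    for (int cycle = 0; cycle < 5; ++cycle) {
        auto deadline = clock::now() + period;
        Action a = chooseAction(predictOutcomes(acquireSensors()));
        std::cout << "cycle " << cycle << ": target accel "
                  << a.target_accel_mps2 << " m/s^2\n";
        std::this_thread::sleep_until(deadline);  // every action creates the next scenario
    }
}
```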
Autonomous passenger cars will be realized through incrementally adding assisted-drive and navigation assist features that increasingly shift control away from the driver to the vehicle. These new features will require a parallel and unprecedented increase in sensor technology, data bandwidth, processing performance, safety and security capabilities of the electronic control systems.
The initial path already taken by some car makers for the integration of assisted-drive features is that of a partial domain controller. In this approach, sensor fusion and decision-making functions are added to the existing distributed vehicle systems, connecting through a network interface.
The control system ECU functions remain largely unchanged, with new application-dedicated hardware and adaptations added so that they can cooperate with the automated-drive system. This approach is acceptable for limited automated or assisted-drive features, but it will not be adequate for a fully autonomous vehicle, where more control is directed by the system rather than the driver. Full autonomy requires decision making at the vehicle level, precise interaction between lower-level nodes, and data sharing between the sensing and control points for powertrain, steering and braking and the decision-making functions, all with fast control-loop update rates.
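As a rough illustration of the partial-domain-controller arrangement, the hypothetical C++ sketch below models an assisted-drive controller that sits alongside an otherwise unchanged ECU and communicates with it only through network frames. The frame layout, message ID and class names are invented for this example; a real system would use CAN, CAN FD or automotive Ethernet.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Invented CAN-style frame used purely for illustration.
struct NetworkFrame {
    std::uint32_t id;     // message identifier
    float         value;  // command payload (e.g. requested deceleration)
};

// Existing distributed ECU, functionally unchanged: it merely accepts
// set-points from the assisted-drive controller over the network interface.
class BrakeEcu {
public:
    static constexpr std::uint32_t kBrakeCmdId = 0x120;  // hypothetical message ID
    void onFrame(const NetworkFrame& f) {
        if (f.id == kBrakeCmdId)
            std::cout << "BrakeEcu: applying " << f.value << " m/s^2 deceleration\n";
    }
};

// New partial domain controller: fuses sensor data, makes the assisted-drive
// decision, and hands it to the legacy nodes through the existing network.
class AssistedDriveController {
public:
    std::vector<NetworkFrame> step(float nearest_obstacle_m) {
        std::vector<NetworkFrame> out;
        if (nearest_obstacle_m < 30.0f)                    // crude fused-sensor decision
            out.push_back({BrakeEcu::kBrakeCmdId, 4.0f});  // request emergency braking
        return out;
    }
};

int main() {
    AssistedDriveController controller;
    BrakeEcu brake;
    for (const auto& frame : controller.step(25.0f))  // obstacle detected at 25 m
        brake.onFrame(frame);
}
```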
Maintaining full local decision making functions in each of the distributed vehicle systems would result in an overly complicated implementation requiring extremely high communication bandwidth, and would likely have inadequate overall response time because of the need to arbitrate between independently made decisions. Since every action taken by an autonomous vehicle system results in a new scenario, the action of each control function must be precisely synchronized.
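One way to picture that synchronization requirement is a single, centrally made decision stamped with a shared activation time, so that powertrain, steering and braking act on the same cycle instead of arbitrating between independent local decisions. The sketch below is a hedged, simplified illustration of that idea; the structure and timing values are assumptions, not a specification of any real protocol.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical synchronized command set: one decision, one activation time,
// consumed by every actuation node so their actions land on the same cycle.
struct SynchronizedCommand {
    std::uint64_t activate_at_us;     // shared time base (e.g. a synchronized network clock)
    float         engine_torque_nm;
    float         steering_angle_deg;
    float         brake_decel_mps2;
};

void dispatch(const SynchronizedCommand& cmd) {
    // Each node would buffer the command and apply it when its local clock reaches
    // activate_at_us, rather than acting on an independently made local decision.
    std::cout << "apply at t=" << cmd.activate_at_us << " us: "
              << "torque " << cmd.engine_torque_nm << " Nm, "
              << "steer "  << cmd.steering_angle_deg << " deg, "
              << "brake "  << cmd.brake_decel_mps2 << " m/s^2\n";
}

int main() {
    // A single vehicle-level decision, stamped 5 ms into the future so that the
    // powertrain, steering and braking domains execute it in lock-step.
    dispatch(SynchronizedCommand{5000, 120.0f, -2.5f, 0.0f});
}
```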
Figure 1: Transformation trends in the automotive network for autonomous vehicle applications.
This massive increase in system interaction and connectivity, including the Cloud and V2X, is expected to drive a migration from today's distributed systems to a more centralized architecture. One example is a high-level vehicle controller that manages the overall autonomous strategy.
These vehicle controllers will need microprocessors with a high level of performance. In some implementations, a lower intermediate class of controller may also be included in the form of a domain controller. The centralized system will consolidate the decision-making function, defining the higher-level response of the vehicle and passing strategy information to the lower levels of the system.
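The hierarchy described above, a central vehicle controller passing strategy to intermediate domain controllers, can be sketched in a few lines of C++. The class and field names (VehicleStrategy, DomainController, planAndDistribute) are hypothetical and serve only to show the flow of a consolidated decision down to the lower levels of the system.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical high-level strategy produced by the central vehicle controller.
struct VehicleStrategy {
    double target_speed_mps;
    double lane_offset_m;   // desired lateral position within the lane
};

// Intermediate domain controller: refines the strategy into local set-points
// for the ECUs it owns, without making vehicle-level decisions itself.
class DomainController {
public:
    explicit DomainController(std::string name) : name_(std::move(name)) {}
    void applyStrategy(const VehicleStrategy& s) {
        // A real domain controller would run its own fast control loops here.
        std::cout << name_ << ": tracking " << s.target_speed_mps
                  << " m/s, lane offset " << s.lane_offset_m << " m\n";
    }
private:
    std::string name_;
};

// Central vehicle controller: consolidated decision making, with one strategy
// broadcast down to the lower levels of the system.
class VehicleController {
public:
    void addDomain(DomainController* d) { domains_.push_back(d); }
    void planAndDistribute() {
        VehicleStrategy strategy{22.0, 0.0};   // single vehicle-level decision
        for (auto* d : domains_) d->applyStrategy(strategy);
    }
private:
    std::vector<DomainController*> domains_;
};

int main() {
    DomainController powertrain{"powertrain"}, chassis{"chassis"};
    VehicleController vehicle;
    vehicle.addDomain(&powertrain);
    vehicle.addDomain(&chassis);
    vehicle.planAndDistribute();
}
```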