The recent multi-core revolution triggered a series of technological challenges involving a wide spectrum of computing applications. Since clock speeds cannot be pushed beyond last decade's standards, the larger number of transistors made available by modern integration technologies is being exploited by integrating multiple cores on the same chip. Embedded systems are not immune to this trend: several commercial-off-the-shelf (COTS) platforms are available for the embedded market, integrating a multi-core host processor with one or more parallel acceleration devices. These heterogeneous computing systems achieve higher performance with reduced power consumption by operating a larger number of cores at lower clock frequencies.
A wide variety of new applications is being developed to exploit the immense computing capabilities of such platforms, concurrently running multiple parallel tasks on the available cores. This is opening up a series of technological challenges in the real-time and embedded computing market, ranging from the parallelization of existing applications, to the simultaneous processing of multiple sensor data streams, to the need for predictable timing guarantees for applications requiring prompt interaction with the user or the environment.
Consider a driverless car. Current prototypes require processing sets of sensing data coming from different
acquisition devices, like cameras, radars, lidars, GPS, etc. These data are processed to derive a common
understanding of the surrounding environment (sensor fusion) in order to take a decision that will be
enforced by the system actuators. If such a decision is not taken in time, the car “wouldn’t know what to do”.
There is no safe state to enter when the computation of critical activities does not end before a well-defined
deadline. Breaking, stopping, pulling over the car, or simply not doing anything, may all be wrong solutions,
considerably endangering the safety of passengers and pedestrians.
A provisional (but unsafe) solution to the problem in existing prototypes is to over-provision the computing
resources. However, this approach has many drawbacks that affect its practical market viability:
- Since the computing frequency cannot be further increased due to power and thermal constraints, over-provisioning requires increasing the number of processors and statically partitioning the different modules across processors/platforms, significantly increasing hardware and development costs.
- There is no guarantee that over-provisioning the computing resources is effective in limiting the worst-case response time of critical tasks. Indeed, the real-time research community has documented many cases in which increasing the number of processing units even increases the worst-case response time, due to additional interference and overhead.
- The hardware resources may end up being largely underutilized, so that expensive platforms are used
only to deal with corner cases, with a significant impact on the final price.
- Powering up multiple processors and boards significantly increases the overall power consumption,
requiring larger and more expensive power sources.
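The counterintuitive effect mentioned above — that adding processors can worsen timing — has been known since Graham's work on scheduling anomalies. As a purely illustrative sketch (not part of the project itself), the following simulation applies greedy list scheduling to a classic anomaly instance, a small task graph with two precedence chains; the task durations and dependencies below are an assumed textbook-style example chosen to exhibit the effect. Moving from three to four processors increases the completion time of the whole task set:

```python
def list_schedule(durations, preds, priority, m):
    """Greedy list scheduling: an idle processor always grabs the
    highest-priority ready task (one whose predecessors all finished)."""
    n = len(durations)
    finish = {}                 # task index -> completion time
    busy_until = [0] * m        # time at which each processor becomes idle
    scheduled = set()
    t = 0
    while len(scheduled) < n:
        # tasks ready at time t, in priority order
        ready = [j for j in priority
                 if j not in scheduled
                 and all(finish.get(p, float("inf")) <= t
                         for p in preds.get(j, []))]
        idle = [p for p in range(m) if busy_until[p] <= t]
        while ready and idle:
            j, p = ready.pop(0), idle.pop(0)
            finish[j] = t + durations[j]
            busy_until[p] = finish[j]
            scheduled.add(j)
        if len(scheduled) < n:
            # advance to the next processor-release event
            t = min(b for b in busy_until if b > t)
    return max(finish.values())  # makespan

# Illustrative anomaly instance: T9 depends on T1; T5..T8 depend on T4.
durations = [3, 2, 2, 2, 4, 4, 4, 4, 9]           # T1..T9
preds = {8: [0], 4: [3], 5: [3], 6: [3], 7: [3]}  # 0-based indices
priority = list(range(9))

print(list_schedule(durations, preds, priority, 3))  # → 12
print(list_schedule(durations, preds, priority, 4))  # → 15 (worse!)
```

With three processors the long task T9 starts as soon as its predecessor finishes; with four, the extra processor lets the shorter tasks jump ahead in the list, delaying T9 and stretching the overall schedule from 12 to 15 time units.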
Resource over-provisioning is therefore significantly limiting the ability of these applications to become marketable products. For example, the available prototypes of autonomous cars (see, e.g., the Google Car, the VisLab PROUD car, etc.) integrate multiple expensive general-purpose servers, with a power consumption in the kilowatts, powered by heavy batteries that occupy a large share of the trunk. Less invasive and power-hungry solutions (e.g., the ADAS systems integrated in high-end Mercedes, Audi, Toyota, etc.) only provide reduced processing capabilities with limited driving-assistance functionalities (e.g., cross-line detection, parking assistance, etc.), and cannot safely replace human intervention. Similar examples are found in the avionic domain, industrial automation, and robotics.

Therefore, there is strong pressure on real-time application developers to adapt their systems to next-generation COTS platforms. This would satisfy the increasing demand for computing power of upcoming time-sensitive applications, which require higher performance within a bounded power consumption and with predictable response times. Failing to address this target would relegate real-time systems to a niche market with reduced performance and stagnating execution speeds.
This project aims at clearing the path towards next-generation real-time applications, removing the technological obstacles that have so far prevented real-time systems from exploiting the performance boost offered by modern parallel computing platforms.