Modelling hardware

Modern processor components such as caches and pipelines complicate the WCET calculation, as the execution time of even a single instruction can vary greatly depending on the execution history.
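A minimal sketch of this effect, assuming an x86-64 machine and GCC or Clang (it uses the __rdtsc() intrinsic from <x86intrin.h>; the code and its figures are illustrative, not from the original text). The same load instruction is timed twice, once after its cache line has been evicted and once immediately afterwards:

    #include <stdio.h>
    #include <stdlib.h>
    #include <x86intrin.h>   /* __rdtsc(), GCC/Clang on x86-64 */

    #define BUF_SIZE (64L * 1024 * 1024)  /* far larger than any cache */

    /* Time a single load of *p in TSC cycles. rdtsc is not a serializing
     * instruction, so treat the result as indicative, not exact. */
    static unsigned long long timed_load(volatile char *p)
    {
        unsigned long long start = __rdtsc();
        (void)*p;                          /* the one load being timed */
        return __rdtsc() - start;
    }

    int main(void)
    {
        volatile char *buf = malloc(BUF_SIZE);
        if (!buf) return 1;

        /* Walk the whole buffer so buf[0] gets evicted from every cache level. */
        for (long i = 0; i < BUF_SIZE; i++) buf[i] = 1;

        unsigned long long cold = timed_load(&buf[0]);  /* likely a miss */
        unsigned long long warm = timed_load(&buf[0]);  /* likely a hit  */

        printf("same load: %llu cycles cold, %llu cycles warm\n", cold, warm);
        free((void *)buf);
        return 0;
    }

On typical desktop hardware the two printed numbers differ by an order of magnitude or more, even though the instruction being timed is identical. Only the execution history differs.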

A cache miss is not always the worst-case scenario.

Starting with a completely empty cache is not always the worst-case scenario.

Domino effects and timing anomalies have to be accounted for, with each processor presenting its very own set of peculiar challenges.

Many traditional analysis methods are thus deliberately naïve, in that they do not consider cache and pipeline behavior at all. Instead, such methods try to provide safe results by systematically overestimating the WCET.

However, in doing so, they cannot keep the overestimation small. To remain safe without modelling the hardware, they have to overestimate the actual WCET by orders of magnitude.

Even the most basic task imaginable can take 3 clock cycles to execute, or it can take 300, depending on the cache and pipeline state it encounters.

In the context of your specific application, it may well always take 3. But without exhaustively analyzing all possible cache and pipeline states on all possible execution paths, you have no way of knowing that. Using the naïve method, you have to assume 300.
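A toy sketch of the arithmetic behind that, continuing in C. The figures (3 and 300 cycles, 100 accesses) echo the example above and are illustrative assumptions, not measurements; the point is only how billing every access at its worst conceivable latency inflates the bound:

    #include <stdio.h>

    #define MISS_CYCLES 300   /* assumed worst-case latency per access */
    #define HIT_CYCLES    3   /* latency when the line is already cached */

    int main(void)
    {
        int accesses = 100;   /* hypothetical task: 100 memory accesses */

        /* Naive bound: bill every access as a miss. */
        long naive_bound = (long)accesses * MISS_CYCLES;

        /* If, in this particular application, all but the first access
         * actually hit the cache, the real WCET is far lower. */
        long actual = MISS_CYCLES + (long)(accesses - 1) * HIT_CYCLES;

        printf("naive bound: %ld cycles, actual: %ld cycles (%.0fx overestimation)\n",
               naive_bound, actual, (double)naive_bound / (double)actual);
        return 0;
    }

Under these assumptions the naive bound of 30000 cycles exceeds the actual 597 cycles by a factor of about 50, and every one of those reserved-but-unused cycles is lost to the schedule.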

This leads to an unacceptable waste of hardware resources. Far too many cycles have to be reserved for the execution of a single task, rather than being used for additional functionality.

Useful new features cannot be implemented. They might all execute safely within the available time frame, even under the worst possible circumstances. But you do not know that.

Whenever you do add new functionality, your WCET estimate only goes up, often drastically, even if the actual WCET changes only a little, or indeed stays the same.

“The figures obtained by the traditional approach are a lot higher than those produced by the aiT-based method. Actually, the overestimation is such that the traditional figures are useless.” (Airbus France)