In the last post of this series, I wrote about basic coherent testing. In this post, I will discuss some of the nuances of the specification relative to accesses to overlapping addresses. Since multiple masters may be sharing the same location and the data could be distributed across the caches of different masters, this is an important part of the verification of a coherent system. The interconnect plays a very important role in maintaining coherency for such accesses.
The Gory Details
There are three key aspects that the interconnect should take care of relative to accesses to overlapping addresses.
Sequencing Transactions
Consider the example below:
Here, Master 1 and Master 2 want to write to the same location and store it in their local caches, at approximately the same time. For this, Master 1 and Master 2 send MakeUnique transactions (represented in the figure by 1a and 2a). For a moment, let’s consider the effect of an incorrect behavior pattern in the interconnect. Let’s say the interconnect sends MakeInvalid snoop transactions (represented by 1b and 2b) to each master, corresponding to the MakeUnique transaction it received from the other master. Once the masters respond with a snoop response (represented by 1c and 2c), the interconnect sends responses back to the masters (represented by 1d and 2d). When the transactions have completed in both Master 1 and Master 2, both masters update their cachelines to a Unique state. This violates the protocol because a cacheline can be held in a Unique state by only one master. Moreover, each master may store a different value in its local cache, with both masters incorrectly believing that they hold a unique copy of the cacheline. Clearly, the effect of not sequencing correctly is incoherency, as shown in the figure, where the two masters have two different views of the data. In order to deal with this, the specification requires that such accesses to overlapping addresses be sequenced. The specification states:
“It is the responsibility of the interconnect to ensure that there is a defined order in which transactions to the same cache line can occur, and that the defined order is the same for all components. In the case of two masters issuing transactions to the same cache line at approximately the same time, then the interconnect determines which of the transactions is sequenced first and which is sequenced last. The arbitration method used by the interconnect is not defined by the protocol. The interconnect indicates the order of transactions to the same cache line by sequencing transaction responses and snoop transactions to the masters. The ordering rules are:
• If a master issues a transaction to a cache line and it receives a snoop transaction to the same cache line before it receives a response to the transaction it has issued, then the snoop transaction is defined as ordered first.
• If a master issues a transaction to a cache line and it receives a response to the transaction before it receives a snoop transaction to the same cache line, then the transaction issued by the master is defined as ordered first.” [1]
In the above example, let us assume that the interconnect gives priority to Master 1. If so, it must send a snoop transaction (1b) to Master 2, wait for the snoop response (1c) and send the response back to Master 1 (1d). At the end of this sequence, Master 1 has its cacheline in a Unique state and may write a value into its cache. The interconnect may then sequence Master 2: it sends a snoop transaction (2b) to Master 1, which invalidates the cacheline in Master 1, waits for the snoop response (2c) and sends the response back to Master 2 (2d). At the end of this sequence, Master 1 has its cacheline invalidated and Master 2 has its cacheline allocated in a Unique state.
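To make the sequencing requirement concrete, the following is a minimal Python sketch, not an implementation of any particular interconnect or VIP, of the correct behavior described above: the interconnect handles one MakeUnique at a time, invalidating the other master before responding, and a simple check confirms that no two masters ever hold the cacheline in a Unique state at the same time. The class and state names are illustrative only.

```python
# Toy model of sequencing MakeUnique transactions to the same cacheline.
from enum import Enum

class CacheState(Enum):
    INVALID = 0
    UNIQUE = 1

class Master:
    def __init__(self, name):
        self.name = name
        self.state = CacheState.INVALID

    def snoop_make_invalid(self):
        # MakeInvalid snoop: drop the local copy before acknowledging.
        self.state = CacheState.INVALID

class ToyInterconnect:
    def __init__(self, masters):
        self.masters = masters

    def make_unique(self, requester):
        # Sequence one MakeUnique at a time: snoop and invalidate every other
        # master (1b/1c) before sending the response to the requester (1d).
        for m in self.masters:
            if m is not requester:
                m.snoop_make_invalid()
        requester.state = CacheState.UNIQUE
        self.check_single_unique()

    def check_single_unique(self):
        owners = [m.name for m in self.masters if m.state == CacheState.UNIQUE]
        assert len(owners) <= 1, f"Protocol violation: Unique held by {owners}"

m1, m2 = Master("Master 1"), Master("Master 2")
ic = ToyInterconnect([m1, m2])
ic.make_unique(m1)   # Master 1 sequenced first: m1 Unique, m2 Invalid
ic.make_unique(m2)   # Master 2 sequenced next: m1 Invalid, m2 Unique
print(m1.state, m2.state)
```

Had the two requests been handled concurrently, as in the incorrect scenario above, the check would fire because both masters would end up in the Unique state.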
Timing of Snoop Accesses Relative to Responses to Coherent Transactions
The specification lays down some rules on the ordering of responses to coherent transactions and snoop transactions to the same cacheline. These are given below:
“The interconnect must ensure the following:
• if the interconnect provides a master with a response to a transaction, it must not send that master a snoop transaction to the same cache line before it has received the associated RACK or WACK response from that master
• If the interconnect sends a snoop transaction to a master, it must not provide that master with a response to a transaction to the same cache line before it has received the associated CRRESP response from that master.”
An important point to note relative to this aspect of the protocol is that this requirement is not applicable to WriteBack and WriteClean transactions, although this is not explicitly stated in the specification. Applying the above rules to WriteBack and WriteClean transactions could lead to a deadlock. This is because a master that receives a snoop transaction to a cacheline is allowed to stall the snoop until any pending WriteBack or WriteClean transaction that it has initiated, or is about to initiate, to the same cacheline is complete. In other words, the master must be allowed to receive a response to the WriteBack or WriteClean transaction before it allows the incoming snoop to proceed (that is, before it responds to the snoop). If the above rule were applied to WriteBack or WriteClean transactions, the interconnect could not send a response to the WriteBack or WriteClean transaction because a snoop transaction has already been sent to the master, while the master in turn waits for that response before answering the snoop; neither can make progress. Therefore, it is important that this rule is not applied to WriteBack and WriteClean transactions.
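The two ordering rules, together with the WriteBack/WriteClean exemption, can be captured in a small scoreboard-style checker. The sketch below is illustrative only: the event names are mine, and it assumes at most one outstanding response and one outstanding snoop per master and cacheline, which a real checker would have to generalize.

```python
# Scoreboard-style checker for the two ordering rules, with the
# WriteBack/WriteClean exemption applied to the second rule.

WRITEBACK_TYPES = {"WriteBack", "WriteClean"}

class SnoopResponseOrderChecker:
    """Tracks, per (master, cacheline), responses awaiting RACK/WACK and
    snoops awaiting CRRESP, and records any rule violations."""

    def __init__(self):
        self.awaiting_ack = set()     # (master, line): response sent, no RACK/WACK yet
        self.awaiting_crresp = set()  # (master, line): snoop sent, no CRRESP yet
        self.errors = []

    def response_sent(self, master, line, txn_type):
        # Rule 2: no response to the same cacheline while a snoop is outstanding,
        # unless the transaction is a WriteBack/WriteClean (deadlock avoidance).
        if (master, line) in self.awaiting_crresp and txn_type not in WRITEBACK_TYPES:
            self.errors.append(f"{txn_type} response to {master}/{line} before CRRESP")
        self.awaiting_ack.add((master, line))

    def ack_received(self, master, line):          # RACK or WACK observed
        self.awaiting_ack.discard((master, line))

    def snoop_sent(self, master, line):
        # Rule 1: no snoop to the same cacheline while a response awaits RACK/WACK.
        if (master, line) in self.awaiting_ack:
            self.errors.append(f"Snoop to {master}/{line} before RACK/WACK")
        self.awaiting_crresp.add((master, line))

    def crresp_received(self, master, line):       # snoop response observed
        self.awaiting_crresp.discard((master, line))
```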
Re-fetching Data from Memory
In certain circumstances, data may have to be re-fetched from memory. For example, consider that Master 1 issues a ReadShared transaction and Master 2, which has a dirty copy of the cacheline, issues a WriteBack transaction. Let us say that the interconnect issues a read to main memory for the ReadShared transaction. After the read to main memory is complete, let us assume that the WriteBack makes progress. After this, any snoop transaction sent by the interconnect will not return data, because the WriteBack will have invalidated the cacheline in Master 2. However, if the interconnect uses the data received from the earlier read to memory, that data is stale, because the WriteBack transaction updated memory after the read was issued. It is therefore necessary to re-fetch the data from memory and use that data to respond to Master 1. How do we detect issues related to this? These can be detected through coherency checks. In the above example, the ReadShared transaction will be passed clean data, and its contents should match those of memory. If they do not, it probably means that the interconnect used stale data to respond to the ReadShared transaction.
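One way to implement such a check is to mirror main memory in the testbench and compare any clean data returned to a master against the mirror. The sketch below is a simplified illustration; the MemoryMirror class and its interface are hypothetical.

```python
# Coherency check: clean read data must match the testbench's memory mirror.

class MemoryMirror:
    def __init__(self):
        self.mem = {}                 # cacheline address -> data bytes

    def write(self, addr, data):
        # Applied whenever a WriteBack/WriteClean or other write updates memory.
        self.mem[addr] = data

    def check_clean_read(self, addr, data, passed_dirty):
        # If the master was passed dirty data from a snoop, memory may differ.
        if passed_dirty:
            return True
        expected = self.mem.get(addr)
        if expected != data:
            raise AssertionError(
                f"Clean data for line {addr:#x} does not match memory; "
                f"stale data may not have been re-fetched")
        return True

mirror = MemoryMirror()
mirror.write(0x1000, b"old data")
mirror.write(0x1000, b"new data")        # WriteBack from Master 2 updates memory
mirror.check_clean_read(0x1000, b"new data", passed_dirty=False)    # passes
# mirror.check_clean_read(0x1000, b"old data", passed_dirty=False)  # would flag stale data
```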
Testing Accesses to Overlapping Addresses
Testing all the scenarios related to accesses to overlapping addresses can be overwhelming. In a given system there are multiple ports of different interface types that can send transactions to overlapping addresses. However, not all combinations of masters accessing a given address may be valid: some masters may be allowed to access only certain address spaces, and a group of masters that shares a restricted part of the address space forms a shareability domain. Add to this the fact that a master can initiate many different transaction types, with different initial states for the cacheline at a given address. The power of randomization and configuration-aware sequences can meet these requirements. A sequence that tests this could, for example, pick a cacheline address that is legal for a shareability domain, have each master in that domain issue a randomly chosen coherent transaction to that address from a randomly chosen initial cacheline state, and launch those transactions at approximately the same time, as in the sketch below.
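Here is a minimal Python sketch of such a configuration-aware randomized sequence. The shareability-domain table, master names, transaction list and legality filter are assumptions made for illustration; a real sequence would be constrained by the actual system configuration and would drive the VIP's sequence API rather than print scenarios.

```python
import random

# Hypothetical system configuration: which masters share which address range.
SHAREABILITY_DOMAINS = {
    "domain0": {"masters": ["cpu0", "cpu1"], "addr_range": (0x0000, 0x8000)},
    "domain1": {"masters": ["cpu2", "dma0"], "addr_range": (0x8000, 0x10000)},
}
COHERENT_TXNS = ["ReadShared", "ReadUnique", "MakeUnique", "CleanUnique",
                 "WriteBack", "WriteClean"]
INITIAL_STATES = ["Invalid", "SharedClean", "UniqueClean", "UniqueDirty"]

def legal_txns(initial_state):
    # A WriteBack/WriteClean only makes sense when the master holds dirty data.
    if initial_state == "UniqueDirty":
        return COHERENT_TXNS
    return [t for t in COHERENT_TXNS if t not in ("WriteBack", "WriteClean")]

def gen_overlapping_scenarios(num_scenarios=5, line_size=64):
    scenarios = []
    for _ in range(num_scenarios):
        # Pick a shareability domain, then one cacheline legal for that domain.
        domain = random.choice(list(SHAREABILITY_DOMAINS.values()))
        lo, hi = domain["addr_range"]
        line = random.randrange(lo, hi, line_size)
        # Every master in the domain targets the same cacheline at roughly the
        # same time, each from a random initial state with a legal transaction.
        accesses = []
        for master in domain["masters"]:
            state = random.choice(INITIAL_STATES)
            accesses.append({"master": master, "addr": line,
                             "initial_state": state,
                             "txn": random.choice(legal_txns(state))})
        scenarios.append(accesses)
    return scenarios

for scenario in gen_overlapping_scenarios(2):
    print(scenario)
```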
Key Verification Points
All the verification points mentioned in the previous blog are applicable here as well. In addition to this, the following need to be checked:
• Transactions to the same cacheline are sequenced, all components observe the same order, and at no point do two masters hold the same cacheline in a Unique state.
• The interconnect does not send a snoop transaction to a master for a cacheline before receiving the RACK or WACK for a response it has already given to that master for the same cacheline, and does not send a response before receiving the CRRESP for a snoop it has already sent, with WriteBack and WriteClean transactions exempted as described above.
• Clean data returned to a master matches the contents of memory, which catches cases where the interconnect failed to re-fetch data from memory.
In this post, I have described the testing strategy and the key aspects of testing relative to accesses to overlapping addresses. In the next post I will write about testing of Barrier and DVM transactions.
Here’s where you can find more information on Verification IP for AMBA 4 AXI.
Authored by Ray Varghese