TLM-2.0.1
The definitive OSCI TLM-2.0 Language Reference Manual was released 27-July-2009 along with version TLM-
2.0.1 of the release kit. In this article we take a look at the new features of TLM-2.0.1 as documented in the
LRM.
Well, the source code has not changed much since TLM-2.0.0. There have been some bug fixes and minor enhancements, and some improvements to
the unit tests and examples, but that's about it. The TLM-2.0.1 code is backward-compatible with TLM-2.0.0 in all respects that you are likely to notice or
care about. The biggest changes are in the documentation. The User Manual from TLM-2.0.0 has been promoted to a definitive Language Reference
Manual (LRM), and now contains many more rules, descriptions and explanations. The purpose of these rules is to ensure interoperability between
TLM-2.0-compliant models. This article discusses the issues that have been clarified in the TLM-2.0 LRM, as well as presenting the minor changes in
the TLM-2.0.1 release kit.
In case you were wondering, TLM-2.0 is not a "language" as such. The "language" being referred to is SystemC. TLM-2.0 builds directly on SystemC.
TLM-2.0 is the name of the standard, whereas 2.0.0 and 2.0.1 are version numbers of the software distribution.
Most of the content of this article is rather technical and is aimed at TLM experts rather than beginners. For experts, this article adds further commentary
and explanation going beyond that contained in the TLM-2.0 Language Reference Manual.
The Base Protocol
The base protocol allows both the loosely-timed (LT) and the approximately-timed (AT) coding styles to be used, though these should be thought of as
points along a spectrum of possible coding styles allowed by the base protocol rather than being definitive in themselves.
Use of the base protocol is signified by using the name of the base protocol traits class tlm_base_protocol_types to specialize one of the two standard
socket classes tlm_initiator_socket and tlm_target_socket, for example:
tlm::tlm_initiator_socket<32,tlm::tlm_base_protocol_types> my_socket;
A traits class is a class that typically provides a set of typedefs used to specialize the behavior of another class, usually by having the traits class passed
as a template argument to that other class (as shown here). A TLM-2.0 protocol traits class must provide two typedefs tlm_payload_type and
tlm_phase_type.
Any component using the name of the traits class in this way is obliged to honor all of the rules associated with the base protocol. Most of these rules
are just conventions and are not actually enforced by the software API. Nonetheless, it is only by complying with these rules that TLM-2.0 components
are able to achieve interoperability. The base protocol rules have been considerably refined in the LRM, and some of the important clarifications are
discussed below.
The alternative is not to use the base protocol, signified by using the name of some other protocol traits class when specializing the standard sockets,
for example
tlm::tlm_initiator_socket<32,tlm::my_protocol> my_socket;
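Such a traits class need only provide the two required typedefs. A minimal sketch might look like the following (my_protocol here simply re-uses the generic payload and tlm_phase for illustration; a real user-defined protocol could substitute its own payload and phase types):

// Hypothetical protocol traits class providing the two mandatory typedefs
struct my_protocol
{
    typedef tlm::tlm_generic_payload tlm_payload_type;
    typedef tlm::tlm_phase           tlm_phase_type;
};

// Sockets on both sides of a hop must be specialized with the same traits class
tlm::tlm_initiator_socket<32, my_protocol> init_socket;
tlm::tlm_target_socket<32, my_protocol>    targ_socket;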
When creating your own protocol traits class, you make the rules associated with that protocol, and there are no a priori limitations on the form those
rules might take. For example, you could create a minor variation on the base protocol, perhaps adding an extension to the generic payload and a
couple of extra phases, or you could restrict the interface method calls through the socket to b_transport, or you could replace the generic payload with
an entirely new transaction type. You could even have a single protocol traits class that represents a whole family of related protocols, with the precise
details of the protocol being settled by negotiation at run-time. The point is that when you define the protocol traits class, you make the rules. The only
rule imposed by TLM-2.0 concerning user-defined traits classes is that you cannot directly bind together two sockets specialized with different protocol
traits classes; you have to write an adapter or bridge between them.
But having said "you make the rules", TLM-2.0 sets a clear expectation. When defining a new protocol traits class, you are strongly recommended to
stick as closely as possible to the rules of the base protocol in order to reduce the cost of achieving interoperability between different protocols. If you
need to model the specific details of a given protocol then of course your model will depart from the base protocol. But remembering that the objective is
fast simulation speed through transaction-level modeling, it will often be possible to abstract away from the details of a specific protocol and end up with
something not too far removed from the base protocol.
Think of the TLM-2.0 base protocol as being two things: it is both an abstract memory-mapped bus protocol in its own right and also an exemplar for the
transaction-level modeling of specific protocols.
In summary, if you use the base protocol, you have to keep to the rules of the base protocol, which are quite strictly defined. If you create your own
protocol you can make your own rules, but you lose the ability to bind your sockets to base protocol sockets, thereby limiting interoperability. So a
natural question to ask is whether you can maximize interoperability by mapping your particular protocol onto the base protocol, or whether you need to
define a new protocol traits class. In other words, how far can you stretch the base protocol without breaking it?
Ignorable Extensions
The base protocol can be stretched in two ways without any need to define a new traits class: by adding ignorable extensions to the generic payload,
and by adding ignorable phases to tlm_phase. An extension or extended phase is said to be ignorable if any and every component other than its
creator is permitted to behave as if the extension were absent, regardless of whether or not that or any other component recognizes or understands the
extension.
Again, there are no a priori limitations on the semantics associated with ignorable extensions. For example, an ignorable extension might modify the
behavior of a READ or WRITE command to fail if the initiator does not have the appropriate privilege level, thereby altering the behavior of the standard
generic payload command attribute. The point is that a component on the receiving end of the extended transaction must be allowed to behave as if the
extension did not exist, in other words, to ignore the extension entirely and perform a plain READ or WRITE. If the target were actually obliged to
interpret and honor the extension, a new protocol traits class would be necessary.
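As a sketch of what such an ignorable extension might look like (the privilege_extension class shown here is purely illustrative and not part of the standard):

// Hypothetical ignorable extension carrying a privilege level.
// A component that does not recognize this extension is free to ignore it
// and perform a plain READ or WRITE.
struct privilege_extension : tlm::tlm_extension<privilege_extension>
{
    unsigned int level;

    privilege_extension() : level(0) {}

    virtual tlm::tlm_extension_base* clone() const
    {
        privilege_extension* ext = new privilege_extension;
        ext->level = level;
        return ext;
    }

    virtual void copy_from(const tlm::tlm_extension_base& other)
    {
        level = static_cast<const privilege_extension&>(other).level;
    }
};

An initiator could attach the extension with trans.set_extension(&ext), or with set_auto_extension when the transaction has a memory manager, and a component that does understand the extension can retrieve it with trans.get_extension<privilege_extension>(). Any other component is free to ignore it entirely.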
One point that emerges from this discussion is that ignorable is not quite the opposite of mandatory when it comes to extensions. Rather, the two
concepts are somewhat independent. Nonetheless, a base protocol component is forbidden from generating an error response due solely to the
absence of an extension. In this sense, an ignorable extension cannot be mandatory. The base protocol does not require the existence of any
extensions.
The definition of "ignorable" applies equally to extensions and to phases. An ignorable phase can be totally ignored by a receiving component (whether
it be upstream or downstream of the sender), and there is no way (within the base protocol) for the sender to know that the phase has been ignored
other than by inferring the fact from the absence of any explicit response. On the other hand, suppose that the ignorable phase is not ignored, but instead the receiver sends back another ignorable phase in the opposite direction. In this way, using extensions and ignorable phases, a set of base protocol components could negotiate between themselves to use an extended set of phases. Such a protocol negotiation would be outside the scope of the base protocol, but at the same
time would be permitted by the base protocol just so long as any connected component is allowed to ignore all extensions and extended phases entirely
and communicate according to the strict rules of the base protocol alone. In other words, any extended base protocol component must have a fall back
mode where it reverts to the unextended behavior of the base protocol.
To maintain interoperability, there are some rules on how ignorable phases must be handled within the base protocol. A base protocol component that
does not recognize a particular extended phase must not propagate that phase (i.e. pass it further along the call chain between initiator and target)
except where that component is a so-called transparent component, meaning that it is a dumb checker or monitor component with one target socket and
one initiator socket that simply passes on all transactions. Transparent components are expected to propagate every phase immediately, ignorable
phases included. There is a further base protocol restriction that ignorable phases must not be sent before BEGIN_REQ or after END_RESP, that is,
outside the normal lifetime of an unextended base protocol transaction.
As usual, any deviation from these interoperability rules would require you to define a new protocol traits class and use it to specialize your initiator and
target sockets.
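As a concrete illustration, an ignorable extended phase could be declared using the standard DECLARE_EXTENDED_PHASE macro (the phase name below is just for illustration):

// Declares an extended phase object BEGIN_DATA that extends tlm_phase.
// A base protocol component that does not recognize it may ignore it,
// and must not propagate it unless it is a transparent component.
DECLARE_EXTENDED_PHASE(BEGIN_DATA);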
Modifying Generic Payload Attributes
The base protocol also has strict rules concerning which component is permitted to modify which generic payload attribute (extensions excluded). In summary:
Initiator sets attributes -> interconnect modifies address -> target copies data
Initiator checks response <- interconnect sends response upstream <- target sets response
It says "extensions excluded" above because the meaning of any extensions is by definition outside the scope of the base protocol.
These rules for modifying generic payload attributes also apply to the debug transport and direct memory interfaces where appropriate.
The construction and destruction of generic payload transaction objects can be prohibitively expensive (due to the implementation of the extension
mechanism). Hence, you are strongly advised to pool or re-use transaction objects. In general the best way to do this is with a memory manager, but
ad-hoc transaction object re-use is also possible when calling b_transport.
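For example, an LT initiator might hold a single transaction object as a data member and re-use it for every b_transport call (a sketch; the module, address and data values are illustrative only):

// Sketch; requires "tlm_utils/simple_initiator_socket.h" in addition to the standard headers
struct my_initiator : sc_core::sc_module
{
    tlm_utils::simple_initiator_socket<my_initiator> socket;
    tlm::tlm_generic_payload trans;     // constructed once, re-used for each transaction

    SC_CTOR(my_initiator) : socket("socket") { SC_THREAD(run); }

    void run()
    {
        unsigned char data[4] = { 0, 1, 2, 3 };
        sc_core::sc_time delay = sc_core::SC_ZERO_TIME;

        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x100);
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);
        trans.set_byte_enable_ptr(0);
        trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

        socket->b_transport(trans, delay);

        // The same object can be re-used for the next call, provided every
        // attribute (including the response status) is re-initialized first.
    }
};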
The roles of initiator, interconnect, and target can be assigned dynamically. In particular, a component can receive a transaction and inspect its
attributes before deciding whether to play the role of an interconnect and propagate the transaction downstream, or to play the role of a target and
return a response. Specifically, a component that is usually an interconnect plays the role of target when setting an error response.
The Request and Response Exclusion Rules
The base protocol imposes a request exclusion rule on each hop: having sent BEGIN_REQ, an initiator must not send another BEGIN_REQ over the same hop until it has received (explicitly or by implication) the END_REQ for the previous request. A corresponding response exclusion rule applies to BEGIN_RESP and END_RESP. For example:
// Pseudo-code
tlm::tlm_phase phase = tlm::BEGIN_REQ;
sc_core::sc_time delay(1000, sc_core::SC_NS);
socket->nb_transport_fw(trans1, phase, delay);
socket->nb_transport_fw(trans2, phase, delay); // Forbidden until END_REQ received for trans1
Only nb_transport has phases. b_transport does not have a phase argument, and so b_transport calls are not subject to request and response
exclusion rules. Blocking transport calls do not interact with non-blocking transport calls when it comes to exclusion rules.
End-of-life of an AT Transaction
An AT transaction may be passed back-and-forth several times across several hops during its lifetime. In the case of a transaction with a memory
manager, control over the lifetime of the transaction object is distributed across the hops and is coordinated by the memory manager through calls
to the acquire and release methods of the generic payload. The only method call provided by TLM-2.0 that coincides with the end-of-life of a
transaction is the call to the free method of class tlm_mm_interface, which occurs automatically when the transaction reference count reaches zero. In
other words, if you wanted to execute some action at the end-of-life of the transaction, you would have to do so from the free method of the associated
memory manager.
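A minimal sketch of such a memory manager might look like this (the pooling strategy is just one possibility, and the class name is illustrative):

// Sketch; requires the TLM-2.0 headers (e.g. #include "tlm.h") and <vector>
#include <vector>

// Hypothetical memory manager that pools transaction objects for re-use.
// free() is called automatically when the transaction's reference count
// reaches zero, so any end-of-life action belongs here.
class my_mm : public tlm::tlm_mm_interface
{
public:
    tlm::tlm_generic_payload* allocate()
    {
        if (m_pool.empty())
            return new tlm::tlm_generic_payload(*this);  // transaction bound to this manager
        tlm::tlm_generic_payload* trans = m_pool.back();
        m_pool.pop_back();
        return trans;
    }

    virtual void free(tlm::tlm_generic_payload* trans)
    {
        // End-of-life of the transaction: perform any final action here
        trans->reset();            // delete any auto extensions
        m_pool.push_back(trans);   // return the object to the pool for re-use
    }

private:
    std::vector<tlm::tlm_generic_payload*> m_pool;
};

An initiator would call allocate() to obtain a transaction and acquire() before sending it; each component that needs to hold onto the transaction calls acquire() and later release(), and once every acquire has been matched by a release the generic payload calls free() on its memory manager.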
In the case of nb_transport, for each individual hop, the final activity associated with a transaction may be either the final phase of the transaction
(passed as an argument to nb_transport) or TLM_COMPLETED (returned from nb_transport). For the base protocol, the final phase is END_RESP.
Having nb_transport return TLM_COMPLETED is not mandatory. However, if TLM_COMPLETED is returned then no further activity associated with
that transaction is allowed over that particular hop.
For the base protocol and nb_transport, certain phase transitions may be implicit:
The BEGIN_RESP phase implies the existence of the END_REQ phase (if it has not already occurred)
TLM_COMPLETED returned from the target implies both the END_REQ and BEGIN_RESP phases for that transaction.
Because TLM_COMPLETED implies BEGIN_RESP, returning TLM_COMPLETED would be disallowed by the response exclusion rule if there were already a response in progress over that particular hop.
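For instance, a target might complete a transaction within the initial call to nb_transport_fw (a sketch; my_target and execute_transaction are hypothetical):

// Hypothetical target callback that completes the whole transaction in the
// initial call. Returning TLM_COMPLETED implies END_REQ and BEGIN_RESP, so
// no further phases follow for this transaction on this hop.
tlm::tlm_sync_enum my_target::nb_transport_fw(tlm::tlm_generic_payload& trans,
                                              tlm::tlm_phase& phase,
                                              sc_core::sc_time& delay)
{
    if (phase == tlm::BEGIN_REQ)
    {
        execute_transaction(trans);    // hypothetical helper performing the read or write
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
        return tlm::TLM_COMPLETED;     // implies END_REQ and BEGIN_RESP
    }
    return tlm::TLM_ACCEPTED;          // e.g. for any other incoming phase
}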
Timing Annotation
b_transport(trans, delay);
The delay argument is added to the current simulation time to give the time at which the recipient should execute the transaction, that is, at sc_core::sc_time_stamp() + delay.
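For example, an LT target might add its own latency to the annotated delay rather than calling wait (a sketch; the 10 ns latency and helper function are illustrative):

// Hypothetical LT target: executes the transaction immediately but adds its
// latency to the annotated delay, to be accounted for by the initiator.
void my_target::b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
{
    perform_access(trans);                           // hypothetical helper
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
    delay += sc_core::sc_time(10, sc_core::SC_NS);   // transaction effectively executes at
                                                     // sc_time_stamp() + delay
}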
Directory Structure
Compared to TLM-2.0.0, the directory structure of the TLM-2.0.1 release has been collapsed below tlm_h to remove the intermediate tlm_trans
directory. Also, the directories tlm_req_rsp and tlm_analysis have been moved to a new top-level directory tlm_1, which now contains the TLM-1.0
include files and the analysis ports. These changes are backward-compatible with TLM-2.0.0. The user simply needs
#include "tlm.h"
to include the standard TLM-1.0 and TLM-2.0 headers, with the exception of the TLM-2.0 utilities, which must be included separately and explicitly.
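For example, each utility has its own header under the tlm_utils directory (two representative utilities shown):

#include "tlm.h"                                 // TLM-1.0 and TLM-2.0 core
#include "tlm_utils/simple_initiator_socket.h"   // a TLM-2.0 utility, included explicitly
#include "tlm_utils/peq_with_get.h"              // another utility, included explicitly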
TLM-1.0 is still a current OSCI standard, although TLM-2.0 is in a sense independent of TLM-1.0.
Analysis ports were not part of the original TLM-1.0 release, but in some peoples' minds analysis ports are more closely associated with TLM-1.0 than
they are with TLM-2.0. That said, analysis ports can be useful in TLM-2.0 for implementing transaction-level broadcast operations, such as might be
needed to model interrupts.
Generic Payload
The generic payload has a new method update_original_from, which is intended to complement the existing deep_copy_from.
class tlm_generic_payload {
...
void deep_copy_from(const tlm_generic_payload & other);
void update_original_from(const tlm_generic_payload & other, bool use_byte_enable_on_read = true);
...
};
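A typical use (a sketch, assuming a bridge or interconnect that forwards a deep copy of an incoming transaction trans through an initiator socket init_socket) would be:

// Hypothetical bridge code: forward a deep copy, then update the original
unsigned char data[64];                  // local buffer for the copy
tlm::tlm_generic_payload copy;
copy.set_data_ptr(data);                 // deep_copy_from copies data only if both pointers are set
copy.deep_copy_from(trans);              // copy attributes, data and extensions from the original
init_socket->b_transport(copy, delay);   // forward the copy downstream
trans.update_original_from(copy);        // copy data and response status back to the original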
PEQs
The two PEQs peq_with_get and peq_with_cb_and_phase each have a new method with the following signature:
void cancel_all();
As the name suggests, this method cancels all outstanding events in the queue, leaving the PEQ empty. This can be useful for resetting the state of a
model.
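For example (a sketch; m_peq is assumed to be a peq_with_get data member of the model):

// Hypothetical reset handler: discard every pending transaction in the PEQ
void my_target::handle_reset()
{
    m_peq.cancel_all();   // the payload event queue is now empty
}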
const
There have been minor changes to the signatures of some methods, in particular adding the const modifier in places where it was missing. In each
case, this change allows the methods to be called in contexts where it was previously not permitted to call them. The new signatures are:
class tlm_generic_payload {
...
int get_ref_count() const;
bool has_mm() const;
...
};
class peq_with_get {
...
void notify(transaction_type& trans, const sc_core::sc_time& t);
...
};
class peq_with_cb_and_phase {
...
void notify (tlm_payload_type& t, const tlm_phase_type& p);
void notify (tlm_payload_type& t, const tlm_phase_type& p, const sc_time& when);
...
};
Sockets
The standard socket classes now provide a kind method. As with the standard SystemC classes derived from sc_object, this method returns a text string containing the class name.
Also, the sc_object names generated by the default constructors for these socket classes have changed slightly (although this is very unlikely to affect
user-level code).
Convenience sockets
A bug has been fixed in the simple_target_socket nb-to-b adapter, which did not correctly implement timing annotations on incoming nb_transport_fw
calls.
The convenience sockets now provide two constructors, a default constructor and a second constructor that takes a char* argument, just like the standard sockets.
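For example (a sketch using one of the simple sockets):

tlm_utils::simple_initiator_socket<my_module> socket_a;              // default-constructed name
tlm_utils::simple_initiator_socket<my_module> socket_b("socket_b");  // explicit char* name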
Multi-sockets
The multi-sockets now provide default implementations for the DMI and debug transport methods in the same way as the simple sockets, so there is no
need to register every method. In TLM-2.0.0, it was necessary to register every method explicitly with the multi-sockets.
struct my_module: sc_module
{
    tlm_utils::multi_passthrough_target_socket<my_module> socket;
    ...
    my_module(sc_module_name _n)
    : sc_module(_n)
    {
        socket.register_b_transport(this, &my_module::b_transport);
        // Defaults provided for the other three methods
    }
    ...
};
Quantum Keeper
A new method set_and_sync has been added to class tlm_quantumkeeper with the following definition:
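void set_and_sync(const sc_core::sc_time& t)
{
   set(t);            // update the local time offset
   if (need_sync())   // has the local time exceeded the quantum?
      sync();         // if so, synchronize with the SystemC kernel
}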
As you can see, the method updates the local time, checks whether synchronization is necessary, and if so, synchronizes. The method is provided just
for convenience when executing this common sequence of operations.