(1) Ports are logical connection points between components that can be used for the transfer of control and data between threads or between a thread and a processor or device. Ports are directional, i.e., an output port is connected to an input port. Ports can pass data, events, or both. Data transferred through ports is typed. From the perspective of the application source text, data ports are accessible in the source text as data variables. From the perspective of the application source text, event ports represent event queues whose size is accessible. Incoming events may trigger thread dispatches or mode transitions, or they may simply be queued for processing by the recipient. From the perspective of the application source text, event data ports represent message queues whose content can be retrieved.
(2) The content of incoming ports is frozen at a specified time, by default at dispatch time. This means that the content of the port that is accessible to the recipient does not change during the execution of a dispatch even though the sender may send new values. Properties specify the input and output timing characteristics of ports. Actual event and data transfer may be initiated by the runtime system of the execution platform or by Send_Output runtime service calls in the application source text.
(3) AADL distinguishes between three port categories. Event data ports are ports through which data is sent and received. The arrival of data at the destination may trigger a dispatch or a mode switch. The data may be queued if the destination component is busy. Event data ports effectively represent message ports. Data ports are event data ports with a queue size of one in which the newest arrival is kept. By default arrival of data at data ports does not trigger a dispatch. Data ports effectively represent unqueued ports that communicate state information, such as signal streams that are sampled and processed in control loops. Event ports are event data ports with empty message content. Event ports effectively represent discrete events in the physical environment, such as a button push, in the computing platform, such as a clock interrupt, or a logical discrete event, such as an alarm.
Syntax
port_spec ::=
defining_port_identifier : ( in | out | in out ) port_type
port_refinement ::=
defining_port_identifier : refined to
( in | out | in out ) port_type
port_type ::=
data port [ data_unique_component_classifier_reference
| data_component_prototype_identifier ]
| event data port [ data_unique_component_classifier_reference
| data_component_prototype_identifier ]
| event port
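For illustration, the following declarations use this syntax for all three port categories (the thread and port identifiers are illustrative; the data classifier Nav_Types::GPS is the one declared in the Examples at the end of this section):
thread Position_Filter
features
raw_fix: in data port Nav_Types::GPS; -- typed data port carrying state data
fix_msg: out event data port Nav_Types::GPS; -- message port with queued data
lost_signal: out event port; -- discrete event without data
end Position_Filter;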
Naming Rules
(N1) A defining port identifier must adhere to the naming rules specified for all features (see Section 8).
(N2) The defining identifier of a port refinement declaration must also appear in a feature declaration of a component type being extended and must refer to a port or an abstract feature.
(N3) The unique component type identifier of the data classifier reference must be the name of a data component type. The data implementation identifier, if specified, must be the name of a data component implementation associated with the data component type.
(N4) The prototype identifier of a prototype reference, if specified, must exist in the namespace of the component type or feature group type that contains the feature declaration.
Legality Rules
(L1) Ports can be declared in subprogram, thread, thread group, process, system, processor, virtual processor, and device component types.
(L2) Data and event data ports may be incompletely defined by not specifying the data component classifier reference or data component implementation identifier of a data component classifier reference. The port definition can be completed using refinement.
(L3) Data, event, and event data ports may be refined by adding a property association. The data component classifier declared as part of the data or event data port declaration being refined does not need to be included in this refinement.
(L4) The port category of a port refinement must be the same as the category of the port being refined, or the port being refined must be an abstract feature.
(L5) The port direction of a port refinement must be the same as the direction of the feature being refined. If the feature being refined is an abstract feature without direction, then all port directions are acceptable.
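For illustration (identifiers are illustrative; Nav_Types::GPS is declared in the Examples at the end of this section), an incompletely specified data port may be completed through refinement in a type extension:
thread Sensor_Reader
features
measurement: out data port; -- data classifier intentionally left unspecified
end Sensor_Reader;
thread GPS_Reader extends Sensor_Reader
features
measurement: refined to out data port Nav_Types::GPS; -- classifier supplied by the refinement
end GPS_Reader;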
Standard Properties
-- Properties specifying the source text variable representing the port
Source_Name: aadlstring
Source_Text: inherit list of aadlstring
-- property indicating whether port connections are required or optional
Required_Connection : aadlboolean => true
-- Constraint on the execution platform components that connections through the port may be bound to
Allowed_Connection_Binding_Class:
inherit list of classifier(processor, virtual processor, bus, virtual bus, device, memory)
-- Optional property for device ports
Device_Register_Address: aadlinteger
-- data port connection timing
Timing : enumeration (sampled, immediate, delayed) => sampled
-- Input and output rate and time
Input_Rate: Rate_Spec => [ Value_Range => 1.0 .. 1.0; Rate_Unit => PerDispatch; Rate_Distribution => Fixed; ]
Input_Time: list of IO_Time_Spec => ([ Time => Dispatch; Offset => 0.0 ns .. 0.0 ns;])
Output_Rate: Rate_Spec => [ Value_Range => 1.0 .. 1.0; Rate_Unit => PerDispatch; Rate_Distribution => Fixed; ]
Output_Time: list of IO_Time_Spec => ([ Time => Completion; Offset => 0.0 ns .. 0.0 ns;])
-- Port specific compute entrypoint properties for event and event data ports
Compute_Entrypoint: classifier ( subprogram classifier )
Compute_Execution_Time: Time_Range
Compute_Deadline: Time
-- Properties specifying binding constraints for variables representing ports
Allowed_Memory_Binding_Class:
inherit list of classifier (memory, system, processor)
Allowed_Memory_Binding: inherit list of reference (memory, system, processor)
Actual_Memory_Binding: inherit list of reference (memory)
-- In port queue properties
Overflow_Handling_Protocol: enumeration (DropOldest, DropNewest, Error)
=> DropOldest
Queue_Size: aadlinteger 0 .. Max_Queue_Size => 1
Queue_Processing_Protocol: Supported_Queue_Processing_Protocols => FIFO
Fan_Out_Policy: enumeration (Broadcast, RoundRobin, Selective, OnDemand)
Urgency: aadlinteger 0 .. Max_Urgency
Dequeued_Items: aadlinteger
Dequeue_Protocol: enumeration ( OneItem, MultipleItems, AllItems ) => OneItem
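These properties are associated with individual ports through property associations in the feature declaration, as in the following illustrative sketch (thread, port, and source variable names are hypothetical):
thread Health_Monitor
features
status_out: out data port Nav_Types::Position_NED
{Required_Connection => false; -- the component can operate with this port unconnected
Source_Name => "status_buffer";}; -- name of the source text variable representing the port
end Health_Monitor;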
Semantics
(4) A port specifies a logical connection point in the interface of a component through which incoming or outgoing data and events may be passed. Ports may be named in connection declarations. Ports that pass data are typed by naming a data component classifier reference.
(5) A data or event data port maps to a static variable in the source text that represents the data buffer or queue. By default the variable is accessible by the same name as the port name. A different name mapping can be specified with the Source_Name and Source_Text properties. The Allowed_Memory_Binding and Allowed_Memory_Binding_Class properties indicate the memory (or device) hardware the port resources reside on.
(6) Event and event data ports may dispatch a port specific Compute_Entrypoint. This permits threads with multiple event or event data ports to execute different source text sequences for events arriving at different event ports. If specified, the port specific Compute_Execution_Time and Compute_Deadline take precedence over those of the containing thread.
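For illustration (Handle_Command and Handle_Reset are assumed to be subprogram classifiers declared elsewhere), a thread may associate a different entrypoint with each incoming event or event data port:
thread Command_Handler
features
command: in event data port
{Compute_Entrypoint => classifier (Handle_Command); -- entrypoint specific to this port
Compute_Execution_Time => 1 ms .. 2 ms;}; -- takes precedence over the thread value
reset: in event port
{Compute_Entrypoint => classifier (Handle_Reset);};
end Command_Handler;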
(7) Ports are directional. An out port represents output provided by the sender, and an in port represents input needed by the receiver. An in out port represents both an in port and an out port. Incoming connection(s) and outgoing connection(s) of an in out port may be connected to the same component or to different components. For a data port, the in out port maps to a port variable in the source text. This means that the source text will overwrite the existing incoming value of the port when writing the output value to the port variable. The queues of incoming event data ports and event ports may require a port variable that holds the queue content that is frozen during the execution of a thread. In the case of event data ports, the outgoing data in the implementation may utilize a separate port variable.
(8) Ports that provide output, i.e., out ports or in out ports, are referred to as outgoing ports. Ports that provide input, i.e., in ports or in out ports, are referred to as incoming ports.
(9) A port can require a connection or consider it optional, as indicated by the Required_Connection property. In the latter case it is assumed that the component with this port can function without the port being connected.
(10) Ports appear to the thread as input and output buffers, accessible in source text as port variables.
(11) Data and event data ports are used to transmit data between threads.
(12) Data ports are intended for transmission of state data such as sensor data streams. Therefore, no queuing is supported for data ports. A thread can determine whether the input buffer of an in data port has new data at this dispatch by checking the port status through a Get_Count service call; the value itself is accessible through the port variable via a Get_Value service call. If no new data value has been received, the old value is made available.
(13) Event data ports are intended for message transmission, i.e., the queuing of the event and associated data at the port of the receiving thread. A receiving thread can get access to one or more data elements in the queue according to the Dequeue_Protocol and Dequeued_Items properties (see Section 8.3.3). The number of queued event data elements accessible to a thread can be determined through the port variable using the Get_Count service call. Individual elements of the queue can be retrieved via the port variable using the Get_Value and Next_Value service calls. If the queue is empty the most recent data value is available.
(14) Event ports are intended for event and alarm transmission, i.e., the queuing of events at the port of the receiving thread, possibly resulting in a dispatch or mode transition. A receiving thread can get access to one or more events in the queue according to the Dequeue_Protocol and Dequeued_Items properties. The number of queued events accessible to a thread can be determined through the port variable by making a Get_Count service call.
(15) The role of an aggregate data port is to make a collection of data from multiple outgoing data ports available in a time-consistent manner. Time consistency in this context means that if a set of periodic threads is dispatched at the same time to operate on data, then the recipients of their data see either all old values or all new values. This is accomplished by declaring a data port, whose data classifier has an implementation with data components corresponding to the data of the individual data ports. The functionality of an aggregate data port can be viewed as a thread whose only role is to collect the data values from several in data ports and make them available as an aggregate data record; on the receiving side an equivalent thread takes the aggregate data record and passes its elements on to the respective in data ports of the receiving threads. The function may be optimized by mapping the data ports of the individual threads into a data area representing the aggregate data port variable. This aggregate is then transferred as a single unit.
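A sketch of an aggregate data port follows (the classifiers are illustrative; Nav_Types is declared in the Examples at the end of this section). The data implementation groups the individual data elements, and a single data port typed by it transfers them as one unit:
data Nav_Aggregate
end Nav_Aggregate;
data implementation Nav_Aggregate.impl
subcomponents
ecef: data Nav_Types::Position_ECEF; -- element corresponding to one individual data port
ned: data Nav_Types::Position_NED; -- element corresponding to another individual data port
end Nav_Aggregate.impl;
thread Position_Collector
features
positions: out data port Nav_Aggregate.impl; -- aggregate transferred as a single unit
end Position_Collector;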
(16) Data, events, and event data arriving through incoming ports are made available to the receiving thread, processor, or device at a specified input time. For a data port the input that is available through a port variable is a data value, while for an event or event data port it can be one or more elements from the port queue according to a specified dequeuing protocol (see Section 8.3.3). From that point on any newly arriving data, event, or event data is not available to the receiving component until the next dispatch, i.e., the content of an incoming port that is accessible to the application code does not change while the thread completes its execution.
(17) By default, port input is frozen at dispatch time. For periodic threads or devices this means that input is sampled at fixed time intervals.
(18) The Input_Time property can be used to explicitly specify an input time for ports. This can be done for all ports by specifying the property value for the thread, or it can be specified separately for each port.
(19) The following property values for Input_Time are supported to specify the input time to be the dispatch time (Dispatch), any time during execution relative to the amount of execution time from the start (Start) or from the completion (Completion), and the fact that no input occurs (NoIO):
· Dispatch: (the default value) input is frozen at dispatch time; the time reference is clock time t = 0.
· Start, time range: input is frozen at a specified amount of execution time from the beginning of execution. The time is within the specified time range. The time range must have positive values. Start_low ≤ c ≤ Start_high.
· Completion, time range: input is frozen at a specified amount of execution time relative to execution completion. The time is within the specified time range. A negative time range indicates execution time before completion. c_complete + Completion_low ≤ c ≤ c_complete + Completion_high, where c_complete represents the value of c at completion time.
· NoIO: input is not frozen. In other words, the port is excluded from making new input available to the source program. This allows users to specify that only a subset of ports provide input. The property value can be mode specific, i.e., a port can be excluded in one mode and included in another mode.
(20) The Input_Time property can have a list of values. In this case it indicates that input is frozen multiple times for the execution of a dispatch. If a thread has multiple input times specified, then the content of an incoming port is frozen multiple times during a single dispatch.
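For illustration (thread and port names are hypothetical), a port whose input is to be frozen twice during each dispatch can be declared with a two-element Input_Time list:
thread Double_Sampler
features
sensor: in data port
{Input_Time => ([Time => Dispatch; Offset => 0.0 ns .. 0.0 ns;],
[Time => Start; Offset => 5 ms .. 5 ms;]);}; -- input frozen again 5 ms into execution
end Double_Sampler;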
(21) The input may be frozen at dispatch time (Input_Time property value of Dispatch) as part of the underlying runtime system, or it may be frozen through a Receive_Input service call in the source text (Input_Time property value of Start or Completion).
(22) The input of the port or ports whose event or event data triggered a dispatch is frozen as part of that dispatch. The input of other ports that can trigger dispatch is not frozen. Input of the remaining ports is frozen according to the specified input time.
(23) If a dispatch condition is specified then the logic expression determines the combination of event and event data ports that trigger a dispatch, and whose input is frozen as part of the dispatch. The input of other ports that can trigger dispatch is not frozen. Input of the remaining ports is frozen according to the specified input time.
(24) If an event port is associated with a component (including a thread) containing modes and mode transitions, and a mode transition names the event port, then the arrival of an event is a mode change request and it is processed according to the mode switch semantics (see Sections 12 and 13.6).
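For illustration (the thread, mode, and port identifiers are hypothetical), arrival of an event on a port named in a mode transition acts as a mode change request:
thread Flight_Director
features
engage: in event port; -- named in a mode transition below
disengage: in event port;
modes
Standby: initial mode;
Engaged: mode;
Standby -[ engage ]-> Engaged; -- an event on engage requests the switch to Engaged
Engaged -[ disengage ]-> Standby;
end Flight_Director;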
(25) By default, the output time, i.e., the time output is transmitted to connected components, is the completion time for data ports. By default, for event and event data ports the output time occurs anytime during the execution through a Send_Output service call.
(26) The Output_Time property can be used to explicitly specify an output time for ports. This can be done for all ports by specifying the property value for the thread, or it can be specified separately for each port.
(27) The following property values for Output_Time are supported to specify the output time to be the dispatch time (Dispatch), any time during execution relative to the amount of execution time from the start (Start) or from the completion (Completion) including at completion time, the deadline (Deadline), and the fact that no output occurs (NoIO):
· Start, time range: output is transmitted at a specified amount of execution time relative to the beginning of execution. The time is within the specified time range. The time range must have positive values. Start_low ≤ c ≤ Start_high.
· Completion, time range: output is transmitted at a specified amount of execution time relative to execution completion. The time is within the specified time range. A negative time range indicates execution time before completion. c_complete + Completion_low ≤ c ≤ c_complete + Completion_high, where c_complete represents the value of c at completion time. The default is completion time with a time range of zero, i.e., it occurs at c = c_complete.
· Deadline: output is transmitted at deadline time; the time reference is clock time rather than execution time. t = Deadline. This allows for static alignment of the output time of one thread with the Dispatch input time of another thread with a Dispatch_Offset.
· NoIO: output is not transmitted. In other words, the port is excluded from transmitting new output from the source text. This allows users to specify that only a subset of ports provide output. The property value can be mode specific, i.e., a port can be excluded in one mode and included in another mode.
(28) The Output_Time property can have a list of values. In this case it indicates that output is transmitted multiple times as part of the execution of a dispatch.
(29) The output may be transmitted at completion time or deadline as part of the underlying runtime system, or it may be transmitted through a Send_Output service call in the source text.
(30) If the output time of the originating port is Deadline while the input time of the receiving port is Dispatch and the sender and receiver are in the same synchronization domain, then the output is received at the next dispatch equal to or later than the deadline. To accommodate the transfer, the actual transfer may be initiated before the deadline. In the case of the connection crossing synchronization domains, the input is received at the dispatch following the completion of the transfer.
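This deadline-to-dispatch transfer corresponds to a delayed data port connection and can be expressed with the Timing connection property listed above. A sketch, with illustrative component classifiers:
thread Producer
features
state_out: out data port Nav_Types::INS;
end Producer;
thread Consumer
features
state_in: in data port Nav_Types::INS;
end Consumer;
process Sampling
end Sampling;
process implementation Sampling.delayed_example
subcomponents
producer: thread Producer;
consumer: thread Consumer;
connections
delayed_conn: port producer.state_out -> consumer.state_in {Timing => delayed;};
end Sampling.delayed_example;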
(31) The Input_Rate and Output_Rate properties specify the number of times per dispatch (perDispatch) or per second (perSecond) at which input and output is expected to occur at the port with the associated property. By default the input and output rate of ports is once per dispatch. The rate can be fixed or according to a distribution.
(32) An input or output rate higher than once per dispatch indicates that multiple inputs or multiple outputs are expected during a single dispatch. An input or output rate lower than once per dispatch indicates that inputs or outputs are not expected at every dispatch.
(33) If an Input_Time or Output_Time property is specified, then the values must be consistent with the rate. If the rate is specified in terms of seconds and a period is specified for the thread or device with the port, then the period value must also be consistent with the other values. In the case of an Input_Time or Output_Time property value of NoIO the rate value does not apply.
-- a thread that gets input partway into execution and sends output
-- before completion.
thread TightLoop
features
sensor: in data port
{Input_Time => ([Time => Start; Offset => 10 us .. 15 us;]);};
actuator: out data port
{Output_Time => ([Time => Completion; Offset => 10 us .. 11 us;]);};
end TightLoop;
(34) Event and event data ports can have a queue associated with them. By default, the incoming event ports and event data ports of threads, devices, and processors have queues. The output from the ultimate source of a semantic port connection is added into this queue, if the ultimate destination component is actively processing. The default port queue size is 1 and can be changed by explicitly declaring a Queue_Size property association for the port.
(35) The Queue_Size, Queue_Processing_Protocol, and Overflow_Handling_Protocol port properties specify queue characteristics. If an event arrives and the number of queued events (and any associated data) is equal to the specified queue size, then the Overflow_Handling_Protocol property determines the action. If the Overflow_Handling_Protocol property value is Error, then an error occurs for the thread. The thread can determine the port that caused the error by calling the standard Dispatch_Status runtime service. For Overflow_Handling_Protocol property values of DropNewest and DropOldest, the newly arrived event or the oldest event in the queue is dropped, respectively.
(36) Queues will be serviced according to the Queue_Processing_Protocol, by default in a first-in, first-out order (FIFO). When an aperiodic, sporadic, timed, or hybrid thread declares multiple in event and event data ports in its type that can be dispatch triggers and more than one of these queues are nonempty, the port with the higher Urgency property value gets serviced first. If several ports with the same Urgency are non-empty, then the Queue_Processing_Protocol is applied across these ports and must be the same for them. In the case of FIFO the oldest event will be serviced (global FIFO). It is permitted to define and use other algorithms for picking among multiple non-empty queues. Disciplines other than FIFO may be used for managing each individual queue.
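A sketch of queue-related property associations on the incoming event data ports of a sporadic thread (identifiers and values are illustrative); when both queues are non-empty, the port with the higher Urgency is serviced first:
thread Request_Server
features
control_req: in event data port
{Queue_Size => 10; Overflow_Handling_Protocol => Error; Urgency => 2;};
logging_req: in event data port
{Queue_Size => 4; Overflow_Handling_Protocol => DropOldest; Urgency => 1;};
properties
Dispatch_Protocol => Sporadic;
end Request_Server;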
(37) By default, one item is dequeued and made available to the receiving application through the port variable. The Dequeue_Protocol property specifies different dequeuing options.
· OneItem: (default) a single frozen item is dequeued and made available to the source text unless the queue is empty. The Next_Value service call has no effect.
· AllItems: all items that are frozen at input time are dequeued and made available to the source text via the port variable, unless the queue is empty. Individual items become accessible as port variable value through the Next_Value service call.
· MultipleItems: multiple items can be dequeued one at a time from the frozen queue and made available to the source text via the port variable. One item is dequeued and its value made available via the port variable with each Next_Value service call. Any items not dequeued remain in the queue and are available for the next dispatch.
(38) The Get_Count service call indicates how many items have been made available to the source text. A value of zero indicates that no new item is available via a data port, event port, or event data port variable. A value greater than zero indicates that one or more fresh values are available.
(39) A port may have a Fan_Out_Policy property. This property indicates how the content is transferred through outgoing connections. The content can be passed to all recipients (Broadcast), or the output is distributed evenly to the recipients (RoundRobin), to one recipient based on content/routing information (Selective), or to the next recipient ready to be dispatched (OnDemand). Broadcast, RoundRobin, and Selective pass on data and events without queuing it, while OnDemand requires a queue that is serviced by the recipients. The size of the queue and other queue characteristics are specified as properties of the port with the fan-out.
(40) An event or event data port with a fan-out policy of OnDemand allows us to model a queue being serviced by multiple recipients. For example, a queue on an incoming thread group port that is connected to multiple threads allows sender output to be queued in a single queue and be serviced by multiple threads (see also Section 9.2.6).
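A sketch of such an on-demand queue on a thread group port follows (identifiers are illustrative); the two contained worker threads service the single queue of the jobs port:
thread Worker
features
job: in event data port;
end Worker;
thread group Worker_Pool
features
jobs: in event data port
{Fan_Out_Policy => OnDemand; Queue_Size => 8;};
end Worker_Pool;
thread group implementation Worker_Pool.two_workers
subcomponents
worker_1: thread Worker;
worker_2: thread Worker;
connections
to_w1: port jobs -> worker_1.job;
to_w2: port jobs -> worker_2.job;
end Worker_Pool.two_workers;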
(41) Any subprogram, thread, device, or processor with an outgoing event port, i.e., out event, out event data, in out event, in out event data, can be the source of an event. During a single dispatch execution, a thread may raise zero or more events and transmit zero or more event data through Send_Output runtime service calls. It may also raise an event at completion through its predeclared Complete port (see Section 5.4) and transmit event data through event data ports that contain new values that have not been transmitted through explicit Send_Output runtime service calls.
(42) Events are received through in event, in out event, in event data, and in out event data ports, i.e., incoming ports. If such an incoming port is associated with a thread and the thread does not contain a mode transition naming the port, then the event or event data arriving at this port is added to the queue of the port. If the thread is aperiodic or sporadic and does not have its Dispatch event connected, then each event and event data arriving and queued at any incoming ports of the thread results in a separate request for thread dispatch.
Examples
package Patterns
public
thread Voter
features
Input: in data port [3];
Output: out data port;
end Voter;
thread Processing
features
Input: in data port;
Result: out data port;
end Processing;
thread group Redundant_Processing
features
Input: in data port;
Result: out data port;
end Redundant_Processing;
thread group implementation Redundant_Processing.basic
subcomponents
processing: thread Processing [3];
voting: thread Voter;
connections
voteconn: port processing.Result -> voting.Input {Connection_Pattern => ((One_To_One));};
procconn: port Input -> processing.Input;
recon: port voting.Output -> Result;
end Redundant_Processing.basic;
end Patterns;
(43) The application program interface for the following services is defined in the applicable source language annex of this standard. They are callable from within the source text.
(44) A Send_Output runtime service allows the source text of a thread to explicitly cause events, event data, or data to be transmitted through outgoing ports to receiver ports. The Send_Output service takes a port list parameter that specifies for which ports the transmission is initiated. The send on all ports is considered to occur logically simultaneously. Send_Output is a non-blocking service. An exception is raised if the send fails with exception codes indicating the failing port and type of failure.
subprogram Send_Output
features
OutputPorts: in parameter <implementation-dependent port list>;
-- List of ports whose output is transferred
SendException: out event data; -- exception if send fails to complete
end Send_Output;
NOTES: The Send_Output runtime service replaces the Raise_Event service in the original AADL standard.
(45) A Put_Value runtime service allows the source text of a thread to supply a data value to a port variable. This data value will be transmitted at the next Send_Output call in the source text or by the runtime system at completion time or deadline.
subprogram Put_Value
features
Portvariable: requires data access; -- reference to port variable
DataValue: in parameter; -- value to be stored
DataSize: in parameter; -- size in bytes (optional)
end Put_Value;
(46) A Receive_Input runtime service allows the source text of a thread to explicitly request port input on its incoming ports to be frozen and made accessible through the port variables. Any previous content of the port variable is overwritten, i.e., any previous queue content not processed by Next_Value calls is discarded. The Receive_Input service takes a parameter that specifies for which ports the input is frozen. Newly arriving data may be queued, but does not affect the input that the thread has access to (see Section 9.1). Receive_Input is a non-blocking service.
subprogram Receive_Input
features
InputPorts: in parameter <implementation-dependent port list>;
-- List of ports whose input is frozen
end Receive_Input;
(47) In the case of data ports the value is made available without requiring a Next_Value call. The Get_Count will return the value 1 if the value has been updated, i.e., is fresh. If the data is not fresh, the value zero is returned.
(48) In the case of event data ports each data value is retrieved from the queue through the Next_Value call and made available as port variable value. Subsequent calls to Get_Value or direct access of the port variable will return this value until the next Next_Value call.
(49) In case of event ports and event data ports the queue is available to the thread, i.e., Get_Count will return the size of the queue. If the queue size is greater than one the Dequeued_Items property and Dequeue_Protocol property may specify that more than one element is made accessible to the source text of a thread.
(50) A Get_Value runtime service shall be provided that allows the source text of a thread to access the current value of a port variable. The service call returns the data value. Repeated calls to Get_Value result in the same value being returned, unless the current value is updated through a Receive_Input call or a Next_Value call.
subprogram Get_Value
features
Portvariable: requires data access; -- reference to port variable
DataValue: out parameter; -- value being retrieved
DataSize: in parameter; -- size in bytes (optional)
end Get_Value;
(51) A Get_Count runtime service shall be provided that allows the source text of a thread to determine whether a new data value is available on a port variable, and in the case of queued event and event data ports, how many elements are available to the thread in the queue. A count of zero indicates that no new data value is available.
subprogram Get_Count
features
Portvariable: requires data access; -- reference to port variable
CountValue: out parameter BaseTypes::Integer; -- content count of port variable
end Get_Count;
(52) A Next_Value runtime service shall be provided that allows the source text of a thread to get access to the next queued element of a port variable as the current value. A NoValue exception is raised if no more values are available.
subprogram Next_Value
features
Portvariable: requires data access; -- reference to port variable
DataValue: out parameter; -- value being retrieved
DataSize: in parameter; -- size in bytes (optional)
NoValue: out event port; -- exception if no value is available
end Next_Value;
(53) An Updated runtime service shall be provided that allows the source text of a thread to determine whether input has been transmitted to a port since the last Receive_Input service call.
subprogram Updated
features
Portvariable: in parameter <implementation-dependent port reference>;
-- reference to port variable
FreshFlag: out parameter BaseTypes::Boolean; -- true if new arrivals
end Updated;
Processing Requirements and Permissions
(54) For each data or event data port declared for a thread, a system implementation method must provide sufficient buffer space within the associated binary image to unmarshall the value of the data type. Adequate buffer space must be allocated to store a queue of the specified size for each event data port. The applicable source language annex of this standard defines data variable declarations that correspond to the data or event data features. Buffer variables may be allocated statically as part of the source text data declarations. Alternatively, buffer variables may be allocated dynamically while the process is loading or during thread initialization. A method of implementing systems may require the data declarations to appear within source files that have been specified in the source text property. In some implementations, these declarations may be automatically generated for inclusion in the final set of source text. A method of implementing systems may allow direct visibility to the buffer variables. Runtime service calls may be provided to access the buffer variables.
(55) The type mark used in the source variable declaration must match the type name of the port data component type. Language-specific annexes to this standard may specify restrictions on the form of a source variable declaration to facilitate verification of compliance with this rule.
(56) For each event or event data port declared for a thread, a method of implementing the system must provide a source name that can be used to refer to that event within source text. The applicable source language annex of this standard defines this name and defines the source constructs used to declare this name within the associated source text. A method of implementing systems may require such declarations to appear within source files that have been specified in the source text property. In some implementations, these declarations may be automatically generated for inclusion in the final set of source text.
(57) If any source text associated with a software component contains a runtime service call that operates on an event, then the enumeration value used in that service call must have a corresponding event feature declared for that component.
(58) A method of processing specifications is permitted to use non-standard property definitions and associations to define alternative queuing disciplines.
(59) A method of implementing systems is permitted to optimize the number of port variables necessary to perform the transmission of data between ports as long as the semantics of such connections are maintained. For example, the source text variable representing an out data port and the source text variable representing the connected in data port may be mapped to the same memory location provided their execution lifespan does not overlap.
Examples
package Nav_Types
public
data GPS properties Source_Data_Size => 30 Bytes; end GPS;
data INS properties Source_Data_Size => 20 Bytes; end INS;
data Position_ECEF properties Source_Data_Size => 30 Bytes; end Position_ECEF;
data Position_NED properties Source_Data_Size => 30 Bytes; end Position_NED;
end Nav_Types;
package Navigation
public
process Blended_Navigation
features
GPS_Data : in data port Nav_Types::GPS;
INS_Data : in data port Nav_Types::INS;
Position_ECEF : out data port Nav_Types::Position_ECEF;
Position_NED : out data port Nav_Types::Position_NED;
properties
-- the input rate of INS is twice that of GPS
Input_Rate => [ Value_Range => 50.0 .. 50.0; Rate_Unit => PerSecond; Rate_Distribution => Fixed; ] applies to GPS_Data;
Input_Rate => [ Value_Range => 100.0 .. 100.0; Rate_Unit => PerSecond; Rate_Distribution => Fixed; ] applies to INS_Data;
end Blended_Navigation;
process implementation Blended_Navigation.Simple
subcomponents
Integrate : thread;
Navigate : thread;
end Blended_Navigation.Simple;
end Navigation;