Distributed Deployment



                                                                  version 1.0
                                                                       12/13/11




PathMATE Technical Notes



                                                         Pathfinder Solutions
                                                         Wrentham, MA, USA
                                                     www.pathfindermda.com
                                                          +1 508-568-0068




   copyright 1995-2011 Pathfinder Solutions LLC, all rights reserved
Table Of Contents
1    Introduction

2    Overview
     2.1    Design Elements
        2.1.1   Task
        2.1.2   Process
        2.1.3   Processor
     2.2    Configurations and Connections
        2.2.1   Single Task
        2.2.2   Multi Task
        2.2.3   Multi Process
        2.2.4   Multi Process SPMD
     2.3    Modeling in the Multi-Process World
        2.3.1   NodeManagement
        2.3.2   Routing Parameters
        2.3.3   Intertask Contention and the PfdSyncList

3    Deploying to Distributed Topologies
     3.1    Structural Design Elements
     3.2    Topology Identification and Allocation
        3.2.1   Task Identification
        3.2.2   Task Priorities
        3.2.3   Task Stack
        3.2.4   ProcessType Definition
     3.3    Topology Definition – the Process Configuration Table
        3.3.1   Single Process Single Deployment
        3.3.2   Single Process Multi Deployment (SPMD)
        3.3.3   Process Configuration Table

4    Messaging and Connectivity
     4.1    Elements
     4.2    Connection Initiation and Reconnection
        4.2.1   Connect-on-Initial-Send
        4.2.2   Retry Initial Connection Delay
        4.2.3   Reconnecting an Established Connection
        4.2.4   Inter-Send Delay
     4.3    Outbound Message Queues
     4.4    Interprocessor and Endianness
        4.4.1   Built-in Type Data Item Serialization
        4.4.2   User-Defined Type Serialization – Advanced Realized Types
     4.5    Communication Errors
        4.5.1   Multi-Task
        4.5.2   Model-Level Interface – SW:RegisterForErrorGroup
     4.6    Generated Topology Summary Report
     4.7    Trace Debugging
     4.8    Realized High-Speed Communications

A.   Thread Proliferation in Hand Written Systems
     PI-MDD, Asynchronous Domain Interactions, and The Event-Driven Paradigm
B.   Marking Summary
C.   Compiler Symbol Summary




                                Table Of Figures

Figure 1: Default Single-Task System Topology
Figure 2: Example Two-Task System Topology
Figure 3: Simplified Task Symbology
Figure 4: Multi-Task, Multi-Processor Topology
Figure 5: Simplified Process Symbology
Figure 6: SPMD Deployment
Figure 7: Default System Topology
Figure 8: One Process Two-Task System Topology
Figure 9: Example Multi-Task, Multi-Processor Topology
Figure 10: Example SPMD Deployment
Technical Note: Distributed Deployment




1 Introduction
The PI-MDD method for constructing software for complex and high-performance systems
separates the complexities of the problem space subject matters from the strategies and
details of the implementation platforms the system executes within. While programming
language and target operating system are key aspects of this implementation platform, they
represent one-time selections, often made by organizational inertia. Perhaps the most
demanding facet of this separation discipline – and the most complex and creative aspect of
the platform – is deploying a complex software system across a distributed execution unit
topology with a range of processors, processes, and tasks.

Modern software systems at nearly all levels of complexity execute with some degree of
parallel execution. Even the simplest systems apply separate threads of control to manage the
lowest level of communication input and output. Conceptually coherent “single” systems often
involve highly synchronized activities executing on separate processors. One of the most
fundamental PI-MDD disciplines pushes nearly all aspects of this complexity out of the
modeling space. So how is this clearly important, nearly ubiquitous, and non-trivial concern
addressed with PI-MDD and PathMATE? The distributed deployment of PI-MDD models to multiple
execution units is managed with an integrated set of model markings (properties), specialized
code generation rules/templates, and a flexible set of implementation mechanisms. The PI-
MDD models remain independent of this target environment topology, and can be deployed to
a range of topologies to support unit testing, system testing, and alternative deployment
architectures.


2 Overview
2.1 Design Elements
2.1.1 Task
Task – a separate thread of execution control that can run interleaved with or in parallel with
other tasks in the same Process. Tasks within the same Process can address the same
memory space. Tasks are supported by specific OS-level mechanisms. Even on “bare boards”
without a multi-tasking OS, there is a main Task, and interrupt vectors are considered to run in
a second Interrupt Task.

Modeled Task – a Task that was explicitly identified as part of the deployment of model
elements through the TaskID marking. Modeled tasks are managed automatically by
PathMATE mechanisms, including start, stop, nonlocalDispatchers, inter-task incident queuing,
and task-local memory pools. One instance of the PfdTask class (SW_Task in C) manages
each Modeled Task that is executing modeled elements.

Each Modeled Task is designated to run on a specific ProcessType.

Note: When a realized domain is marked to run in a specific task, the resulting Task is still a
Modeled Task – it was created by marking a model element (the realized domain) – and it has
all the conveniences of modeled domains in this regard. For the purposes of this document, a
reference to a Task from here forward will mean a Modeled Task unless explicitly designated
otherwise.






Incident – An instance of a modeled Signal or of an IncidentHandle (callback) is an Incident.
Each Incident is designated to run within a specific Task, by the Task allocation of either its
destination class or target service/operation.

NonlocalDispatcher – For each domain service and class operation that can be invoked (from
PAL) from another task, a corresponding method is automatically generated to provide a
function that can be directly invoked at the code level from any Task. A nonlocalDispatcher
creates an IncidentHandle to its corresponding domain service or class operation and delivers
this incident.
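
For illustration, a minimal sketch of the shape of a generated nonlocalDispatcher; all names
here are hypothetical, and the actual generated code depends on the model and templates:

        // Hypothetical generated dispatcher for domain service
        // Instrumentation:Log(Integer level): packages the invocation
        // as an Incident and delivers it to the owning Task
        void Instrumentation_Log_nonlocalDispatcher(int level)
        {
            PfdServiceHandle* incident =
                new PfdServiceHandle(SYS_SERVICE_INSTRUMENTATION_LOG);
            incident->setParameter(0, level);  // hypothetical parameter API
            incident->deliver();               // queues on the target Task's ITIQ
        }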

Realized Task – a task that was not identified as part of the deployment of model elements,
and is controlled either by PathMATE mechanisms or by realized code independent of the Modeled
Tasks. Realized tasks may be started by PathMATE to support specific mechanism execution –
e.g. communication receiver and sender tasks – or may be started and managed completely by
realized code, and therefore unknown to PathMATE mechanisms. For the purposes of this
document, any reference to a Realized Task from here forward will be explicitly designated.
2.1.2 Process
Process – Generally contained within an executable, a Process is a set of one or more Tasks
that can run interleaved with each other, or genuinely in parallel, controlled by a single
scheduler, and sharing a memory space. Separate Processes cannot address the same general
memory space. PathMATE supports the external identification of a process via IP address and
port. One or more Processes may run on a Processor. Some deployment environments
(RTOSs) will only support a single Process running on a single processor. For the purposes of
this technical note, a Processor in that scenario – executing a single executable with one or
more Tasks – is termed to have a single Process executing. One instance of the
PfdProcess class (SW_Process in C) manages each Process that is executing modeled
elements.

Process Identifier (PID) - A numerical value used to uniquely address each process instance
within an intercommunicating set of PathMATE-based Processes. This is specified in the
Process Configuration Table and can be used as part of a Routing Parameter.

Process Type - A “type” of executable built with a potentially unique set of PI-MDD elements,
and with a specific set of conditional compiler directives, deployed together as a Process.
Typically each different executable is a Process Type. One or more instances of a Process Type
may be running at any given time in a system, each with its own unique Process Identifier.
Model elements are deployed to a Process Type via the ProcessType marking.

Process Configuration Table – A table constructed at runtime within each PathMATE Process
that identifies (Process Type and Process ID) itself and the process instances that it can
connect to. This table is either loaded from a file at startup time or may be constructed from
interprocess (socket) messages received throughout system processing. The Process
Configuration Table in each process instance must have consistent subsets of information with
all other Processes that it can connect to.
2.1.3 Processor
Processor – a distinct physical processor that may run one or more Processes. For the
purposes of this technical note the Processor is a secondary concept, and the Process is used
to bound deployed elements that run on a specific Processor.






2.2 Configurations and Connections
2.2.1 Single Task
From an application perspective, the work of the system is done in the Tasks by PIM
processing – model actions executing. Each Task performs this work by dispatching Incidents
which may cause actions to run. The Incidents are held in an Incident Queue.




                        Figure 1: Default Single-Task System Topology

2.2.2 Multi Task
If two or more Tasks run within a Process, then a task-safe Inter-Task Incident Queue (ITIQ) is
used to queue Incidents between Tasks. PIM Processing may generate Incidents that are
queued locally, or on another Task’s ITIQ.




                        Figure 2: Example Two-Task System Topology







This internal Task structure is common for all Modeled Tasks. For simplicity from here forward
Tasks will be shown as a simple outline and name:




                             Figure 3: Simplified Task Symbology

2.2.3 Multi Process
When a System is deployed to two or more Processes, TCP/IP sockets are used to convey
messages between them. Incidents – service handles or events – are sent between processes
and dispatched upon receipt to PIM tasks, where they initiate PIM processing. Each incident to
be sent is converted to a PfdSockMessage through Serialization, where the data members of
the incident and its parameter values are encoded into the stream of bytes within the
PfdSockMessage.

In each Process a number of realized communication tasks are automatically created and
managed by the PfdProcess. One receiver task receives messages from any source, and a
sender task for each destination Process instance manages the connection with and sends
messages to that process. Each sender task has an Inter-Task Message Queue (ITMQ) where
each outbound PfdSockMessage is queued by local PIM processing for sending. The ITMQ is
implemented with a PfdSyncList containing instances of PfdSockMessageOut.







In the topology shown below there is a single instance of ProcessType MAIN and a single
instance of ProcessType DEVICE_CONTROL.




                       Figure 4: Multi-Task, Multi-Processor Topology







This internal Process structure is common for all Processes. For simplicity, from here forward
Processes will be shown as a simple outline and name, containing only their Modeled Tasks:




                            Figure 5: Simplified Process Symbology

2.2.4 Multi Process SPMD
In some systems there can be more than one instance of a Process of a single
ProcessType. This general capability is termed Single Process Multi Deployment (SPMD).
Often each different instance is deployed to its own processor. Note the varying PIDs in the
diagram below.




                                  Figure 6: SPMD Deployment


2.3 Modeling in the Multi-Process World
With the underlying asynchronous nature of PI-MDD model element interactions and the
explicit PI-MDD preference for asynchronous interactions at the domain level (and below),
proper PI-MDD models should generally be well formed for a range of deployment options.
Certainly this is a solid start, and leaves the Application Modeler and the Target Deployment
Designer on good joint footing to develop an effective deployment. But at some point aspects
of the PI-MDD model may need adjustment to facilitate proper deployment.
2.3.1 NodeManagement
In systems that fully utilize complex multi-processing target environments, there is often
explicit interaction between high-level application control (“mission-level” logic) and detailed
aspects of the platform and its resources. In this realm these interactions form a bona fide
Problem Space subject matter: NodeManagement. A Domain can be created to encapsulate
interactions with topology awareness, local processing resources, and multi-process
communication mechanisms. This relieves application domains of the need to respond
intelligently and flexibly to details of topology and resources, localizing these capabilities in
one component.
2.3.2 Routing Parameters
To explicitly route the invocation of a domain service to a specific process ID/task ID
combination, a domain service can specify a parameter as a <<Routing>> parameter. A parameter
of type DestinationHandle or Group<DestinationHandle> can have its Stereotype marking set to
Routing, which causes its nonlocalDispatcher to be generated with appropriate routing code,
using the parameter runtime value(s).

But where does a caller get the right values? Often a class can have the proper routing
information instantiated at startup (via a static initializer, XML file or binary instance data) and
use these attributes. The SoftwareMechanisms domain services DestinationHandleLocal() and
DestinationHandleRemote() provide encoding of specified Process IDs and Task IDs into
DestinationHandles at runtime.
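
For illustration, a hedged PAL-style sketch of obtaining and using a routing destination (the
DeviceControl service, parameter names, and exact action-language syntax here are
hypothetical):

        // Route StartScan to Process ID 2, task SYS_TASK_ID_MAIN
        dest = SW:DestinationHandleRemote(2, SYS_TASK_ID_MAIN);
        // 'dest' is the parameter marked <<Routing>> on StartScan
        DeviceControl:StartScan(dest, scan_job);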

In systems with more complex or dynamic topologies, a NodeManagement domain can
maintain the appropriate body of routing data. It can publish services to provide this data at
the level of abstraction appropriate to preserve the application’s independence of specific
topologies. The caller can then go to NodeManagement to get timely and appropriate routing
information.
2.3.3 Intertask Contention and the PfdSyncList
A simple starting point for deploying a domain within a Process Type is to allocate the domain
– in its entirety – to execute within a single task. This way there is no contention between
tasks, because domains share no resources that they need to explicitly protect.

However, there are legitimate design contexts where it is advantageous to deploy a single
domain to multiple tasks in a single process. In this context a danger emerges when an
element of the domain accesses a shared asset from one task while the same asset can be
accessed from another task. In this case a protection mechanism is required. These shared
assets are class instance containers – both for an association (across from a many participant)
and for a class instance population. The ThreadSafe marking is supported on the Association
and the Class. Setting it to “T” will generate a PfdSyncList for the instance container, which
uses an internal PfdReentrantMutex to make access safe for intertask use.
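
For example, markings of the following form enable task-safe instance containers (the domain,
class, and association names are hypothetical; the format follows the marking lines shown
elsewhere in this note):

        Class,*.DeviceControl.Sensor,ThreadSafe,T
        Association,*.DeviceControl.R3,ThreadSafe,T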


3 Deploying to Distributed Topologies
Structural Design is the PI-MDD activity where the execution units for the deployment
environment are identified and model elements are allocated to them.





3.1 Structural Design Elements
The construction, interconnection and deployment of multi-processor systems from PI-MDD
models requires:
   - The Model: While the Domain is the primary initial focus for deployment, elements
      within a domain can also be separately allocated to their separate execution units,
      including the Domain Service, Class, Class Operation, and State.
   -   The Topology: Tasks and process types are identified via markings ProcessType and
       TaskID, applied to a range of model elements.
   -   Generated Code, Projects: While general mechanism-layer code exists to support
       Distributed Deployment, nothing specific to any topology exists until after
       Transformation. Markings drive the generation of build folders for each ProcessType
       and code tailored to the target topology.
   -   Run-Time Data: Each specific instance of a Process is identified via a Process ID in the
       Process Configuration Table. While the actual image file code content for each instance
       of a given ProcessType is identical (they all use the same executable file), individual
       Process instances can be configured to behave differently via class instance and link
       data. These can be captured in XML or binary instance data files which are deployed
       with the correct process instance and/or working directory.

3.2 Topology Identification and Allocation
The identification of execution units starts with simple default allocations. If no topology
markings are specified at all, the default configuration is a single ProcessType named “MAIN”,
with a single Task named “SYS_TASK_ID_MAIN”.




                              Figure 7: Default System Topology

In this configuration the system project files are generated with compiler symbol settings that
hide the inter-task and inter-process code in the PathMATE Mechanisms layer. While realized
communications code can always be included in the system, it will not have any PathMATE
generated communications mechanisms.
3.2.1 Task Identification
Additional Tasks are identified by the “TaskID” marking, which can be applied to analysis
elements of type Domain, DomainService, Object, or ObjectService. The default allocation for
Domains is SYS_TASK_ID_MAIN. The default TaskID for all other elements is the TaskID of
their containing element.







A TaskID can be set to any identifier name; however, by convention all fixed task IDs are in
the form SYS_TASK_ID_<name>. The task ID SYS_TASK_ANY has special meaning, indicating all
actions in the marked element are executed locally within the calling task.

An additional special TaskID value, DYNAMIC, indicates a new task is started, or retrieved from
the task pool, and the object/service is run in that task. This rich capability is described
in the separate Tech Note “PathMATE Dynamic Tasks”.

Example: Deploy a system to a topology with two tasks: SYS_TASK_ID_MAIN and
SYS_TASK_ID_LOGGING. The following marking deploys the Instrumentation domain to the
‘LOGGING task, thereby causing the generation of a multi-task system:
                Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING

This one marking line causes the system to deploy in this configuration, with the
Instrumentation domain actions running in the “PIM processing” bubble in the
SYS_TASK_ID_LOGGING task, and the remainder of the system running in the “PIM
processing” bubble in the SYS_TASK_ID_MAIN task:




                      Figure 8: One Process Two-Task System Topology

An invocation to an Instrumentation domain service from PIM processing (a model action)
within the SYS_TASK_ID_MAIN task is generated as a call to that domain service’s
nonlocalDispatcher counterpart. This nonlocalDispatcher creates a PfdServiceHandle to the
target service on the SYS_TASK_ID_LOGGING task and delivers it – placing it on the ITIQ for
SYS_TASK_ID_LOGGING.
3.2.2 Task Priorities
The enumeration pfdos_priority_e defined in pfd_os.hpp specifies the task priorities that are
generally available:
     SYS_TASK_PRIORITY_HIGHEST
     SYS_TASK_PRIORITY_HIGHER
     SYS_TASK_PRIORITY_NORMAL
     SYS_TASK_PRIORITY_LOWER
     SYS_TASK_PRIORITY_LOWEST
Modeled Tasks
Any model element that can be marked with TaskID can also optionally be marked with
TaskPriority, specifying one of the above values. The default is
SYS_TASK_PRIORITY_NORMAL.
NOTE: all model elements that explicitly set the same TaskID must also specify the same
TaskPriority.
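
For example, a marking of the following form would lower the priority of the logging task from
the earlier example:

        Domain,*.Instrumentation,TaskPriority,SYS_TASK_PRIORITY_LOWER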

Mechanisms Realized Tasks
The priorities of realized tasks started by PathMATE mechanisms are controlled by the following
compiler symbols.
      task priority symbol             description                  default value
      PFD_MAIN_OOA_THREAD_PRIORITY     OOA processing task          SYS_TASK_PRIORITY_NORMAL
      PFD_RECEIVER_THREAD_PRIORITY     TCP receiver task            SYS_TASK_PRIORITY_NORMAL
      PFD_TRANSMIT_THREAD_PRIORITY     TCP sender task              SYS_TASK_PRIORITY_NORMAL
      PFD_IE_THREAD_PRIORITY           Spotlight connection task    SYS_TASK_PRIORITY_LOWEST
      PFD_INPUT_THREAD_PRIORITY        Driver input task            SYS_TASK_PRIORITY_NORMAL


The following definition pattern in pfd_os.hpp supports the external specification of a priority
for each type of realized task:
      #ifndef PFD_TRANSMIT_THREAD_PRIORITY
      #define PFD_TRANSMIT_THREAD_PRIORITY                 SYS_TASK_PRIORITY_NORMAL
      #endif
In this manner the system Defines marking can be used to override default realized task
priorities.
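
For example, a Defines marking of the following form could lower the sender task priority (the
exact value syntax accepted by Defines is an assumption; consult your PathMATE documentation):

        System,*,Defines,PFD_TRANSMIT_THREAD_PRIORITY=SYS_TASK_PRIORITY_LOWER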
3.2.3 Task Stack
By default, the stack size allocated to each Task is controlled by OS defaults: when the task
is started, a 0 is passed into the OS call, and the OS determines the size of the stack.
You can specify a default stack size for all Tasks via the DefaultStackSize system marking, e.g.:
     System,SimpleOven,DefaultStackSize,500000

In addition to controlling size, the actual memory used for the task stack can be allocated
explicitly, allowing custom realized code to monitor it for overrun, etc. By marking the system
with a non-zero DefaultStackSize and defining the compile flag
PATH_ALLOCATE_EXPLICIT_STACK, a stack is explicitly allocated in pfd_start_task_param(). If
the Task being started is a modeled task, the instance of PfdTask that it is running has its
stack data members set with this information. (Platform support may vary; inspect
pfd_start_task_param() in pfd_os.cpp to see if your platform is supported.)
3.2.4 ProcessType Definition
By default a single ProcessType is generated – MAIN. Build file generation produces a single
build folder called MAIN, and produces a single executable. By default all domains and domain
services are allocated to the ProcessType MAIN. Additional ProcessTypes can be defined by
allocating a domain or domain service via the ProcessType marking. For example
        Domain,*.ExternalDeviceControl,ProcessType,DEVICE_CONTROL

Once more ProcessTypes are defined (in addition to MAIN), code is generated to communicate
between multiple processes. Each ProcessType generates a build folder with the ProcessType
name, and generates code that starts the domains and tasks configured for that ProcessType.






The code also automatically starts the PfdProcess’s realized communication tasks to handle
inter-process communications.

With the definition of ProcessTypes, PfdIncident::deliver() – used by nonlocalDispatchers – is
extended where needed to handle inter-process invocation scenarios, automatically using the
PfdProcess’s realized communication tasks.

Example: Deploy a system to a topology with two ProcessTypes. ProcessType MAIN has two
tasks - SYS_TASK_ID_MAIN and SYS_TASK_ID_LOGGING. ProcessType DEVICE_CONTROL
has two tasks - SYS_TASK_ID_MAIN and SYS_TASK_ID_REGISTER_IO. The following marking
deploys the DeviceControl and HardwareIF domains to the DEVICE_CONTROL ProcessType,
thereby causing the generation of a multi-process system:
               Domain,*.DeviceControl,ProcessType,DEVICE_CONTROL
               Domain,*.HardwareIF,ProcessType,DEVICE_CONTROL
Because all other domains by default are allocated to ProcessType MAIN, and we want
SoftwareMechanisms services to run locally wherever they are called, we allocate this domain
to ProcessType ANY, TaskID SYS_TASK_ANY:
               Domain,*.SoftwareMechanisms,ProcessType,ANY
               Domain,*.SoftwareMechanisms,TaskID,SYS_TASK_ANY
To retain the Instrumentation deployment configuration, and to run the HardwareIF domain in
its own task we add these markings:
               Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING
               Domain,*.HardwareIF,TaskID,SYS_TASK_ID_REGISTER_IO

These 6 markings cause the system to deploy in this configuration:








                   Figure 9: Example Multi-Task, Multi-Processor Topology



To specify a default process type other than MAIN, the default ProcessType can be changed via
the system marking DefaultProcessType, for example:
                          System,*,DefaultProcessType,MY_PROCESS

3.3 Topology Definition – the Process Configuration Table







3.3.1 Single Process Single Deployment
Once an executable is built for each ProcessType, one or more instances of the executable may
be started. By default PathMATE assumes a simple configuration where a single instance of
each ProcessType is run. The PathMATE PfdProcess class has a Process Configuration Table to
keep track of the configuration of Process Instances that run in the system. Each Process
tracks the processes it can connect to with the following information:
              Process ID   ProcessType IP address        TCP port

              Table 1: Process Configuration Table Fields

A key step in understanding this Process Configuration Table is the generated
gc/sys/system_indices.hpp, where the ProcessType constants are defined:
       /* define process types for the system */
       enum
       {
               SYS_PROCESS_TYPE_MAIN = 0,
               SYS_PROCESS_TYPE_DEVICE_CONTROL = 1,
               SYS_PROCESS_TYPE__count = 2
       };


The generated code includes a default Process Configuration Table that allows one instance of
each ProcessType. The example system depicted in Figure 9: Example Multi-Task, Multi-
Processor Topology has the following default Process Configuration Table:
             Process ID   ProcessType     IP address      TCP port
                 0              0        127.0.0.1          5501
                 1              1        127.0.0.1          5502

              Table 2: Default Process Configuration Table



3.3.2 Single Process Multi Deployment (SPMD)
Some system configurations call for more than one instance of a given ProcessType to be
executing at the same time. Essentially this permits two or more entries in the Process
Configuration Table to specify a given ProcessType.

In cases where a single instance of a given ProcessType is configured, the system can continue
to use implicit routing of services to that process based only on ProcessType, as was done in
the Single Process Single Deployment configuration. However, for cases where two or more
instances of a given ProcessType are configured, the system requires a way for the application
to control in which process instance a given service will run.

A Domain Service with a parameter marked as a Routing Parameter routes the service
invocation to the Process ID and Task specified by this parameter. In this manner a PathMATE
system can utilize multiple Process Instances of the same Process Type.
3.3.3 Process Configuration Table
To specify a custom target topology a Process Configuration Table is constructed in a comma-
separated file, specifying a row in the table for each Process instance. Each row must have a
unique Process ID and combination of IP address and TCP port.

This example table shows our system with 4 instances of the DEVICE_CONTROL process
running simultaneously, as depicted below (SimpleOven_with_4_DCs.txt):





             Process ID    ProcessType      IP address      TCP port
                 0              0           43.2.0.1         6000
                 1              1          43.2.1.21         7001
                 2              1          43.2.1.31         7002
                 3              1          43.2.1.41         7003
                 4              1          43.2.1.51         7004

              Table 3: Example SPMD Process Configuration Table
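
Assuming one comma-separated row per Process instance, in the field order shown above, the
SimpleOven_with_4_DCs.txt file for this table would look like the following sketch (the exact
file syntax – headers, comments – may differ):

        0,0,43.2.0.1,6000
        1,1,43.2.1.21,7001
        2,1,43.2.1.31,7002
        3,1,43.2.1.41,7003
        4,1,43.2.1.51,7004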




                             Figure 10: Example SPMD Deployment

The location of the Process Configuration Table file is provided to the process executable at run
time via the “-config” command line argument, e.g.:
       SimpleOven-DEVICE_CONTROL.exe -config SimpleOven_with_4_DCs.txt

A simple strategy to ensure each Process Instance has process configuration information
consistent with all other Process Instances is to build a single Process Configuration Table file
and provide it to all Process Instances when they start. However, if some subsets of Process
Instances do not communicate with other subsets, each Process Instance can have its own
version of the Process Configuration Table, and not all copies need to contain all Process
Instance entries. The following rules apply:
    -  When specified, each Process Instance (identified by Process ID) must specify the same
       Process Type, IP address and TCP port in all Process Configuration Table files
    -  A Process Instance can only send messages to a Process Instance it knows about via its
       own Process Configuration Table file







3.3.3.1 Additional Fields – UDP Port and Spotlight Port
To support external realized code that may require UDP port information and to allow the
manual specification of Spotlight ports, the Process Configuration Table carries two additional
fields for this information:
   Process ID      ProcessType   IP address       TCP port    UDP port Spotlight Port

                Table 4: Additional Process Configuration Table Fields

The Spotlight port is used by the Spotlight instrumentation code. In addition, these static
methods are provided by the PfdProcess class for realized code-level access to the topology
information:
       // PROCESS TABLE ACCESSORS

       // Get   process ID for the specified IP address
       static   int getPidFromIpAddress(int ip_address);
       // Get   IP address for specified process
       static   int getIpAddress(DestinationHandle dest);
       // Get   TCP port for specified process
       static   int getTcpPort(DestinationHandle dest);
       // Get   UDP port for specified process
       static   int getUdpPort(DestinationHandle dest);
       // Get   Spotlight port for specified process
       static   int getSpotlightPort(DestinationHandle dest);
       // Get   Spotlight port for current process
       static   int getSpotlightPort();
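
A brief usage sketch in realized C++ (assuming dest is a DestinationHandle already encoding
the target process):

        // Look up connection information for a known destination
        int tcp_port  = PfdProcess::getTcpPort(dest);
        int udp_port  = PfdProcess::getUdpPort(dest);
        // Spotlight port of the current process
        int spotlight = PfdProcess::getSpotlightPort();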


3.3.3.2 Adding to the Process Configuration Table at Run Time
Immediately preceding the first message to carry application data, a connect message is sent
to the destination Process Instance. The connect message carries the sender’s PID, IP address
and TCP listener port. If the destination Process Instance did not already have an entry in its
Process Configuration Table for the sender Process Instance, one is added.

3.3.3.3 Minimal Process Configuration Table
To facilitate deployments to targets without file systems or otherwise without topology
configuration files, a Process Instance can discover and build its system topology table at run
time. To suppress the default process configuration, start the process executable with the
“-pid” and “-port” command line arguments, e.g.:
        SimpleOven-DEVICE_CONTROL.exe -pid 4 -port 7004

Do not specify a process config table. Other processes connecting to it will add to its internal
process config table via the information provided in their connect messages.

Key limitation: a Process Instance with a Minimal Process Configuration Table cannot send
messages to other processes until those processes send a message to this Process Instance
first.


4 Messaging and Connectivity
4.1    Elements
The fundamental unit of messaging in a PI-MDD model is the IncidentHandle. This is a
PathMATE Mechanism that identifies a specific domain service, class operation or signal to be
invoked/generated, and carries the parameter values needed for any input parameters defined






by that service. An asynchronous mechanism, IncidentHandles cannot be constructed for a
service/operation with return values or output parameters. IncidentHandles are often referred
to by the term Incident. There are two subtypes of IncidentHandle – the Event and the
ServiceHandle. ServiceHandles handle domain service and class operation invocations, so they
are the type of IncidentHandle most commonly encountered.

In a PI-MDD model the invocation of a domain service or class operation is specified in PAL
with a construct that looks just like a synchronous function/procedure/method invocation from
common implementation languages. Depending on the marking of the model for topology, the
tasks/process context for the caller may be in a different execution unit (task/process) than
the action being invoked. This requires the resolution of the PAL invocation with a range of
possible implementations. The following terms classify these resolutions:
    - Local: the caller and the target service/operation are within the same task
   -   Inter-task: the caller and the target service/operation are within the same process,
       but between different tasks
   -   Inter-process: the caller and the target service/operation are between different
       processes (there is no distinction between node-local inter-process and inter-processor
       inter-process communications)

incident type          locality        communication mechanism
Operation Invocation   Local           local synchronous function/method invocation;
                                       local IncidentHandle dispatch
Operation Invocation   Inter-task      convert(*1); queue on target task ITIQ
Operation Invocation   Inter-process   convert(*1); send via socket; upon receipt
                                       queue on target task ITIQ
IncidentHandle CALL    Local           local dispatch
IncidentHandle CALL    Inter-task      queue on target task ITIQ
IncidentHandle CALL    Inter-process   send via socket; upon receipt queue on
                                       target task ITIQ
NOTES:
*1 - The operation invocation is converted automatically to an IncidentHandle.

              Table 5: Incident Communication Mechanisms

4.2    Connection Initiation and Reconnection
When a Process starts up, the Process Configuration Table is established as outlined in section
3.3.3 Process Configuration Table. For each row in the Table a Connection is created to
maintain the current state of the messaging connection to each known remote process.
However at this time no actual connections to other Process instances are started. Instead
they are started on demand, when an action in the local Process causes a message (incident)
to be sent to a remote Process.


4.2.1 Connect-on-Initial-Send
When an outbound message is queued, the state of the Connection to the destination Process
is checked. If it is not connected at this time, a connection is initiated. This initial outbound
message is held in a queue until the connection is successfully established. Then the message,
along with any others that may have been queued up, is sent to the destination. Once all
are sent, the connection is left up – in a usable state. Later, as other messages are sent to
this same destination, they can simply be sent using the established connection.
4.2.2 Retry Initial Connection Delay
If the initial attempt to establish a connection to a remote Process fails, reconnection will be
attempted every PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds. If not specified
by the user, this compiler symbol has a default value of 1000.
4.2.3 Reconnecting an Established Connection
If a connection fails that had already been established and is in use, a priority is put on trying
to quickly restore it. A limited number of reconnection attempts are tried immediately after
failure is detected, without any delay between them. The number is controlled by the compiler
symbol PATH_IMMEDIATE_RECONNECT_MAX_COUNT. If not specified by the user, this compiler
symbol has a default value of 5. If the connection cannot be reestablished within
PATH_IMMEDIATE_RECONNECT_MAX_COUNT iterations, reconnection attempts are continued every
PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds.
4.2.4 Inter-Send Delay
The default task priorities place sender tasks at a lower priority, and this is expected to allow
for PIM processing to continue even if sender tasks are fully occupied by outbound message
traffic. To facilitate the even distribution of message activity across all sending tasks and the
receiver task, a “sleep” (via pfd_sleep) of 0 milliseconds is used after each individual socket
send call to release the processor if needed. This sleep also happens between individual socket
message packet sends for large messages (large as defined by your current TCP/IP stack
configuration).
If the user wishes to further throttle outbound message traffic,
PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be specified. If not specified by the
user, this compiler symbol has a default value of 0.

Alternatively if the user wants to eliminate the inter-packet “sleep” altogether,
PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be set to -1, and the pfd_sleep call is
skipped.

4.3    Outbound Message Queues
When a PIM processing Task delivers an Incident to be sent interprocess, it is serialized into a
PfdSockMessageOut and placed on the outbound message queue for the Connection
corresponding to its destination. The queue is a PfdSyncList, and is limited in length based on
the current socket connection state of the Connection.
PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT specifies the maximum number of messages
held by the queue when disconnected. If not specified by the user, this compiler symbol has a
default value of 128. PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT specifies the queue limit
when connected, and its default is 1024.

If PIM processing attempts to enqueue an outbound message when the outbound message
queue is at its capacity, the oldest message is discarded.
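
For example, a Defines marking of the following form could enlarge the connected-state queue
(the value syntax is an assumption):

        System,*,Defines,PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT=4096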


4.4    Interprocessor and Endianness
The inter-processor communication pattern is the same as the inter-process case, with sockets
providing connectivity, but an additional concern for inter-processor communication is byte
ordering. To ensure consistency of communications and readability of all messages between
processors of either endianness, all messages are constructed with Network Byte Ordering
(big endian). The C++ industry-standard approach for serialization – a parent serializable
class and a data factory for deserialization – is implemented with the PfdSerializable
mechanism class. This performs the proper message construction and decoding automatically
for all PI-MDD modeled elements and built-in types.
4.4.1 Built-in Type Data Item Serialization
Incident parameter values are serialized and transmitted between Processes automatically
when the invocation of the target service/operation/event crosses Process boundaries.

4.4.1.1 Special Type Handling
To ensure complete encoding of a data item of a user-defined type that is implemented as a
64-bit integer, mark the user-defined type with an ExternalType value of “long long” or
“unsigned long long”.
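
For example, a marking of the following form (the element keyword and type name here are
illustrative):

        UserDefinedType,*.Timestamp_t,ExternalType,unsigned long long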

Real values are implemented with the double type, and are serialized with the IEEE Standard
for Floating-Point Arithmetic (IEEE 754).

An IncidentHandle passed as a parameter is serialized in its entirety – including all of its own
parameter values – and sent to the destination.

In the case of the built-in type Handle and user-defined pointer-based types derived from it,
only the pointer value is transmitted. The expectation is that Handle pointers are not
dereferenced anywhere except in the process where they were created.

4.4.2 User-Defined Type Serialization – Advanced Realized Types
The user may define a class at the realized code level that can implement a model-level user-
defined type. If data items (parameters) of this type end up crossing process boundaries,
PathMATE serialization mechanisms may be applied to aid in this communication.

An Advanced Realized Type (ART) is created by inheriting your realized class from the
PathMATE PfdSerializable mechanism class. The user then provides their own class-specific
implementations for the virtual serialization and deserialization methods specified in
PfdSerializable. In addition, the model-level user-defined type is marked with
Serializable = TRUE.

For C++ and Java, the underlying implementation class for the data type must inherit from the
PfdSerializable class and implement the virtual methods insertToBuffer and
extractFromBuffer. Network-safe serialization and de-serialization functions are provided in
msg_base.hcpp/hpp for all supported serializable scalar types.
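
A minimal C++ sketch of an ART follows; the method signatures and buffer API shown are
assumptions – consult PfdSerializable and msg_base.hcpp/hpp for the actual interfaces:

        // Realized implementation class for a model-level user-defined
        // type marked Serializable=TRUE, ExternalType=GeoPoint
        class GeoPoint : public PfdSerializable
        {
        public:
            double latitude;
            double longitude;

            // Write members into the outbound message buffer in
            // network byte order (hypothetical buffer API)
            virtual void insertToBuffer(PfdSockMessage& msg)
            {
                msg.insertDouble(latitude);
                msg.insertDouble(longitude);
            }

            // Reconstruct members from the received message buffer
            virtual void extractFromBuffer(PfdSockMessage& msg)
            {
                latitude  = msg.extractDouble();
                longitude = msg.extractDouble();
            }
        };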

In C, the following additional properties are required to specify the serialization functions:
    - SerializeFunction=<name of serialize function>
   -   DeserializeFunction=<name of deserialize function>
These serialize and deserialize functions must match the following function pointer type
definitions defined in sw_msg_base.h:
typedef void (*sw_serialize_function_t)(message_buffer_t *msg_buf, int *msg_len,
    void* target, bool_t is_ascii);







typedef void* (*sw_deserialize_function_t)(message_buffer_t *msg_buf, int
   *msg_len, bool_t is_ascii);
Additional (existing) markings are also helpful in providing all the information needed to
conveniently apply these capabilities:
   - ExternalType=<implementation type this user-defined type maps to>
   -   IncludeFile=<name of file defining above ExternalType>

4.5    Communication Errors
The Software Mechanisms Error Registrar is the central point for error handler callback
registration and error reporting. Errors are grouped, and error handlers are registered for
groups. If no user-defined error callback has been registered for a group and an error is
reported against the group, a default error handler is applied. In addition to pre-defined error
groups, the user can define their own error groups and error codes.

4.5.1 Multi-Task
The Error Registrar provides a process-wide point of error notification registration for
application-level domains (and realized code). Model-level callbacks can be registered for
notification when an error is received under a specified ErrorGroup.

4.5.2 Model-Level Interface – SW:RegisterForErrorGroup
The service SoftwareMechanisms:RegisterForErrorGroup(Integer error_group, IncidentHandle
error_handler) handles callback registration for any group of errors reported via
SoftwareMechanisms::ReportError() and from built-in mechanism-level error reporting. Errors
are grouped by type, and an error handler is registered for a specific group. If no user-
defined error callback has been registered for a group and an error is reported against the
group, a default error handler is applied.

User-provided error handlers must conform to the ErrorHandle Incident Profile – publishing
the same parameters as the SoftwareMechanisms:DefaultErrorHandler service (Integer
error_group, Integer error_code).
In addition to pre-defined error groups, the user can define their own groups.
If a communication error happens, the provided callback is loaded with values for
error_group and error_code, and then called. For interprocess communication errors, the
SW_ERROR_GROUP_COMMUNICATIONS group is used. This group includes the following error
codes:
       SW_ERROR_CODE_COMMUNICATIONS_ACCEPT_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_ADDRESS_FAILURE,
       SW_ERROR_CODE_COMMUNICATIONS_BIND_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_CONNECT_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_DISCONNECT,
       SW_ERROR_CODE_COMMUNICATIONS_INCONSISTENT_CONNECTIONS,
       SW_ERROR_CODE_COMMUNICATIONS_LISTEN_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_NETWORK_INIT_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_OTHER,
       SW_ERROR_CODE_COMMUNICATIONS_RECEIVE_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_SEND_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_SOCKET_CREATE_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_SOCKET_SHUTDOWN_FAILED,
       SW_ERROR_CODE_COMMUNICATIONS_SOCKET_ERROR,
       SW_ERROR_CODE_COMMUNICATIONS_TIMEOUT,
       SW_ERROR_CODE_COMMUNICATIONS_CONNECT_MESSAGE_MISMATCH
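
For illustration, a PAL-style sketch of registering a handler for this group (the
NodeManagement handler service and exact syntax are hypothetical):

        // 'handler' is an IncidentHandle to a service publishing
        // (Integer error_group, Integer error_code),
        // e.g. NodeManagement:CommFailed
        SW:RegisterForErrorGroup(SW_ERROR_GROUP_COMMUNICATIONS, handler);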






See error_codes.hpp for the most complete list of pre-defined error codes.


4.6 Generated Topology Summary Report
Each time code is generated, TopologyReport.txt is generated into the _info subfolder of the
deployment folder. This system topology report lists the ProcessType and TaskID of all
deployable elements. It can be helpful for complex systems as a definitive reference for the
actual ProcessType and TaskID of each item, resolving potential ambiguities from defaults and
multiple properties files.

4.7 Trace Debugging
Compiler preprocessor definitions are used to control trace debugging – printing to the
standard error stream cerr.
    -  PATH_DEBUG_INTERPROCESS_HL – Enables trace debugging for summary-level interprocess
       processing
    -  PATH_DEBUG_INTERPROCESS – Enables trace debugging for all levels of interprocess
       processing (automatically activates PATH_DEBUG_INTERPROCESS_HL)
    -  PATH_DEBUG_INTERPROCESS_MSG – Enables trace debugging for all interprocess message
       sending and receipt
    -  PATH_DEBUG_TASK_TRACE – Enables trace debugging for intertask processing

4.8 Realized High-Speed Communications
In general each outbound socket message is created by taking an outbound incident and
serializing it into a PfdSockMessage, as identified in section 2.2.3 Multi Process. In some
specific cases a message is sent repeatedly with very little or no change to its contents
between transmissions. If this happens frequently, or with very large incident parameter ART
payloads, the designer may discover the complete re-serialization of the incident is too slow to
meet message processing deadlines.

In these cases, it can be possible to avoid this repeated serialization overhead by writing
realized code that constructs a PfdSockMessage and sends it directly with
PfdTopology::enqueueOutboundRawMessage(). This realized code can create an instance of a
PfdSockMessageOut, send it, poll for send completion, then modify specific data values within
it, and resend this same instance, all with relatively high efficiency. This process sidesteps the
safer and generally more convenient Incident-based approach, but also avoids that approach's
buffer allocation and full serialization processing.

Steps:
    -  Create a raw socket message PfdSockMessageOut instance, usually by manually
       serializing an Incident in realized code
    -  Call deferDeletion() on the raw message to prevent it from being deleted after each
       send
    -  Save off a pointer to this raw message for repeated use
    -  Enqueue the raw message by calling PfdProcess::sendRawMessageInterProcess()
    -  Before updating specific data values in the raw message for a subsequent send, call
       isComplete() on the message and ensure it returns TRUE
    -  Update specific data values in the raw message for the next send
    -  Send again with PfdProcess::sendRawMessageInterProcess()
    -  Don't forget to put your toys away when play time is over: delete raw_message;
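
A hedged C++ sketch of this sequence (only deferDeletion(), isComplete(), and
PfdProcess::sendRawMessageInterProcess() are named above; the serialization and update
helpers are hypothetical):

        // One-time setup: serialize an Incident into a raw message
        PfdSockMessageOut* raw = serializeIncidentToRaw(incident); // hypothetical helper
        raw->deferDeletion();              // keep the instance alive after each send

        for (int i = 0; i < sample_count; ++i)
        {
            PfdProcess::sendRawMessageInterProcess(raw);
            while (!raw->isComplete())     // poll for send completion
                pfd_sleep(0);              // yield the processor
            updatePayloadInPlace(raw, samples[i]);  // hypothetical in-place update
        }
        delete raw;                        // play time is over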











A. Thread Proliferation in Hand Written Systems
     Since the dawn of 3GLs (FORTRAN, COBOL, C, Pascal, Ada, C++, Java, etc.), software
     for complex, high-performance systems has been expressed nearly universally as long
     synchronous sequences of subroutine/procedure/function/method invocations. This
     serialization of processing is more a product of the single-dimensionality of these
     programming languages than of any inherent simplicity in the problem spaces being
     addressed.
     The advent of multitasking and multiprocessor systems has driven 3GL programmers to
     break single-dimensional synchronous processing into separate sequences – threads.
     This decomposition has met varying levels of success, and introduced specific system
     partitioning techniques trying to take advantage of the “apparent” parallelism offered
     by OS-level tasks. The subsequent rise in availability of multi-core systems (and their
     “actual” parallelism) has further spurred this move to multithreading.
     As development organizations gain proficiency with OS-level multithreading, and with
     the absence of any other way to achieve parallelism (the fundamentally synchronous
     nature of programming languages hasn’t changed), typical complex system architectures
     have experienced a sharp rise in the number of OS-level threads employed. With this
     rise come the computational costs of these complex OS mechanisms, and the
     development and maintenance costs of more complex designs. In many cases an
     overabundance of threads has resulted in systems with lower performance, even with
     more capable processors and memory.
     PI-MDD offers an alternative to some of this harmful thread proliferation.

PI-MDD, Asynchronous Domain Interactions, and The Event-Driven
Paradigm
     The use of PI-MDD models breaks some of the constraints of synchronous sequences of
     subroutine/procedure/function/method invocations. Domain Modeling pushes system
     design from the very beginning toward a base of primarily asynchronous interactions.
     With direct support for callbacks in the UML Action Language via the IncidentHandle,
     and with state machines available within domains, there is a wide selection of
     asynchronous elements to augment the basic synchronous function/method call. With
     PathMATE automation the model-level function call (to a domain service or class
     operation) can also be generated with an implementation that uses an asynchronous
     IncidentHandle as needed. These asynchronous primitives break the simplistic lock that
     synchronous programming languages put on basic behavioral expression.
     The result of these asynchronous forms of expression is a PI-MDD system that has a
     large number of simple entities operating in asynchronous independence from each
     other, even when deployed to the same OS thread of control. OS threads are no longer
     needed to work around blocking calls or to interweave long chains of operations.
     Augmenting this fundamentally asynchronous form of behavioral expression, PI-MDD
     components can also be deployed to separate OS threads as needed – when blocking
     (or “nearly blocking” – long latency) calls to realized code functions must be made.
     Typically these are found with communication sends and receives, file operations, or
     calls into legacy code still organized in long, synchronous call chains. Additional tasks
     are also applied to manage priority, so a few higher-priority elements can be allocated
     to a high-priority task separate from the main task with ordinary priority.
     The net result is the allocation of much larger fragments of processing to fewer
     OS threads, while still realizing greater interleaving of processing through
     fundamentally asynchronous behavioral expression. The need for inter-task
     synchronization mechanisms is greatly reduced, and therefore run-time overhead and
     general complexity are reduced.








B. Marking Summary

marking name         applies to                           default value              description
TaskID               Domain, Service, Class, Operation    SYS_TASK_ID_MAIN           Allocates action processing to a Task
ProcessType          Domain, Service                      <DefaultProcessType>       Allocates action processing to a Process Type
Routing              Parameter                            <none>                     Indicates this DestinationHandle parameter specifies the destination
ThreadSafe           Class, Association                   F                          "T" generates a PfdSyncList for the instance container
DefaultStackSize     System                               0                          Specifies the stack size for all Tasks; 0 indicates use of the OS default
DefaultProcessType   System                               MAIN                       Allows the default ProcessType name to be changed
TaskPriority         Domain, Service, Class, Operation    SYS_TASK_PRIORITY_NORMAL   Sets the priority of the task for this analysis element; must be a pfdos_priority_e literal
Defines              System, Domain                       <none>                     Allows specification of compiler symbol values in the markings
ExternalType         User Defined Type                    void*                      Implementation type for an ART
IncludeFile          User Defined Type                    <none>                     Include file for the implementation type for an ART
Serializable         User Defined Type                    FALSE                      TRUE indicates this is an ART, inheriting from PfdSerializable
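
As a consolidated example, the following markings lines (in the comma-separated form used throughout this note) exercise several of the markings above. The domain, class, and system names are illustrative only, and the element-path form for the Class marking is assumed to parallel the Domain form:

    Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING
    Domain,*.Instrumentation,TaskPriority,SYS_TASK_PRIORITY_LOWER
    Domain,*.DeviceControl,ProcessType,DEVICE_CONTROL
    Class,*.DeviceControl.Register,ThreadSafe,T
    System,SimpleOven,DefaultStackSize,500000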








C. Compiler Symbol Summary

symbol name                                  default value              description
PFD_MAIN_OOA_THREAD_PRIORITY                 SYS_TASK_PRIORITY_NORMAL   Default task priority for each OOA processing task
PFD_RECEIVER_THREAD_PRIORITY                 SYS_TASK_PRIORITY_NORMAL   Default task priority for the TCP receiver task
PFD_TRANSMIT_THREAD_PRIORITY                 SYS_TASK_PRIORITY_NORMAL   Default task priority for each TCP sender task
PFD_IE_THREAD_PRIORITY                       SYS_TASK_PRIORITY_LOWEST   Default task priority for the Spotlight connection task
PFD_INPUT_THREAD_PRIORITY                    SYS_TASK_PRIORITY_NORMAL   Default task priority for the Driver input task
PATH_ALLOCATE_EXPLICIT_STACK                 <not defined>              Define this to cause a stack to be allocated explicitly, separate from the start-task OS call
PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS     100                        Time between sender task attempts to reconnect with the destination process, in milliseconds
PATH_IMMEDIATE_RECONNECT_MAX_COUNT           5                          Max number of times the sender task attempts to reconnect with the destination process without waiting between attempts
PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS   0                          Time between interprocess message packet sends (socket send calls), in milliseconds; 0 means pfd_sleep(0); -1 means no sleep at all
PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT       128                        Max number of pending outbound messages queued for a single destination while not connected to that destination
PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT          1024                       Max number of pending outbound messages queued for a single destination while connected to that destination
PATH_DEBUG_INTERPROCESS_HL                   <not defined>              Define this to turn on high-level "printf" debugging of interprocess mechanisms
PATH_DEBUG_INTERPROCESS                      <not defined>              Define this to turn on full "printf" debugging of interprocess mechanisms
PATH_DEBUG_INTERPROCESS_MSG                  <not defined>              Define this to turn on "printf" debugging of interprocess message sending
PATH_DEBUG_TASK_TRACE                        <not defined>              Define this to turn on "printf" debugging of PfdTask mechanics
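
Each of these symbols is wrapped in an #ifndef guard in the mechanisms code (the pfd_os.hpp pattern shown earlier in this note), so a value supplied at build time overrides the default. As an illustrative sketch only, with a GNU toolchain the overrides could be passed as -D options; the symbol values chosen here are hypothetical:

    g++ -c pfd_os.cpp \
        -DPFD_TRANSMIT_THREAD_PRIORITY=SYS_TASK_PRIORITY_LOWER \
        -DPATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS=250 \
        -DPATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT=2048 \
        -DPATH_DEBUG_INTERPROCESS_HL

Alternatively, the system-level Defines marking can be used to supply the same symbol values through the markings files.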




Distributed Deployment Model Driven Development

  • 1. Distributed Deployment version 1.0 12/13/11 PathMATE Technical Notes Pathfinder Solutions Wrentham, MA, USA www.pathfindermda.com +1 508-568-0068 copyright 1995-2011 Pathfinder Solutions LLC, all rights reserved
  • 2. Table Of Contents 1 Introduction .......................................................................................... 1 2 Overview............................................................................................... 1 2.1 Design Elements............................................................................ 1 2.1.1 Task...................................................................................... 1 2.1.2 Process.................................................................................. 2 2.1.3 Processor............................................................................... 2 2.2 Configurations and Connections ....................................................... 3 2.3 Modeling in the Multi-Process World ................................................. 6 2.3.1 NodeManagement ................................................................... 7 2.3.2 Routing Parameters ................................................................. 7 2.3.3 Intertask Contention and the PfdSyncList ................................... 7 3 Deploying to Distributed Topologies...................................................... 7 3.1 Structural Design Elements ............................................................. 8 3.2 Topology Identification and Allocation ............................................... 8 3.2.1 Task Identification................................................................... 8 3.2.2 Task Priorities......................................................................... 9 3.2.3 ProcessType Definition ............................................................10 3.3 Topology Definition – the Process Configuration Table ........................12 3.3.1 Single Process Single Deployment ............................................13 3.3.2 Single Process Multi Deployment (SPMD) ...................................13 3.3.3 Process Configuration Table .....................................................13 4 Messaging and Connectivity ................................................................ 15 4.1 Elements .....................................................................................15 4.2 Interprocessor and Endianness .......................................................16 4.2.1 Build-in Type Data Item Serialization ........................................18 4.2.2 User-Defined Type Serialization – Advanced Realized Types .........18 4.3 Communication Errors ...................................................................19 4.3.1 Multi-Task.............................................................................19 4.3.2 Model-Level Interface – SW:RegisterForErrorGroup.....................19 4.4 Generated Topology Summary Report .............................................20 4.5 Trace Debugging ..........................................................................20 A. Thread Proliferation in Hand Written Systems .................................... 22 PI-MDD, Asynchronous Domain Interactions, and The Event-Driven Paradigm.22 Table Of Figures ii
  • 3. Figure 1: Default Single-Task System Topology .................................................. 3 Figure 2: Example Two-Task System Topology ................................................... 3 Figure 3: Simplified Task Symbology ................................................................ 4 Figure 4: Multi-Task, Multi-Processor Topology ................................................... 5 Figure 5: Simplified Process Symbology ............................................................ 6 Figure 6: SPMD Deployment ............................................................................ 6 Figure 7: Default System Topology ................................................................... 8 Figure 8: One Process Two-Task System Topology .............................................. 9 Figure 9: Example Multi-Task, Multi-Processor Topology .....................................12 Figure 10: Example SPMD Deployment.............................................................14 iii
  • 4. Technical Note: Distributed Deployment 1 Introduction The PI-MDD method for constructing software for complex and high performance systems separates the complexities of the problem space subject matters from the strategies and details of the implementation platforms the system executes within. While programming language and target operating system are key aspects of this implementation platform, they represent one-time selections, often made by organizational inertia. Perhaps the most demanding facet of this separation discipline is that the most complex and creative aspects of platform revolve around deploying a complex software system across a distributed execution unit topology with a range of processors, processes and tasks. Modern software systems at nearly all levels of complexity execute with some degree of parallel execution. Even the simplest systems apply separate threads of control to manage the lowest level of communication input and output. Conceptually coherent “single” systems often involve highly synchronized activities executing on separate processors. In PI-MDD one of the most fundamental disciplines pushes nearly all aspects of this complexity from the modeling space. So how is this clearly important, nearly ubiquitous, and non-trivial concern addressed with PI-MDD and PathMATE? The distributed deployment of PI-MDD models to multiple execution units is managed with an integrated set of model markings (properties), specialized code generation rules/templates and a flexible set of implementation mechanisms. The PI- MDD models remain independent of this target environment topology, and can be deployed to a range of topologies to support unit testing, system testing, and alternative deployment architectures. 2 Overview 2.1 Design Elements 2.1.1 Task Task – a separate thread of execution control that can run interleaved with or in parallel with other tasks in the same Process. Tasks within the same Process can address the same memory space. Tasks are supported by specific OS-level mechanisms. Even on “bare boards” without a multi-tasking OS, there is a main Task, and interrupt vectors are considered to run in a second Interrupt Task. Modeled Task – a Task that was explicitly identified as part of the deployment of model elements through the TaskID marking. Modeled tasks are managed automatically by PathMATE mechanisms, including start, stop, nonlocalDispatchers, inter-task incident queuing, and task-local memory pools. One instance of the PfdTask class (SW_Task in C) manages each Modeled Task that is executing modeled elements. Each Modeled Task is designated to run on a specific ProcessType. Note: When a realized domain is marked to run in a specific task, the resulting Task is still a Modeled Task because it was created from marking a model element – the realized domain, and have all the conveniences of modeled domains in this regard. For the purposes of this document, a reference to a Task from here forward will mean a Modeled Task unless explicitly designated otherwise. 1
  • 5. Technical Note: Distributed Deployment Incident – An instance of a modeled Signal or of an IncidentHandle (callback) is an Incident. Each Incident is designated to run within a specific Task, by the Task allocation of either its destination class or target service/operation. NonlocalDispatcher – For each domain service and class operation that can be invoked (from PAL) from another task, a corresponding method is automatically generated to provide a function that can be directly invoked at the code level from any Task. A nonlocalDispatcher creates an IncidentHandle to its corresponding domain service or class operation and delivers this incident. Realized Task – a task that was not identified as part of the deployment of model elements, and is controlled either by PathMATE mechanisms or realized code independent of the Modeled Tasks. Realized tasks may be started by PathMATE to support specific mechanism execution – eg communication receiver and sender tasks – or may be started and managed by completely by realized code and therefore are unknown to PathMATE mechanisms. For the purposes of this document, any reference to a Realized Task from here forward will be explicitly designated. 2.1.2 Process Process – Generally contained within an executable, a Process is a set of one or more Tasks that can run interleaved with each other, or genuinely in parallel, controlled by a single scheduler, and sharing a memory space. Separate Processes cannot address the same general memory space. PathMATE supports the external identification of a process via IP address and port. One or more Processes may run on a Processor. Some deployment environments (RTOSs) will only support a single Process running on a single processor. For the purposes of this technical note in this scenario where a Processor is executing a single executable with one or more Tasks will be termed to have a single Process executing. One instance of the PfdProcess class (SW_Process in C) manages each Process that is executing modeled elements. Process Identifier (PID) - A numerical value used to uniquely address each process instance within an intercommunicating set of PathMATE-based Processes. This is specified in the Process Configuration Table and can be used as part of a Routing Parameter. Process Type - A “type” of executable built with a potentially unique set of PI-MDD elements, and with a specific set of conditional compiler directives, deployed together as a Process. Typically each different executable is a Process Type. One or more instances of a Process Type may be running at any give time in a system, each with their own unique Process Identifier. Model elements are deployed to a Process Type via the ProcessType marking. Process Configuration Table – A table constructed at runtime within each PathMATE Process that identifies (Process Type and Process ID) itself and the process instances that it can connect to. This table is either loaded from a file at startup time or may be constructed from interprocess (socket) messages received throughout system processing. The Process Configuration Table in each process instance must have consistent subsets of information with all other Process that it can connect to. 2.1.3 Processor Processor – a distinct physical processor, that may run one of more Processes. For the purposes of this technical note the Processor is a secondary concept, and the Process is used to bound deployed elements that run on a specific Processor. 2
  • 6. Technical Note: Distributed Deployment 2.2 Configurations and Connections 2.2.1 Single Task From an application perspective, the work of the system is done in the Tasks by PIM processing – model actions executing. Each Task performs this work by dispatching Incidents which may cause actions to run. The Incidents are held in an Incident Queue. Figure 1: Default Single-Task System Topology 2.2.2 Multi Task If two or more Tasks run within a Process, then a task-safe Inter-Task Incident Queue (ITIQ) is used to queue Incidents between Tasks. PIM Processing may generate Incidents that are queued locally, or on another Task’s ITIQ. Figure 2: Example Two-Task System Topology 3
  • 7. Technical Note: Distributed Deployment This internal Task structure is common for all Modeled Tasks. For simplicity from here forward Tasks will be shown as a simple outline and name: Figure 3: Simplified Task Symbology 2.2.3 Multi Process When a System is deployed to two or more Processes, TCP/IP sockets are used to convey messages between them. Incidents – service handles or events – are send between processes and dispatched upon receipt to PIM tasks where they initiated PIM processing. Each incident to be sent is converted to a PfdSockMessage through Serialization, where the data members of the incident and it’s parameter values are encoded into the stream of bytes within the PfdSockMessage. In each Process a number of realized communication tasks are automatically created and managed by the PfdProcess. One receiver task receives messages from any source, and a sender task for each destination Process instance manages the connection with and sends messages to that process. Each sender task has an Inter-Task Message Queue (ITMQ) where each outbound PfdSockMessage is queued by local PIM processing for sending. The ITMQ is implemented with a PfdSyncList containing instances of PfdSockMessageOut. 4
  • 8. Technical Note: Distributed Deployment In the topology shown below there is a single instance of ProcessType MAIN and a single instance of ProcessType DEVICE_CONTROL. Figure 4: Multi-Task, Multi-Processor Topology 5
  • 9. Technical Note: Distributed Deployment This internal Process structure is common for all Processes. For simplicity from here forward Process will be shown as a simple outline and name, containing only their Modeled Tasks: Figure 5: Simplified Process Symbology 2.2.4 Multi Process SPMD In some systems there are can be more than one instance of a Process of a single ProcessType. This general capability is termed Single Process Multi Deployment (SPMD). Often each different instance is deployed to its own processor. Note the varying PIDs in the diagram below. Figure 6: SPMD Deployment 2.3 Modeling in the Multi-Process World With the underlying asynchronous nature of PI-MDD model element interactions and the explicit PI-MDD preference of asynchronous interactions at the domain level (and below), proper PI-MDD models should generally be well formed for a range of deployment options. Certainly this is a solid start, and leaves the Application Modeler and the Target Deployment 6
  • 10. Technical Note: Distributed Deployment Designer on good joint footing to develop and effective deployment. But at some point there may be aspects of the PI-MDD model that may need adjustments to facilitate proper deployment. 2.3.1 NodeManagement In systems that fully utilize complex multi-processing target environments, often times there is explicit interaction between high-level application control (“mission-level” logic) and detailed aspects of the platform and its resources. In this realm these interactions form a bona fide Problem Space subject matter: NodeManagement. A Domain can be created to encapsulate interactions with topology awareness, local processing resources and multi-process communication mechanisms. This can alleviate application domains of the need to somehow to intelligently and flexibly respond to details about topology and resources localize these capabilities in one component. 2.3.2 Routing Parameters To explicitly route the invocation of a domain service to specific process id/task id combination, a domain service can specify a parameter as a <<Routing>> parameter. A parameter of type DestinationHandle or Group<DestinationHandle> can have it’s Stereotype marking set to Routing, which causes its nonlocalDispatcher to be generated with appropriate routing code, using the parameter runtime value(s). But where does a caller get the right values? Often times a class can have the proper routing information instantiated at startup (via a static initializer, XML file or binary instance data) and use these attributes. The SoftwareMechanisms domain services DestinationHandleLocal() and DestinationHandleRemote() provide encoding of specified Process IDs and Task IDs into DestinationHandles at runtime. In systems with more complex or dynamic topologies, a NodeManagement domain can maintain the appropriate body of routing data. It can publish services to provide this data at the level of abstraction appropriate to preserve the application’s independence of specific topologies. The caller can then go to NodeManagement to get timely and appropriate routing information. 2.3.3 Intertask Contention and the PfdSyncList A simple starting point for deploying a domain within a Process Type is to allocate the domain – in its entirety – to execute within a single task. This way there is no contention between tasks because domain share no resources that they need to explicitly protect. However there are legitimate design contexts where it is advantageous to deploy a single domain to multiple tasks in a single process. In this context a danger emerges when an element of the domain accesses a shared assets from one task when the same asset can be accessed from another task. In this case a protection mechanism is required. These shared assets are class instance containers – both for an association (across from a many participant) and for a class instance population. The ThreadSafe marking is supported on the Association and the Class. Setting it to “T” will generate a PfdSyncList for the instance container, which uses an internal PfdReentrantMutex to make access safe for intertask use. 3 Deploying to Distributed Topologies Structural Design is the PI-MDD activity where the execution units for the deployment environment are identified and model elements are allocated to them. 7
  • 11. Technical Note: Distributed Deployment 3.1 Structural Design Elements The construction, interconnection and deployment of multi-processor systems from PI-MDD models requires: - The Model: While the Domain is the primary initial focus for deployment, elements within a domain can also be separately allocated to their separate execution units, including the Domain Service, Class, Class Operation, and State. - The Topology: Tasks and process types are identified via markings ProcessType and TaskID, applied to a range of model elements. - Generated Code, Projects: While general mechanism layer code exist to support Distributed Deployment, nothing specific to any topology exists until after Transformation. Markings drive the generation of build folders for each ProcessType and code tailored to the target topology. - Run-Time Data: Each specific instance of a Process is identified via a Process ID in the Process Configuration Table. While the actual image file code content for each instance of a given ProcessType is identical (they all use the same executable file), individual Process instances can be configured to behave differently via class instance and link data. These can be captured in XML or binary instance data files which are deployed with the correct process instance and/or working directory. 3.2 Topology Identification and Allocation The identification of execution units starts with simple default allocations. If no topology markings are specified at all, the default configuration is a single ProcessType named “MAIN”, with a single Task named “SYS_TASK_ID_MAIN”. Figure 7: Default System Topology In this configuration the system project files are generated with compiler symbol settings that hide the inter-task and inter-process code in the PathMATE Mechanisms layer. While realized communications code can always be included in the system, it will not have any PathMATE generated communications mechanisms. 3.2.1 Task Identification Additional Tasks are identified by the “TaskID” marking, which can be applied to analysis elements of type Domain, DomainService, Object, or ObjectService. The default allocation for Domains is SYS_TASK_ID MAIN. The default TaskID for all other elements is the TaskID of their containing element. 8
  • 12. Technical Note: Distributed Deployment A TaskID can be set to any identifier name, however by convention all fixed task ids are in the form SYS_TASK_ID_<name>. The task id SYS_TASK_ANY has special meaning, indicating all actions in the marked element are executed locally within the calling task. An additional special TaskID value, DYNAMIC, indicates a new task is started, or retrieved from the task pool and the object/service is run in that task. Instances of DYNAMIC classes This rich capability is described in the separate Tech Note “PathMATE Dynamic Tasks”. Example: Deploy a system to a topology with two tasks: SYS_TASK_ID_MAIN and SYS_TASK_ID_LOGGING. The following marking deploys the Instrumentation domain to the ‘LOGGING task, thereby causing the generation of a multi-task system: Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING This one marking line causes the system to deploy in this configuration, with the Instrumentation domain actions running in the “PIM processing” bubble in the SYS_TASK_ID_LOGGING task, and the remainder of the system running in the “PIM processing” bubble in the SYS_TASK_ID_MAIN task: Figure 8: One Process Two-Task System Topology An invocation to an Instrumentation domain service from PIM processing (a model action) within the SYS_TASK_ID_MAIN task domain is generated as a call to that domain service’s nonlocalDispatcher counterpart. This nonlocalDispatcher creates a PfdServiceHandle to the target service on the SYS_TASK_ID_LOGGING task and delivers it – placing it on the ITIQ for SYS_TASK_ID_LOGGING. 3.2.2 Task Priorities The enumeration pfdos_priority_e defined in pfd_os.hpp specifies the task priorities that are generally available:  SYS_TASK_PRIORITY_HIGHEST  SYS_TASK_PRIORITY_HIGHER  SYS_TASK_PRIORITY_NORMAL 9
  • 13. Technical Note: Distributed Deployment  SYS_TASK_PRIORITY_LOWER  SYS_TASK_PRIORITY_LOWEST Modeled Tasks Any model element that can be marked with TaskID can also optionally be marked with TaskPriority, specifying one of the above values. The default is SYS_TASK_PRIORITY_NORMAL. NOTE: all model elements that explicitly set a TaskID must all have the same TaskPriority. Mechanisms Realized Tasks The priorities of realized tasks started by PathMATE mechanisms are controlled by the following compiler symbols. task priority symbol description default value PFD_MAIN_OOA_THREAD_PRIORITY OOA processing task SYS_TASK_PRIORITY_NORMAL PFD_RECEIVER_THREAD_PRIORITY TCP receiver task SYS_TASK_PRIORITY_NORMAL PFD_TRANSMIT_THREAD_PRIORITY TCP sender task SYS_TASK_PRIORITY_NORMAL PFD_IE_THREAD_PRIORITY Spotlight connection task SYS_TASK_PRIORITY_LOWEST PFD_INPUT_THREAD_PRIORITY Driver input task SYS_TASK_PRIORITY_NORMAL The following definition pattern in pfd_os.hpp supports the external specification of a priority for each type of realized task: #ifndef PFD_TRANSMIT_THREAD_PRIORITY #define PFD_TRANSMIT_THREAD_PRIORITY SYS_TASK_PRIORITY_NORMAL #endif In this manner the system Defines marking can be used to override default realized task priorities. 3.2.3 Task Stack By default, the stack size allocated to each Task is controlled by OS defaults. When the task is started a 0 is passed into the OS call and it determines the size of the stack. You can specify a default stack size for all Tasks via the DefaultStackSize system marking, eg: System,SimpleOven,DefaultStackSize,500000 In addition to controlling size, the actual memory allocated for use as the task stack can be allocated explicitly, allowing custom realized code to monitor it for overrun, etc. By marking the system with a non-0 DefaultStackSize and defining the compile flag PATH_ALLOCATE_EXPLICIT_STACK, a stack is explicitly allocated in pfd_start_task_param(). If the Task being started is a modeled task, the instance of PfdTask that it is running has it’s stack data members set with this info. (Platform support may vary; inspect pfd_os.cpp pfd_start_task_param() to see if your platform is supported.) 3.2.4 ProcessType Definition By default a single ProcessType is generated – MAIN. Build file generation produces a single build folder called MAIN, and produces a single executable. By default all domains and domain services are allocated to the ProcessType MAIN. Additional ProcessTypes can be defined by allocating a domain or domain service via the ProcessType marking. For example Domain,*.ExternalDeviceControl,ProcessType,DEVICE_CONTROL Once more ProcessTypes are defined (in addition to MAIN) code is generated to communicate between multiple processes. Each ProcessType generates a build folder with the ProcessType name, and generates code that starts the domains and tasks configured for that ProcessType. 10
  • 14. Technical Note: Distributed Deployment The code also automatically starts the PfdProcess’s realized communication tasks to handle inter-process communications. With the definition of ProcessTypes, PfdIncident::deliver() – used by nonlocalDispatchers – is extended where needed to handle inter-process invocation scenarios, automatically using the PfdProcess’s realized communication tasks. Example: Deploy a system to a topology with two ProcessTypes. ProcessType MAIN has two tasks - SYS_TASK_ID_MAIN and SYS_TASK_ID_LOGGING. ProcessType DEVICE_CONTROL has two tasks - SYS_TASK_ID_MAIN and SYS_TASK_ID_REGISTER_IO. The following marking deploys the DeviceControl and HardwareIF domains to the DEVICE_CONTROL ProcessType, thereby causing the generation of a multi-process system: Domain,*.DeviceControl,ProcessType,DEVICE_CONTROL Domain,*.HardwareIF,ProcessType,DEVICE_CONTROL Because all other domains by default are allocated to ProcessType MAIN, and we want SoftwareMechanisms services to run locally wherever they are called, we allocate this domain to ProcessType ANY, TaskID SYS_TASK_ANY: Domain,*.SoftwareMechanisms,ProcessType,ANY Domain,*.SoftwareMechanisms,TaskID,SYS_TASK_ANY To retain the Instrumentation deployment configuration, and to run the HardwareIF domain in its own task we add these markings: Domain,*.Instrumentation,TaskID,SYS_TASK_ID_LOGGING Domain,*.HardwareIF,TaskID,SYS_TASK_ID_REGISTER_IO These 6 markings cause the system to deploy in this configuration: 11
  • 15. Technical Note: Distributed Deployment Figure 9: Example Multi-Task, Multi-Processor Topology To specify a default process type other than MAIN, the default ProcessType can be changed via the system marking DefaultProcessType, for example: System,*,DefaultProcessType,MY_PROCESS 3.3 Topology Definition – the Process Configuration Table 12
  • 16. Technical Note: Distributed Deployment 3.3.1 Single Process Single Deployment Once an executable is built for each ProcessType, one or more instances of the executable may be started. By default PathMATE assumes a simple configuration where a single instance of each ProcessType is run. The PathMATE PfdProcess class has a Process Configuration Table to keep track of the configuration of Process Instances that run in the system. Each Process tracks the processes it can connect to with the following information: Process ID ProcessType IP address TCP port Table 1: Process Configuration Table Fields A key stop in understanding this Process Configuration Table is in the generated gc/sys/system_indices.hpp, where the ProcessTypes constants are defined: /* define process types for the system */ enum { SYS_PROCESS_TYPE_MAIN = 0, SYS_PROCESS_TYPE_DEVICE_CONTROL = 1, SYS_PROCESS_TYPE__count = 2 }; The generated code includes a default Process Configuration Table that allows one instance of each ProcessType. The example system depicted in Figure 9: Example Multi-Task, Multi- Processor Topology has the following default Process Configuration Table: Process ID ProcessType IP address TCP port 0 0 127.0.0.1 5501 1 1 127.0.0.1 5502 Table 2: Default Process Configuration Table 3.3.2 Single Process Multi Deployment (SPMD) Some system configurations call for more than one instance of a given ProcessType be executing at the same time. Essentially this permits two or more entries in the Process Configuration Table to specify a given ProcessType. In cases where a single instance of a given ProcessType is configured, the system can continue to use implicit routing of services to that process based only on ProcessType, as was done in the Single Process Single Deployment configuration. However for cases where two or more instances of a given ProcessType are configured, the system requires a way for the application to control in which process instance a given services will run. A Domain Service with a parameter marked as a Routing Parameter routes the service invocation to the Process ID and Task specified by this parameter. In this manner a PathMATE system can utilize multiple Process Instances of the same Process Type. 3.3.3 Process Configuration Table To specify a custom target topology a Process Configuration Table is constructed in a comma- separated file, specifying a row in the table for each Process instance. Each row must have a unique Process ID and combination of IP address and TCP port. This example table shows our system with 4 instances of the DEVICE_CONTROL process running simultaneously, as depicted below (SimpleOven_with_4_DCs.txt): 13
  • 17. Technical Note: Distributed Deployment Process ID ProcessType IP address TCP port 0 0 43.2.0.1 6000 1 1 43.2.1.21 7001 2 1 43.2.1.31 7002 3 1 43.2.1.41 7003 4 1 43.2.1.51 7004 Table 3: Example SPMD Process Configuration Table Figure 10: Example SPMD Deployment The location of the Process Configuration Table file is provided to the process executable at run time via the “-config” command line arguments, eg: SimpleOven-DEVICE_CONTROL.exe -config SimpleOven_with_4_DCs.txt A simple strategy to ensure each Process Instance has process configuration information consistent with all other Process Instances is to build a single Process Configuration Table file and provide it to all Process Instances when they start. However if some subsets of Process Instances do not communicate with other subsets of Process Instances, each Process Instance can have it’s own version of Process Configuration Table. Not all copies need to contain all Process Instance entries. The following rules apply:  When specified, each Process Instance (identified by Process ID) must specify the same Process Type, IP Address and TCP port in all Process Configuration Table files  A Process Instance can only send messages to a Process Instance it knows about via its own Process Configuration Table file 14
  • 18. Technical Note: Distributed Deployment 3.3.3.1 Additional Fields – UDP Port and Spotlight Port To support external realized code that may require UDP port information and to allow the manual specification of Spotlight ports, the Process Configuration Table carries two additional fields for this information: Process ID ProcessType IP address TCP port UDP port Spotlight Port Table 4: Additional Process Configuration Table Fields The Spotlight port is used by the Spotlight instrumentation code. In addition, these static methods are provided by the PfdProcess class for realized code-level access to the topology information: // PROCESS TABLE ACCESSORS // Get IP address for specified process static int getPidFromIpAddress(int ip_address); // Get IP address for specified process static int getIpAddress(DestinationHandle dest); // Get TCP port for specified process static int getTcpPort(DestinationHandle dest); // Get UDP port for specified process static int getUdpPort(DestinationHandle dest); // Get Spotlight port for specified process static int getSpotlightPort(DestinationHandle dest); // Get Spotlight port for current process static int getSpotlightPort(); 3.3.3.2 Adding to the Process Configuration Table at Run Time Immediately preceding the first message to carry application data, a connect message is sent to the destination Process Instance. The connect message carries the sender’s PID, IP address and TCP listener port. If the destination Process Instance did not already have an entry in its Process Configuration Table for the sender Process Instance, one is added. 3.3.3.3 Minimal Process Configuration Table To facilitate deployments to targets without files systems or otherwise without topology configuration files, a Process Instance can discover and build their system topology table at run time. To suppress the default process configuration, start the process executable with the “- pid” and “-port” command line argument,s eg: SimpleOven-DEVICE_CONTROL.exe –pid 4 –port 7004 Do not specify a process config table. Other processes connecting to it will add to its internal process config table via the information provided in their connect messages. Key limitation: a Process Instance with a Minimal Process Configuration Table cannot send messages to other processes until those processes send a message to this Process Instance first. 4 Messaging and Connectivity 4.1 Elements The fundamental unit of messaging in a PI-MDD model is the IncidentHandle. This is a PathMATE Mechanism that identifies a specific domain service, class operation or signal to be invoked/generated, and carries the parameter values needed for any input parameters defined 15
  • 19. Technical Note: Distributed Deployment by that service. An asynchronous mechanism, IncidentHandles cannot be constructed for a service/operation with return values or output parameters. IncidentHandles are often referred to by the term Incident. There are two subtypes of IncidentHandle – the Event and the ServiceHandle. ServiceHandles handle domain service and class operation invocations, so they are the type of IncidentHandle most commonly encountered. In a PI-MDD model the invocation of a domain service or class operation is specified in PAL with a construct that looks just like a synchronous function/procedure/method invocation from common implementation languages. Depending on the marking of the model for topology, the tasks/process context for the caller may be in a different execution unit (task/process) than the action being invoked. This requires the resolution of the PAL invocation with a range of possible implementations. The following term classify these resolutions: - Local: the caller and the target service/operation are within the same task - Inter-task: the caller and the target service/operation are within the same process, but between different tasks - Inter-process: the caller and the target service/operation are between different processes (there is no distinction between node-local inter-process and inter-processor inter-process communications) incident type locality communication mechanism Operation Local synchronous function/method invocation; local Invocation Local IncidentHandle dispatch Inter-task convert(*1); queue on target task ITIQ convert(*1); send via socket; upon receipt queue on Inter-Process target task ITIQ IncidentHandle CALL Local local dispatch Inter-task queue on target task ITIQ send via socket; upon receipt queue on target task Inter-Process ITIQ NOTES: *1 - The operation invocation is converted automatically to an IncidentHandle. Table 5: Incident Communication Mechanisms 4.2 Connection Initiation and Reconnection When a Process starts up, the Process Configuration Table is established as outlined in section 3.3.3 Process Configuration Table. For each row in the Table a Connection is created to maintain the current state of the messaging connection to each known remote process. However at this time no actual connections to other Process instances are started. Instead they are started on demand, when an action in the local Process causes a message (incident) to be sent to a remote Process. 4.2.1 Connect-on-Initial-Send When an outbound message is queued, the state of the Connection to the destination Process is checked. If it is not connected at this time connection is initiated. This initial outbound message is held in a queue until the connection is successfully established. Then the message, along with any others that may have been queued up, are send to the destination. Once all 16
4.2.2 Retry Initial Connection Delay

If the initial attempt to establish a connection to a remote Process fails, reconnection is attempted every PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds. If not specified by the user, this compiler symbol has a default value of 1000.

4.2.3 Reconnecting an Established Connection

If a connection fails after it has been established and is in use, priority is placed on restoring it quickly. A limited number of reconnection attempts are tried immediately after the failure is detected, without any delay between them. That number is controlled by the compiler symbol PATH_IMMEDIATE_RECONNECT_MAX_COUNT; if not specified by the user, it has a default value of 5. If the connection cannot be reestablished within PATH_IMMEDIATE_RECONNECT_MAX_COUNT attempts, reconnection attempts continue every PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS milliseconds.

4.2.4 Inter-Send Delay

The default task priorities place sender tasks at a lower priority, which is expected to allow PIM processing to continue even when sender tasks are fully occupied by outbound message traffic. To facilitate the even distribution of message activity across all sending tasks and the receiver task, a "sleep" (via pfd_sleep) of 0 milliseconds is issued after each individual socket send call, releasing the processor if needed. This sleep also happens between individual socket packet sends for large messages ("large" as defined by your current TCP/IP stack configuration). If the user wishes to further throttle outbound message traffic, PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be specified; if not specified by the user, this compiler symbol has a default value of 0. Alternatively, if the user wants to eliminate the inter-packet "sleep" altogether, PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS can be set to -1, and the pfd_sleep call is skipped.

4.3 Outbound Message Queues

When a PIM processing Task delivers an Incident to be sent interprocess, it is serialized into a PfdSockMessageOut and placed on the outbound message queue of the Connection corresponding to its destination. The queue is a PfdSyncList, and its length limit depends on the current socket connection state of the Connection. PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT specifies the maximum number of messages held by the queue while disconnected; if not specified by the user, this compiler symbol has a default value of 128. PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT specifies the queue limit while connected; its default is 1024. If PIM processing attempts to enqueue an outbound message when the outbound message queue is at capacity, the oldest message is discarded. (A sketch of the compiler-symbol default pattern follows.)
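The compiler symbols above follow the usual preprocessor-default idiom: the mechanism code supplies a value only when the builder has not. A minimal sketch, assuming a conventional #ifndef guard (the actual mechanism source may differ):

   // Hedged sketch: how user-overridable compile-time defaults typically work.
   // Pass e.g. -DPATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT=2048 on the compiler
   // command line to override a default.
   #ifndef PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS
   #define PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS 1000
   #endif

   #ifndef PATH_IMMEDIATE_RECONNECT_MAX_COUNT
   #define PATH_IMMEDIATE_RECONNECT_MAX_COUNT 5
   #endif

   #ifndef PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT
   #define PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT 128
   #endif

   #ifndef PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT
   #define PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT 1024
   #endif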
4.4 Interprocessor and Endianness

The inter-processor communication pattern is the same as the inter-process case, with sockets providing connectivity, but an additional concern for inter-processor communication is byte ordering. To ensure consistency of communications and readability of all messages between processors of either endianness, all messages are constructed with Network Byte Ordering (big endian). The C++ industry-standard approach to serialization – a parent serializable class plus a data factory for deserialization – is implemented with the PfdSerializable mechanism class. This performs the proper message construction and decoding automatically for all PI-MDD modeled elements and built-in types.

4.4.1 Built-in Type Data Item Serialization

Incident parameter values are serialized and transmitted between Processes automatically when the invocation of the target service/operation/event crosses Process boundaries.

4.4.1.1 Special Type Handling

To ensure complete encoding of a data item of a user-defined type that is implemented as a 64-bit integer, mark the user-defined type with an ExternalType value of "long long" or "unsigned long long". Real values are implemented with the double type, and are serialized per the IEEE Standard for Floating-Point Arithmetic (IEEE 754). An IncidentHandle passed as a parameter is serialized in its entirety – including all of its own parameter values – and sent to the destination. In the case of the built-in type Handle, and user-defined pointer-based types derived from it, only the pointer value is transmitted. The expectation is that Handle pointers are never dereferenced anywhere except where they were created.

4.4.2 User-Defined Type Serialization – Advanced Realized Types

The user may define a class at the realized code level that implements a model-level user-defined type. If data items (parameters) of this type cross process boundaries, the PathMATE serialization mechanisms may be applied to aid in this communication. An Advanced Realized Type (ART) is created by inheriting your realized class from the PathMATE PfdSerializable mechanism class. The user then provides class-specific implementations of the virtual serialization and deserialization methods specified in PfdSerializable. In addition, the model-level user-defined type is marked with Serializable = TRUE. For C++ and Java, the underlying implementation class for the data type must inherit from PfdSerializable and implement the virtual methods insertToBuffer and extractFromBuffer. Network-safe serialization and de-serialization functions are provided in msg_base.hcpp/hpp for all supported serializable scalar types.
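As an illustration, the hedged C++ sketch below shows the shape of an ART implementation class. PfdSerializable, insertToBuffer, and extractFromBuffer are named in this note; the method signatures, the buffer type, and its helper calls are illustrative assumptions that would need to match the actual mechanism headers.

   // Hypothetical ART sketch: a model-level user-defined type "GpsFix"
   // implemented by a realized C++ class. The model type would be marked
   // with Serializable=TRUE, ExternalType=GpsFix, and an IncludeFile naming
   // the header that defines this class.
   class GpsFix : public PfdSerializable
   {
   public:
       double latitude;
       double longitude;

       // Serialize this instance into an outbound message buffer using the
       // network-safe scalar helpers (names assumed) from msg_base.hcpp/hpp.
       virtual void insertToBuffer(PfdMessageBuffer& buffer)
       {
           buffer.insertDouble(latitude);    // assumed helper
           buffer.insertDouble(longitude);   // assumed helper
       }

       // Reconstruct this instance from an inbound message buffer, in the
       // same field order used by insertToBuffer.
       virtual void extractFromBuffer(PfdMessageBuffer& buffer)
       {
           latitude  = buffer.extractDouble();   // assumed helper
           longitude = buffer.extractDouble();   // assumed helper
       }
   };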
In C, the following additional properties are required to specify the serialization functions:

- SerializeFunction=<name of serialize function>
- DeserializeFunction=<name of deserialize function>

These serialize and deserialize functions must match the following function pointer type definitions defined in sw_msg_base.h:

   typedef void (*sw_serialize_function_t)(message_buffer_t *msg_buf, int *msg_len, void* target, bool_t is_ascii);
   typedef void* (*sw_deserialize_function_t)(message_buffer_t *msg_buf, int *msg_len, bool_t is_ascii);

Additional (existing) markings are also helpful in providing all the information needed to conveniently apply these capabilities:

- ExternalType=<implementation type this user-defined type maps to>
- IncludeFile=<name of file defining the above ExternalType>

4.5 Communication Errors

The Software Mechanisms Error Registrar is the central point for error handler callback registration and error reporting. Errors are grouped, and error handlers are registered per group. If no user-defined error callback has been registered for a group and an error is reported against that group, a default error handler is applied. In addition to the pre-defined error groups, the user can define their own error groups and error codes.

4.5.1 Multi-Task

The Error Registrar provides a process-wide point of error notification registration for application-level domains (and realized code). Model-level callbacks can be registered for notification when an error is received under a specified ErrorGroup.

4.5.2 Model-Level Interface – SW:RegisterForErrorGroup

The service SoftwareMechanisms:RegisterForErrorGroup(Integer error_group, IncidentHandle error_handler) handles callback registration for any group of errors reported via SoftwareMechanisms::ReportError() and from built-in mechanism-level error reporting. Errors are grouped by type, and an error handler is registered for a specific group. If no user-defined error callback has been registered for a group and an error is reported against that group, a default error handler is applied. User-provided error handlers must conform to the ErrorHandler Incident Profile – publishing the same parameters as the SoftwareMechanisms:DefaultErrorHandler service (Integer error_group, Integer error_code). In addition to the pre-defined error groups, the user can define their own groups. If a communication error occurs, the provided callback is loaded with values for error_group and error_code, and then called.

For interprocess communication errors, SW_ERROR_GROUP_COMMUNICATIONS is used. This group includes the following error codes:

   SW_ERROR_CODE_COMMUNICATIONS_ACCEPT_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_ADDRESS_FAILURE,
   SW_ERROR_CODE_COMMUNICATIONS_BIND_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_CONNECT_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_DISCONNECT,
   SW_ERROR_CODE_COMMUNICATIONS_INCONSISTENT_CONNECTIONS,
   SW_ERROR_CODE_COMMUNICATIONS_LISTEN_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_NETWORK_INIT_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_OTHER,
   SW_ERROR_CODE_COMMUNICATIONS_RECEIVE_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_SEND_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_SOCKET_CREATE_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_SOCKET_SHUTDOWN_FAILED,
   SW_ERROR_CODE_COMMUNICATIONS_SOCKET_ERROR,
   SW_ERROR_CODE_COMMUNICATIONS_TIMEOUT,
   SW_ERROR_CODE_COMMUNICATIONS_CONNECT_MESSAGE_MISMATCH
See error_codes.hpp for the most complete list of pre-defined error codes.

4.6 Generated Topology Summary Report

Each time code is generated, TopologyReport.txt is generated into the _info subfolder of the deployment folder. This is the system topology report, emitting the ProcessType and TaskID of all deployable elements. The report can be helpful for complex systems as a definitive reference for the actual ProcessType and TaskID of each item, resolving potential ambiguities introduced by defaults and multiple properties files.

4.7 Trace Debugging

Compiler preprocessor definitions are used to control trace debugging – printing to the standard error stream cerr.

 PATH_DEBUG_INTERPROCESS_HL – Enables trace debugging for summary-level interprocess processing
 PATH_DEBUG_INTERPROCESS – Enables trace debugging for all levels of interprocess processing (automatically activates PATH_DEBUG_INTERPROCESS_HL)
 PATH_DEBUG_INTERPROCESS_MSG – Enables trace debugging for all interprocess message sending and receipt
 PATH_DEBUG_TASK_TRACE – Enables trace debugging for intertask processing

4.8 Realized High-Speed Communications

In general each outbound socket message is created by taking an outbound incident and serializing it into a PfdSockMessage, as identified in section 2.2.3 Multi Process. In some specific cases a message is sent repeatedly with little or no change to its contents between transmissions. If this happens frequently, or with very large incident parameter ART payloads, the designer may find that complete re-serialization of the incident is too slow to meet message processing deadlines. In these cases it may be possible to avoid the repeated serialization overhead by writing realized code that constructs a PfdSockMessage and sends it directly with PfdTopology::enqueueOutboundRawMessage(). This realized code can create an instance of a PfdSockMessageOut, send it, poll for send completion, then modify specific data values within it and resend the same instance, all with relatively high efficiency. This sidesteps the safer and generally more convenient Incident-based approach, but also avoids that approach's buffer allocation and full serialization processing. Steps (a hedged sketch follows the list):

 Create a raw socket message PfdSockMessageOut instance, usually by manually serializing an Incident in realized code
 Call deferDeletion() on the raw message to prevent it from being deleted after each send
 Save off a pointer to this raw message for repeated use
 Enqueue the raw message by calling PfdProcess::sendRawMessageInterProcess()
 Before updating specific data values in the raw message for a subsequent send, call isComplete() on the message and ensure it returns TRUE
 Update specific data values in the raw message for the next send
 Send again with PfdProcess::sendRawMessageInterProcess()
 Don't forget to put your toys away when play time is over: delete raw_message;
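Putting the steps together, this hedged C++ sketch shows one plausible shape for the fast path. PfdSockMessageOut, deferDeletion(), isComplete(), pfd_sleep(), and PfdProcess::sendRawMessageInterProcess() are named in this note; the message construction, the update helper, and the send call's argument list are illustrative assumptions.

   // Hypothetical fast-path sketch: resend one raw message repeatedly.
   // serializeIncidentToRawMessage() and updatePayloadFields() are assumed
   // application-specific helpers, not PathMATE APIs.
   PfdSockMessageOut* raw_message = serializeIncidentToRawMessage();

   raw_message->deferDeletion();   // keep the message alive across sends
   PfdProcess::sendRawMessageInterProcess(raw_message);

   for (int send_count = 1; send_count < RESEND_COUNT; ++send_count)
   {
       // Never touch the buffer until the previous send has fully completed.
       while (!raw_message->isComplete())
       {
           pfd_sleep(0);           // yield the processor while waiting
       }
       updatePayloadFields(raw_message);   // change only the updated values
       PfdProcess::sendRawMessageInterProcess(raw_message);
   }

   delete raw_message;             // reclaim the message when done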
A. Thread Proliferation in Hand-Written Systems

Software for complex, high-performance systems has, since the dawn of 3GLs (FORTRAN, COBOL, C, Pascal, Ada, C++, Java, etc.), been expressed almost universally as long synchronous sequences of subroutine/procedure/function/method invocations. This serialization of processing is more a product of the single-dimensionality of these programming languages than of any inherent simplicity in the problem spaces being addressed.

The advent of multitasking and multiprocessor systems drove 3GL programmers to break single-dimensional synchronous processing into separate sequences – threads. This decomposition has met with varying levels of success, and introduced system partitioning techniques that try to take advantage of the "apparent" parallelism offered by OS-level tasks. The subsequent rise in availability of multi-core systems (and their "actual" parallelism) has further spurred the move to multi-threading.

As development organizations gain proficiency with OS-level multithreading, and in the absence of any other way to achieve parallelism (the fundamentally synchronous nature of programming languages hasn't changed), typical complex system architectures have seen a sharp rise in the number of OS-level threads employed. With this rise come the computational costs of these complex OS mechanisms, and the development and maintenance costs of more complex designs. In many cases an overabundance of threads has resulted in systems with lower performance, even on more capable processors with more memory. PI-MDD offers an alternative to some of this harmful thread proliferation.

PI-MDD, Asynchronous Domain Interactions, and the Event-Driven Paradigm

The use of PI-MDD models breaks some of the constraints of synchronous sequences of subroutine/procedure/function/method invocations. Domain Modeling pushes system design, from the very beginning, toward a base of primarily asynchronous interactions. With direct support for callbacks in the UML Action Language via the IncidentHandle, and with state machines available within domains, a wide selection of asynchronous elements is available to augment the basic synchronous function/method call. With PathMATE automation, a model-level function call (to a domain service or class operation) can also be generated with an implementation that uses an asynchronous IncidentHandle as needed. These asynchronous primitives break the simplistic lock that synchronous programming languages put on basic behavioral expression.

The result of these asynchronous forms of expression is a PI-MDD system with a large number of simple entities operating in asynchronous independence from each other, even when deployed to the same OS thread of control. OS threads are no longer needed to work around blocking calls or to interweave long chains of operations.

Augmenting this fundamentally asynchronous form of behavioral expression, PI-MDD components can also be deployed to separate OS threads as needed – when blocking (or "nearly blocking", long-latency) calls to realized code functions must be made. Typically these are found with communication sends and receives, file operations, or calls into legacy code still organized in long, synchronous call chains. Additional tasks are also applied to manage priority, so that a few higher-priority elements are allocated to a high-priority task separate from the main task at ordinary priority.
The net result is the allocation of much larger fragments of processing to fewer OS threads, while still realizing greater interleaving of processing through fundamentally asynchronous behavioral expression. The need for inter-task synchronization mechanisms is greatly reduced, and run-time overhead and general complexity are reduced with it.
B. Marking Summary

marking name | applies to | default value | description
TaskID | Domain, Service, Class, Operation | SYS_TASK_ID_MAIN | Allocates action processing to a Task
ProcessType | Domain, Service | <DefaultProcessType> | Allocates action processing to a Process Type
Routing | Parameter | <none> | Indicates this DestinationHandle parameter specifies the destination
ThreadSafe | Class, Association | F | "T" generates a PfdSyncList for the instance container
DefaultStackSize | System | 0 | Specifies stack size for all Tasks; 0 indicates use OS default
DefaultProcessType | System | MAIN | Allows the default ProcessType name to be changed
TaskPriority | Domain, Service, Class, Operation | SYS_TASK_PRIORITY_NORMAL | Sets the priority of the task for this analysis element; must be a pfdos_priority_e literal
Defines | System, Domain | <none> | Allows specification of compiler symbol values in the markings
ExternalType | User Defined Type | void* | Implementation type for an ART
IncludeFile | User Defined Type | <none> | Include file for the implementation type for an ART
Serializable | User Defined Type | FALSE | TRUE indicates this is an ART, inheriting from PfdSerializable
C. Compiler Symbol Summary

symbol name | default value | description
PFD_MAIN_OOA_THREAD_PRIORITY | SYS_TASK_PRIORITY_NORMAL | Default task priority for each OOA processing task
PFD_RECEIVER_THREAD_PRIORITY | SYS_TASK_PRIORITY_NORMAL | Default task priority for the TCP receiver task
PFD_TRANSMIT_THREAD_PRIORITY | SYS_TASK_PRIORITY_NORMAL | Default task priority for each TCP sender task
PFD_IE_THREAD_PRIORITY | SYS_TASK_PRIORITY_LOWEST | Default task priority for the Spotlight connection task
PFD_INPUT_THREAD_PRIORITY | SYS_TASK_PRIORITY_NORMAL | Default task priority for the Driver input task
PATH_ALLOCATE_EXPLICIT_STACK | <not defined> | Define this to cause a stack to be allocated explicitly, separate from the start-task OS call
PATH_DELAY_BETWEEN_RECONNECT_ATTEMPTS_MS | 1000 | Time between sender task attempts to reconnect with a destination process, in milliseconds
PATH_IMMEDIATE_RECONNECT_MAX_COUNT | 5 | Max number of times the sender task attempts to reconnect with a destination process without waiting between attempts
PATH_DELAY_BETWEEN_MESSAGE_PACKET_SENDS_MS | 0 | Time between interprocess message packet sends (socket send calls), in milliseconds; 0 means pfd_sleep(0); -1 means no sleep at all
PATH_ENTRY_DISCONNECTED_TX_QUEUE_LIMIT | 128 | Max number of pending outbound messages queued for a single destination while not connected to that destination
PATH_ENTRY_CONNECTED_TX_QUEUE_LIMIT | 1024 | Max number of pending outbound messages queued for a single destination while connected to that destination
PATH_DEBUG_INTERPROCESS_HL | <not defined> | Define this to turn on high-level "printf" debugging of interprocess mechanisms
PATH_DEBUG_INTERPROCESS | <not defined> | Define this to turn on full "printf" debugging of interprocess mechanisms
PATH_DEBUG_INTERPROCESS_MSG | <not defined> | Define this to turn on "printf" debugging of interprocess message sending
PATH_DEBUG_TASK_TRACE | <not defined> | Define this to turn on "printf" debugging of PfdTask mechanics