Ptolemy II: Heterogeneous Concurrent Modeling and Design in Java
The inspection paradox concerns the average time that a passenger waits for a bus (more precisely, the expected value). If the busses arrive at regular intervals with interarrival time equal to T, then the expected waiting time is T/2, which is perfectly intuitive. Counterintuitively, however, if the busses arrive according to a Poisson process with mean interarrival time equal to T, the expected waiting time is T, not T/2. These expected waiting times are approximated in this applet by the average waiting time. The applet also shows the actual arrival times for both passengers and busses, and the waiting time of each passenger.
The intuition that resolves the paradox is as follows. If the busses are arriving according to a Poisson process, then some intervals between busses are larger than other intervals. A particular passenger is more likely to arrive at the bus stop during one of these larger intervals than during one of the smaller intervals. Thus, the expected waiting time is larger if the bus arrival times are irregular.
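This intuition can be checked numerically. The following Java sketch (illustrative only, not part of the applet; the class and method names are made up) simulates Poisson bus arrivals with mean interarrival time T and passengers arriving uniformly at random; the measured average wait comes out near T rather than T/2:

```java
import java.util.Random;

// Monte Carlo check of the inspection paradox. All names here are
// illustrative; this is not the code of the Ptolemy II applet.
public class InspectionParadox {
    // Simulates bus arrivals as a Poisson process with mean
    // interarrival time T, then measures the average wait of
    // passengers who arrive uniformly at random.
    static double averageWait(double T, int numBusses, int numPassengers, long seed) {
        Random rng = new Random(seed);
        double[] arrivals = new double[numBusses];
        double t = 0.0;
        for (int i = 0; i < numBusses; i++) {
            // Exponential interarrival times yield a Poisson process.
            t += -T * Math.log(1.0 - rng.nextDouble());
            arrivals[i] = t;
        }
        double horizon = arrivals[numBusses - 1];
        double totalWait = 0.0;
        for (int p = 0; p < numPassengers; p++) {
            double arrive = rng.nextDouble() * horizon;
            // Binary search for the first bus at or after the passenger.
            int lo = 0, hi = numBusses - 1;
            while (lo < hi) {
                int mid = (lo + hi) >>> 1;
                if (arrivals[mid] < arrive) {
                    lo = mid + 1;
                } else {
                    hi = mid;
                }
            }
            totalWait += arrivals[lo] - arrive;
        }
        return totalWait / numPassengers;
    }

    public static void main(String[] args) {
        double T = 1.0;
        double wait = averageWait(T, 200000, 100000, 42L);
        // The measured wait is close to T, not T/2.
        System.out.println("Average wait = " + wait + " (T = " + T + ")");
    }
}
```

Replacing the exponential interarrival times with a constant T would drive the measured average down to T/2, matching the regular-schedule case.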
<point y="yValue">
<point x="xValue" y="yValue">
<point y="yValue" lowErrorBar="low" highErrorBar="high">
<point x="xValue" y="yValue" lowErrorBar="low" highErrorBar="high">
The first form specifies only a Y value. The X value is implied (it is the count of points seen before in this data set). The second form gives both the X and Y values. The third and fourth forms give low and high error bar positions (error bars are used to indicate a range of values associated with one data point). Points given using the syntax above will be connected by lines if the connected option has been given the value "yes" (or if nothing has been said about it). Data points may also be specified using one of the following forms:
<move y="yValue">
<move x="xValue" y="yValue">
<move y="yValue" lowErrorBar="low" highErrorBar="high">
<move x="xValue" y="yValue" lowErrorBar="low" highErrorBar="high">
This causes a break in connected points, if lines are being drawn between points. I.e., it overrides the connected option for the particular data point being specified, and prevents that point from being connected to the previous point.
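To make this concrete, a dataset combining these forms might look as follows. The point, move, and dataset element names here are assumptions based on the PlotML DTD and the attribute names above, so check the DTD if in doubt:

```
<dataset name="example">
  <point x="0" y="1"/>
  <point x="1" y="2" lowErrorBar="1.8" highErrorBar="2.2"/>
  <move x="5" y="0"/>
  <point x="6" y="1"/>
</dataset>
```

The move element breaks the connecting line, so the point at x=5 is not joined to the point at x=1.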
12.4.5 Bar graphs
To create a bar graph, use the bar graph directive. You will also probably want the connected option to have value "no." The barWidth is a real number specifying the width of the bars in the units of the X axis. The barOffset is a real number specifying how much the bar of the i-th data set is offset from the previous one. This allows bars to "peek out" from behind the ones in front. Note that the front-most data set will be the first one.
12.4.6 Histograms
To configure a histogram on a set of data, use the histogram directive. The binWidth option gives the width of a histogram bin. I.e., all data values within one binWidth are counted together. The binOffset value is exactly like the barOffset option in bar graphs. It specifies by how much successive histograms "peek out." Histograms work only on Y data; X data is ignored.
12.5 Old Textual File Format
Instances of the PlotBox and Plot classes can read a simple file format that specifies the data to be plotted. This file format predates the PlotML format, and is preserved primarily for backward compatibility. In addition, it is significantly more concise than the PlotML syntax, which can be advantageous, particularly in networked applications. In this older syntax, each file contains a set of commands, one per line, that essentially duplicate the methods of these classes. There are currently two sets of commands: those understood by the base class PlotBox, and those understood by the derived class Plot. Both classes ignore commands that they do not understand. In addition, both classes ignore lines that begin with "#", the comment character. The commands are not case sensitive.
12.5.1 Commands Configuring the Axes
The following commands are understood by the base class PlotBox. These commands can be placed in a file and then read via the read() method of PlotBox, or via a URL using the PlotApplet class. The recognized commands include:
• TitleText: string
• XLabel: string
• YLabel: string
These commands provide a title and labels for the X (horizontal) and Y (vertical) axes. A string is simply a sequence of characters, possibly including spaces. There is no need here to surround them with quotation marks, and in fact, if you do, the quotation marks will be included in the labels. The ranges of the X and Y axes can optionally be given by commands like:
• XRange: min, max
• YRange: min, max
The arguments min and max are numbers, possibly including a sign and a decimal point. If they are not specified, then the ranges are computed automatically from the data and padded slightly so that data points are not plotted on the axes. The tick marks for the axes are usually computed automatically from the ranges. Every attempt is made to choose reasonable positions for the tick marks regardless of the data ranges (powers of ten multiplied by 1, 2, or 5 are used). However, they can also be specified explicitly using commands like:
• XTicks: label position, label position, ...
• YTicks: label position, label position, ...
A label is a string that must be surrounded by quotation marks if it contains any spaces. A position is a number giving the location of the tick mark along the axis. For example, a horizontal axis for a frequency domain plot might have tick marks as follows:
XTicks: -PI -3.14159, -PI/2 -1.570795, 0 0, PI/2 1.570795, PI 3.14159
Tick marks could also denote years, months, days of the week, etc. The X and Y axes can use a logarithmic scale with the following commands:
• XLog: on
• YLog: on
The tick labels, if computed automatically, represent powers of 10. The log axis facility has a number
of limitations, which are documented in "Limitations" on page 12-243. By default, tick marks are connected by a light grey background grid. This grid can be turned off with the following command:
• Grid: off
It can be turned back on with:
• Grid: on
Also, by default, the first ten data sets are shown each in a unique color. The use of color can be turned off with the command:
• Color: off
It can be turned back on with:
• Color: on
Finally, the rather specialized command
• Wrap: on
enables wrapping of the X (horizontal) axis, which means that if a point is added with X out of range, its X value will be modified modulo the range so that it lies in range. This command only has an effect if the X range has been set explicitly. It is designed specifically to support oscilloscope-like behavior, where the X value of points is increasing, but the display wraps it around to the left. A point that lands on the right edge of the X range is repeated on the left edge to give a better sense of continuity. The feature works best when points land precisely on the edge, and are plotted from left to right, increasing in X. All of the above commands can also be invoked directly by calling the corresponding public methods from Java code.
12.5.2 Commands for Plotting Data
The set of commands understood by the Plot class supports specification of data to be plotted and control over how the data is shown. The style of marks used to denote a data point is defined by one of the following commands:
• Marks: none
• Marks: points
• Marks: dots
• Marks: various
• Marks: pixels
Here, points are small dots, while dots are larger. If various is specified, then unique marks are used for the first ten data sets, and then recycled. If pixels is specified, then a single pixel is drawn. Using no marks is useful when lines connect the points in a plot, which is done by default. If the above directive appears before any DataSet directive, then it specifies the default for all data sets. If it appears after a DataSet directive, then it applies only to that data set. To disable connecting lines, use:
• Lines: off
To re-enable them, use:
• Lines: on
You can also specify "impulses", which are lines drawn from a plotted point down to the x axis.
Plots with impulses are often called "stem plots." These are off by default, but can be turned on with the command:
• Impulses: on
or back off with the command:
• Impulses: off
If that command appears before any DataSet directive, then the command applies to all data sets. Otherwise, it applies only to the current data set. To create a bar graph, turn off lines and use any of the following commands:
• Bars: on
• Bars: width
• Bars: width, offset
The width is a real number specifying the width of the bars in the units of the x axis. The offset is a real number specifying how much the bar of the i-th data set is offset from the previous one. This allows bars to "peek out" from behind the ones in front. Note that the front-most data set will be the first one. To turn off bars, use:
• Bars: off
To specify data to be plotted, start a data set with the following command:
• DataSet: string
Here, string is a label that will appear in the legend. It is not necessary to enclose the string in quotation marks. To start a new dataset without giving it a name, use:
• DataSet:
In this case, no item will appear in the legend. If the following directive occurs:
• ReuseDataSets: on
then datasets with the same name will be merged. This makes it easier to combine multiple data files that contain the same datasets into one file. By default, this capability is turned off, so datasets with the same name are not merged. The data itself is given by a sequence of commands with one of the following forms:
• x, y
• draw: x, y
• move: x, y
• x, y, yLowErrorBar, yHighErrorBar
• draw: x, y, yLowErrorBar, yHighErrorBar
• move: x, y, yLowErrorBar, yHighErrorBar
The draw command is optional, so the first two forms are equivalent. The move command causes a break in connected points, if lines are being drawn between points. The numbers x and y are arbitrary numbers as supported by the Double parser in Java (e.g. "1.2", "6.39e-15", etc.). If there are four numbers, then the last two numbers are assumed to be the lower and upper values for error bars. The numbers can be separated by commas, spaces or tabs.
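Putting these commands together, a small file in the old textual format might look like this (the titles, labels, and data values are illustrative):

```
# Sample plot in the old textual file format.
TitleText: Sample Plot
XLabel: time
YLabel: value
XRange: 0, 10
Marks: points
DataSet: first
0, 1
1, 3
move: 5, 0
6, 2
DataSet: second
0, 2
2, 4
```

The move: command on the fourth data line breaks the connecting line within the first dataset, and the second DataSet command starts a new legend entry.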
12.6 Compatibility
Figure 12.10 shows a small set of classes in the compat package that support the older ASCII and binary file formats used by the popular pxgraph program (an extension of xgraph that supports binary formats). The PxgraphApplication class can be invoked by the pxgraph executable in $PTII/bin. See the PxgraphParser class documentation for information about the file format.
12.7 Limitations
The plot package is a starting point, with a number of significant limitations:
• A binary file format that includes plot format information is needed. This should be an extension of PlotML, where an external entity is referenced.
• If you zoom in far enough, the plot becomes unreliable. In particular, if the total extent of the plot is more than 2^32 times the extent of the visible area, quantization errors can result in incorrectly displayed points or lines. Note that 2^32 is over 4 billion.
• The log axis facility has a number of limitations. Note that if a logarithmic scale is used, then the values must be positive. Non-positive values will be silently dropped. Further log axis limitations are listed in the documentation of the _gridInit() method in the PlotBox class.
• Graphs cannot currently be copied via the clipboard.
• There is no mechanism for customizing the colors used in a plot.
FIGURE 12.10. The compat package provides compatibility with the older pxgraph program.
13 Vergil
Authors: Steve Neuendorffer, Edward Lee
13.1 Introduction
When the first computers were built, it was possible to program them, but only through an arduous manual process. One of the first pieces of software written was a bootloader that simplified the process of reprogramming those computers. For example, the bootloader might load a program into memory from a floppy drive. The bootloader was the first, simplest form of operating system. It provided infrastructure for abstracting the process of initializing the code of computers. The simplest operating system merely provides a mechanism for invoking other programs. Later operating systems layered services on top of the bootloader that provided more facilities to ease programming and abstract hardware. Services like file systems, device drivers, and process scheduling provide mechanisms through which user applications use hardware resources. These services provide a simple abstraction layer through which many pieces of computer hardware can be accessed. These operating systems traditionally provided some sort of command shell, such as DOS or bash. In some cases, the invocation mechanism takes the form of a graphical user interface, where icons represent files and applications. Some operating systems also provide more complex application support, such as user preferences, application component management, and file-to-application binding. These services attempt to make it easier to develop applications; however, they are not strictly necessary for developing applications. For example, it is entirely possible to write a Windows application without using the registry or COM objects. However, because these services are integrated into the operating system at a very low level, using them can be rather tricky. Overwriting the wrong registry entry may prevent the operating system from working properly. Updating a COM object can prevent other applications from working properly. Netscape and Internet Explorer constantly fight over the right to open HTML files.
The difficulty arises because these services are built into the operating system and also impose requirements on how applications are managed. These types of services are important when building usable applications, but they are not appropriate for inclusion in a low-level operating system. Vergil is a set of infrastructure tools that provides these application support services as another operating system layer. This layer is built on top of the hardware abstraction layer while making minimal use of the operating system's application support infrastructure. Java is the perfect platform on which to build these services, since it provides good hardware abstraction on a wide variety of platforms, but few services for building applications. We have used the infrastructure to build a design application for Ptolemy II, but the infrastructure itself is general. Below we describe the infrastructural goals, the architecture, and how we have applied the infrastructure to the Ptolemy design application. For information about using the Vergil application to build a Ptolemy II model, see chapter FIXME.
13.2 Infrastructure
The goals of building design application infrastructure are somewhat different from the goals of building a design application. Where an application is often described by the features that it implements or the manipulation that it allows, infrastructure must provide solutions to common problems within a certain area. Below we describe the various pieces of Vergil, and how each one makes it easier to develop consistent, usable design applications.
13.2.1 Design Artifacts
The goal of a design application is the creation of a particular type of design artifact. A design artifact is any electronic entity that is created to serve a specific purpose such as a text file, a circuit design, or a piece of computer software. Design artifacts almost always have a variety of aspects, and it is usually difficult to display all of these aspects at once. Good examples of this are Microsoft PowerPoint presentations. A presentation contains many slides, and each slide can be individually displayed and manipulated. Each slide can contain many different kinds of objects (which are often themselves distinct embedded design artifacts). The presentation itself can also contain timing, narration and navigation information. The PowerPoint application can change the information displayed to emphasize a particular aspect of the presentation, such as a particular slide or a slide overview or a text-only view.
13.2.2 Storage policies
The most basic operation that almost any application must perform is the storage and retrieval of designs. Most applications store design artifacts as files visible through the operating system; however, we would like to be somewhat more general and allow design artifacts to be stored in databases or accessed through the World Wide Web. We believe that URLs are general enough to describe any such location. The infrastructure that we would like to build for handling files revolves around a storage policy. The storage policy gives a basic set of consistent rules for how design objects are persistently stored. In plain English, these rules can be simple or fairly complex. One example of a simple storage policy rule might be that to open a design artifact, the location is specified using a file browser dialog. A more complex rule could state that a design artifact cannot be closed unexpectedly without giving the user an opportunity to save. Implementing a storage policy in basic infrastructure is good for several reasons. First of all, it prevents application writers from being concerned with relatively boring
parts of an application. Secondly, it is very important for application usability that the storage policy be consistent.
13.2.3 Views
A particular design artifact may have different ways that it can be viewed and manipulated. For example, an HTML document may be viewed as rendered HTML, or as plain text with HTML markup. The infrastructure that we have built assumes that each different view of a design artifact is associated with a top-level frame. The creation of a view is in some respects independent from loading a file. However, when a design artifact is first opened, a default view must be created for it. Furthermore, when the last view of the artifact is destroyed, the artifact should be closed. In this way, the view (or views) of a design artifact are exactly analogous to the file in which the design artifact is stored. When all of the frames are gone, the file is conceptually 'closed' and not accessible. This correspondence has some important ramifications in the design of our infrastructure. Since, from the point of view of the user, the frames are the file, they must all display consistent data. Furthermore, opening a design artifact a second time should only create a new frame if the artifact is not already open. If the design artifact is already open, then its views should simply be made visible.
13.3 Architecture
The key to the Vergil infrastructure is a set of classes that represent the different parts of common design applications. The common application operations are then expressed in terms of these classes. This makes it easy to create new application tools that are integrated with others built with the infrastructure by simply extending a few classes.
13.3.1 Effigies and Tableaux
Each design artifact is represented by an instance of the Effigy class. Each effigy is associated with a URL, corresponding to the location of the persistent storage of the effigy. Each effigy also has an identifier, which is the unique string that identifies the effigy. This identifier should be a string representation of the effigy's URL. Each view of the design artifact is represented by an instance of the Tableau class contained by the design artifact's effigy. Each tableau is associated with a single frame that presents information from the effigy. In order to reuse code for tableaux, it is sometimes useful to have an effigy contain other effigies. The static structure diagram for this is shown in figure 13.1.
13.3.2 Effigy Factories
Notice that the Effigy base class does not specify how it represents a particular design artifact. This is intentional, since we are building infrastructure and do not wish to restrict ourselves to any particular representation. However, at some point the infrastructure will need to create new effigies that are useful for a particular application. In this situation, the Factory design pattern is appropriate, which is shown in figure 13.2. An example of how the Effigy and EffigyFactory base classes are used is shown in figure 13.3. The example shows an effigy and factory appropriate for handling text documents. The EffigyFactory class contains two factory methods for creating new effigies. The first factory method takes a source URL and is used when opening a file. The second method does not take a source
URL and is used when creating a new blank effigy. These two methods roughly correspond to the familiar File->Open and File->New operations. The EffigyFactory base class is also useful for implementing a deference mechanism. The base class can contain other effigy factories and will defer to the first contained factory that successfully creates an effigy for a given file. This deference mechanism allows the factories to be ordered so that a more specific effigy (such as one that represents HTML structure) can be checked before a more general one (such as an effigy that simply contains a text string).
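The deference mechanism can be sketched in a few lines of Java. This is an illustrative simplification, not the actual Ptolemy II EffigyFactory code; the class and method names are stand-ins:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the factory deference mechanism: defer to the
// first contained factory that succeeds. Illustrative names only.
public class DeferringFactory {
    interface Factory {
        // Returns an effigy for the given URL, or null if this
        // factory does not recognize the content.
        Object createEffigy(String url);
    }

    private final List<Factory> factories = new ArrayList<>();

    void addFactory(Factory f) {
        factories.add(f);
    }

    // Try each contained factory in order; ordering matters, so more
    // specific factories should be added before more general ones.
    Object createEffigy(String url) {
        for (Factory f : factories) {
            Object effigy = f.createEffigy(url);
            if (effigy != null) {
                return effigy;
            }
        }
        return null;
    }
}
```

For example, a factory that only recognizes HTML could be added first, with a catch-all text factory added last so it only handles files no earlier factory claimed.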
Figure 13.1 Static structure diagram for effigies and tableaux.
Figure 13.2 Static structure diagram for the Factory pattern.
13.3.3 Tableau Factories
Once an effigy has been created, a frame on the screen doesn't yet exist to represent it. The frame is created by a tableau, and the tableau is created by another factory. The TableauFactory class implements the same deference mechanism as the EffigyFactory class. The static structure for the tableau factory class, along with the related classes from the text example above, is shown in figure 13.4.
Figure 13.3 Static structure that is useful for handling text documents.
Figure 13.4 Static structure of the TableauFactory class, and an example of how tableau factories are used with text documents.
13.3.4 Model Directory
All effigies in the application are contained (directly, or indirectly in another effigy) in an instance of the ModelDirectory class. The model directory allows entities to be found by identifier. Whenever a design artifact is loaded from a URL, the model directory is searched first to prevent the artifact from being loaded again.
13.3.5 Configurations
An instance of the Configuration class represents the configuration of an application. That configuration includes not only the directory of currently open effigies but also the effigy factories and tableau factories. The static structure for the Configuration and ModelDirectory classes is shown in figure 13.5.
13.3.6 TableauFrame
The TableauFrame class uses the above classes to implement a number of common operations. The intention is that type-specific subclasses of the Tableau class create instances of TableauFrame specialized for displaying particular information. Generally, the Top base class implements the menus for these operations and provides some abstract methods that are used for reading and writing files. The TableauFrame class implements these abstract methods. For the rest of this document, the line between the Top and TableauFrame classes is not terribly important, and will be purposefully blurred for the sake of clarity. The static structure for the TableauFrame class (and its superclasses) is shown in figure 13.6.
Figure 13.5 Static structure diagram for the Configuration and ModelDirectory classes.
13.4 Common operations
The goal of the infrastructure classes above is to implement common operations, such as storing and creating new design artifacts, in a consistent fashion. These operations are (for the most part) actually implemented in the TableauFrame base class. Below are descriptions of each of these operations, and how they are implemented using the architecture from the previous section.
13.4.1 Opening an Existing Design Artifact
The File->Open menu item first opens a file browser to allow the user to select a URL, and then uses the Configuration to open the URL. The configuration first checks the model directory to see if there is already an effigy associated with that URL. If there is no such effigy, then the configuration uses its effigy factory to create a new effigy, and then uses its tableau factory to create a tableau for the effigy. Lastly, the tableau is made visible, which results in it creating a frame on the user's screen. The sequence diagram is shown in figure 13.7. In addition, this first tableau is set to be a master, and it is set to be editable if the URL represents a writable location. Alternatively, there may already be an effigy present in the directory that is associated with the URL.
Figure 13.6 Static structure diagram for the Top, StatusBar, and TableauFrame classes.
13.4.2 Creating a New Design Artifact
The File->New menu item is somewhat similar to opening an existing design artifact. However, only effigy factories that declare that they can create a blank effigy that is not associated with a previous URL may be used. Furthermore, since an application can conceivably create different types of blank effigies, it is not possible to use the effigy factory deference mechanism to determine which effigy factory is used. The user must have another way of specifying which effigy factory will create the blank effigy. When a TableauFrame is created, the File->New menu is populated with a menu item for each possible effigy factory. The name of the menu item is the same as the name of the effigy factory. The sequence diagram for creating a new design artifact is shown in figure 13.8.
Figure 13.7 Sequence diagram for opening an existing design artifact.
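The open flow in the sequence diagram can be sketched compactly. This is a simplified illustration with stand-in names, not the actual Configuration and ModelDirectory code:

```java
import java.util.HashMap;
import java.util.Map;

// A simplified sketch of the open logic: consult the directory first,
// and create a new effigy only if the artifact is not already open.
public class OpenLogic {
    static class Effigy {
        final String identifier;
        int visibleTableaux = 0;
        Effigy(String identifier) { this.identifier = identifier; }
        void showTableaux() { visibleTableaux++; }
    }

    private final Map<String, Effigy> directory = new HashMap<>();

    // Opening a URL a second time raises the existing views rather
    // than loading the artifact again.
    Effigy openModel(String url) {
        Effigy effigy = directory.get(url);
        if (effigy == null) {
            effigy = new Effigy(url);   // an effigy factory would run here
            directory.put(url, effigy);
        }
        effigy.showTableaux();          // a tableau factory creates the frame
        return effigy;
    }
}
```

Opening the same URL twice returns the same effigy object, which is the behavior the model directory guarantees.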
13.4.3 Saving Changes to a Design Artifact
The TableauFrame class implements menu items for both File->Save and File->SaveAs. The Save operation is rather simple. If the effigy is already associated with a URL that is writable, then the effigy is simply written out to that location. Otherwise, the SaveAs operation is invoked instead. This may occur if the design artifact was created from scratch as a blank effigy, or if the artifact was loaded via HTTP. The SaveAs operation is a bit more complicated. The user specifies a destination URL using a file chooser, just as when opening a new design. However, before writing the file it is necessary to check that the URL does not already exist and that the URL is not already open. In these cases, the user is prompted to be sure that important data is not inadvertently lost by being overwritten.
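The save decision logic just described can be sketched as follows. The Effigy stand-in and method names here are illustrative only, not the Ptolemy II API:

```java
import java.util.Set;

// Sketch of the Save / SaveAs decision logic described above.
public class SaveLogic {
    // Illustrative stand-in for an effigy: an artifact with an
    // optional, possibly non-writable URL.
    static class Effigy {
        String url;        // null if never saved (blank effigy)
        boolean writable;  // false if, e.g., loaded via HTTP
        Effigy(String url, boolean writable) {
            this.url = url;
            this.writable = writable;
        }
    }

    // Save writes in place only when a writable URL exists;
    // otherwise the SaveAs dialog must run.
    static String chooseOperation(Effigy e) {
        if (e.url != null && e.writable) {
            return "save";
        }
        return "saveAs";
    }

    // Before SaveAs writes, prompt if the destination already exists
    // on disk or is already open in the model directory, so important
    // data is not silently overwritten.
    static boolean needsConfirmation(String destination,
            boolean existsOnDisk, Set<String> openIdentifiers) {
        return existsOnDisk || openIdentifiers.contains(destination);
    }
}
```

A blank effigy and an artifact loaded over HTTP both fall through to SaveAs, matching the two cases named in the text.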
13.4.4 Closing Designs and Exiting the Application
The only complexities in implementing these operations are again involved with ensuring that important data is not lost. In this case, we simply ensure that all designs are closed before exiting the application, and that a design is not closed without attempting to save it first. Both of these cases are handled by setting a flag in each effigy whenever it is modified. If the flag indicates that the effigy has been modified, then the Save operation is invoked before discarding the effigy. Activating the close operation of a frame only results in the tableau associated with that frame being removed. The tableau's effigy and the other tableaux associated with that effigy are not generally affected. There is a subtlety that arises because the application itself exists separately from any visual representation of it. In other words, a tableau (and therefore a frame) exists for each effigy, but there is no tableau that simply represents the application as a whole. The subtlety is that closing all the effigies should result in the application exiting. A similar issue occurs for a similar reason with effigies,
Figure 13.8 Sequence diagram for creating a new design artifact.
and closing all of the tableaux associated with any effigy should result in that effigy being closed.
13.5 Ptolemy Model Visualization We have used the Vergil infrastructure to construct several visualizations that are capable of viewing and manipulating a Ptolemy model. For the most part, these editors are intended to work with any Ptolemy kernel model and are not limited to models based on the Actor package or a specific domain. This is an extremely powerful use of the Ptolemy abstract syntax, since it allows manipulation not only of executable models (see Chapter FIXME), but also actor libraries (see Figure FIXME) and the Vergil configuration itself (see Figure 13.5), since they are also based on the Ptolemy kernel (see Chapter FIXME). This section serves a dual purpose: it describes not only a usable set of application tools, but also a well-developed example of using the Vergil infrastructure to present multiple views of a design artifact. In order to represent a Ptolemy model in Vergil, there must be an effigy that has a reference to it. The PtolemyEffigy class maintains this reference, and is also responsible for reporting any change requests (see Chapter FIXME) in the model that fail. It also contains an inner class that is an effigy factory and writes out a model using MoML (see Chapter FIXME). The static structure diagram for these classes is shown in Figure 13.9. There is also an accompanying frame class, PtolemyFrame, that is
[UML class diagram: PtolemyEffigy extends Effigy and implements the ChangeListener interface. It holds a _model : NamedObj reference, with constructors PtolemyEffigy(workspace : Workspace) and PtolemyEffigy(container : CompositeEntity, name : String), and methods getModel() : NamedObj and setModel(model : NamedObj). The inner class PtolemyEffigy$Factory extends EffigyFactory, holds a _parser : MoMLParser, and has the constructor Factory(container : CompositeEntity, name : String). A ComponentEntity named "blank" is cloned to create a blank effigy.]
Figure 13.9 Static Structure for Ptolemy effigies.
1. Although it is probably good design practice to create an initial effigy and tableau that represent the application and allow the user to open an initial file.
intended to be used as shown in Figure 13.10. The tableaux that are capable of creating a frame for a Ptolemy effigy are described in the following sections.
13.5.1 Graph Tableau The Ptolemy graph editor graphically represents the contained entities, ports, and relations of any Ptolemy composite entity. It allows syntax-directed editing of the model and browsing of important design information, such as actor source code and HTML documentation. A screen shot is shown in Figure 13.11. The left hand side provides a palette of available entities and a high-level navigation window. Entities can be dragged and dropped from the palette. External ports are created using a toolbar button, and relations can be created from a toolbar button, or by control-clicking on the schematic. Links to relations can be made by control-clicking on a port or a relation. The visualization also allows connections directly from one port to another. These links correspond to a relation that is linked to both ports, but the relation is not explicitly represented itself. Note that although the editor allows any Ptolemy model to be edited, it does display some information that is specific to the actor package. For example, ports are rendered differently depending on whether they are input or output ports, and the multiports of the Multiply actor are rendered hollow. The director (in this case, an SDF director) is also displayed as a green box. The classes used to implement this tableau are shown in Figure 13.12. An instance of KernelGraphFrame is created by the tableau. The KernelGraphFrame class overrides the _createGraphPane factory method to create the graph editor itself, while most of the user interface components (like menus and the palette window) are created by the GraphFrame base class. This allows the code in GraphFrame to be reused with a different visual representation, such as the FSM editor described in Section 13.5.2.
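The _createGraphPane arrangement is an instance of the factory-method pattern. The following self-contained sketch illustrates the structure only; the class names prefixed with "Sketch" are mine, and the real GraphFrame/KernelGraphFrame classes build Swing components rather than strings:

```java
// Base frame builds the shared UI, then delegates pane creation to a subclass hook,
// as GraphFrame does with _createGraphPane().
abstract class SketchGraphFrame {
    final String pane;
    SketchGraphFrame() {
        // The base class would build menus, toolbars, and the palette here,
        // then ask the subclass for the editor pane itself.
        pane = _createGraphPane();
    }
    protected abstract String _createGraphPane();
}

class SketchKernelGraphFrame extends SketchGraphFrame {
    @Override protected String _createGraphPane() { return "kernel-graph-pane"; }
}

class SketchFSMGraphFrame extends SketchGraphFrame {
    @Override protected String _createGraphPane() { return "fsm-graph-pane"; }
}

public class FactoryMethodSketch {
    public static void main(String[] args) {
        System.out.println(new SketchKernelGraphFrame().pane);
        System.out.println(new SketchFSMGraphFrame().pane);
    }
}
```

The base class code is reused unchanged; only the hook differs between the kernel graph editor and the FSM editor.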
[UML class diagram: PtolemyFrame extends TableauFrame; TableauFrame is associated with Tableau, and PtolemyFrame is associated with PtolemyEffigy, which extends Effigy. PtolemyFrame holds a _model : CompositeActor, with constructors +PtolemyTop(model : CompositeActor) and +PtolemyTop(model : CompositeActor, tableau : Tableau), and methods +getModel() : CompositeActor and +setModel(model : CompositeActor). ConcreteTableau and ConcreteTableauFrame stand for concrete subclasses.]
Figure 13.10 Static structure of the Ptolemy graph editor.
13.5.2 FSM Tableau The Ptolemy FSM editor graphically represents the states and transitions of a Ptolemy FSM domain model. It allows syntax-directed editing of the model, along with links to important design information, such as actor source code and HTML documentation. A screen shot is shown in Figure 13.13. States can be added by control-clicking on the schematic, or by dragging and dropping from the palette on the left. Transitions are created by control-dragging from an existing state. The classes used to implement this tableau are shown in Figure 13.14. An instance of FSMGraph-
[Screenshot: the Vergil graph editor showing the SDF Spectrum model, with Multiply, Waveform, FFT, AbsoluteValue, and DB actors and an SDF director.]
Figure 13.11 Vergil Screenshot.
[UML class diagram: GraphFrame extends TableauFrame and declares +cut(), +copy(), +getJGraph(), +layoutGraph(), +paste(), +print(), #_createGraphPane() : GraphPane, and #_writeFile(file : File). PtolemyFrame and Tableau also appear. KernelGraphTableau (creator, 1..1) creates a KernelGraphFrame (createe, 1..1), which overrides _createGraphPane() : GraphPane.]
Figure 13.12 Static structure of the Ptolemy graph editor.
Frame is created by the tableau. The FSMGraphFrame class overrides the _createGraphPane factory method to create the graph editor itself, while most of the user interface components (like menus and the palette window) are created by the GraphFrame base class. Note the similarity to the KernelGraphFrame class described in Section 13.5.1.
[Screenshot: the Vergil FSM editor showing states connected by guarded transitions.]
Figure 13.13 Vergil Screenshot.
[UML class diagram: GraphFrame extends TableauFrame and declares +cut(), +copy(), +getJGraph(), +layoutGraph(), +paste(), +print(), #_createGraphPane() : GraphPane, and #_writeFile(file : File). PtolemyFrame and Tableau also appear. FSMGraphTableau (creator, 1..1) creates an FSMGraphFrame (createe, 1..1), which overrides _createGraphPane() : GraphPane.]
Figure 13.14 Static structure of the Ptolemy FSM editor.
13.5.3 Tree Tableau Disregarding the relations between ports, a Ptolemy model is exactly a hierarchical tree of entities, ports, and attributes. The Tree Editor graphically renders a Ptolemy model in just this way. It is most useful when the attributes of each object, or the hierarchy of objects, needs to be emphasized. The current implementation of the Tree Tableau only allows browsing of the model, and is fairly incomplete. It is built using the Swing JTree component, and the same base classes are used to display the palette in the Graph Editor described in Section 13.5.1. The only difference is that the Tree Tableau uses a FullTreeModel, which includes both entities and attributes, while the palette uses an EntityTreeModel, which only includes entities. The static structure of the ptolemy.vergil.tree package is shown in Figure 13.15.
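The FullTreeModel/EntityTreeModel distinction amounts to two views over the same hierarchy, one filtered and one not. The following self-contained sketch illustrates that idea with plain Swing tree nodes; the '@'-prefix convention for attributes and the method names are my invention, not the ptolemy.vergil.tree API:

```java
import javax.swing.tree.DefaultMutableTreeNode;
import java.util.ArrayList;
import java.util.List;

public class TreeModelSketch {
    static DefaultMutableTreeNode node(String name) {
        return new DefaultMutableTreeNode(name);
    }

    /** List a node's children; optionally filter out "attributes".
     *  Sketch convention: attribute names start with '@'. */
    static List<String> children(DefaultMutableTreeNode parent, boolean includeAttributes) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < parent.getChildCount(); i++) {
            String name = parent.getChildAt(i).toString();
            if (includeAttributes || !name.startsWith("@")) {
                result.add(name);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        DefaultMutableTreeNode top = node("model");
        top.add(node("actorA"));
        top.add(node("@director"));
        System.out.println(children(top, false)); // entities only, like EntityTreeModel
        System.out.println(children(top, true));  // entities and attributes, like FullTreeModel
    }
}
```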
[UML class diagram: TreeTableau extends Tableau; it (creator, 1..1) creates a TreeTableau$TreeFrame (createe, 1..1), which extends PtolemyFrame and contains a PTree, a subclass of the Swing JTree. PtolemyTreeCellRenderer implements the TreeCellRenderer interface. EntityTreeModel implements the TreeModel interface, and FullTreeModel extends EntityTreeModel.]
Figure 13.15 Static structure of the ptolemy.vergil.tree package.
CT Domain
Author: Jie Liu
14.1 Introduction The continuous-time (CT) domain in Ptolemy II aims to help the design and simulation of systems that can be modeled using ordinary differential equations (ODEs). ODEs are often used to model analog circuits, plant dynamics in control systems, lumped-parameter mechanical systems, lumped-parameter heat flows, and many other physical systems. Let's start with an example. Consider a second-order differential system,

$$m\ddot{z}(t) + b\dot{z}(t) + kz(t) = u(t), \quad y(t) = c \cdot z(t), \quad z(0) = 10, \; \dot{z}(0) = 0. \tag{1}$$
The equations could be a model for an analog circuit as shown in figure 14.1(a), where z is the voltage
(a) A circuit implementation.
(b) A mechanical implementation.
FIGURE 14.1. Possible implementations of the system equations.
of node 3, and

$$m = R_1 R_2 C_1 C_2, \quad k = R_1 C_1 + R_2 C_2, \quad b = 1, \quad c = \frac{R_4}{R_3 + R_4}. \tag{2}$$
Or it could be a lumped-parameter model of a spring-mass mechanical system as shown in figure 14.1(b), where z is the position of the mass, m is the mass, k is the spring constant, b is the damping parameter, and c = 1. In general, an ODE-based continuous-time system has the following form:

$$\dot{x} = f(x, u, t) \tag{3}$$
$$y = g(x, u, t) \tag{4}$$
$$x(t_0) = x_0 \tag{5}$$
where t ∈ R, t ≥ t_0, a real number, is continuous time. At any time t, x ∈ R^n, an n-tuple of real numbers, is the state of the system; u ∈ R^m is the m-dimensional input of the system; y ∈ R^l is the l-dimensional output of the system; and ẋ ∈ R^n is the derivative of x with respect to time t, i.e.

$$\dot{x} = \frac{dx}{dt}. \tag{6}$$
Equations (3), (4), and (5) are called the system dynamics, the output map, and the initial condition of the system, respectively. For example, if we define a vector

$$x(t) = \begin{bmatrix} z(t) \\ \dot{z}(t) \end{bmatrix}, \tag{7}$$

system (1) can be written in the form of (3)-(5), like

$$\dot{x}(t) = \begin{bmatrix} 0 & 1 \\ -k/m & -b/m \end{bmatrix} x(t) + \begin{bmatrix} 0 \\ 1/m \end{bmatrix} u(t), \quad y(t) = \begin{bmatrix} c & 0 \end{bmatrix} x(t), \quad x(0) = \begin{bmatrix} 10 \\ 0 \end{bmatrix}. \tag{8}$$
The solution, x(t), of the set of ODEs (3)-(5) is a continuous function of time, also called a waveform, which satisfies equation (3) and the initial condition (5). The output of the system is then defined as a function of x(t) and u(t), as specified in (4). The precise solution is usually impossible to find using digital computers. Numerical solutions are approximations of the precise solution. A numerical solution of ODEs is usually computed by integrating the right-hand side of (3) on a discrete set
of time points to find x(t). Using digital computers to simulate continuous-time systems has been studied for more than three decades. One of the most well-known tools is Spice [63]. The CT domain differs from Spice-like continuous-time simulators in two ways — the system specification is somewhat different, and it is designed to interact with other models of computation.
14.1.1 System Specification There are usually two ways to specify a continuous-time system: the conservation-law model and the signal-flow model [39]. Conservation-law models, like the nodal analysis in circuit simulation [36] and bond graphs [75] in mechanical models, define systems by their physical components, which specify relations of cross and through variables; the conservation laws are then used to compile the component relations into global system equations. For example, in circuit simulation, the cross variables are voltages, the through variables are currents, and the conservation laws are Kirchhoff's laws. This model directly reflects the physical components of a system, and thus is easy to construct from a potential implementation. The actual mathematical representation of the system is hidden. In signal-flow models, entities in a system are maps that define the mathematical relation between their input and output signals. Entities communicate by passing signals. This kind of model directly reflects the mathematical relations among signals, and is more convenient for specifying systems that do not yet have an explicit physical implementation. In the CT domain of Ptolemy II, the signal-flow model is chosen as the interaction semantics. The conservation-law semantics may be used within an entity to define its I/O relation. There are four major reasons for this decision: 1. The signal-flow model is more abstract. Ptolemy II focuses on system-level design and behavior simulation. It is usually the case that, at this stage of a design, users are working with abstract mathematical models of a system, and the implementation details are unknown or irrelevant. 2. The signal-flow model is more flexible and extensible, in the sense that it is easy to embed components that are designed using other models.
For example, a discrete controller can be modeled as a component that internally follows a discrete-event model of computation but exposes a continuous-time interface. 3. The signal-flow model is consistent with other models of computation in Ptolemy II. Most models of computation in Ptolemy use message passing as the interaction semantics. Choosing the signal-flow model for CT makes it consistent with other domains, so the interaction of heterogeneous systems is easy to study and implement. This also allows domain polymorphic actors to be used in the CT domain. 4. The signal-flow model is compatible with the conservation-law model. For physical systems that are based on conservation laws, it is usually possible to wrap them into an entity in the signal-flow model. The inputs of the entity are the excitations, like the voltages on voltage sources, and the outputs are the variables that the rest of the system may be interested in. The signal-flow block diagram of the system (3)-(5) is shown in figure 14.2. The system dynamics (3) is built using integrators with feedback. In this figure, u, x, ẋ, and y are continuous signals (waveforms) flowing from one block to the next. Notice that this diagram is only conceptual; most models may involve multiple integrators1. Time is shared by all components, so it is not considered as an input. At any fixed time t, if the "snapshot" values x(t) and u(t) are given, then ẋ(t) and y(t)
where {t_0, t_1, ...} is the set of the discrete time points. To explicitly illustrate the discretization of time and the difference between the precise solution and the numerical solution, we use the following notation in the rest of the chapter:
• t_n: the n-th time point, to explicitly show the discretization of time. However, we write t if the index n is not important.
• x[t_i, t_j]: the precise (continuous) state trajectory from time t_i to t_j;
• x(t_n): the precise solution of (3) at time t_n;
• x_{t_n}: the numerical solution of (3) at time t_n;
• h_n = t_n − t_{n−1}: step size of the discretization of time. We also write h if the index n in the sequence is not important. For accuracy reasons, h may not be uniform.
• ||x(t_n) − x_{t_n}||: the 2-normed difference between the precise solution and the numerical solution at step n, called the (global) error at step n; the difference, when we assume x_{t_1}, ..., x_{t_{n−1}} are precise, is called the local error at step n.
Local errors are usually easy to estimate, and the estimation can be used for controlling the accuracy of numerical solutions. A general way of numerically simulating a continuous-time system is to compute the state and the output of the system in an increasing order of t_n. Such algorithms are called time-marching algorithms, and in this chapter we only consider these algorithms. There is a variety of time-marching algorithms that differ in how x_{t_n} is computed given x_{t_1}, ..., x_{t_{n−1}}. The choice of algorithm is application dependent, and usually reflects speed, accuracy, and numerical stability trade-offs.
14.2.2 Fixed-Point Behavior Numerical ODE solving algorithms approximate the derivative operator in (3) using the history and the current knowledge of the state trajectory. That is, at time t_n, the derivative of x is approximated by a function of x_{t_0}, ..., x_{t_{n−1}}, x_{t_n}, i.e.

$$\dot{x}_{t_n} = p(x_{t_0}, \ldots, x_{t_{n-1}}, x_{t_n}). \tag{12}$$

Plugging (3) in this, we get

$$f(x_{t_n}, u(t_n), t_n) = p(x_{t_0}, \ldots, x_{t_{n-1}}, x_{t_n}), \tag{13}$$

which can be reorganized into either an explicit form

$$x_{t_n} = F_E(x_{t_0}, \ldots, x_{t_{n-1}}) \tag{14}$$

or an implicit form

$$x_{t_n} = F_I(x_{t_0}, \ldots, x_{t_{n-1}}, x_{t_n}), \tag{15}$$

where F_E and F_I are derived from the time t_n, the input u(t_n), the function f, and the history of x and ẋ. Solving (14) or (15) at a particular time t_n is called an iteration of the CT simulation at t_n. Equation (14) can be solved simply by a function evaluation and an assignment. But the solution of (15) is the fixed point of F_I, which may not exist, may not be unique, or may not be able to be found. The contraction mapping theorem [12] shows the existence and uniqueness of the fixed-point solution, and provides one way to find it. Given a map F_I that is a local contraction map (generally true for small enough step sizes), and an initial guess a_0 in the contraction radius, the unique fixed point can be found by iteratively computing:

$$a_1 = F_I(a_0), \quad a_2 = F_I(a_1), \quad a_3 = F_I(a_2), \; \ldots \tag{16}$$
Solving both (14) and (15) should be thought of as finding the fixed-point behavior of the system at a particular time. This means that both functions F_E and F_I should not change during one iteration of the simulation. This further implies that the topology of the system, all the parameters, and all the internal states that the firing functions depend on should be kept unchanged. We require domain polymorphic actors to update internal states only in the postfire() method exactly for this reason.
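Iteration (16) can be sketched concretely for a single backward-Euler step of the scalar test equation ẋ = λx (the test equation and tolerances are my choice, not from the text). Here F_I(a) = x_n + h·λ·a, which is a contraction whenever |hλ| < 1, and the exact fixed point is x_n/(1 − hλ):

```java
public class FixedPointStep {
    /** One backward-Euler step x_{n+1} = x_n + h*lambda*x_{n+1}, solved by
     *  iterating a_{k+1} = F_I(a_k) as in (16). */
    static double solveStep(double xn, double h, double lambda,
                            double valueResolution, int maxIterations) {
        double a = xn;  // initial guess, inside the contraction radius for small h
        for (int k = 0; k < maxIterations; k++) {
            double next = xn + h * lambda * a;            // F_I(a)
            if (Math.abs(next - a) < valueResolution) {
                return next;                               // converged
            }
            a = next;
        }
        // Mirrors the director behavior: a failed fixed point means
        // the step size should be reduced and the step restarted.
        throw new ArithmeticException("fixed point not found; reduce the step size");
    }

    public static void main(String[] args) {
        // Exact fixed point is xn / (1 - h*lambda) = 1 / 1.1
        System.out.println(solveStep(1.0, 0.1, -1.0, 1e-12, 100));
    }
}
```

The valueResolution and maxIterations arguments play the same role as the CTDirector parameters of the same names.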
14.2.3 ODE Solvers Implemented The following solvers have been implemented in the CT domain.

1. Forward Euler solver:

$$x_{t_{n+1}} = x_{t_n} + h_{n+1} \cdot \dot{x}_{t_n} = x_{t_n} + h_{n+1} \cdot f(x_{t_n}, u_{t_n}, t_n) \tag{17}$$

2. Backward Euler solver:

$$x_{t_{n+1}} = x_{t_n} + h_{n+1} \cdot \dot{x}_{t_{n+1}} = x_{t_n} + h_{n+1} \cdot f(x_{t_{n+1}}, u_{t_{n+1}}, t_{n+1}) \tag{18}$$

3. 2(3)-order explicit Runge-Kutta solver:

$$\begin{aligned}
K_0 &= h_{n+1} \cdot f(x_{t_n}, u_{t_n}, t_n) \\
K_1 &= h_{n+1} \cdot f(x_{t_n} + K_0/2, \; u_{t_n + h_{n+1}/2}, \; t_n + h_{n+1}/2) \\
K_2 &= h_{n+1} \cdot f(x_{t_n} + 3K_1/4, \; u_{t_n + 3h_{n+1}/4}, \; t_n + 3h_{n+1}/4) \\
x_{t_{n+1}} &= x_{t_n} + \tfrac{2}{9}K_0 + \tfrac{1}{3}K_1 + \tfrac{4}{9}K_2
\end{aligned} \tag{19}$$

with error control:

$$\begin{aligned}
K_3 &= h_{n+1} \cdot f(x_{t_{n+1}}, u_{t_{n+1}}, t_{n+1}) \\
LTE &= -\tfrac{5}{72}K_0 + \tfrac{1}{12}K_1 + \tfrac{1}{9}K_2 - \tfrac{1}{8}K_3
\end{aligned} \tag{20}$$

If |LTE| < ErrorTolerance, the step is accepted and x_{t_{n+1}} is taken as the numerical solution; otherwise, the step fails. If this step is successful, the next integration step size is predicted by:

$$h_{n+2} = h_{n+1} \cdot \max\left(0.5, \; 0.8 \cdot \sqrt[3]{ErrorTolerance/|LTE|}\right) \tag{21}$$

4. Trapezoidal Rule solver:

$$x_{t_{n+1}} = x_{t_n} + \frac{h_{n+1}}{2}\left(\dot{x}_{t_n} + f(x_{t_{n+1}}, u_{t_{n+1}}, t_{n+1})\right) \tag{22}$$
Among these solvers, 1) and 3) are explicit; 2) and 4) are implicit. Also, 1) and 2) do not perform step size control, so are called fixed-step-size solvers; 3) and 4) change step sizes according to error estimation, so are called variable-step-size solvers. Variable-step-size solvers adapt the step sizes according to changes of the system flow, thus are "smarter" than fixed-step-size solvers.
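As a concrete check of the simplest of these, the following self-contained sketch integrates the test equation ẋ = −x, x(0) = 1 (the test equation, step size, and horizon are my choices, not from the text) with fixed-step forward Euler (17), and compares the result against the exact solution e^(−t):

```java
public class EulerSketch {
    /** Integrate dx/dt = -x with fixed-step forward Euler (17):
     *  x_{n+1} = x_n + h * f(x_n). */
    static double forwardEuler(double x0, double h, int steps) {
        double x = x0;
        for (int n = 0; n < steps; n++) {
            x = x + h * (-x);   // f(x, u, t) = -x here
        }
        return x;
    }

    public static void main(String[] args) {
        double approx = forwardEuler(1.0, 1e-3, 1000);  // integrate to t = 1
        double exact = Math.exp(-1.0);
        // For this first-order method the global error shrinks roughly linearly in h.
        System.out.println(Math.abs(approx - exact));
    }
}
```

Halving h approximately halves the error, which is the first-order convergence that motivates the higher-order and variable-step solvers above.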
14.2.4 Discontinuity The existence and uniqueness of the solution of an ODE (Theorem 1 in Appendix G) allows the right-hand side of (3) to be discontinuous at a countable number of discrete points D, which are called breakpoints (also called discontinuous points in some literature). These breakpoints may be caused by the discontinuity of the input signal u, or by the intrinsic flow of f. In theory, the solutions at these points are not well defined, but the left and right limits are. So, instead of solving the ODE at those points, we actually try to find the left and right limits. One impact of breakpoints on the ODE solvers is that history solutions are useless when approximating the derivative of x after the breakpoints. The solver should resolve the new initial conditions and start the solving process as if it were at a starting point. So, the discretization of time should step exactly on breakpoints for the left limit, and start at the breakpoint again after finding the right limit. A breakpoint may be known beforehand, in which case it is called a predictable breakpoint. For example, a square wave source actor knows its next flip time. This information can be used to control the discretization of time. A breakpoint can also be unpredictable, which means it is unknown until the time it occurs. For example, an actor that varies its functionality when the input signal crosses a threshold can only report a "missed" breakpoint after an integration step is finished. How to handle breakpoints correctly is a big challenge for integrating continuous-time models with discrete models like DE and FSM.
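The requirement that the discretization step exactly on a predictable breakpoint can be sketched as a simple clamp on the proposed step size (the method name is hypothetical; the real mechanism uses the director's breakpoint table):

```java
public class BreakpointStep {
    /** If the next predictable breakpoint falls inside the proposed step,
     *  shrink the step so that time lands exactly on the breakpoint. */
    static double adjustStep(double now, double proposedStep, double nextBreakpoint) {
        if (now + proposedStep > nextBreakpoint) {
            return nextBreakpoint - now;   // land exactly on the breakpoint
        }
        return proposedStep;               // breakpoint not reached; step normally
    }

    public static void main(String[] args) {
        // A 0.1 step proposed at t = 0.95 with a breakpoint at t = 1.0
        // is clamped to roughly 0.05.
        System.out.println(adjustStep(0.95, 0.1, 1.0));
    }
}
```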
14.2.5 Breakpoint ODE Solvers Breakpoints in the CT domain are handled by adjusting integration steps. We use a table to handle predictable breakpoints, and use the step size control mechanism to handle unpredictable breakpoints. The breakpoint handling is transparent to users, and the implementation details (provided in section 14.7.4) are only needed when developing new directors, solvers, or event generators. Since the history information is useless at breakpoints, special ODE solvers are designed to restart the numerical integration process. In particular, we have implemented the following breakpoint ODE solvers.

1. DerivativeResolver: It calculates the derivative of the current state, i.e. dx/dt. This is done simply by evaluating the right-hand side of (3). At breakpoints, this solver is used for the first step to generate history information for other methods.

2. ImpulseBESolver:

$$x_{t_n^+} = x_{t_n^-} + h_{n+1} \cdot \dot{x}_{t_n^+} \tag{23}$$

The two time points t_n^- and t_n^+ have the same time value. This solver is used for breakpoints at which a Dirac impulse signal appears. Notice that none of these solvers advance time. They can only be used at breakpoints.
14.3 CT Actors A CT system can be built up using actors in the ptolemy.domains.ct.lib package and domain polymorphic actors that have continuous behaviors (i.e. all actors that do not implement the SequenceActor interface). The key actor in CT is the integrator. It serves the unique position of wiring up ODEs. Other actors in a CT system are usually stateless. A general understanding is that, in a pure continuous-time model, all the information — the state — of a CT system is stored in the integrators.
14.3.1 CT Actor Interfaces In order to schedule the execution of actors in a CT model and to support the interaction between CT and other domains (which are usually discrete), we provide the following interfaces.
• CTDynamicActor. Dynamic actors are actors that contain continuous dynamics in their I/O path. An integrator is a dynamic actor, and so are all actors that have integration relations from their inputs to their outputs.
• CTEventGenerator. Event generators are actors that convert continuous-time input signals to discrete output signals.
• CTStatefulActor. Stateful actors are actors that have internal states. The reason to classify this kind of actor is to support rollback, which may happen when a CT model is embedded in a discrete event model.
• CTStepSizeControlActor. Step size control actors influence the integration step size by telling the director whether the current step is accurate. Accuracy is meant in the sense of both numerical errors and the absence of unpredictable breakpoints. A step size control actor may also provide information for refining the step size after an inaccurate step and for suggesting the next step size after an accurate step.
• CTWaveformGenerator. Waveform generators are actors that convert discrete input signals to continuous-time output signals.
Strictly speaking, event generators and waveform generators do not belong to any domain, but the CT domain is designed to handle them intrinsically. When building systems, CT parts can always provide a discrete interface to other domains. Neither a loop of dynamic actors nor a loop of non-dynamic actors is allowed in a CT model. Such loops introduce problems with the order in which actors should be executed. A loop of dynamic actors can easily be broken by a Scale actor with scale 1. A loop of non-dynamic actors forms an algebraic equation. The CT domain does not support modeling algebraic equations yet.
14.3.2 Actor Library
1. Integrator: The integrator for continuous-time simulation. An integrator has one input port and one output port. Conceptually, the input is the derivative of the output, and an ordinary differential equation is modeled as an integrator with feedback. An integrator is a dynamic actor, and it emits a token (with value equal to its internal state) at the beginning of the simulation. An integrator is a step size control actor, which estimates local errors at each integration step and controls the accuracy of the solution. An integrator has memory, which is its state. To help resolve the new state from previous states, a set of variables are used:
• state and its derivative: These are the new state and its derivative at a time point, which have been confirmed by all the step size control actors.
• tentative state and tentative derivative: These are the state and derivative which have not been confirmed. They are a starting point for other actors to estimate the accuracy of this integration step.
• history: The previous states and derivatives. An integrator remembers the history states and their derivatives for the past several steps. The history is used by multistep methods.
An integrator has one parameter: initialState. At the initialization stage of the simulation, the state of the integrator is set to the initial state. Changes of initialState are ignored after the simulation starts, unless the initialize() method of the integrator is called again. The default value of this parameter is 0.0. An integrator may also have several auxiliary variables, which are used by ODE solvers to store intermediate states for individual integrators.
2. CTPeriodicalSampler: This event generator periodically samples the input signal and generates events with the value of the input signal at these time points. The sampling rate is given by the samplePeriod parameter, which has default value 0.1. The sampling time points, which are known beforehand, are examples of predictable breakpoints.
3. ZeroCrossingDetector: This is an event generator that monitors the signal coming in from an input port, trigger. If the trigger is zero, then it outputs the token from the input port; otherwise, there is no output. This actor controls the integration step size to accurately resolve the time at which the zero crossing happens. It has a parameter, errorTolerance, which controls how accurately the zero crossing is determined.
4. ZeroOrderHold: This is a waveform generator that converts discrete events into continuous signals, acting as a zero-order hold. It consumes a token when consumeCurrentEvent() is called. This value is held and emitted every time the actor is fired, until the next time consumeCurrentEvent() is called. This actor has a single input port, a single output port, and no parameters.
5. ThresholdMonitor: This actor controls the integration steps so that the given threshold (on the input) is not crossed in one step. It has one input port and one output port, and two parameters, thresholdWidth and thresholdCenter, which have default values 1e-2 and 0, respectively. If the input is within the range defined by the threshold center and threshold width, then a true token is emitted from the output.
14.3.3 Domain Polymorphic Actors Not all domain polymorphic actors can be used in the CT domain. Whether an actor can be used depends on how the internal states of the actor evolve when executing.
• Stateless actors: All stateless actors can be used in CT. In fact, most CT systems are built from integrators and stateless actors.
• Timed actors: Timed actors change their states according to the notion of time in the model. All actors that implement the TimedActor interface can be used in CT, as long as they do not also implement SequenceActor. Timed actors that can be used in CT include plotters that are designed to plot timed signals.
• Sequence actors: Sequence actors change their states according to the number of input tokens received and the number of times the actor is postfired. Since CT is a time-driven model, rather than a data-driven model, the number of received tokens and the number of postfires do not have a significant semantic meaning. So, none of the sequence actors can be used in the CT domain. For example, the Ramp actor in Ptolemy II changes its state — the next token to emit —
corresponding to the number of times that the actor is postfired. In CT, the number of times that the actor is postfired depends on the discretization of time, which in turn depends on the choice of ODE solver and the setting of parameters. As a result, the slope of the ramp may not be constant, and this may lead to very counterintuitive models. The same functionality is instead obtained with a CurrentTime actor and a Scale actor. If sequence behaviors are indeed required, event generators and waveform generators may be helpful for converting between continuous and discrete signals.
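The pitfall can be illustrated numerically (the time points and slope below are made up for illustration). A firing-count ramp and a time-based ramp (CurrentTime times Scale) agree only when the time discretization happens to be uniform; under a variable-step discretization the firing-count ramp's slope against time is no longer constant:

```java
public class RampSketch {
    /** What a sequence-style Ramp produces: proportional to the firing count. */
    static double sequenceRamp(double slope, int firingCount) {
        return slope * firingCount;
    }

    /** What CurrentTime + Scale produces: proportional to model time. */
    static double timeRamp(double slope, double t) {
        return slope * t;
    }

    public static void main(String[] args) {
        // Nonuniform time points, as a variable-step solver might choose.
        double[] times = {0.0, 0.1, 0.15, 0.4, 1.0};
        for (int n = 0; n < times.length; n++) {
            System.out.println("t=" + times[n]
                    + "  sequenceRamp=" + sequenceRamp(1.0, n)
                    + "  timeRamp=" + timeRamp(1.0, times[n]));
        }
    }
}
```

At the final point t = 1.0 the firing-count ramp has reached 4.0 (four postfires) while the time-based ramp correctly reads 1.0.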
14.4 CT Directors There are three CT directors — CTMultiSolverDirector, CTMixedSignalDirector, and CTEmbeddedDirector. The first one can only serve as a top-level director, a CTMixedSignalDirector can be used both at the top level and inside a composite actor, and a CTEmbeddedDirector can only be contained in a CTCompositeActor. In terms of mixing models of computation, all the directors can execute composite actors that implement other models of computation, as long as the composite actors are properly connected (see section 14.5). Only CTMixedSignalDirector and CTEmbeddedDirector can be contained by other domains. The outside domain of a composite actor with a CTMixedSignalDirector can be any discrete domain, such as DE, SDF, PN, CSP, etc. The outside domain of a composite actor with a CTEmbeddedDirector must be CT, or FSM if the outside domain of the FSM model is itself CT. (See also the HSDirector in the FSM domain.)
14.4.1 ODE Solvers There are six ODE solvers implemented in the ptolemy.domains.ct.kernel.solver package. Some of them are specific for handling breakpoints. These solvers are ForwardEulerSolver, BackwardEulerSolver, ExplicitRK23Solver, TrapezoidalRuleSolver, DerivativeResolver, and ImpulseBESolver. They implement the ODE solving algorithms in section 14.2.3 and section 14.2.5, respectively.
14.4.2 CT Director Parameters
The CTDirector base class maintains a set of parameters which control the execution. The parameters shared by all CT directors are listed in Table 21. Individual directors may have their own (additional) parameters, which will be discussed in the appropriate sections.

Table 21: CTDirector Parameters

errorTolerance (type: double, default: 1e-4)
    The upper bound of local errors. Actors that perform integration error control (usually integrators in variable step size ODE solving methods) compare the estimated local error to this value. If the local error estimate is greater than this value, then the integration step is considered inaccurate, and is restarted with a smaller step size.

initialStepSize (type: double, default: 0.1)
    The step size that users specify as the desired step size. For fixed step size solvers, this step size is used in all non-breakpoint steps. For variable step size solvers, it is only a suggestion.

maxIterationsPerStep (type: int, default: 20)
    Used to avoid infinite loops in (implicit) fixed-point iterations. If the number of fixed-point iterations exceeds this value but the fixed point has still not been found, the fixed-point procedure is considered to have failed. The step size is then reduced by half and the integration step is restarted.

maxStepSize (type: double, default: 1.0)
    The maximum step size used in a simulation. This is the upper bound for adjusting step sizes in variable step-size methods. This value can be used to avoid sparse time points when the system dynamics are simple.

minStepSize (type: double, default: 1e-5)
    The minimum step size used in a simulation. This is the lower bound for adjusting step sizes. If this step size is used and the errors are still not tolerable, the simulation aborts. This step size is also used for the first step after breakpoints.

startTime (type: double, default: 0.0)
    The start time of the simulation. This is only applicable when CT is the top-level domain. Otherwise, the CT director follows the time of its executive director.

stopTime (type: double, default: 1.0)
    The stop time of the simulation. This is only applicable when CT is the top-level domain. Otherwise, the CT director follows the time of its executive director.

timeResolution (type: double, default: 1e-10)
    Controls the comparison of time. Since time in the CT domain is a double precision real number, it is sometimes impossible to reach or stop at a specific time point. If two time points are within this resolution, they are considered identical.

valueResolution (type: double, default: 1e-6)
    Used in (implicit) fixed-point iterations. If in two successive iterations the difference of the states is within this resolution, then the integration step is called converged, and the fixed point is considered reached.
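To make the interplay of these parameters concrete, the sketch below shows one plausible step-size adjustment policy driven by errorTolerance, minStepSize, and maxStepSize. It illustrates the behavior the table describes under an assumed halve-on-failure, double-on-success rule; it is not the actual CTDirector logic, and the method names are invented.

```java
public class StepSizeControlSketch {
    // One plausible refinement rule: halve the step when the estimated
    // local error exceeds errorTolerance, abort below minStepSize, and
    // otherwise let the step grow but never beyond maxStepSize.
    static double nextStepSize(double currentStep, double localError,
                               double errorTolerance, double minStepSize, double maxStepSize) {
        if (localError > errorTolerance) {
            // Inaccurate step: restart with a smaller step size.
            double refined = currentStep / 2.0;
            if (refined < minStepSize) {
                throw new ArithmeticException(
                        "errors not tolerable even at minStepSize; aborting");
            }
            return refined;
        }
        // Accurate step: allow the step to grow, capped at maxStepSize.
        return Math.min(currentStep * 2.0, maxStepSize);
    }

    public static void main(String[] args) {
        double step = 0.1;                                 // initialStepSize
        step = nextStepSize(step, 2e-4, 1e-4, 1e-5, 1.0);  // too much error: halve
        System.out.println(step);                          // prints 0.05
        step = nextStepSize(step, 1e-6, 1e-4, 1e-5, 1.0);  // accurate: grow
        System.out.println(step);                          // prints 0.1
    }
}
```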
14.4.3 CTMultiSolverDirector
A CTMultiSolverDirector has two ODE solvers: one for ordinary use and one specifically for breakpoints. Thus, besides the parameters in the CTDirector base class, this class adds two more parameters, as shown in Table 22.

Table 22: Additional Parameters for CTMultiSolverDirector

ODESolver (type: string, default: "ptolemy.domains.ct.kernel.solver.ForwardEulerSolver")
    The fully qualified class name of the ODE solver class.

breakpointODESolver (type: string, default: "ptolemy.domains.ct.kernel.solver.DerivativeResolver")
    The fully qualified class name of the breakpoint ODE solver class.
A CTMultiSolverDirector can direct a model that has composite actors implementing other models of computation. One simulation iteration is done in two phases: the continuous phase and the discrete phase. Let the current iteration be n. In the continuous phase, the differential equations are integrated from time t_{n-1} to t_n. After that, in the discrete phase, all (discrete) events which happen at t_n are processed. The step size control mechanism ensures that no events happen between t_{n-1} and t_n.
14.4.4 CTMixedSignalDirector This director is designed to be the director when a CT subsystem is contained in an event-based system, like DE or DT. As proved in [52], when a CT subsystem is contained in the DE domain, the CT subsystem should run ahead of the global time, and be ready for rollback. This director implements this optimistic execution.
Since the outside domain is event-based, each time the embedded CT subsystem is fired, the input data are events. In order to convert the events to continuous signals, breakpoints have to be introduced. So this director extends CTMultiSolverDirector, which always has two ODE solvers. There is one more parameter used by this director, maxRunAheadLength, as shown in Table 23.

Table 23: Additional Parameter for CTMixedSignalDirector

maxRunAheadLength (type: double, default: 1.0)
    The maximum length of time for the CT subsystem to run ahead of the global time.
When the CT subsystem is fired, the CTMixedSignalDirector will get the current time t and the next iteration time t' from the outer domain, and take t + min(t' - t, l) as the fire end time, where l is the value of the parameter maxRunAheadLength. The execution lasts until the fire end time is reached or an output event is detected. This director supports rollback; that is, when the state of the continuous subsystem is confirmed (by knowing that no events with a time earlier than the CT current time will be present), the state of the system is marked. If an optimistic execution is known to be wrong, the state of the CT subsystem will roll back to the latest marked state.
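The fire-end-time computation and the mark/rollback bookkeeping described above can be sketched as follows. This is a schematic illustration, not the CTMixedSignalDirector implementation; all names are invented.

```java
import java.util.Arrays;

public class RollbackSketch {
    private double[] state;        // current (possibly speculative) CT state
    private double[] markedState;  // last state known to be good
    private double markedTime;

    RollbackSketch(double[] initialState) {
        state = initialState.clone();
        markedState = initialState.clone();
        markedTime = 0.0;
    }

    // Fire end time: run ahead of the outer domain by at most
    // maxRunAheadLength, and never past the next outer iteration time.
    static double fireEndTime(double t, double tNext, double maxRunAheadLength) {
        return t + Math.min(tNext - t, maxRunAheadLength);
    }

    // Mark the state once no event earlier than `time` can arrive.
    void markState(double time) {
        markedState = state.clone();
        markedTime = time;
    }

    // A speculative integration step (the actual dynamics are elided).
    void speculativeStep(double[] newState) {
        state = newState.clone();
    }

    // An input event arrived with a time stamp earlier than the local
    // time: discard the speculative run, return the time to resume from.
    double rollback() {
        state = markedState.clone();
        return markedTime;
    }

    double[] state() { return state.clone(); }

    public static void main(String[] args) {
        RollbackSketch ct = new RollbackSketch(new double[] {1.0});
        ct.markState(2.0);                       // state at t = 2.0 confirmed
        ct.speculativeStep(new double[] {3.5});  // run ahead optimistically
        double resumeTime = ct.rollback();       // an event at t = 2.4 arrived
        System.out.println(resumeTime);          // prints 2.0
        System.out.println(Arrays.toString(ct.state()));  // prints [1.0]
    }
}
```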
14.4.5 CTEmbeddedDirector This director is used when a CT subsystem is embedded in another continuous time system, either directly or through a hierarchy of finite state machines. This director is typically used in the hybrid system scenario [53]. This director can pass step size control information up to its executive director. To achieve this, the director must be contained in a CTCompositeActor, which will pass the step size control queries from the outer domain to the inner domain. This director extends CTMultiSolverDirector, but has no additional parameters. A major difference between this director and the CTMixedSignalDirector is that this director does not support rollback. In fact, when a CT subsystem is embedded in a continuous-time environment, rollback is not necessary.
14.5 Interacting with Other Domains The CT domain can interact with other domains in Ptolemy II. In particular, we consider interaction among the CT domain, the discrete event (DE) domain and the finite state machine (FSM) domain. Following circuit design communities, we call a composition of CT and DE a mixed-signal model; following control and computation communities, we call a composition of CT and FSM a hybrid system model. There are two ways to put CT and DE models together, depending on the containment relation. In either case, event generators and waveform generators are used to convert the two types of signals. Figure 14.4 shows a DE component wrapped by an event generator and a waveform generator. From the input/output point of view, it is a continuous time component. Figure 14.5 shows a CT subsystem wrapped by a waveform generator and an event generator. From the input/output point of view, it is a discrete event component. Notice that event generators and waveform generators always stay in the CT domain.
A hierarchical composition of FSM and CT is shown in figure 14.6. A CT component, by adopting the event generation technique, can have both continuous and discrete signals as its output. The FSM can use predicates on these signals, as well as its own input signals, to build trigger conditions. The actions associated with transitions are usually setting parameters in the destination state, including the initial conditions of integrators.
14.6 CT Domain Demos
Here are some demos in the CT domain, showing how this domain works and how it interacts with other domains.
14.6.1 Lorenz System
The Lorenz system (see, for example, pp. 213-214 in [22]) is a famous nonlinear dynamical system that exhibits chaotic attractors. The system is given by:
FIGURE 14.4. Embedding a DE component in a CT system.
FIGURE 14.5. Embedding a CT component in a DE system.
FIGURE 14.6. Hybrid system modeling.
    ẋ1 = σ(x2 - x1)
    ẋ2 = (λ - x3)x1 - x2    (24)
    ẋ3 = x1·x2 - b·x3
The system is built from integrators and stateless domain-polymorphic actors, as shown in figure 14.7. The state trajectory projected onto the (x1, x2) plane is shown in figure 14.8. The initial conditions of the state variables are all 1.0. The default values of the parameters are: σ = 10.0, λ = 25.0, b = 2.0.
FIGURE 14.7. Block diagram for the Lorenz system.
FIGURE 14.8. The simulation result of the Lorenz system.
14.6.2 Microaccelerometer with Digital Feedback
Microaccelerometers are MEMS devices that use beams, gaps, and electrostatics to measure acceleration. Beams and anchors, separated by gaps, form parallel plate capacitors. When the device is accelerated in the sensing direction, the displacement of the beams causes a change of the gap size, which further causes a change of the capacitance. By measuring the change of capacitance (using a capacitor bridge), the acceleration can be obtained accurately. Feedback can be applied to the beams by charging the capacitors. This feedback can reduce the sensitivity to process variations, eliminate mechanical resonances, and increase sensor bandwidth, selectivity, and dynamic range. Sigma-delta modulation [15], also called pulse density modulation or bang-bang control, is a digital feedback technique which also provides the A/D conversion functionality. Figure 14.9 shows the conceptual diagram of the system. The central part of the digital feedback is a one-bit quantizer. We implemented the system as Mark Alan Lemkin designed it [51]. As shown in figure 14.10, a second order CT subsystem is used to model the beam. The voltage on the beam-gap capacitor is sampled every T seconds (much faster than the required output of the digital signal), then filtered by a lead compensator (FIR filter), and fed to a one-bit quantizer. The outputs of the quantizer are converted to force and fed back to the beams. The outputs are also counted and averaged every NT seconds to produce the digital output. In our example, the external acceleration is a sine wave. The execution result of the microaccelerometer system is shown in figure 14.11. The upper plot in the figure shows the continuous signals, where the low frequency (blue) sine wave is the acceleration
FIGURE 14.9. Microaccelerometer with digital feedback.
FIGURE 14.10. Block diagram for the microaccelerometer system.
input, the high frequency waveform (red) is the capacitance measurement, and the square wave (green) is the zero-order hold of the feedback from the digital part. In the lower plot, the dense events (blue) are the quantized samples of the capacitance measurements, which have value +1 or -1, and the sparse events (red) are the accumulation and average of the previous 64 quantized samples. The sparse events are the digital output, and as expected, they have a sinusoidal shape.
14.6.3 Sticky Point Masses System
This sticky point mass demo shows a simple hybrid system. As shown in figure 14.12, there are two point masses on a frictionless table with two springs attaching them to fixed walls. Given initial positions other than the equilibrium points, the point masses oscillate. The distance between the two walls is close enough that the two point masses may collide. The point masses are sticky, such that when they collide, they stick together and become one point mass with two springs attached to it. We also assume that the stickiness decays after the collision, such that eventually the
FIGURE 14.11. Execution result of the microaccelerometer system.
FIGURE 14.12. Sticky point masses system.
pulling force between the two springs is big enough to pull the point masses apart. This separation gives the two point masses a new set of initial positions, and they oscillate freely until they collide again. The system model, as shown in figure 14.13, has three levels of hierarchy: CT, FSM, and CT. The top level is a continuous time model with two actors, a composite actor that outputs the positions of the two point masses, and a plotter that simply plots the trajectories. The composite actor is a finite state machine with two modes, separated and together. In the separated state, there are two differential equations modeling two independent oscillating point masses. There is also an event detection mechanism, implemented by subtracting one position from the other and comparing the result to zero. If the positions are equal, within a certain accuracy, then the two point masses collide, and a collision event is generated. This event triggers a transition from the separated state to the together state. The actions on the transition set the velocity of the stuck point mass based on the law of conservation of momentum. In the together state, there is one differential equation modeling the stuck point masses, and another first order differential equation modeling the exponentially decaying stickiness. There is another expression computing the pulling force between the two springs. The guard condition from the together state to the separated state compares the pulling force to the stickiness. If the pulling force is bigger than the stickiness, then the transition is taken. The velocities of the two separated point masses equal their velocities before the separation. The simulation result is shown in figure 14.14, where the positions of the two point masses are plotted.
FIGURE 14.13. Modeling sticky point masses.
FIGURE 14.14. The simulation result of the sticky point masses system.
14.7 Implementation
The CT domain consists of the following packages: ct.kernel, ct.kernel.util, ct.kernel.solver, and ct.lib, as shown in figure 14.15.
14.7.1 ct.kernel.util package
The ct.kernel.util package provides a basic data structure, TotallyOrderedSet, which is used to store breakpoints. The UML for this package is shown in figure 14.16. A totally ordered set is a set (i.e., no duplicated elements) in which the elements are totally comparable. This data structure is used to store breakpoints because breakpoints are processed in chronological order.
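The same behavior can be sketched with java.util.TreeSet and a tolerance-based comparator playing the role of the FuzzyDoubleComparator shown in figure 14.16, with timeResolution as the comparison threshold. This is an illustration, not the Ptolemy II TotallyOrderedSet class:

```java
import java.util.Comparator;
import java.util.TreeSet;

public class BreakpointTableSketch {
    // Compare doubles with a tolerance: values closer than the threshold
    // are considered identical, mirroring FuzzyDoubleComparator's role.
    // (Such a comparator is not strictly transitive, but works for
    // well-separated breakpoints.)
    static Comparator<Double> fuzzyComparator(double threshold) {
        return (a, b) -> Math.abs(a - b) < threshold ? 0 : Double.compare(a, b);
    }

    public static void main(String[] args) {
        double timeResolution = 1e-10;
        TreeSet<Double> breakpoints = new TreeSet<>(fuzzyComparator(timeResolution));

        breakpoints.add(0.5);
        breakpoints.add(0.1);
        breakpoints.add(0.1 + 1e-12);   // within resolution: same as 0.1, not added
        breakpoints.add(0.3);

        System.out.println(breakpoints);         // [0.1, 0.3, 0.5], chronological
        System.out.println(breakpoints.first()); // next breakpoint to process: 0.1
        breakpoints.headSet(0.3).clear();        // discard breakpoints before 0.3
        System.out.println(breakpoints);         // [0.3, 0.5]
    }
}
```

The headSet(...).clear() idiom corresponds to the removeAllLessThan() operation of TotallyOrderedSet: once the simulation has passed a breakpoint, earlier entries are discarded.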
14.7.2 ct.kernel package
The ct.kernel package is the key package of the CT domain. It provides interfaces to classify actors. These interfaces are used by the scheduler to generate schedules. The classes, including the CTBaseIntegrator class and the ODESolver class, are shown in figure 14.17.

FIGURE 14.15. The packages in the CT domain.

FIGURE 14.16. UML for ct.kernel.util package.

FIGURE 14.17. UML for ct.kernel package, actor related classes.

Here, we use the delegation and the strategy design patterns [30][25] in the CTBaseIntegrator and the ODESolver classes to support seamlessly changing ODE solvers without reconstructing integrators. The execution methods of the CTBaseIntegrator class are delegated to the ODESolver class, and subclasses of ODESolver provide the concrete implementations of these methods, depending on the ODE solving algorithms.
CT directors implement the semantics of the continuous time execution. As shown in figure 14.18, directors that are used in different scenarios derive from the CTDirector base class. The CTScheduler class provides schedules for the directors.
The ct.kernel.solver package provides a set of ODE solvers. The classes are shown in figure 14.19. In order for the directors to choose among ODE solvers freely during the execution, the strategy design pattern is used again. A director class talks to the abstract ODESolver base class, and individual ODE solver classes extend ODESolver to provide concrete strategies.
14.7.3 Scheduling
This section and the following three sections provide technical details and the design decisions made in the implementation of the CT domain. These details are only necessary if the reader wants to implement new directors or ODE solvers. In general, simulating a continuous-time system (3)-(5) by a time-marching ODE solver involves
FIGURE 14.19. UML for ct.kernel.solver package.

FIGURE 14.18. UML for ct.kernel package, director related classes.
the following execution steps:
1. Given the states of the system x_{t_0} ... x_{t_{n-1}} at time points t_0 ... t_{n-1}, if the current integration step size is h, i.e. t_n = t_{n-1} + h, compute the new state x_{t_n} using the numerical integration algorithm. During the application of an integration algorithm, each evaluation of the f(a, b, t) function is achieved by the following sequence:
• Integrators emit tokens corresponding to a;
• Source actors emit tokens corresponding to b;
• The current time is set to t;
• The tokens are passed through the topology (in a data-driven way) until they reach the integrators again. The returned tokens are ẋ = f(a, b, t).
2. After the new state x_{t_n} is computed, test whether this step was successful. Local truncation error and unpredictable breakpoints are the issues of concern, since these could lead to an unsuccessful step.
3. If the step is successful, predict the next step size. Otherwise, reduce the step size and try again.
Due to the signal-flow representation of the system, the numerical ODE solving algorithms are implemented as actor firings and token passings under proper scheduling. The scheduler partitions a CT system into two clusters: the state transition cluster and the output cluster. In a particular system, these clusters may overlap.
The state transition cluster includes all the actors that are in the signal flow path for evaluating the f function in (3). It starts from the source actors and the outputs of the integrators, and ends at the inputs of the integrators. In other words, integrators, and in general dynamic actors, are used to break causality loops in the model. A topological sort of the cluster provides an enumeration of actors in the order of their firings. This enumeration is called the state transition schedule. After the integrators produce tokens representing x_t, one iteration of the state transition schedule gives the tokens representing ẋ_t = f(x_t, u(t), t) back to the integrators.
The output cluster consists of actors that are involved in the evaluation of the output map g in (4). It is similarly sorted in topological order. The output schedule starts from the source actors and the integrators, and ends at the sink actors. For example, for the system shown in figure 14.3, the state transition schedule is U-G1-G2-G3-A, where the order of G1, G2, and G3 is interchangeable. The output schedule is G4-Y. The event generating schedule is empty.
A special situation that must be taken care of is the firing order of a chain of integrators, as shown in figure 14.20. For the implicit integration algorithms, the order of firings determines two distinct kinds of fixed point iterations. If the integrators are fired in the topological order, namely x1 → x2 in our example, the iteration is called the Gauss-Seidel iteration. That is, x2 always uses the new guess
FIGURE 14.20. A chain of integrators.
from x1 in this iteration for its new guess. On the other hand, if they are fired in the reverse topological order, the iteration is called the Gauss-Jacobi iteration, where x2 uses the tentative output from x1 in the last iteration for its new estimation. The two iterations both have their pros and cons, which are thoroughly discussed in [65]. Gauss-Seidel iteration is considered faster in terms of speed of convergence than Gauss-Jacobi. For explicit integration algorithms, where the new states x_{t_n} are calculated solely from the history inputs up to x_{t_{n-1}}, the integrators must be fired in their reverse topological order. For simplicity, the scheduler of the CT domain, at this time, always returns the reverse topological order of a chain of integrators. This order is considered safe for all integration algorithms.
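The difference between the two iteration orders can be seen on a concrete two-integrator example. The sketch below applies one backward Euler step to the linear system ẋ1 = -x2, ẋ2 = x1 and counts the fixed-point sweeps needed in each order; the system and tolerances are chosen for illustration and are unrelated to the Ptolemy II scheduler code.

```java
public class FixedPointIterationSketch {
    // One backward Euler step for x1' = -x2, x2' = x1, solved by
    // fixed-point iteration in two orders. Returns the number of sweeps
    // needed to converge to within valueResolution.
    static int backwardEulerStep(double x1, double x2, double h,
                                 double valueResolution, boolean gaussSeidel) {
        double g1 = x1, g2 = x2;  // current guesses for the new state
        for (int i = 1; i <= 1000; i++) {
            double n1 = x1 - h * g2;
            // Gauss-Seidel feeds the fresh guess n1 straight into x2's
            // update; Gauss-Jacobi uses g1 from the previous sweep.
            double n2 = x2 + h * (gaussSeidel ? n1 : g1);
            if (Math.abs(n1 - g1) < valueResolution
                    && Math.abs(n2 - g2) < valueResolution) {
                return i;
            }
            g1 = n1;
            g2 = n2;
        }
        throw new ArithmeticException("fixed point not found");
    }

    public static void main(String[] args) {
        int seidel = backwardEulerStep(1.0, 0.0, 0.1, 1e-12, true);
        int jacobi = backwardEulerStep(1.0, 0.0, 0.1, 1e-12, false);
        System.out.println("Gauss-Seidel sweeps: " + seidel);
        System.out.println("Gauss-Jacobi sweeps: " + jacobi);
        // Gauss-Seidel typically converges in fewer sweeps than Gauss-Jacobi.
    }
}
```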
14.7.4 Controlling Step Sizes
Choosing the right time points at which to approximate the behavior of a continuous time system is one of the major tasks of simulation. There are three factors that may impact the choice of the step size.
• Error control. For all integration algorithms, the local error at time t_n is defined as a vector norm (say, the 2-norm) of the difference between the actual solution x(t_n) and the approximation x_{t_n} calculated by the integration method, given that the last step was accurate. That is, assuming x_{t_{n-1}} = x(t_{n-1}), then

    E_t = ||x_{t_n} - x(t_n)||.    (25)

It can be shown that by carefully choosing the parameters in the integration algorithms, the local error is approximately of the p-th order of the step size, where p, an integer closely related to the number of f function evaluations in one integration step, is called the order of the integration algorithm; i.e. E_t ~ O((t_n - t_{n-1})^p). Therefore, in order to achieve an accurate solution, the step size should be chosen to be small. But on the other hand, small step sizes mean long simulation times. In general, the choice of step size reflects the trade-off between the speed and the accuracy of a simulation.
• Convergence. The local contraction mapping theorem (Theorem 2 in Appendix G) shows that for implicit ODE solvers, in order to find the fixed point at t_n, the map F_t() in (15) must be a (local) contraction map, and the initial guess must be within an ε ball (the contraction radius) of the solution. It can be shown that F_t() can be made contractive if the step size is small enough. (The choice of the step size is closely related to the Lipschitz constant.) So the general approach for resolving the fixed point is that if the iterating function F_t() does not converge at one step size, the step size is reduced by half and the iteration is tried again.
• Discontinuity. At discontinuous points, the derivatives of the signals are not continuous, so the integration formula is not applicable. That means the discontinuous points cannot be crossed by one integration step. In particular, suppose the current time is t and the intended next time point is t + h. If there is a discontinuous point at t + δ, where δ < h, then the next step size should be reduced to δ. For a predictable breakpoint, the director can adjust the step size accordingly before starting an integration step. However, for an unpredictable breakpoint, which is reported "missed" after an integration step, the director should be able to discard its last step and restart with a smaller step size to locate the actual discontinuous point.
Notice that the convergence and accuracy concerns only apply to some ODE solvers. For example, explicit algorithms do not have the convergence problem, and fixed step size algorithms do not have the error control capability. On the other hand, discontinuity control is a generic feature that is independent of the choice of ODE solvers.
14.7.5 Mixed-Signal Execution
DE inside CT. Since time advances monotonically in CT and events are generated chronologically, the DE component receives input events monotonically in time. In addition, a composition of causal DE components is causal [46], so the time stamps of the output events from a DE component are always greater than or equal to the global time. From the viewpoint of the CT system, the events produced by a DE component are predictable breakpoints. Note that in the CT model, finding the numerical solution of the ODE at a particular time is semantically an instantaneous behavior. During this process, the behavior of all components, including those implemented in a DE model, should remain unchanged. This implies that the DE components should not be executed during one integration step of CT, but only between two successive CT integration steps.
CT inside DE. When a CT component is contained in a DE system, the CT component is required to be causal, like all other components of the DE system. Let the CT component have local time t when it receives an input event with time stamp τ. Since time is continuous in the CT model, it will execute from its local time t, and may generate events at any time greater than or equal to t. Thus we need

    t ≥ τ    (26)
to ensure causality. This means that the local time of the CT component should always be greater than or equal to the global time whenever it is executed. This ahead-of-time execution implies that the CT component should be able to remember its past states and be ready to rollback if the input event time is smaller than its current local time. The state it needs to remember is the state of the component after it has processed an input event. Consequently, the CT component should not emit detected events to the outside DE system before the global time reaches the event time. Instead, it should send a pure event to the DE system at the event time, and wait until it is safe to emit it.
14.7.6 Hybrid System Execution Although FSM is an untimed model, its composition with a timed model requires it to transfer the notion of time from its external model to its internal model. During continuous evolution, the system is simulated as a CT system where the FSM is replaced by the continuous component refining the current FSM state. After each time point of CT simulation, the triggers on the transitions starting from the current FSM state are evaluated. If a trigger is enabled, the FSM makes the corresponding transition. The continuous dynamics of the destination state is initialized by the actions on the transition. The simulation continues with the transition time treated as a breakpoint.
Appendix G: Brief Mathematical Background
Theorem 1. [Existence and uniqueness of the solution of an ODE] Consider the initial value ODE problem

    ẋ = f(x, u, t),  x(t_0) = x_0.    (27)

If f satisfies the conditions:
1. [Continuity Condition] Let D be the set of possible discontinuity points; it may be empty. For each fixed x ∈ ℝ^n and u ∈ ℝ^m, the function f: ℝ \ D → ℝ^n in (27) is continuous. And for all τ ∈ D, the left-hand and right-hand limits f(x, u, τ⁻) and f(x, u, τ⁺) are finite.
2. [Lipschitz Condition] There is a piecewise continuous bounded function k: ℝ → ℝ⁺, where ℝ⁺ is the set of non-negative real numbers, such that for all t ∈ ℝ, all ξ, ζ ∈ ℝ^n, and all u ∈ ℝ^m,

    ||f(ξ, u, t) - f(ζ, u, t)|| ≤ k(t)||ξ - ζ||.    (28)

Then, for each initial condition (t_0, x_0), there exists a unique continuous function ψ: ℝ → ℝ^n such that

    ψ(t_0) = x_0    (29)

and

    ψ̇(t) = f(ψ(t), u(t), t)  ∀t ∈ ℝ \ D.    (30)

This function ψ(t) is called the solution through (t_0, x_0) of the ODE (27).

Theorem 2. [Contraction Mapping Theorem] If F: ℝ^n → ℝ^n is a local contraction map at x with contraction radius ε, then there exists a unique fixed point of F within the ε ball centered at x. That is, there exists a unique σ ∈ ℝ^n with ||σ - x|| ≤ ε such that σ = F(σ).

FIGURE 15.3. A Delay actor can be used to break a zero-delay loop.
events with time stamp equal to the current time. This concludes one iteration of the model. An iteration, therefore, processes all events on the event queue with the smallest time stamp.
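The iteration policy can be modeled with a plain priority queue (an illustrative sketch only; the actual implementation uses a calendar queue and also orders simultaneous events by microstep and depth, as described later):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Minimal model of a DE iteration: one iteration dequeues and "fires" every
// pending event that shares the smallest time stamp, then stops.
class SimpleDEQueue {
    static final class Event implements Comparable<Event> {
        final double timeStamp;
        final String destination;   // name of the destination actor
        Event(double timeStamp, String destination) {
            this.timeStamp = timeStamp;
            this.destination = destination;
        }
        public int compareTo(Event other) {
            return Double.compare(timeStamp, other.timeStamp);
        }
    }

    private final PriorityQueue<Event> queue = new PriorityQueue<>();

    void put(double timeStamp, String destination) {
        queue.add(new Event(timeStamp, destination));
    }

    // One iteration: process all events with the smallest time stamp.
    List<String> iterate() {
        List<String> fired = new ArrayList<>();
        if (queue.isEmpty()) return fired;
        double now = queue.peek().timeStamp;   // current model time
        while (!queue.isEmpty() && queue.peek().timeStamp == now) {
            fired.add(queue.poll().destination);
        }
        return fired;
    }
}
```

Each call to iterate() advances model time to the smallest pending time stamp and drains exactly the events at that time, mirroring the definition of an iteration above.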
15.1.4 Getting a Model Started

Before one of the iterations described above can be run, there have to be initial events in the global event queue. Actors may produce initial pure events or regular output events in their initialize() method. Thus, to get a model started, at least one actor must produce events. All the domain-polymorphic timed sources described in the Actor Libraries chapter produce pure events, so these can be used in DE. We can define the start time to be the smallest time stamp of these initial events.
15.1.5 Pure Events at the Current Time

An actor calls fireAt() to schedule a pure event. The pure event is a request to the scheduler to fire the actor sometime in the future. However, the actor may choose to call fireAt() with the time argument equal to the current time. In fact, the preferred method for domain-polymorphic source actors to get started is to have code like the following in their initialize() method:

    Director director = getDirector();
    director.fireAt(this, director.getCurrentTime());
This will schedule a pure event on the event queue with microstep zero and depth equal to that of the calling actor. An actor may also call fireAt() with the current time in its fire() method. This is a request to be refired later in the current iteration. This is managed by queueing a pure event with microstep one greater than the current microstep. In fact, this is the only situation in which the microstep is incremented beyond zero.
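The resulting ordering of events, by time stamp first, then microstep, then depth, can be sketched as a lexicographic comparison (illustrative; in Ptolemy II this logic lives in DEEvent and the DECQEventQueue.DECQComparator inner class):

```java
// Sketch of the DE event ordering: time stamp, then microstep, then depth.
// An event queued with an incremented microstep at the current time sorts
// after all current-microstep events at that time, but before any later time.
class SimpleDEEvent implements Comparable<SimpleDEEvent> {
    final double timeStamp;
    final int microstep;
    final int depth;

    SimpleDEEvent(double timeStamp, int microstep, int depth) {
        this.timeStamp = timeStamp;
        this.microstep = microstep;
        this.depth = depth;
    }

    @Override
    public int compareTo(SimpleDEEvent other) {
        if (timeStamp != other.timeStamp) {
            return Double.compare(timeStamp, other.timeStamp);
        }
        if (microstep != other.microstep) {
            return Integer.compare(microstep, other.microstep);
        }
        return Integer.compare(depth, other.depth);
    }
}
```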
15.1.6 Stopping Execution

Execution stops when one of these conditions becomes true:
• The current time reaches the stop time, set by calling the setStopTime() method of the DE director.
• The global event queue becomes empty.
Events at the stop time are processed before stopping the model execution. The execution ends by calling the wrapup() method of all actors. It is also possible to explicitly invoke the iterate() method of the manager for some fixed number of iterations. Recall that an iteration processes all events with a given time stamp, so this will run the model through a specified number of discrete time steps.
15.2 Overview of The Software Architecture The UML static structure diagram for the DE kernel package is shown in figure 15.4. For model builders, the important classes are DEDirector, DEActor and DEIOPort. At the heart of DEDirector is a global event queue that sorts events according to their time stamps and priorities. The DEDirector uses an efficient implementation of the global event queue, a calendar queue data
FIGURE 15.4. UML static structure diagram for the DE kernel package.
structure [11]. The time complexity for this particular implementation is O(1) in both enqueue and dequeue operations, in theory. This means that the time complexity for enqueue and dequeue operations is independent of the number of pending events in the global event queue. However, to realize this performance, it is necessary for the distribution of events to match certain assumptions. Our calendar queue implementation observes events as they are dequeued and adapts the structure of the queue according to their statistical properties. Nonetheless, the calendar queue structure will not prove optimal for all models. For extensibility, alternative implementations of the global event queue can be realized by implementing the DEEventQueue interface and specifying the event queue using the appropriate constructor for DEDirector.

The DEEvent class carries tokens through the event queue. It contains their time stamp, their microstep, and the depth of the destination actor, as well as a reference to the destination actor. It implements the java.lang.Comparable interface, meaning that any two instances of DEEvent can be compared. The private inner class DECQEventQueue.DECQComparator, which is provided to the calendar queue at the time of its construction, performs the requisite comparisons of events.

The DEActor class provides convenient methods to access time, since time is an essential part of a timed domain like DE. Nonetheless, actors in a DE model are not required to be derived from the DEActor class. Simply deriving from TypedAtomicActor gives you the same capability, but without the convenience. In the latter case, time is accessible through the director.

The DEIOPort class is used by actors that are specialized to the DE domain. It supports annotations that inform the scheduler about delays through the actor. It also provides two additional methods, overloaded versions of broadcast() and send().
The overloaded versions have a second argument for the time delay, allowing actors to send output data with a time delay (relative to current time). Domain-polymorphic actors, such as those described in the Actor Libraries chapter, have as ports instances of TypedIOPort, not DEIOPort, and therefore cannot produce events in the future directly by sending them through output ports. Note that tokens sent through TypedIOPort are treated as if they were sent through DEIOPort with the time delay argument equal to zero. Domain-polymorphic actors can produce events in the future indirectly by using the fireAt() method of the director. By calling fireAt(), the actor requests a refiring in the future. The actor can then produce a delayed event during the refiring.
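The extensibility point mentioned above, swapping in an alternative global event queue, can be sketched as follows (simplified, with invented names; the real interface is DEEventQueue, whose methods are clear(), get(), isEmpty(), put(), and take()):

```java
import java.util.PriorityQueue;

// Simplified stand-in for the DEEventQueue extensibility point: any ordered
// queue satisfying this small interface could replace the calendar queue.
interface EventQueue<E extends Comparable<E>> {
    void put(E event);
    E take();
    boolean isEmpty();
}

// A straightforward binary-heap implementation. A calendar queue instead
// offers O(1) amortized enqueue/dequeue under favorable event distributions,
// versus O(log n) here.
class HeapEventQueue<E extends Comparable<E>> implements EventQueue<E> {
    private final PriorityQueue<E> heap = new PriorityQueue<>();
    public void put(E event) { heap.add(event); }
    public E take() { return heap.poll(); }
    public boolean isEmpty() { return heap.isEmpty(); }
}
```

The point of the interface is that the director only depends on ordered insertion and smallest-first removal; the data structure behind it is an implementation choice.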
15.3 The DE Actor Library

The DE domain has a small library of actors in the ptolemy.domains.de.lib package, shown in figure 15.5. These actors are particularly characterized by implementing both the TimedActor and SequenceActor interfaces. These actors use the current model time, and in addition, assume they are dealing with sequences of discrete events. Some of them use domain-specific infrastructure, such as the convenience class DEActor and the base class DETransformer. The DETransformer class provides an input port and an output port that are instances of DEIOPort. The Delay and Server actors use facilities of these ports to influence the firing priorities. The Merge actor merges event sequences in chronological order.
15.4 Mutations

The DE director tolerates changes to the model during execution. The change should be queued with the director or manager using requestChange(). While invoking those changes, the method invalidateSchedule() is expected to be called, notifying the director that the topology it used to calculate the
priorities of the actors is no longer valid. This will result in the priorities being recalculated the next time prefire() is invoked. An example of a mutation is shown in figures 15.6 and 15.7. Figure 15.7 defines a class that constructs a simple model in its constructor. The model consists of a clock connected to a recorder. The method insertClock() creates an anonymous inner class that extends ChangeRequest. Its execute() method disconnects the two existing actors, creates a new clock and a merge actor, and reconnects the actors as shown in figure 15.6. When the insertClock() method is called, a change request is queued with the manager. The manager executes the request after the current iteration completes. Thus, the change will always be executed between non-equal time stamps, since an iteration consists of processing all events at the current
FIGURE 15.5. The library of DE-specific actors.
FIGURE 15.6. Topology before and after mutation for the example in figure 15.7.
time stamp. Actors that are added in the change request are automatically initialized. Note, however, one subtlety. The last line of the insertClock() method is:

    _rec.input.createReceivers();

    public class Mutate {
        public Manager manager;

        private Recorder _rec;
        private Clock _clock;
        private TypedCompositeActor _top;
        private DEDirector _director;

        public Mutate() throws IllegalActionException, NameDuplicationException {
            _top = new TypedCompositeActor();
            _top.setName("top");
            manager = new Manager();
            _director = new DEDirector();
            _top.setDirector(_director);
            _top.setManager(manager);
            _clock = new Clock(_top, "clock");
            _clock.values.setExpression("[1.0]");
            _clock.offsets.setExpression("[0.0]");
            _clock.period.setExpression("1.0");
            _rec = new Recorder(_top, "recorder");
            _top.connect(_clock.output, _rec.input);
        }

        public void insertClock() throws ChangeFailedException {
            // Create an anonymous inner class
            ChangeRequest change = new ChangeRequest(_top, "test2") {
                public void execute() throws ChangeFailedException {
                    try {
                        _clock.output.unlinkAll();
                        _rec.input.unlinkAll();
                        Clock clock2 = new Clock(_top, "clock2");
                        clock2.values.setExpression("[2.0]");
                        clock2.offsets.setExpression("[0.5]");
                        clock2.period.setExpression("2.0");
                        Merge merge = new Merge(_top, "merge");
                        _top.connect(_clock.output, merge.input);
                        _top.connect(clock2.output, merge.input);
                        _top.connect(merge.output, _rec.input);
                        // Any pre-existing input port whose connections
                        // are modified needs to have this method called.
                        _rec.input.createReceivers();
                    } catch (IllegalActionException ex) {
                        throw new ChangeFailedException(this, ex);
                    } catch (NameDuplicationException ex) {
                        throw new ChangeFailedException(this, ex);
                    }
                }
            };
            manager.requestChange(change);
        }
    }

FIGURE 15.7. An example of a class that constructs a model and then mutates it.

This method call is necessary because the connections of the recorder actor have changed, but since the
actor is not new, it will not be reinitialized. Recall that the preinitialize() and initialize() methods are guaranteed to be called only once, and one of the responsibilities of the preinitialize() method is to create the receivers in all the input ports of an actor. Thus, whenever connections to an input port change during a mutation, the mutation code itself must call createReceivers() to reconstruct the receivers. Note that this will result in the loss of any tokens that might already be queued in the preexisting receivers of the ports. It is because of this possible loss of data that the creation of receivers is not done automatically. The designer of the mutation should be aware of the possible loss of data. There is one additional subtlety about mutations. If an actor produces events in the future via DEIOPort, then the destination actor will be fired even if it has been removed from the topology by the time the execution reaches that future time. This may not always be the expected behavior. The Delay actor in the DE library behaves this way, so if its destination is removed before processing delayed events, then it may be invoked at a time when it has no container. Most actors will tolerate this and will not cause problems. But some might have unexpected behavior. To prevent this behavior, the mutation that removes the actor should also call the disableActor() method of the director.
15.5 Writing DE Actors

It is very common in DE modeling to include custom-built actors. No pre-defined actor library seems to be sufficient for all applications. For the most part, writing actors for the DE domain is no different than writing actors for any other domain. Some actors, however, need to exercise particular control over time stamps and actor priorities. Such actors use instances of DEIOPort rather than TypedIOPort. The first section below gives general guidelines for writing DE actors and domain-polymorphic actors that work in DE. The second section explains in detail the priorities, and in particular, how to write actors that declare delays. The final section discusses actors that operate as a Java thread.
15.5.1 General Guidelines

The points to keep in mind are:
• When an actor fires, not all ports have tokens, and some ports may have more than one token. The time stamps of the events that contained these tokens are no longer explicitly available. The current model time is assumed to be the time stamp of the events.
• If the actor leaves unconsumed tokens on its input ports, then it will be iterated again before model time is advanced. This ensures that the current model time is in fact the time stamp of the input events. However, occasionally, an actor will want to leave unconsumed tokens on its input ports, and not be fired again until there is some other new event to be processed. To get this behavior, it should return false from prefire(). This indicates to the DE director that it does not wish to be iterated.
• If the actor returns false from postfire(), then the director will not fire that actor again. Events that are destined for that actor are discarded.
• When an actor produces an output token, the time stamp for the output event is taken to be the current model time. If the actor wishes to produce an event at a future model time, one way to accomplish this is to call the director's fireAt() method to schedule a future firing, and then to produce the token at that time. A second way to accomplish this is to use instances of DEIOPort and use the overloaded send() or broadcast() methods that take a time delay argument. The DEIOPort class (see figure 15.4) can produce events in the future, but there is an important subtlety with using these methods. Once an event has been produced, it cannot be retracted. In particular, even if the actor is deleted before model time reaches that of the future event, the event will be delivered to the destination. If you use fireAt() instead to generate delayed events, then if the actor is deleted (or returns false from postfire()) before the future event, then the future event will not be produced.
• By convention in Ptolemy II, actors update their state only in the postfire() method. In DE, the fire() method is only invoked once per iteration, so there is no particular reason to stick to this convention. Nonetheless, we recommend that you do in case your actor becomes useful in other domains. The simplest way to ensure this is to follow the following pattern. For each state variable, such as a private variable named count,

    private int _count;

create a shadow variable

    private int _countShadow;

Then write the methods as follows:

    public void fire() {
        _countShadow = _count;
        ... perform some computation that may modify _countShadow ...
    }
    public boolean postfire() {
        _count = _countShadow;
        return super.postfire();
    }

This ensures that the state is updated only in postfire(). In a similar fashion, delayed outputs (produced by either mechanism) should be produced only in the postfire() method, since delayed outputs are persistent state. Thus, fireAt() should be called in postfire() only, as should the overloaded send() and broadcast() of DEIOPort.
15.5.2 Examples

Simplified Delay Actor. An example of a domain-specific actor for DE is shown in figure 15.8. This actor delays input events by some amount specified by a parameter. The domain-specific features of the actor are shown in bold. They are:
• It uses DEIOPort rather than TypedIOPort.
• It has the statement:

    input.delayTo(output);

This statement declares to the director that this actor implements a delay from input to output. The actor uses this to break the precedences when constructing the DAG to find priorities.
• It uses an overloaded send() method, which takes a delay argument, to produce the output. Notice
that the output is produced in the postfire() method, since by convention in Ptolemy II, persistent state is not updated in the fire() method, but rather is updated in the postfire() method.

Server Actor. The Server actor in the DE library (see figure 15.5) uses a rich set of behavioral properties of the DE domain. A server is a process that takes some amount of time to serve "customers." While it is serving a customer, other arriving customers have to wait. This actor can have a fixed service time (set via the parameter serviceTime) or a variable service time (provided via the input port newServiceTime). A typical use would be to supply random numbers to the newServiceTime port to generate random service times. These times can be provided at the same time as arriving customers to get an effect where each customer experiences a different, randomly selected service time.
    package ptolemy.domains.de.lib.test;

    import ptolemy.actor.TypedAtomicActor;
    import ptolemy.domains.de.kernel.DEIOPort;
    import ptolemy.data.DoubleToken;
    import ptolemy.data.Token;
    import ptolemy.data.expr.Parameter;
    import ptolemy.actor.TypedCompositeActor;
    import ptolemy.kernel.util.IllegalActionException;
    import ptolemy.kernel.util.NameDuplicationException;
    import ptolemy.kernel.util.Workspace;

    public class SimpleDelay extends TypedAtomicActor {

        public SimpleDelay(TypedCompositeActor container, String name)
                throws NameDuplicationException, IllegalActionException {
            super(container, name);
            input = new DEIOPort(this, "input", true, false);
            output = new DEIOPort(this, "output", false, true);
            delay = new Parameter(this, "delay", new DoubleToken(1.0));
            delay.setTypeEquals(DoubleToken.class);
            input.delayTo(output);
        }

        public Parameter delay;
        public DEIOPort input;
        public DEIOPort output;

        private Token _currentInput;

        public Object clone(Workspace ws) throws CloneNotSupportedException {
            SimpleDelay newobj = (SimpleDelay)super.clone(ws);
            newobj.delay = (Parameter)newobj.getAttribute("delay");
            newobj.input = (DEIOPort)newobj.getPort("input");
            newobj.output = (DEIOPort)newobj.getPort("output");
            return newobj;
        }

        public void fire() throws IllegalActionException {
            _currentInput = input.get(0);
        }

        public boolean postfire() throws IllegalActionException {
            output.send(0, _currentInput,
                    ((DoubleToken)delay.getToken()).doubleValue());
            return super.postfire();
        }
    }

FIGURE 15.8. A domain-specific actor in DE.
The (compacted) code is shown in figure 15.9. This actor extends DETransformer, which has two public members, input and output, both instances of DEIOPort. The constructor makes use of the delayTo() method of these ports to indicate that the actor introduces delay between its inputs and its output. The actor keeps track of the time at which it will next be free in the private variable _nextTimeFree. This is initialized to minus infinity to indicate that whenever the model begins executing, the server is free. The prefire() method determines whether the server is free by comparing this private variable against the current model time. If it is free, then this method returns true, indicating to the scheduler that it can proceed with firing the actor. If the server is not free, then the prefire() method checks to see whether there is a pending input, and if there is, requests a firing when the actor will become free. It then returns false, indicating to the scheduler that it does not wish to be fired at this time. Note that the prefire() method uses the methods getCurrentTime() and fireAt() of DEActor, which are simply convenient interfaces to methods of the same name in the director. The fire() method is invoked only if the server is free. It first checks to see whether the newServiceTime port is connected to anything, and if it is, whether it has a token. If it does, the token is read and used to update the serviceTime parameter. No more than one token is read, even if there are more in the input port, in case one token is being provided per pending customer. The fire() method then continues by reading an input token, if there is one, and updating _nextTimeFree. The input token that is read is stored temporarily in the private variable _currentInput. The postfire() method then produces this token on the output port, with an appropriate delay.
This is done in the postfire() method rather than the fire() method in keeping with the policy in Ptolemy II that persistent state is not updated in the fire() method. Since the output is produced with a future time stamp, it is persistent state. Note that when the actor will not consume input tokens that are available in the fire() method, it is essential that prefire() return false. Otherwise, the DE scheduler will keep firing the actor until the inputs are all consumed, which will never happen if the actor is not consuming inputs! Like the SimpleDelay actor in figure 15.8, this one produces outputs with future time stamps, using the overloaded send() method of DEIOPort that takes a delay argument. There is a subtlety associated with this design. If the model mutates during execution, and the Server actor is deleted, it cannot retract events that it has already sent to the output. Those events will be seen by the destination actor, even if by that time neither the server nor the destination are in the topology! This could lead to some unexpected results, but hopefully, if the destination actor is no longer connected to anything, then it will not do much with the token.
15.5.3 Thread Actors

In some cases, it is useful to describe an actor as a thread that waits for input tokens on its input ports. The thread suspends while waiting for input tokens and is resumed when some or all of its input ports have input tokens. While this description is functionally equivalent to the standard description explained above, it leverages the Java multi-threading infrastructure to save the state information. Consider the code for the ABRecognizer actor shown in figure 15.10. The two code listings implement two actors with equivalent behavior. The first implements it as a threaded actor, while the second implements it as a standard actor. We will from now on refer to the first as the threaded description and the second as the standard description. In both descriptions, the actor has two input ports, inportA and inportB, and one output port, outport. The behavior is as follows.
    package ptolemy.domains.de.lib;

    ... import statements ...

    public class Server extends DETransformer {

        public DEIOPort newServiceTime;
        public Parameter serviceTime;

        private Token _currentInput;
        private double _nextTimeFree = Double.NEGATIVE_INFINITY;

        public Server(TypedCompositeActor container, String name)
                throws NameDuplicationException, IllegalActionException {
            super(container, name);
            serviceTime = new Parameter(this, "serviceTime", new DoubleToken(1.0));
            serviceTime.setTypeEquals(DoubleToken.class);
            newServiceTime = new DEIOPort(this, "newServiceTime", true, false);
            newServiceTime.setTypeEquals(DoubleToken.class);
            output.setTypeAtLeast(input);
            input.delayTo(output);
            newServiceTime.delayTo(output);
        }

        ... attributeChanged(), clone() methods ...

        public void initialize() throws IllegalActionException {
            super.initialize();
            _nextTimeFree = Double.NEGATIVE_INFINITY;
        }

        public boolean prefire() throws IllegalActionException {
            if (getCurrentTime() >= _nextTimeFree) {
                return true;
            } else {
                // Schedule a firing if there is a pending token so it can be served.
                if (input.hasToken(0)) {
                    fireAt(_nextTimeFree);
                }
                return false;
            }
        }

        public void fire() throws IllegalActionException {
            if (newServiceTime.getWidth() > 0 && newServiceTime.hasToken(0)) {
                DoubleToken time = (DoubleToken)(newServiceTime.get(0));
                serviceTime.setToken(time);
            }
            if (input.getWidth() > 0 && input.hasToken(0)) {
                _currentInput = input.get(0);
                double delay = ((DoubleToken)serviceTime.getToken()).doubleValue();
                _nextTimeFree = getCurrentTime() + delay;
            } else {
                _currentInput = null;
            }
        }

        public boolean postfire() throws IllegalActionException {
            if (_currentInput != null) {
                double delay = ((DoubleToken)serviceTime.getToken()).doubleValue();
                output.send(0, _currentInput, delay);
            }
            return super.postfire();
        }
    }

FIGURE 15.9. Code for the Server actor. For more details, see the source code.
Produce an output event at outport as soon as events at inportA and inportB occur in that particular order, and repeat this behavior. Note that the standard description needs a state variable state, unlike the threaded description. In general, the threaded description encodes the state information in the position of the code, while the standard description encodes it explicitly using state variables. While it is true that the context-switching overhead associated with multi-threading reduces performance, we argue that the simplicity and clarity of writing actors in the threaded fashion is well worth the cost in some applications. The infrastructure for this feature is shown in figure 15.4. To write an actor in the threaded fashion, one simply derives from the DEThreadActor class and implements the run() method. In many cases, the content of the run() method is enclosed in an infinite 'while (true)' loop, since many useful threaded actors do not terminate. The waitForNewInputs() method is overloaded and has two flavors, one that takes no arguments and another that takes an IOPort array as argument. The first suspends the thread until there is at least one input token in at least one of the input ports, while the second suspends until there is at least one input token in any one of the specified input ports, ignoring all other tokens. In the current implementation, both versions of waitForNewInputs() clear all input ports before the thread suspends. This guarantees that when the thread resumes, all tokens available are new, in the sense that they were not available before the waitForNewInputs() method call. The implementation also guarantees that between calls to the waitForNewInputs() method, the rest of the DE model is suspended. This is equivalent to saying that the section of code between calls to the waitForNewInputs() method is a critical section.
One immediate implication is that the result of the method calls that check the configuration of the model (e.g. hasToken() to check the receiver) will not be invalidated during execution in the critical section. It also means that this should not be viewed as a way to get parallel execution in DE. For that, consider the DDE domain. It is important to note that the implementation serializes the execution of threads, meaning that at

    public class ABRecognizer extends DEThreadActor {
        StringToken msg = new StringToken("Seen AB");

        // The run() method is invoked when the thread
        // is started.
        public void run() {
            while (true) {
                waitForNewInputs();
                if (inportA.hasToken(0)) {
                    IOPort[] nextinport = {inportB};
                    waitForNewInputs(nextinport);
                    outport.broadcast(msg);
                }
            }
        }
    }

    public class ABRecognizer extends DEActor {
        StringToken msg = new StringToken("Seen AB");

        // We need an explicit state variable in
        // this case.
        int state = 0;

        public void fire() {
            switch (state) {
            case 0:
                if (inportA.hasToken(0)) state = 1;
                break;
            case 1:
                if (inportB.hasToken(0)) {
                    state = 0;
                    outport.broadcast(msg);
                }
            }
        }
    }

FIGURE 15.10. Code listings for two styles of writing the ABRecognizer actor.
any given time there is only one thread running. When a threaded actor is running (i.e. executing inside its run() method), all other threaded actors and the director are suspended. It will keep running until a waitForNewInputs() statement is reached, where the flow of execution will be transferred back to the director. Note that the director thread executes all non-threaded actors. This serialization is needed because the DE domain has a notion of global time, which makes parallelism much more difficult to achieve. The serialization is accomplished by the use of a monitor in the DEThreadActor class. Basically, the fire() method of the DEThreadActor class suspends the calling thread (i.e. the director thread) until the threaded actor suspends itself (by calling waitForNewInputs()). One key point of this implementation is that the threaded actors appear just like an ordinary DE actor to the DE director. The DEThreadActor base class encapsulates the threaded execution and provides the regular interfaces to the DE director. Therefore the threaded description can be used whenever an ordinary actor can, which is everywhere. The code shown in figure 15.11 implements the run() method of a slightly more elaborate actor with the following behavior: Emit an output O as soon as two inputs A and B have occurred. Reset this behavior each time the input R occurs. Future work in this area may involve extending the infrastructure to support various concurrency constructs, such as preemption, parallel execution, etc. It might also be interesting to explore new concurrency semantics similar to the threaded DE, but without the 'forced' serialization.
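The monitor-based serialization described above can be sketched with plain Java wait()/notifyAll() (an illustrative reconstruction with invented names, not the actual DEThreadActor code): fire() blocks the calling director thread until the actor thread has parked itself in waitForNewInputs(), so at most one of the two threads is ever running.

```java
// Sketch of the director/actor handoff. All state changes happen inside the
// monitor, so exactly one of the two threads makes progress at any time.
class Handoff {
    private boolean actorWaiting = false;  // actor is parked in waitForNewInputs()
    private boolean actorTurn = false;     // actor may run; director must wait

    // Called by the director thread. Returns only after the actor thread has
    // run one step and parked itself again.
    synchronized void fire() throws InterruptedException {
        while (!actorWaiting) wait();   // wait until the actor has suspended
        actorTurn = true;
        notifyAll();                    // wake the actor thread
        while (actorTurn) wait();       // block until the actor suspends again
    }

    // Called by the actor thread between processing steps.
    synchronized void waitForNewInputs() throws InterruptedException {
        actorWaiting = true;
        actorTurn = false;
        notifyAll();                    // let the director resume
        while (!actorTurn) wait();      // suspend until the next fire()
        actorWaiting = false;
    }
}
```

Because fire() does not return until the actor calls waitForNewInputs() again, the section of actor code between two waitForNewInputs() calls executes with the director suspended, which is the critical-section guarantee described in the text.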
15.6 Composing DE with Other Domains

One of the major concepts in Ptolemy II is modeling heterogeneous systems through the use of hierarchical heterogeneity. Actors on the same level of hierarchy obey the same set of semantic rules. Inside some of these actors may be another domain with a different model of computation. This mechanism is supported through the use of opaque composite actors. An example is shown in figure 15.12. The outermost domain is DE and it contains seven actors, two of which are opaque composite actors. The opaque composite actors contain subsystems, which in this case are in the DE and CT domains.
15.6.1 DE inside Another Domain

The DE subsystem completes one iteration whenever the opaque composite actor is fired by the outer domain. One of the complications in mixing domains is in the synchronization of time. Denote the current time of the DE subsystem by t_inner and the current time of the outer domain by t_outer. An iteration of the DE subsystem is similar to an iteration of a top-level DE model, except that prior to the iteration tokens are transferred from the ports of the opaque composite actor into the ports of the contained DE subsystem, and after the end of the iteration, the director requests a refiring at the smallest time stamp in the event queue of the DE subsystem. The first of these is done in the transferInputs() method of the DE director. This method is extended from its default implementation in the Director class. The implementation in the DEDirector class advances the current time of the DE subsystem to the current time of the outer domain, then calls super.transferInputs(). This is done in order to correctly associate tokens seen at the input ports of the opaque composite actor, if any, with events at the current time of the outer domain, t_outer, and put these events into the global event queue. This mechanism is, in fact, how the DE subsystem synchronizes its current time, t_inner, with the current time of the outer domain, t_outer. (Recall that the DE director
advances time by looking at the smallest time stamp in the event queue of the DE subsystem). Specifically, before the advancement the current time of the DE subsystem, t_inner, is less than or equal to t_outer, and after the advancement t_inner is equal to t_outer. Requesting a refiring is done in the postfire() method of the DE director by calling the fireAt() method of the executive director. Its purpose is to ensure that events in the DE subsystem are processed on time with respect to the current time of the outer domain, t_outer. Note that if the DE subsystem is fired because the outer domain is processing a refire request, then there may not be any tokens in the input ports of the opaque composite actor at the beginning of the DE subsystem iteration. In that case, no new events with time stamps equal to t_outer will be put into the global event queue. Interestingly, in this case the time synchronization still works, because t_inner will be advanced to the smallest time stamp in the global event queue, which, in turn, has to be equal
public void run() {
    try {
        while (true) {
            // In initial state..
            waitForNewInputs();
            if (R.hasToken(0)) {
                // Resetting..
                continue;
            }
            if (A.hasToken(0)) {
                // Seen A..
                IOPort[] ports = {B, R};
                waitForNewInputs(ports);
                if (!R.hasToken(0)) {
                    // Seen A then B..
                    O.broadcast(new DoubleToken(1.0));
                    IOPort[] ports2 = {R};
                    waitForNewInputs(ports2);
                } else {
                    // Resetting
                    continue;
                }
            } else if (B.hasToken(0)) {
                // Seen B..
                IOPort[] ports = {A, R};
                waitForNewInputs(ports);
                if (!R.hasToken(0)) {
                    // Seen B then A..
                    O.broadcast(new DoubleToken(1.0));
                    IOPort[] ports2 = {R};
                    waitForNewInputs(ports2);
                } else {
                    // Resetting
                    continue;
                }
            }
        } // while (true)
    } catch (IllegalActionException e) {
        getManager().notifyListenersOfException(e);
    }
}
FIGURE 15.11. The run() method of the ABRO actor.
to t_outer, because we always request a refiring according to that time stamp.
15.6.2 Another Domain inside DE
Because the opaque composite actor is opaque, as far as the DE director is concerned it behaves exactly like a domain-polymorphic actor. Recall that domain-polymorphic actors are treated as functions with zero delay in computation time. To produce events in the future, a domain-polymorphic actor requests a refiring from the DE director and then produces the events when it is refired.
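The interplay between the event queue, time advancement, and fireAt() requests can be sketched with a toy event queue in plain Java. This is a simplified stand-in, not the real DEDirector: the class and method names are made up for illustration, and only the core invariant is shown, namely that processing always pops the smallest pending time stamp and advances current time to it.

```java
import java.util.PriorityQueue;

// Hypothetical sketch of a DE-style event queue. An actor requests a future
// firing via fireAt(); the director services requests in time-stamp order and
// advances current time to the smallest pending time stamp.
class SimpleDEQueue {
    static class Event implements Comparable<Event> {
        final double time;
        final String actor;
        Event(double t, String a) { time = t; actor = a; }
        public int compareTo(Event o) { return Double.compare(time, o.time); }
    }

    private final PriorityQueue<Event> queue = new PriorityQueue<>();
    private double currentTime = 0.0;

    // An actor calls this to request a refiring at a future time stamp.
    void fireAt(String actor, double time) { queue.add(new Event(time, actor)); }

    // Pop the next event; current time advances to its time stamp,
    // so time never moves backwards.
    String processNext() {
        Event e = queue.poll();
        currentTime = e.time;
        return e.actor;
    }

    double getCurrentTime() { return currentTime; }
    boolean hasEvents() { return !queue.isEmpty(); }
}
```

Even if refire requests arrive out of order, events are always processed in time-stamp order, which is what keeps an embedded DE subsystem's time synchronized with its environment.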
FIGURE 16.2. The model of figure 16.1 corrected with an instance of SampleDelay in the feedback loop.
16.2.2 Consistency of data rates
Consider the SDF model shown in figure 16.3. The model attempts to plot a sinewave and its downsampled counterpart. However, there is an error because the number of tokens on each channel of the input port of the plotter can never be made the same. The DownSample actor declares that it consumes 2 tokens using the tokenConsumptionRate parameter of its input port. Its output port similarly declares that it produces only one token, so there will only be half as many tokens being plotted from the DownSample actor as from the Sinewave. The fixed model is shown in figure 16.4, which uses two separate plotters. When the model is executed, the plotter on the bottom will fire twice as often as the plotter on the top, since it must consume twice as many tokens. Notice that the problem appears because one of the actors (in this case, the DownSample actor) produces or consumes more than one token on one of its ports. One easy way to ensure rate consistency is to use actors that only produce and consume one token at a time. This special case is known as homogeneous SDF. Note that actors like the SequencePlotter, which do not specify rate parameters, are assumed to be homogeneous. For more specific information about the rate parameters
(Executing the model of figure 16.3 produces an exception dialog reading: "No solution exists for the balance equations. Graph is not consistent under the SDF domain.")
FIGURE 16.3. An SDF model with inconsistent rates.
FIGURE 16.4. Figure 16.3 modified to have consistent rates.
and how they are used for scheduling, see section 16.3.1.
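Using the rates stated above (Sinewave produces one token per firing, DownSample consumes two and produces one, and each SequencePlotter consumes one per firing), the least balanced firing counts for the corrected model of figure 16.4 can be computed mechanically. A minimal sketch, in plain Java rather than Ptolemy II code (the class name is made up):

```java
// Sketch of the rate bookkeeping for the corrected model of figure 16.4.
// Connections and balance constraints, using the rates from the text:
//   Sinewave -> DownSample:   sinewave * 1 = downSample * 2
//   Sinewave -> one plotter:  sinewave * 1 = plotterFromSine * 1
//   DownSample -> other plotter: downSample * 1 = plotterFromDown * 1
class RateCheck {
    // Returns the least positive firing counts
    // {sinewave, downSample, plotterFromSine, plotterFromDown}.
    static int[] firings() {
        // Smallest sinewave count s such that all derived counts are
        // positive integers: downSample must fire s/2 times.
        for (int s = 1; ; s++) {
            if (s % 2 == 0) {
                return new int[] { s, s / 2, s, s / 2 };
            }
        }
    }
}
```

The result, two Sinewave firings and two firings of the plotter it feeds per one DownSample firing, matches the observation that one plotter fires twice as often as the other.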
16.2.3 How many iterations?
One final issue when using the SDF domain concerns the value of the iterations parameter of the SDF director. In homogeneous models, one token is usually produced for every iteration. However, when token rates other than one are used, more than one interesting output value may be created for each iteration. For example, consider figure 16.5, which contains a model that plots the fast Fourier transform of the input signal. The important thing to realize about this model is that the FFT actor declares that it consumes 256 tokens from its input port and produces 256 tokens on its output port, corresponding to an order-8 FFT. This means that only one iteration is necessary to produce all 256 values of the FFT. Contrast this with the model in figure 16.6, which plots the individual values of the signal. Here 256 iterations are necessary to see the entire input signal, since only one output value is plotted in each iteration.
16.3 Properties of the SDF domain
SDF is an untimed model of computation. All actors under SDF consume input tokens, perform their computation, and produce outputs in one atomic operation. If an SDF model is embedded within a timed model, then the SDF model will behave as a zero-delay actor. In addition, SDF is a statically scheduled domain. The firing of a composite actor corresponds to a
FIGURE 16.5. A model that plots the Fast Fourier Transform of a signal. Only one iteration must be executed to plot all 256 values of the FFT, since the FFT actor produces and consumes 256 tokens each firing.
FIGURE 16.6. A model that plots the values of a signal. 256 iterations must be executed to plot the entire signal.
single iteration of the contained model (see section 16.3.1). An SDF iteration consists of one execution of the precalculated SDF schedule. The schedule is calculated so that the number of tokens on each relation is the same at the end of an iteration as at the beginning. Thus, an infinite number of iterations can be executed without deadlock or infinite accumulation of tokens on any relation. Execution in SDF is extremely efficient because of the scheduled execution. However, in order to execute so efficiently, some extra information must be given to the scheduler. Most importantly, the data rates on each port must be declared prior to execution. The data rate represents the number of tokens produced or consumed on a port during every firing.¹ In addition, explicit data delays must be added to feedback loops to prevent deadlock. At the beginning of execution, and any time these data rates change, the schedule must be recomputed. If this happens often, then the advantages of scheduled execution can quickly be lost.
16.3.1 Scheduling
The first step in constructing the schedule is to solve the balance equations [48]. These equations determine the number of times each actor will fire during an iteration. For example, consider the model in figure 16.7. This model implies the following system of equations, where ProductionRate and ConsumptionRate are declared properties of each port, and Firings is a property of each actor that will be solved for:

Firings(A) × ProductionRate(A1) = Firings(B) × ConsumptionRate(B1)
Firings(A) × ProductionRate(A2) = Firings(C) × ConsumptionRate(C1)
Firings(C) × ProductionRate(C2) = Firings(B) × ConsumptionRate(B2)

These equations express the constraint that the number of tokens created on a relation during an iteration is equal to the number of tokens consumed. These equations usually have an infinite number of linearly dependent solutions, and the least positive integer solution for Firings is chosen as the firing vector, or repetitions vector. The second step in constructing an SDF schedule is dataflow analysis. Dataflow analysis orders the firing of actors, based on the relations between them. Since each relation represents the flow of data, the actor producing data must fire before the consuming actor. Converting these data dependencies to a sequential list of properly scheduled actors is equivalent to topologically sorting the SDF graph, if the graph is acyclic.² Dataflow graphs with cycles cause somewhat of a problem, since such
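The least positive solution can be found by brute-force search over candidate firing vectors. The sketch below is plain Java, not Ptolemy II scheduler code, and the rate values used in the test are illustrative assumptions, not taken from figure 16.7.

```java
// Sketch: solve the three balance equations of the A/B/C model by searching
// firing vectors (fA, fB, fC) in order of increasing total firings, so the
// first solution found is the least positive one (the repetitions vector).
class BalanceSolver {
    static int[] solve(int prodA1, int consB1, int prodA2, int consC1,
                       int prodC2, int consB2) {
        for (int total = 3; total < 1000; total++) {
            for (int fA = 1; fA < total; fA++) {
                for (int fB = 1; fB < total - fA; fB++) {
                    int fC = total - fA - fB;
                    if (fA * prodA1 == fB * consB1        // arc A1 -> B1
                            && fA * prodA2 == fC * consC1 // arc A2 -> C1
                            && fC * prodC2 == fB * consB2 // arc C2 -> B2
                    ) {
                        return new int[] { fA, fB, fC };
                    }
                }
            }
        }
        return null; // inconsistent: no solution within the search bound
    }
}
```

For example, with A producing 2 tokens on A1 and 1 on A2, all of B's and C's input rates equal to 1, and C producing 2 on C2, the least solution is fA = 1, fB = 2, fC = 1. Changing A1's production rate to 1 makes the system inconsistent, and the search finds no solution.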
FIGURE 16.7. An example SDF model.
1. This is known as multirate SDF, where arbitrary rates are allowed. Not to be confused with homogeneous SDF, where the data rates are fixed to be one.
graphs cannot be topologically sorted. In order to determine which actor of the loop to fire first, a data delay must be explicitly inserted somewhere in the cycle. This delay is represented by an initial token created by one of the output ports in the cycle during initialization of the model. The presence of the delay allows the scheduler to break the dependency cycle and determine which actor in the cycle to fire first. In Ptolemy II, the initial token (or tokens) can be sent from any port, as long as the port declares an initProduction property. However, because this is such a common operation in SDF, the Delay actor (see section 16.5) is provided and can be inserted in a feedback loop to break the cycle. Cyclic graphs not properly annotated with delays cannot be executed under SDF. An example of a cyclic graph properly annotated with a delay is shown in figure 16.8. In some cases, a non-zero solution to the balance equations does not exist. Such models are said to be inconsistent, and cannot be executed under SDF. Inconsistent graphs inevitably result in either deadlock or unbounded memory usage for any schedule. As such, inconsistent graphs are usually bugs in the design of a model. However, inconsistent graphs can still be executed using the PN domain, if the behavior is truly necessary. Examples of consistent and inconsistent graphs are shown in figure 16.9.
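Why a cycle without a delay deadlocks can be seen in a tiny simulation. The sketch below (plain Java, hypothetical names, all rates equal to one) models a two-actor loop X -> Y -> X and fires any actor whose input arc holds a token: without an initial token on the feedback arc, nothing can ever fire.

```java
// Sketch: a two-actor cycle X -> Y -> X with unit rates. An actor fires when
// its input arc holds at least one token; execution stops at deadlock.
class CycleDemo {
    // Returns the number of firings achieved in `steps` attempts.
    static int run(int initialTokensOnFeedback, int steps) {
        int xToY = 0;                       // tokens on arc X -> Y
        int yToX = initialTokensOnFeedback; // tokens on feedback arc Y -> X
        int firings = 0;
        for (int i = 0; i < steps; i++) {
            if (yToX >= 1) {              // fire X: consume from feedback arc
                yToX--; xToY++; firings++;
            } else if (xToY >= 1) {       // fire Y: consume from forward arc
                xToY--; yToX++; firings++;
            } else {
                break;                    // deadlock: no actor can fire
            }
        }
        return firings;
    }
}
```

With no initial token the model deadlocks immediately; with a single initial token, placed as a Delay actor would place it, the loop runs indefinitely.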
16.3.2 Hierarchical Scheduling
So far, we have assumed that the SDF graph is not hierarchical. The simplest way to schedule a hierarchical SDF model is to flatten the model to remove the hierarchy, and then schedule the model as usual. This technique allows the most efficient schedule to be constructed for a model, and avoids certain composability problems when creating hierarchical models. In Ptolemy II, a model created using a transparent composite actor to define the hierarchy is scheduled in exactly this way. Ptolemy II also supports a stronger version of hierarchy, in the form of opaque composite actors. In this case, the hierarchical actor appears from the outside to be no different than an atomic actor with no
FIGURE 16.8. A consistent cyclic graph, properly annotated with delays. A one-token delay is represented by a black circle. E3 is responsible for setting the tokenInitProduction parameter on its output port, and creating the two tokens during initialization. This graph can be executed using the schedule E1, E1, E2, E3, E3.
2. Note that the topological sort does not correspond to a unique total ordering over the actors. Furthermore, especially in multirate models, it may be possible to interleave the firings of actors that fire more than once. This can result in many possible schedules that represent different performance tradeoffs. We anticipate that future schedulers will be implemented to take advantage of these tradeoffs. For more information about these tradeoffs, see [47].
hierarchy. The SDF domain does not have any information about the contained model, other than the rate parameters that may be specified on the ports of the composite actor. The SDF domain is designed so that it automatically sets the rates of external ports when the schedule is computed. Most other domains are designed (conveniently enough) so that their models are compatible with the default rate properties assumed by the SDF domain. For a complete description of these defaults, see the description of the SDFScheduler class in section 16.4.2.
16.4 Software Architecture
The SDF kernel package implements the SDF model of computation. The structure of the classes in this package is shown in figure 16.10.
16.4.1 SDF Director
The SDFDirector class extends the StaticSchedulingDirector class. When an SDF director is created, it is automatically associated with an instance of the default scheduler class, SDFScheduler. This scheduler is intended to be relatively fast, but is not designed to optimize for any particular performance goal. The SDF director does not currently restrict the schedulers that may be used with it. For more information about SDF schedulers, see section 16.4.2. The director has a parameter, iterations, which determines a limit on the number of times the director wishes to be fired.¹ After the director has been fired the given number of times, it will always return false in its postfire() method, indicating that it does not wish to be fired again. The iterations parameter must contain a non-negative integer value. The default value is an IntToken with value 0, indicating that there is no preset limit on the number of times the director will fire. Users will likely
FIGURE 16.9. Two models, with each port annotated with the appropriate rate properties. The model on the top is consistent, and can be executed using the schedule A, A, C, B, B. The model on the bottom is inconsistent because tokens will accumulate between ports C2 and B2.
1. This parameter acts similarly to the Time-to-Stop parameter in Ptolemy Classic.
specify a non-zero value in the director of the top-level composite actor as the number of top-level iterations of the model. The SDF director also has a vectorizationFactor parameter that can be used to request vectorized execution of a model. This parameter suggests that the director modify the schedule so that instead of firing each actor only once, it is fired vectorizationFactor times using the vectorized iterate() method. The specified factor serves only as a suggestion, and the director is free to ignore it or to use a different factor. The vectorizationFactor parameter must contain a positive integer value. The default value is an IntToken with value one, indicating that no vectorization should be done. Note that vectorizing the execution of a model is not necessarily possible if the model contains feedback cycles. At the very least, it is likely that the data delay specified for any cycle must be increased (possibly changing the meaning of the model).
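The semantics of the iterations parameter can be captured in a few lines. The class below is a simplified stand-in, not the real SDFDirector; it shows only the postfire() contract: return false once the requested number of iterations has completed, with zero meaning "no preset limit".

```java
// Sketch of the iterations-limit behavior of an SDF director's postfire().
class IterationLimit {
    private final int iterations;  // 0 means unbounded
    private int completed = 0;

    IterationLimit(int iterations) { this.iterations = iterations; }

    // Called once at the end of each iteration; returning false tells the
    // manager that the director does not wish to be fired again.
    boolean postfire() {
        completed++;
        return iterations == 0 || completed < iterations;
    }
}
```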
FIGURE 16.10. The static structure of the SDF kernel classes.
The newReceiver() method in the SDF director is overloaded to return instances of the SDFReceiver class. This receiver contains optimized methods for reading and writing blocks of tokens. For more information about SDF receivers, see section 16.4.3.
16.4.2 SDF Scheduler
The basic SDFScheduler derives directly from the Scheduler class. This scheduler provides unlooped, sequential schedules suitable for use on a single processor. No attempt is made to optimize the schedule by minimizing data buffer sizes, minimizing the size of the schedule, or detecting parallelism to allow execution on multiple processors. We anticipate that more elaborate schedulers capable of these optimizations will be added in the future. The scheduling algorithm is based on the simple multirate algorithm in [48]. Currently, only single-processor schedules are supported. The multirate scheduling algorithm relies on the actors in the system to declare the data rates of each port. The data rates of ports are specified using three parameters on each port, named tokenConsumptionRate, tokenProductionRate, and tokenInitProduction. The production parameters are valid only for output ports, while the consumption parameter is valid only for input ports. If a parameter exists that is not valid for a given port, then the value of the parameter must be zero, or the scheduler will throw an exception. If a valid parameter is not specified when the scheduler runs, then default values of the parameters will be assumed; however, the parameters are not then created.¹ After scheduling, the SDF scheduler will set the rate parameters on any external ports of the composite actor. This allows a containing actor, which may represent an SDF model, to properly schedule the contained model, as long as the contained model is scheduled first. To ensure this, the SDF director forces the creation of the schedule after initializing all the actors in the model. This mechanism is illustrated in the sequence diagram in figure 16.11.
Disconnected graphs. SDF graphs should generally be connected. If an SDF graph is not connected, then there is some concurrency between the disconnected parts that is not captured by the SDF rate parameters.
In such cases, another model of computation (such as process networks) should be used to explicitly specify the concurrency. As such, the current SDF scheduler disallows disconnected graphs, and will throw an exception if you attempt to schedule such a graph. However, sometimes it is useful to avoid introducing another model of computation, so it is possible that a future scheduler will allow disconnected graphs with a default notion of concurrency.
Multiports. Notice that it is impossible to set a rate parameter on individual channels of a port. This is intentional: all the channels of a port are assumed to have the same rate. For example, when the AddSubtract actor fires under SDF, it will consume exactly one token from each channel of its plus input port, consume one token from each channel of its minus port, and produce one token on the single channel of its output port. Notice that although the domain-polymorphic adder is written to be more general than this (it will consume up to one token on each channel of the input port), the SDF scheduler will ensure that there is always at least one token on each channel of the input port before the actor fires.
Dangling ports. All channels of a port are required to be connected to a remote port under the SDF domain. A regular port that is not connected will always result in an exception being thrown by the
1. The assumed values correspond to a homogeneous actor with no data delay. Input ports are assumed to have a consumption rate of one, output ports are assumed to have a production rate of one, and no tokens are produced during initialization.
scheduler. However, the SDF scheduler detects multiports that are not connected to anything (and thus have zero width). Such ports are interpreted to have no channels, and will be ignored by the SDF scheduler.
16.4.3 SDF ports and receivers
Unlike most domains, multirate SDF systems tend to produce and consume large blocks of tokens during each firing. Since there can be significant overhead in data transport for these large blocks, SDF receivers are optimized for sending and receiving a block of tokens en masse. The SDFReceiver class implements the Receiver interface. Instead of using the FIFOQueue class to store data, which is based on a linked list structure, SDF receivers use the ArrayFIFOQueue class, which is based on a circular buffer. This choice is much more appropriate for SDF, since the size of the buffer is bounded, and can be determined statically.¹
FIGURE 16.11. The sequence of method calls during scheduling of a hierarchical model.
1. Although the buffer sizes can be statically determined, the current mechanism for creating receivers does not easily support it. The SDF domain currently relies on the buffer expanding algorithm that the ArrayFIFOQueue uses to implement circular buffers of unbounded size. Although there is some overhead during the first iteration, the overhead is minimal during subsequent iterations (since the buffer is guaranteed never to grow larger).
The SDFIOPort class extends the TypedIOPort class. It exists mainly for convenience when creating actors in the SDF domain. It provides convenience methods for setting and accessing the rate parameters used by the SDF scheduler.
16.4.4 ArrayFIFOQueue
The ArrayFIFOQueue class implements a first-in, first-out (FIFO) queue by means of a circular array buffer.¹ Functionally it is very similar to the FIFOQueue class, although with different enqueue and dequeue performance. It provides a token history and an adjustable, possibly unspecified, bound on the number of tokens it contains. If the bound on the size is specified, then the array is exactly the size of the bound; in other words, the queue is full when the array becomes full. However, if the bound is unspecified, then the circular buffer is given a small starting size and allowed to grow. Whenever the circular buffer fills up, it is copied into a new buffer that is twice the original size.
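The core of this data structure, a circular buffer that doubles when full, can be sketched in a few lines. This is a simplified illustration in the spirit of ArrayFIFOQueue, not the actual class (no history, no capacity bound):

```java
// Sketch of a circular-buffer FIFO: a small backing array that doubles
// whenever it fills, preserving FIFO order across the copy.
class CircularFifo {
    private Object[] buf = new Object[4];  // small starting size
    private int head = 0;                  // index of the oldest element
    private int count = 0;                 // number of elements held

    void put(Object o) {
        if (count == buf.length) {
            // Full: copy into a new buffer twice the size, unwrapping the
            // circular layout so the oldest element lands at index 0.
            Object[] bigger = new Object[buf.length * 2];
            for (int i = 0; i < count; i++) {
                bigger[i] = buf[(head + i) % buf.length];
            }
            buf = bigger;
            head = 0;
        }
        buf[(head + count) % buf.length] = o;
        count++;
    }

    Object take() {
        Object o = buf[head];
        head = (head + 1) % buf.length;
        count--;
        return o;
    }

    int size() { return count; }
}
```

Because the buffer only ever grows, the doubling cost is paid during the first iteration of an SDF model; subsequent iterations reuse the already-sized array, which is the amortization the footnote below refers to.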
16.5 Actors
Most domain-polymorphic actors can be used under the SDF domain. However, actors that depend on a notion of time may not work as expected. For example, in the case of a TimedPlotter actor, all data will be plotted at time zero when used in SDF. In general, domain-polymorphic actors (such as AddSubtract) are written to consume at most one token from each input port and produce exactly one token on each output port during each firing. Under SDF, such an actor will be assumed to have a rate of one on each port, and the actor will consume exactly one token from each input port during each firing. There is one actor that is normally only used in SDF: the Delay actor. The Delay actor is provided to make it simple to build models with feedback, by automatically handling the tokenInitProduction parameter and providing a way to specify the tokens that are created.
Delay
Ports: input (Token), output (Token).
Parameters: initialOutputs (ArrayToken).
During initialization, create a token on the output for each token in the initialOutputs array. During each firing, consume one token on the input and produce the same token on the output.
1. Adding an array of objects to an ArrayFIFOQueue is implemented using the java.lang.System.arraycopy method. This method is capable of safely removing certain checks required by the Java language. On most Java implementations, this is significantly faster than a hand-coded loop for large arrays. However, depending on the Java implementation it could actually be slower for small arrays. The cost is usually negligible, but can be avoided when the size of the array is small and known when the actor is written.
CSP Domain
Author: Neil Smyth
Contributors: John S. Davis II, Bilung Lee
17.1 Introduction
The communicating sequential processes (CSP) domain in Ptolemy II models a system as a network of sequential processes that communicate by passing messages synchronously through channels. If a process is ready to send a message, it blocks until the receiving process is ready to accept it. Similarly, if a process is ready to accept a message, it blocks until the sending process is ready to send it. This model of computation is non-deterministic, as a process can be blocked waiting to send or receive on any number of channels. It is also highly concurrent. The CSP domain is based on the model of computation (MoC) first proposed by Hoare [37][38] in 1978. In this MoC, a system is modeled as a network of processes that communicate solely by passing messages through unidirectional channels. The transfer of messages between processes is via rendezvous, which means both the sending and receiving of messages from a channel are blocking: i.e., the sending or receiving process stalls until the message is transferred. Some of the notation used here is borrowed from Gregory Andrews' book on concurrent programming [4], which refers to rendezvous-based message passing as synchronous message passing. Applications for the CSP domain include resource management and high-level system modeling early in the design cycle. Resource management is often required when modeling embedded systems, and to further support this, a notion of time has been added to the model of computation used in the domain. This differentiates our CSP model from those more commonly encountered, which typically do not have any notion of time, although several versions of timed CSP have been proposed [35]. It might thus be more accurate to refer to the domain using our model of computation as the "Timed CSP" domain, but since the domain can be used with and without time, it is simply referred to as the CSP domain.
17.2 CSP Communication Semantics
At the core of CSP communication semantics are two fundamental ideas: first, the notion of atomic communication, and second, the notion of nondeterministic choice. It is worth mentioning a related model of computation known as the calculus of communicating systems (CCS), which was independently developed by Robin Milner in 1980 [59]. The communication semantics of CSP are identical to those of CCS.
17.2.1 Atomic Communication: Rendezvous
Atomic communication is carried out via rendezvous and implies that the sending and receiving of a message occur simultaneously. During rendezvous both the sending and receiving processes block until the other side is ready to communicate; the acts of sending and receiving are indistinguishable, since one cannot happen without the other. A real-world analogy to rendezvous can be found in telephone communications (without answering machines). Both the caller and callee must be simultaneously present for a phone conversation to occur. Figure 17.1 shows the case where one process is ready to send before the other process is ready to receive. The communication of information in this way can be viewed as a distributed assignment statement. The sending process places some data in the message that it wants to send. The receiving process assigns the data in the message to a local variable. Of course, the receiving process may decide to ignore the contents of the message and only concern itself with the fact that a message arrived.
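Rendezvous semantics are available directly in the Java standard library: java.util.concurrent.SynchronousQueue has exactly this transfer behavior, since put() blocks until another thread is at take(), so the send and the receive complete simultaneously. A minimal sketch (not Ptolemy II code; the class name is made up):

```java
import java.util.concurrent.SynchronousQueue;

// Sketch of CSP-style rendezvous between two threads using a
// SynchronousQueue as the channel.
class RendezvousDemo {
    static int runOnce() {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();
        Thread sender = new Thread(() -> {
            try {
                channel.put(42);  // blocks until the receiver arrives
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        try {
            sender.start();
            int received = channel.take();  // blocks until the sender arrives
            sender.join();
            return received;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Whichever side arrives first simply blocks, which is precisely the behavior illustrated in figure 17.1.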
17.2.2 Choice: Nondeterministic Rendezvous
Nondeterministic choice provides processes with the ability to randomly select between a set of possible atomic communications. We refer to this ability as nondeterministic rendezvous, and herein lies much of the expressiveness of the CSP model of computation. The CSP domain implements nondeterministic rendezvous via guarded communication statements. A guarded communication statement has the form
FIGURE 17.1. Illustrating how processes block waiting to rendezvous.
guard; communication => statements;
The guard is only allowed to reference local variables, and its evaluation cannot change the state of the process; for example, it is not allowed to assign to variables, only reference them. The communication must be a simple send or receive, i.e., another conditional communication statement cannot be placed here. Statements can contain any arbitrary sequence of statements, including more conditional communications. If the guard is false, then the communication is not attempted and the statements are not executed. If the guard is true, then the communication is attempted, and if it succeeds, the following statements are executed. The guard may be omitted, in which case it is assumed to be true. There are two conditional communication constructs built upon the guarded communication statements: CIF and CDO. These are analogous to the if and while statements in most programming languages. They should be read as "conditional if" and "conditional do". Note that each guarded communication statement represents one branch of the CIF or CDO. The communication statement in each branch can be either a send or a receive, and they can be mixed freely.
CIF: The form of a CIF is
CIF {
    G1; C1 => S1;
[]
    G2; C2 => S2;
[]
    ...
}
For each branch in the CIF, the guard (G1, G2, ...) is evaluated. If it is true (or absent, which implies true), then the associated communication statement is enabled. If one or more branches are enabled, then the entire construct blocks until one of the communications succeeds. If more than one branch is enabled, the choice of which enabled branch succeeds with its communication is made nondeterministically. Once the successful communication is carried out, the associated statements are executed and the process continues. If all of the guards are false, then the process continues executing statements after the end of the CIF. It is important to note that, although this construct is analogous to the common if programming construct, its behavior is very different. In particular, all guards of the branches are evaluated concurrently, and the choice of which one succeeds does not depend on its position in the construct. The notation "[]" is used to hint at the parallelism in the evaluation of the guards. In a common if, the branches are evaluated sequentially and the first branch whose condition evaluates to true is executed. The CIF construct also depends on the semantics of the communication between processes, and can thus stall the progress of the thread if none of the enabled branches is able to rendezvous.
CDO: The form of the CDO is
CDO {
    G1; C1 => S1;
[]
    G2; C2 => S2;
[]
    ...
}
The behavior of the CDO is similar to the CIF in that for each branch the guard is evaluated and the choice of which enabled communication to make is taken nondeterministically. However, the CDO repeats the process of evaluating and executing the branches until all the guards return false. When this happens the process continues executing statements after the CDO construct. An example use of a CDO is in a buffer process which can both accept and send messages, but has to be ready to do both at any stage. The code for this would look similar to that in figure 17.2. Note that in this case both guards can never be simultaneously false so this process will execute the CDO forever.
17.2.3 Deadlock

A deadlock situation is one in which none of the processes can make progress: they are all either blocked trying to rendezvous or they are delayed (see the next section). Thus, two types of deadlock can be distinguished:
• real deadlock - all active processes are blocked trying to communicate.
• time deadlock - all active processes are either blocked trying to communicate or are delayed, and at least one process is delayed.
    CDO {
        (room in buffer?); receive(input, beginningOfBuffer) =>
            update pointer to beginning of buffer;
    []
        (messages in buffer?); send(output, endOfBuffer) =>
            update pointer to end of buffer;
    }

FIGURE 17.2. Example of how a CDO might be used in a buffer.

17.2.4 Time

In the CSP domain, time is centralized: all processes in a model share the same time, referred to as the current model time. Each process can only choose to delay itself for some period relative to the current model time, or wait for time deadlock to occur at the current model time. In both cases, a process is said to be delayed.

When a process delays itself for some length of time from the current model time, it is suspended until time has sufficiently advanced, at which stage it wakes up and continues. If the process delays itself for zero time, this has no effect and the process continues executing.

A process can also choose to delay its execution until the next occasion a time deadlock is reached. The process resumes at the same model time at which it delayed, which is useful because a model can have several sequences of actions at the same model time. The next occasion time deadlock is reached, any processes delayed in this manner will continue, and time will not be advanced. An example of using time in this manner can be found in section 17.3.2.

Time may be advanced when all the processes are delayed or are blocked trying to rendezvous, and at least one process is delayed. If one or more processes are delaying until a time deadlock occurs, these processes are woken up and time is not advanced. Otherwise, the current model time is advanced just enough to wake up at least one process. Note that there is a semantic difference between a process delaying for zero time, which has no effect, and a process delaying until the next occasion a time deadlock is reached. Note also that time, as perceived by a single process, cannot change during its normal execution; only at rendezvous points or when the process delays can time change. A process can be aware of the centralized time, but it cannot influence the current model time except by delaying itself. The choice of this model of time was in part influenced by Pamela [27], a run-time library that is used to model parallel programs.
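The time-advance rule described above can be sketched in plain Java. This is an illustrative model only, not the Ptolemy implementation; the class and method names are invented.

```java
import java.util.PriorityQueue;

public class TimeKeeper {
    // Sketch of the CSP director's rule: when time may be advanced, move
    // model time just far enough to wake the earliest delayed process,
    // and also wake any other process due at that same time.
    private double currentTime = 0.0;
    private final PriorityQueue<Double> wakeUpTimes = new PriorityQueue<>();

    public void delay(double delta) {
        if (delta > 0.0) {
            wakeUpTimes.add(currentTime + delta); // delay(0.0) is a no-op
        }
    }

    // Called when all processes are blocked or delayed and at least one
    // process is waiting for time to advance.
    public double advanceTime() {
        double next = wakeUpTimes.poll();   // earliest wake-up time
        currentTime = next;
        while (!wakeUpTimes.isEmpty() && wakeUpTimes.peek() == next) {
            wakeUpTimes.poll();             // wake all processes due now
        }
        return currentTime;
    }

    public double getCurrentTime() {
        return currentTime;
    }
}
```

Delaying for 2 and 5 units from time 0 makes time advance first to 2, then to 5.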
17.2.5 Differences from Original CSP Model as Proposed by Hoare

The model of computation used by the CSP domain differs from the original CSP [37] model in two ways. First, a notion of time has been added. The original proposal had no notion of time, although there have been several proposals for timed CSP [35]. Second, as mentioned in section 17.2.2, it is possible to use both send and receive in guarded communication statements. The original model only allowed receives to appear in these statements, though Hoare subsequently extended their scope to allow both communication primitives [38].

One final thing to note is that in much of the CSP literature, send is denoted using a "!", pronounced "bang", and receive is denoted using a "?", pronounced "query". This syntax was used in the original CSP paper by Hoare, and the languages Occam [14] and Lotos [21] both follow it. In the CSP domain in Ptolemy II we use send and get, a choice influenced by the desire to maintain uniformity of syntax across domains in Ptolemy II that use message passing. This supports the heterogeneity principle in Ptolemy II, which enables the construction and interoperability of executable models that are built under a variety of models of computation. Similarly, the notation used in the CSP domain for conditional communication constructs differs from that commonly found in the CSP literature.
17.3 Example CSP Applications

Several example applications have been developed which serve to illustrate the modeling capabilities of the CSP model of computation in Ptolemy II. Each demonstration incorporates several features of CSP and the general Ptolemy II framework. Below, four demonstrations have been selected that each emphasize particular semantic capabilities. The applications are described here, but not the code. See the directory $PTII/ptolemy/domains/csp/demo for the code.

The first demonstration, dining philosophers, serves as a natural example of core CSP communication semantics. It models nondeterministic resource contention, e.g., five philosophers randomly accessing chopstick resources. Nondeterministic rendezvous serves as a natural modeling tool for this example. The second example, hardware bus contention, models deterministic resource contention in the context of time. As will be shown, the determinacy of this demonstration constrains the natural nondeterminacy of the CSP semantics and results in difficulties. Fortunately these difficulties can be smoothly circumvented by the timing model that has been integrated into the
CSP domain. The third demonstration, sieve of Eratosthenes, serves to demonstrate the mutability that is possible in CSP models; in this demonstration, the topology of the model changes during execution. The final demonstration, an M/M/1 queue, features the pause/resume mechanism of Ptolemy II that can be used to control the progression of a model's execution in the CSP domain.
17.3.1 Dining Philosophers

Nondeterministic Resource Contention. This implementation of the dining philosophers problem illustrates both time and conditional communication in the CSP domain. Five philosophers are seated at a table with a large bowl of food in the middle. Between each pair of philosophers is one chopstick, and to eat, a philosopher needs both the chopsticks beside him. Each philosopher spends his life in the following cycle: he thinks for a while, gets hungry, picks up one of the chopsticks beside him, then the other, eats for a while and puts the chopsticks down on the table again. If a philosopher tries to grab a chopstick that is already being used by another philosopher, then the philosopher waits until that chopstick becomes available. This implies that no neighboring philosophers can eat at the same time, and at most two philosophers can eat at a time.

The dining philosophers problem was first dreamt up by Edsger W. Dijkstra in 1965. It is a classic concurrent programming problem that illustrates two basic properties of concurrent programming:
Liveness. How can we design the program to avoid deadlock, where none of the philosophers can make progress because each is waiting for someone else to do something?
Fairness. How can we design the program to avoid starvation, where one of the philosophers could make progress but does not because others always go first?

This implementation uses an algorithm that lets each philosopher randomly choose which chopstick to pick up first (via a CDO), and all philosophers eat and think at the same rates. Each philosopher and each chopstick is represented by a separate process. Each chopstick has to be ready to be used by either philosopher beside it at any time, hence the use of a CDO. After it is grabbed, it blocks waiting for a message from the philosopher that is using it. After a philosopher grabs both the chopsticks next to him, he eats for a random time.
This is represented by calling delay() with the random interval to eat for. The same approach is used when a philosopher is thinking. Note that because messages are passed by rendezvous, the blocking of a philosopher when it cannot obtain a chopstick is obtained for free. This algorithm is fair: any time a chopstick is not being used and both philosophers try to use it, they both have an equal chance of succeeding. However, this algorithm does not guarantee the absence of deadlock, and if the model runs long enough, deadlock will eventually occur. The probability that deadlock occurs sooner increases as the thinking times are decreased relative to the eating times.

FIGURE 17.3. Illustration of the dining philosophers problem.
17.3.2 Hardware Bus Contention

Deterministic Resource Contention. This demonstration consists of a controller, N processors and a memory block. At randomly selected points in time, each processor requests permission from the controller to access the memory block. The processors each have priorities associated with them, and in cases where there is a simultaneous memory access request, the controller grants permission to the processor with the highest priority. Due to the atomic nature of rendezvous, it is impossible for the controller to check priorities of incoming requests at the same time that requests are occurring. To overcome this difficulty, an alarm is employed. The alarm is started by the controller immediately following the first request for memory access at a given instant in time. It wakes up when a time deadlock occurs, indicating to the controller that no more memory requests will occur at the given point in time. Hence, the alarm uses CSP's notion of delay blocking to make deterministic an inherently nondeterministic activity.
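Once the alarm has fired and all simultaneous requests are known, the controller's arbitration rule is simple priority selection. A minimal sketch of that rule (illustrative only; the class and method names are invented, not from the Ptolemy demo):

```java
public class BusArbiter {
    // Among the requests gathered at one instant of model time, grant
    // access to the processor with the highest priority. Returns the
    // index of the winning processor, or -1 if there are no requests.
    public static int grant(int[] requestingPriorities) {
        int winner = -1;
        int best = Integer.MIN_VALUE;
        for (int i = 0; i < requestingPriorities.length; i++) {
            if (requestingPriorities[i] > best) {
                best = requestingPriorities[i];
                winner = i;
            }
        }
        return winner;
    }
}
```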
17.3.3 Sieve of Eratosthenes

Dynamic Topology. This example implements the sieve of Eratosthenes, an algorithm for generating a list of prime numbers, illustrated in figure 17.5. The model initially consists of a source generating integers and one sieve filtering out all multiples of two. When the end sieve sees a number that it cannot filter, it creates a new sieve to filter out all multiples of that number. Thus after the sieve filtering out multiples of two sees the number three, it creates a new sieve that filters out multiples of three. This then continues with the three sieve eventually creating a sieve to filter out all multiples of five, and so on. Thus after a while there will be a chain of sieves, each filtering out a different prime number. If any number passes through all the sieves and reaches the end with no sieve waiting, it must be another prime, and so a new sieve is created for it. This demo is an example of how changes to the topology can be made in the CSP domain. Each topology change here involves creating a new CSPSieve actor and connecting it to the end of the chain of sieves.
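The chain-growing behavior can be sketched sequentially in plain Java. This is not the concurrent CSP demo (which runs one process per sieve); it only models the topology rule: each list entry stands for one sieve actor, and a number that passes every sieve appends a new one.

```java
import java.util.ArrayList;
import java.util.List;

public class SieveChain {
    // Returns the first `count` primes, growing the sieve chain exactly
    // as the CSP demo grows its chain of CSPSieve actors.
    public static List<Integer> primes(int count) {
        List<Integer> sieves = new ArrayList<>(); // one entry per "actor"
        List<Integer> found = new ArrayList<>();
        int candidate = 2; // the source generates 2, 3, 4, ...
        while (found.size() < count) {
            boolean filtered = false;
            for (int p : sieves) {
                if (candidate % p == 0) { // some sieve filters it out
                    filtered = true;
                    break;
                }
            }
            if (!filtered) {
                sieves.add(candidate); // topology change: append a sieve
                found.add(candidate);
            }
            candidate++;
        }
        return found;
    }
}
```

Asking for six primes yields 2, 3, 5, 7, 11, 13, matching figure 17.5.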
FIGURE 17.4. Processors contending for memory access.

17.3.4 An M/M/1 Queue

Pause/Resume. The example in figure 17.6 illustrates a simple M/M/1 queue. It has three actors: one representing the arrival of customers, one for the queue holding customers that have arrived and have
not yet been served, and the third representing the server. Both the inter-arrival times of customers and the service times at the server are exponentially distributed, which of course is what makes this an M/M/1 queue. This demo makes use of basic rendezvous, conditional rendezvous and time. By varying the rates for customer arrivals and service times, and varying the length of the buffer, you can see various trade-offs. For example, if the buffer length is too short, customers may arrive that cannot be stored and so are missed. Similarly, if the service rate is faster than the customer arrival rate, then the server could spend a lot of time idle.

Another example demonstrates how pausing and resumption works. The setup is exactly the same as in the M/M/1 demo, except that the thread executing the model calls pause() on the director as soon as the model starts executing. It then waits two seconds, an arbitrary choice, and then calls resume(). The purpose of this demo is to show that pausing and resuming a model does not affect the model results, only its rate of progress. The ability to pause and resume a model is primarily intended for the user interface.

FIGURE 17.5. Illustration of the sieve of Eratosthenes for obtaining the first six primes.
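The buffer-length trade-off described above can be sketched deterministically in plain Java (fixed arrival times instead of exponential draws, and invented names; this is bookkeeping only, not the Ptolemy actors):

```java
import java.util.ArrayDeque;

public class MM1Sketch {
    // Counts customers lost because the buffer was full. The server
    // takes `service` time units per customer; `capacity` is the
    // buffer length; `arrivals` must be in increasing time order.
    public static int countMissed(double[] arrivals, double service,
            int capacity) {
        ArrayDeque<Double> waiting = new ArrayDeque<>();
        double serverFreeAt = 0.0;
        int missed = 0;
        for (double t : arrivals) {
            // Let the server pull queued customers it finished before t.
            while (!waiting.isEmpty() && serverFreeAt <= t) {
                waiting.poll();
                serverFreeAt += service;
            }
            if (serverFreeAt <= t) {
                serverFreeAt = t + service; // server idle: serve at once
            } else if (waiting.size() < capacity) {
                waiting.add(t);             // queue the customer
            } else {
                missed++;                   // buffer full: customer lost
            }
        }
        return missed;
    }
}
```

With four quick arrivals, a slow server and a one-place buffer, two customers are missed; a longer buffer catches them all.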
17.4 Building CSP Applications

For a model to have CSP semantics, it must have a CSPDirector controlling it. This ensures that the receivers in the ports are CSPReceivers, so that all communication of messages between processes is via rendezvous. Note that each actor in the CompositeActor under the control of the CSPDirector represents a separate process in the model.
17.4.1 Rendezvous

Since the ports contain CSPReceivers, the basic communication statements send() and get() will have rendezvous semantics. Thus the fact that a rendezvous is occurring on every communication is transparent to the actor code.
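The rendezvous behavior of send() and get() can be approximated in plain Java with a SynchronousQueue, whose put() blocks until a matching take(): neither side proceeds until both have arrived. This is only an illustrative sketch; Ptolemy's CSPReceiver implements the rendezvous itself.

```java
import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    // Sends `values` over a zero-capacity channel from a second thread
    // and receives them in the caller; returns the received sequence.
    public static int[] exchange(int[] values) throws InterruptedException {
        SynchronousQueue<Integer> channel = new SynchronousQueue<>();
        Thread sender = new Thread(() -> {
            try {
                for (int v : values) {
                    channel.put(v); // blocks until the receiver take()s
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();
        int[] received = new int[values.length];
        for (int i = 0; i < values.length; i++) {
            received[i] = channel.take(); // blocks until the sender puts
        }
        sender.join();
        return received;
    }
}
```

Because every transfer is a rendezvous, messages arrive in exactly the order sent, with no buffering in between.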
FIGURE 17.6. Actors involved in M/M/1 demo.

17.4.2 Conditional Communication Constructs

In order to use the conditional communication constructs, an actor must be derived from CSPActor. There are three steps involved:
1) Create a ConditionalReceive or ConditionalSend branch for each guarded communication statement, depending on the communication. Pass each branch a unique integer identifier, starting from zero, when creating it. The identifiers only need to be unique within the scope of that CDO or CIF.
2) Pass the branches to the chooseBranch() method in CSPActor. This method evaluates the guards, decides which branch gets to rendezvous, performs the rendezvous, and returns the identification number of the branch that succeeded. If all of the guards were false, -1 is returned.
3) Execute the statements for the guarded communication that succeeded.

A sample template for executing a CDO is shown in figure 17.7. The code for the buffer described in figure 17.2 is shown in figure 17.8. In creating the ConditionalSend and ConditionalReceive branches, the first argument represents the guard. The second and third arguments represent the port and channel to send or receive the message on. The fourth argument is the identifier assigned to the branch. The choice of placing the guard in the constructor was made to keep the syntax of guarded communication statements to a minimum, and to have the branch classes resemble the guarded communication statements they represent as closely as possible. This can give rise to the case where the Token specified in a ConditionalSend branch may not yet exist, but this has no effect because if the guard is false, the token in a ConditionalSend is never referenced. The other option considered was to wrap the creation of each branch as follows:

    if (guard) {
        // create branch and place in branches array
    } else {
        // branches array entry for this branch is null
    }

However, this leads to longer actor code, and what is happening is not as syntactically obvious. The code for using a CIF is similar to that in figure 17.7, except that the surrounding while loop is omitted and the case when the identifier returned is -1 does nothing. At some stage the steps involved in using a CIF or a CDO may be automated using a pre-parser, but for now the user must follow the approach described above. It is worth pointing out that if most channels in a model are buffered, it may be worthwhile to consider implementing the model in the PN domain, which implicitly has an unbounded buffer on every channel. Also, if modeling time is the principal concern, the model builder should consider using the DE domain.
17.4.3 Time

If a process wishes to use time, the actor representing it must derive from CSPActor. As explained in section 17.2.4, each process in the CSP domain is able to delay itself, either for some period from the current model time or until the next occasion time deadlock is reached at the current model time.
    boolean continueCDO = true;
    while (continueCDO) {
        // step 1:
        ConditionalBranch[] branches =
                new ConditionalBranch[#branchesRequired];
        // Create a ConditionalReceive or a ConditionalSend for each
        // branch, e.g.
        // branches[0] = new ConditionalReceive((guard), input, 0, 0);

        // step 2:
        int result = chooseBranch(branches);

        // step 3:
        if (result == 0) {
            // execute statements associated with first branch
        } else if (result == 1) {
            // execute statements associated with second branch
        } else if ... { // continue for each branch ID
        } else if (result == -1) {
            // all guards were false so exit CDO.
            continueCDO = false;
        } else {
            // error
        }
    }

FIGURE 17.7. Template for executing a CDO construct.
The two methods to call are delay() and waitForDeadlock(). Recall that if a process delays itself for zero time from the current time, the process will continue immediately. Thus delay(0.0) is not equivalent to waitForDeadlock(). If no processes are delayed, it is also possible to set the model time by calling the method setCurrentTime() on the director. However, this method can only be called when no processes are delayed, because the state of the model may be rendered meaningless if the model time is advanced beyond the wake-up time of the earliest delayed process. This method is present primarily for composing CSP with other domains. As mentioned in section 17.2.4, as far as each process is concerned, time can only increase while it is blocked waiting to rendezvous or when delaying. A process can be aware of the current model time, but it should only ever affect the model time by delaying its execution, thus forcing time to advance. The method setCurrentTime() should never be called from a process. By default every model in the CSP domain is timed. To use CSP without a notion of time, do not use the delay() method. The infrastructure supporting time does not affect the model execution if the delay() method is not used.
    boolean guard = false;
    boolean continueCDO = true;
    ConditionalBranch[] branches = new ConditionalBranch[2];
    while (continueCDO) {
        // step 1
        guard = (_size < depth);
        branches[0] = new ConditionalReceive(guard, input, 0, 0);
        guard = (_size > 0);
        branches[1] = new ConditionalSend(guard, output, 0, 1,
                _buffer[_readFrom]);

        // step 2
        int successfulBranch = chooseBranch(branches);

        // step 3
        if (successfulBranch == 0) {
            _size++;
            _buffer[_writeTo] = branches[0].getToken();
            _writeTo = ++_writeTo % depth;
        } else if (successfulBranch == 1) {
            _size--;
            _readFrom = ++_readFrom % depth;
        } else if (successfulBranch == -1) {
            // All guards false so exit CDO.
            // Note this cannot happen in this case.
            continueCDO = false;
        } else {
            throw new TerminateProcessException(getName() + ": " +
                    "invalid branch id returned during execution of CDO.");
        }
    }

FIGURE 17.8. Code used to implement the buffer process described in figure 17.2.

17.5 The CSP Software Architecture

17.5.1 Class Structure

In a CSP model, the director is an instance of CSPDirector. Since the model is controlled by a CSPDirector, all the receivers in the ports are CSPReceivers. The combination of the CSPDirector and
CSPReceivers in the ports gives a model CSP semantics. The CSP domain associates each channel with exactly one receiver, located at the receiving end of the channel. Thus any process that sends to or receives from any channel will rendezvous at a CSPReceiver. Figure 17.9 shows the static structure diagram of the five main classes in the CSP kernel, and a few of their associations. These are the classes that provide all the infrastructure needed for a CSP model.

CSPDirector: gives a model CSP semantics. It takes care of starting all the processes and controls/responds to both real and time deadlocks. It also maintains and advances the model time when necessary.

CSPReceiver: ensures that communication of messages between processes is via rendezvous.

CSPActor: adds the notion of time and the ability to perform conditional communication.

ConditionalReceive, ConditionalSend: used to construct the guarded communication statements necessary for the conditional communication constructs.
17.5.2 Starting the model

The director creates a thread for each actor under its control in its initialize() method. It also invokes the initialize() method on each actor at this time. The director starts the threads in its prefire() method, and detects and responds to deadlocks in its fire() method. The thread for each actor is an instance of ProcessThread, which invokes the prefire(), fire() and postfire() methods of the actor until it finishes or is terminated. It then invokes the wrapup() method and the thread dies. Figure 17.11 shows the code executed by the ProcessThread class. Note that it makes no assumption about the actor it is executing, so it can execute any domain-polymorphic actor as well as CSP domain-specific actors. In fact, any other domain actor that does not rely on the specifics of its parent domain can be executed in the CSP domain by the ProcessThread.
17.5.3 Detecting deadlocks

    director.initialize() =>
        create a thread for each actor
        update count of active processes with the director
        call initialize() on each actor
    director.prefire() =>
        start the process threads
            => calls actor.prefire()
               calls actor.fire()
               calls actor.postfire()
               repeat.
    director.fire() =>
        handle deadlocks until a real deadlock occurs.
    director.postfire() =>
        return a boolean indicating if the execution of the model
        should continue for another iteration
    director.wrapup() =>
        terminate all the processes
            => calls actor.wrapup()
               decrease the count of active processes with the director

FIGURE 17.10. Sequence of steps involved in setting up and controlling the model.

For deadlock detection, the director maintains three counts:
FIGURE 17.9. Static structure diagram for classes in the CSP kernel.
• the number of active processes, which are threads that have started but have not yet finished,
• the number of blocked processes, which is the number of processes that are blocked waiting to rendezvous, and
• the number of delayed processes, which is the number of processes waiting for time to advance plus the number of processes waiting for time deadlock to occur at the current model time.

When the number of blocked processes equals the number of active processes, then real deadlock has occurred and the fire method of the director returns. When the number of blocked plus the number of delayed processes equals the number of active processes, and at least one process is delayed, then time deadlock has occurred. If at least one process is delayed waiting for time deadlock to occur at the current model time, then the director wakes up all such processes and does not advance time. Otherwise the director looks at its list of processes waiting for time to advance, chooses the earliest one and advances time sufficiently to wake it up. It also wakes up any other processes due to be awakened at the new time. The director checks for deadlock each time a process blocks, delays or dies.

For the director to work correctly, these three counts need to be accurate at all stages of the model execution, so when they are updated becomes important. Keeping the active count accurate is relatively simple: the director increases it when it starts the thread, and decreases it when the thread dies. Likewise the count of delayed processes is straightforward: when a process delays, it increases the count of delayed processes, and the director keeps track of when to wake it up. The count is decreased when a delayed process resumes. However, due to the conditional communication constructs, keeping the blocked count accurate requires a little more effort.
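The deadlock classification based on these three counts can be sketched directly (an illustrative helper, not a Ptolemy class):

```java
public class DeadlockDetector {
    // Classifies the model state from the director's three counts:
    // real deadlock when every active process is blocked; time deadlock
    // when blocked + delayed covers all active processes and at least
    // one process is delayed.
    public static String classify(int active, int blocked, int delayed) {
        if (active > 0 && blocked == active) {
            return "real deadlock";
        }
        if (active > 0 && blocked + delayed == active && delayed > 0) {
            return "time deadlock";
        }
        return "no deadlock";
    }
}
```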
    public void run() {
        try {
            boolean iterate = true;
            while (iterate) {
                // container is checked for null to detect the
                // termination of the actor.
                iterate = false;
                if (((Entity)_actor).getContainer() != null
                        && _actor.prefire()) {
                    _actor.fire();
                    iterate = _actor.postfire();
                }
            }
        } catch (TerminateProcessException t) {
            // Process was terminated early.
        } catch (IllegalActionException e) {
            _manager.fireExecutionError(e);
        } finally {
            try {
                _actor.wrapup();
            } catch (IllegalActionException e) {
                _manager.fireExecutionError(e);
            }
            director.decreaseActiveCount();
        }
    }

FIGURE 17.11. Code executed by ProcessThread.run().

For a basic send or receive, a process is registered as being blocked when it arrives at the rendezvous point before the matching communication. The blocked count is then decreased by one when the corresponding communication arrives. However, what happens when an actor is carrying out a conditional communication construct? In this case the process keeps track of all of the branches for which the guards were true, and when all of those are blocked trying to rendezvous,
it registers the process as being blocked. When one of the branches succeeds with a rendezvous, the process is registered as being unblocked.
17.5.4 Terminating the model

A process can finish in one of two ways: either by returning false in its prefire() or postfire() methods, in which case it is said to have finished normally, or by being terminated early by a TerminateProcessException. For example, if a source process is intended to send ten tokens and then finish, it would exit its fire() method after sending the tenth token, and return false in its postfire() method. This causes the ProcessThread representing the process (see figure 17.11) to exit its while loop and execute the finally clause. The finally clause calls wrapup() on the actor it represents and decreases the count of active processes in the director, and the thread representing the process dies.

A TerminateProcessException is thrown whenever a process tries to communicate via a channel whose receiver has its finished flag set to true. When a TerminateProcessException is caught in ProcessThread, the finally clause is also executed and the thread representing the process dies.

To terminate the model, the director sets the finished flag in each receiver. The next occasion a process tries to send to or receive from the channel associated with that receiver, a TerminateProcessException is thrown. This mechanism can also be used in a selective fashion to terminate early any processes that communicate via a particular channel. When the director controlling the execution of the model detects real deadlock, it returns from its fire() method. In the absence of hierarchy, this causes the wrapup() method of the director to be invoked. It is the wrapup() method of the director that sets the finished flag in each receiver. Note that TerminateProcessException is a runtime exception, so it does not need to be declared as being thrown.

There is also the option of abruptly terminating all the processes in the model by calling terminate() on the director.
This method differs from the approach described in the previous paragraph in that it stops all the threads immediately and does not give them a chance to update the model state. After calling this method, the state of the model is unknown and so the model should be recreated after calling this method. This method is only intended for situations when the execution of the model has obviously gone wrong, and for it to finish normally would either take too long or could not happen. It should rarely be called.
17.5.5 Pausing/Resuming the Model

Pausing and resuming a model does not affect the outcome of a particular execution of the model, only the rate of progress. The execution of a model can be paused at any stage by calling the pause() method on the director. This method is blocking, and only returns when the model execution has been successfully paused. To pause the execution of a model, the director sets a paused flag in every receiver, and the next occasion a process tries to send to or receive from the channel associated with that receiver, it is paused. The whole model is paused when all the active processes are delayed, paused or blocked. To resume the model, the resume() method can similarly be called on the director. This method resets the paused flag in every receiver and wakes up every process waiting on a receiver lock. If a process was paused, it sees that it is no longer paused and continues. The ability to pause and resume the execution of a model is intended primarily for user interface control.
17.6 Technical Details 17.6.1 Brief Introduction to Threads in Java The CSP domain, like the rest of Ptolemy II, is written entirely in Java and takes advantage of the features built into the language. In particular, the CSP domain depends heavily on threads and on monitors for controlling the interaction between threads. In any multi-threaded environment, care has to be taken to ensure that the threads do not interact in unintended ways, and that the model does not deadlock. Note deadlock in this sense is a bug in the modeling environment, which is different from the deadlock talked about before which may or may not be a bug in the model being executed. A monitor is a mechanism for ensuring mutual exclusion between threads. In particular if a thread has a particular monitor, acquired in order to execute some code, then no other thread can simultaneously have that monitor. If another thread tries to acquire that monitor, it stalls until the monitor becomes available. A monitor is also called a lock, and one is associated with every object in Java. Code that is associated with a lock is defined by the synchronized keyword. This keyword can either be in the signature of a method, in which case the entire method body is associated with that lock, or it can be used in the body of a method using the syntax: synchronized(object) { // synchronized code goes here
}

This causes the code inside the brackets to be associated with the lock belonging to the specified object. In either case, when a thread tries to execute code controlled by a lock, it must either acquire the lock or stall until the lock becomes available. If a thread stalls when it already holds some locks, those locks are not released, so any other threads waiting on those locks cannot proceed. This can lead to deadlock when all threads are stalled waiting to acquire some lock they need. A thread can voluntarily relinquish a lock when stalling by calling object.wait(), where object is the object to relinquish and wait on. This causes the lock to become available to other threads. A thread can also wake up any threads waiting on a lock associated with an object by calling notifyAll() on the object. Note that to issue a notifyAll() on an object it is necessary to own the lock associated with that object first. By careful use of these methods it is possible to ensure that threads only interact in intended ways and that deadlock does not occur.

Approaches to locking used in the CSP domain. One of the key coding patterns followed is to wrap each wait() call in a while loop that checks some flag. Only when the flag is set to false can the thread proceed beyond that point. Thus the code will often look like:

synchronized(object)
{
    while (flag) {
        object.wait();
    }
}
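A complete, runnable version of this guarded-wait pattern might look like the following sketch. The class and method names here are illustrative, not part of the Ptolemy II code base.

```java
// A runnable sketch of the guarded-wait pattern above: the waiting thread
// loops on the flag, and the notifying thread clears the flag while holding
// the lock before calling notifyAll(). Illustrative names only.
public class GuardedWaitDemo {
    private final Object lock = new Object();
    private boolean flag = true;

    public void await() throws InterruptedException {
        synchronized (lock) {
            while (flag) {       // re-check the flag after every wakeup
                lock.wait();     // releases the lock while blocked
            }
        }
    }

    public void clearFlag() {
        synchronized (lock) {    // the lock must be held to notify
            flag = false;
            lock.notifyAll();
        }
    }

    public static boolean demo() {
        try {
            GuardedWaitDemo d = new GuardedWaitDemo();
            Thread t = new Thread(() -> {
                try { d.await(); } catch (InterruptedException e) { }
            });
            t.start();
            Thread.sleep(50);    // let the thread reach the wait()
            d.clearFlag();
            t.join(1000);
            return !t.isAlive(); // true: the waiter proceeded
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "proceeded" : "stuck");
    }
}
```

The while loop, rather than a bare if, is what makes the pattern robust: the thread continues only when the flag is actually false, regardless of which thread issued the notifyAll() or whether the wakeup was spurious.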
The advantage of this is that it is not necessary to worry about which other thread issued the notifyAll() on the lock; the thread can only continue when the notifyAll() has been issued and the flag has been set to false.

Another approach used is to keep the number of locks acquired by a thread as small as possible, preferably never more than one at a time. If several threads share the same locks, and they must acquire more than one lock at some stage, then the locks should always be acquired in the same order. To see how this prevents deadlocks, consider two threads, thread1 and thread2, that are using two locks A and B. If thread1 obtains A first, then B, and thread2 obtains B first, then A, then a situation could arise whereby thread1 owns lock A and is waiting on B, and thread2 owns lock B and is waiting on A. Neither thread can proceed, and so deadlock has occurred. This would be prevented if both threads obtained lock A first, then lock B. This approach is sufficient, but not necessary, to prevent deadlocks, as other approaches may also prevent deadlocks without imposing this constraint on the program [44].

Finally, deadlock can occur when a thread, which already holds some lock, tries to acquire another lock only to issue a notifyAll() on it. To avoid this situation, it is easiest if the notifyAll() is issued from a new thread, which holds no locks that could be held if it stalls. This is often used in the CSP domain to wake up any threads waiting on receivers, for example after a pause or when terminating the model. The class NotifyThread, in the ptolemy.actor.process package, is used for this purpose. This class takes a list of objects in a linked list, or a single object, and issues a notifyAll() on each of the objects from within a new thread.

The CSP domain kernel makes extensive use of the above patterns and conventions to ensure the modeling engine is deadlock free.
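The fixed lock-ordering convention can be sketched as follows. This is a generic illustration of the rule, not Ptolemy II code; the lock and method names are assumptions.

```java
// Illustrates the fixed lock-ordering convention described above: every
// thread acquires lock A before lock B, so the circular wait that causes
// deadlock cannot arise. Illustrative names only.
public class LockOrdering {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();
    private static int counter = 0;

    // Every thread acquires the locks in the same order: A, then B.
    static void increment() {
        synchronized (lockA) {
            synchronized (lockB) {
                counter++;
            }
        }
    }

    public static int demo() {
        counter = 0;
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return counter;  // 2000: both threads completed without deadlock
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

If increment() in one of the threads instead took lockB before lockA, the two threads could each hold one lock while waiting for the other, and the program could hang forever.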
17.6.2 Rendezvous Algorithm

In CSP, the locking point for all communication between processes is the receiver. Whenever a process wishes to send or receive, it must first acquire the lock for the receiver associated with the channel it is communicating over. Two key facts to keep in mind when reading the following algorithms are that each channel has exactly one receiver associated with it and that at most one process can be trying to send to (or receive from) a channel at any stage. The constraint that at most one process can be trying to send to (or receive from) a channel at any stage is not enforced when a model is constructed, but an exception will be thrown during execution if it is violated.

The rendezvous algorithm is entirely symmetric for the put() and the get(), except for the direction the token is transferred. This helps reduce the deadlock situations that could arise and also makes the interaction between processes more understandable and easier to explain. The algorithm controlling how a get() proceeds is shown in figure 17.12. The algorithm for a put() is exactly the same except that put and get are swapped everywhere. Thus it suffices to explain what happens when a get() arrives at a receiver, i.e. when a process tries to receive from the channel associated with the receiver.

When a get() arrives at a receiver, a put() is either already waiting to rendezvous or it is not. Both the get() and put() methods are entirely synchronized on the receiver, so they cannot happen simultaneously (only one thread can possess a lock at any given time). Without loss of generality, assume a get() arrives before a put(). The rendezvous mechanism is basically three steps: a get() arrives, a put() arrives, the rendezvous completes. (1) When the get() arrives, it sees that it is first and sets a flag saying a get is waiting.
It then waits on the receiver lock while the flag is still true. (2) When a put() arrives, it sets the getWaiting flag to false, wakes up any threads waiting on the
receiver (including the get), sets the rendezvousComplete flag to false and then waits on the receiver while the rendezvousComplete flag is false. (3) The thread executing the get() wakes up, sees that a put() has arrived, sets the rendezvousComplete flag to true, wakes up any threads waiting on the receiver, and returns, thus releasing the lock. The thread executing the put() then wakes up, acquires the receiver lock, sees that the rendezvous is complete and returns. Following the rendezvous, the state of the receiver is exactly the same as before the rendezvous
FIGURE 17.12. Rendezvous algorithm.
arrived, and it is ready to mediate another rendezvous. It is worth noting that the final step, of making sure the second communication to arrive does not return until the rendezvous is complete, is necessary to ensure that the correct token gets transferred. Consider again the case when a get() arrives first, but now suppose the put() returned immediately if a get() is already waiting. A put() arrives, places a token in the receiver, sets the get waiting flag to false and returns. Now suppose another put() arrives before the get() wakes up, which will happen if the thread the put() is in wins the race to obtain the lock on the receiver. Then the second put() places a new token in the receiver and sets the put waiting flag to true. Then the get() wakes up, and returns with the wrong token! This is known as a race condition, which would lead to unintended behavior in the model. This situation is avoided by our design.
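The three-step algorithm above can be reduced to a small receiver class. The following is an illustrative standalone sketch, not the actual CSPReceiver implementation; the flag names follow the description above, but the rest of the structure is an assumption.

```java
// Simplified sketch of the rendezvous algorithm: both get() and put() are
// synchronized on the receiver, and the second party to arrive waits for
// rendezvousComplete before returning. Not the Ptolemy II CSPReceiver.
public class RendezvousReceiver {
    private Object token;
    private boolean getWaiting = false;
    private boolean putWaiting = false;
    private boolean rendezvousComplete = false;

    public synchronized Object get() throws InterruptedException {
        if (putWaiting) {                   // a put() arrived first
            putWaiting = false;
            Object t = token;
            notifyAll();                    // wake the waiting put()
            rendezvousComplete = false;
            while (!rendezvousComplete) {   // wait for the final step
                wait();
            }
            return t;
        }
        getWaiting = true;                  // step 1: flag that a get waits
        while (getWaiting) {
            wait();                         // step 2 happens inside put()
        }
        Object t = token;                   // step 3: put() has deposited it
        rendezvousComplete = true;
        notifyAll();                        // release the waiting put()
        return t;
    }

    public synchronized void put(Object t) throws InterruptedException {
        token = t;
        if (getWaiting) {                   // step 2: wake the waiting get
            getWaiting = false;
            notifyAll();
            rendezvousComplete = false;
            while (!rendezvousComplete) {
                wait();
            }
            return;
        }
        putWaiting = true;                  // symmetric case: put is first
        while (putWaiting) {
            wait();
        }
        rendezvousComplete = true;
        notifyAll();
    }

    public static String demo() {
        try {
            RendezvousReceiver r = new RendezvousReceiver();
            final Object[] received = new Object[1];
            Thread getter = new Thread(() -> {
                try { received[0] = r.get(); } catch (InterruptedException e) { }
            });
            getter.start();
            Thread.sleep(50);               // let the get() arrive first
            r.put("token");
            getter.join(1000);
            return (String) received[0];
        } catch (InterruptedException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Note how the while loop on rendezvousComplete is exactly the guarded-wait pattern from section 17.6.1, and how it prevents the race condition described above: a second put() cannot overwrite the token, because the first put() does not return until the get() has committed.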
17.6.3 Conditional Communication Algorithm

There are two steps involved in executing a CIF or a CDO: first deciding which enabled branch succeeds, then carrying out the rendezvous.

Built on top of rendezvous. When a conditional construct has more than one enabled branch (guard is true or absent), a new thread is spawned for each enabled branch. The job of the chooseBranch() method is to control these threads and to determine which branch should be allowed to successfully rendezvous. These threads and the mechanism controlling them are entirely separate from the rendezvous mechanism described in section 17.6.2, with the exception of one special case, which is described in section 17.6.4. Thus the conditional mechanism can be viewed as being built on top of basic rendezvous: conditional communication knows about and needs basic rendezvous, but the opposite is not true. Again, this is a design decision that makes the interaction between threads easier to understand and less prone to deadlock, as there are fewer interaction possibilities to consider.

Choosing which branch succeeds. The manner in which the choice of which branch succeeds is made is worth explaining. The chooseBranch() method in CSPActor takes an array of branches as an argument. If all of the guards are false, it returns -1, which indicates that all the branches failed. If exactly one of the guards is true, it performs the rendezvous directly and returns the identification number of the successful branch. The interesting case is when more than one guard is true. In this case, it creates and starts a new thread for each branch whose guard is true. It then waits, on an internal lock, for one branch to succeed. At that point it gets woken up, sets a finished flag in the remaining branches and waits for them to fail. When all the threads representing the branches are finished, it returns the identification number of the successful branch.
This approach is designed to ensure that exactly one of the branches created successfully performs a rendezvous.
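A much-simplified version of this selection scheme can be sketched as follows. Real branches race to rendezvous with other processes; here each "branch" merely races to commit under the internal lock, so this is an illustration of the control structure only, not the Ptolemy II chooseBranch() implementation.

```java
// Sketch of branch selection: -1 if all guards are false, direct return if
// exactly one is true, otherwise one thread per enabled branch races to be
// the first to commit under a shared lock. Illustrative, not CSPActor code.
import java.util.concurrent.CountDownLatch;

public class ChooseBranch {
    public static int chooseBranch(boolean[] guards) {
        int enabled = -1, count = 0;
        for (int i = 0; i < guards.length; i++) {
            if (guards[i]) { enabled = i; count++; }
        }
        if (count == 0) return -1;       // all guards false: all branches fail
        if (count == 1) return enabled;  // rendezvous directly (simulated)

        final Object lock = new Object();
        final int[] winner = { -1 };
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < guards.length; i++) {
            if (!guards[i]) continue;
            final int id = i;
            new Thread(() -> {
                synchronized (lock) {
                    if (winner[0] == -1) {
                        winner[0] = id;  // first branch to commit succeeds
                    }                    // later branches see this and fail
                }
                done.countDown();
            }).start();
        }
        try {
            done.await();                // wait until every branch finishes
        } catch (InterruptedException e) {
            return -1;
        }
        return winner[0];
    }

    public static void main(String[] args) {
        // Two enabled branches: either may win the race.
        System.out.println(chooseBranch(new boolean[] { false, true, true }));
    }
}
```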
FIGURE 17.13. Conceptual view of how conditional communication is built on top of rendezvous.
Algorithm used by each branch. Similar to the approach followed for rendezvous, the algorithm by which a thread representing a branch determines whether or not it can proceed is entirely symmetric for a ConditionalSend and a ConditionalReceive. The algorithm followed by a ConditionalReceive is shown in figure 17.14. Again the locking point is the receiver, and all code concerned with the communication is synchronized on the receiver. The receiver is also where all necessary flags are stored. Consider three cases.

(1) A ConditionalReceive arrives and a put is waiting. In this case, the branch checks if it is the first branch to be ready to rendezvous, and if so, it goes
FIGURE 17.14. Algorithm used to determine if a conditional rendezvous branch succeeds or fails.
ahead and executes a get. If it is not the first, it waits on the receiver. When it wakes up, it checks if it is still alive. If it is not, it registers that it has failed and dies. If it is still alive, it starts again by trying to be the first branch to rendezvous. Note that a put cannot disappear.

(2) A ConditionalReceive arrives and a ConditionalSend is waiting. When both sides are conditional branches, it is up to the branch that arrives second to check whether the rendezvous can proceed. If both branches are the first to try to rendezvous, the ConditionalReceive executes a get(), notifies its parent that it succeeded, issues a notifyAll() on the receiver and dies. If not, it checks whether it has been terminated by chooseBranch(). If it has, it registers with chooseBranch() that it has failed and dies. If it has not, it returns to the start of the algorithm and tries again. This is because a ConditionalSend could disappear. Note that the parent of the first branch to arrive at the receiver needs to be stored for the purpose of checking if both branches are the first to arrive. This part of the algorithm is somewhat subtle. When the second conditional branch arrives at the rendezvous point, it checks that both sides are the first to try to rendezvous for their respective processes. If so, then the ConditionalReceive executes a get(), so that the ConditionalSend is never aware that a ConditionalReceive arrived: it only sees the get().

(3) A ConditionalReceive arrives first. It sets a flag in the receiver that it is waiting, then waits on the receiver. When it wakes up, it checks whether it has been killed by chooseBranch(). If it has, it registers with chooseBranch() that it has failed and dies. Otherwise it checks if a put is waiting. It only needs to check if a put is waiting because if a ConditionalSend arrived, it would have behaved as in case (2) above.
If a put is waiting, the branch checks if it is the first branch to be ready to rendezvous, and if so it goes ahead and executes a get. If it is not the first, it waits on the receiver and tries again.
17.6.4 Modification of Rendezvous Algorithm

Consider the case when a conditional send arrives before a get. If all the branches in the conditional communication that the conditional send is a part of are blocked, then the process will register itself as blocked with the director. Then the get comes along, and even though a conditional send is waiting, it too would register itself as blocked. This leads to one too many processes being registered as blocked, which could lead to premature deadlock detection.

To avoid this, it is necessary to modify the algorithm used for rendezvous slightly. The change to the algorithm is shown in the dashed ellipse in figure 17.15. It does not affect the algorithm except in the case when a conditional send is waiting when a get arrives at the receiver. In this case the process that calls the get should wait on the receiver until the conditional send waiting flag is false. If the conditional send succeeded, and hence executed a put, then the get waiting flag and the conditional send waiting flag should both be false and the actor proceeds through to the third step of the rendezvous. If the conditional send failed, it will have reset the conditional send waiting flag and issued a notifyAll() on the receiver, thus waking up the get and allowing it to properly wait for a put. The same reasoning also applies to the case when a conditional receive arrives at a receiver before a put.
FIGURE 17.15. Modification of rendezvous algorithm, section 17.6.4, shown in ellipse.
DDE Domain

Author: John S. Davis II
18.1 Introduction

The distributed discrete event (DDE) model of computation incorporates a distributed notion of time into a dataflow style of communication. Time progresses in a DDE model when the actors in the model execute and communicate. Actors in a DDE model communicate by sending messages through bounded, FIFO channels. Time in a DDE model is distributed and localized, and the actors of a DDE model each maintain their own local notion of the current time. Local time information is shared between two connected actors whenever a communication between the actors occurs. Conversely, communication between two connected actors can occur only when constraints on the relative local time information of the actors are adhered to.

The DDE domain is based on distributed discrete event processing and leverages a wealth of research devoted to this topic. Several tutorial publications on this topic exist in [18][24][40][61]. The DDE domain implements a specific variant of distributed discrete event systems (DDES) as expounded by Chandy and Misra [18]. While the DDE domain has similarities with DDES, the distributed discrete event domain serves as a framework for studying DDES with two special emphases. First, we consider DDES from a dataflow perspective; we view DDE as an implementation of the Kahn dataflow model [42] with distributed time added on top. Second, we study DDES not with the goal of improving execution speed (as has been the case traditionally), but to learn its usefulness in modeling and designing systems that are timed and distributed.
18.2 DDE Semantics

Operationally, the semantics of the DDE domain can be separated into two functionalities. The first functionality relates to how time advances during the communication of data and how communication proceeds via blocking reads and writes. The second functionality considers how a DDE model prevents deadlock due to local time dependencies. The technique for preventing deadlock involves the
communication of null messages that consist solely of local time information.
18.2.1 Enabling Communication: Advancing Time

Communicating Tokens. A DDE model consists of a network of sequential actors that are connected via unidirectional, bounded, FIFO queues. Tokens are sent from a sending actor to a receiving actor by placing a token in the appropriate queue, where the token is stored until the receiving actor consumes it. If a process attempts to read a token from a queue that is empty, then the process will block until a token becomes available on the channel. If a process attempts to write a token to a queue that is full, then the process will block until space becomes available for more tokens in that queue. Note that this blocking read/write paradigm is equivalent to the operational semantics found in non-timed process networks (PN) as implemented in Ptolemy II (see the PN Domain chapter).

If all processes in a DDE model simultaneously block, then the model deadlocks. If deadlock is due to processes that are either waiting to read from an empty queue (read blocked) or waiting to write to a full queue (write blocked), then we say that the model has experienced non-timed deadlock. Non-timed deadlock is equivalent to the notion of deadlock found in bounded process networks scheduling problems as outlined by Parks [69]. If a non-timed deadlock is due to a model that consists solely of processes that are read blocked, then we say that a real deadlock has occurred and the model is terminated. If a non-timed deadlock is due to a model that contains at least one process that is write blocked, then the capacity of the full queues is increased until deadlock no longer exists. Such deadlocks are called artificial deadlock, and the policy of increasing the capacity of full queues was shown by Parks to guarantee the execution of a model in bounded memory whenever possible.

Communicating Time. Each actor in a DDE model maintains a local notion of time. Any non-negative real number may serve as a valid value of time.
As tokens are communicated between actors, time stamps are associated with each token. Whenever an actor consumes a token, the actor's current time is set to be equal to the consumed token's time stamp. The time stamp value applied to outgoing tokens of an actor is equal to that actor's output time. For actors that model a process in which there is delay between incoming time stamps and corresponding outgoing time stamps, the output time is always greater than the current time; otherwise, the output time is equal to the current time. We refer to actors of the former case as delay actors.

For a given queue containing time stamped tokens, the time stamp of the first token currently contained by the queue is referred to as the receiver time of the queue. If a queue is empty, its receiver time is the value of the time stamp associated with the last token to flow through the queue, or 0.0 if no tokens have traveled through the queue. An actor may consume a token from an input queue given that the queue has a token available and the receiver time of the queue is less than the receiver times of all other input queues contained by the actor. If the queue with the smallest receiver time is empty, then the actor blocks until this queue receives a token, at which time the actor considers the updated receiver time in selecting a queue to read from. The last time of a queue is the time stamp of the last token to be placed in the queue. If no tokens have been placed in the queue, then the last time is 0.0.

Figure 18.1 shows three actors, each with three input queues. Actor A has two tokens available on the top queue, no tokens available on the middle queue and one token available on the bottom queue. The receiver times of the top, middle and bottom queues are, respectively, 17.0, 12.0 and 15.0. Since the queue with the minimum receiver time (the middle queue) is empty, A blocks on this queue before it proceeds.
In the case of actor B, the minimum receiver time belongs to the bottom queue. Thus, B proceeds by consuming the token found on the bottom queue. After consuming this token, B compares all of its receiver times to determine which token it can consume next. Actor C is an example of an actor
that contains multiple input queues with identical receiver times. To accommodate this situation, each actor assigns a unique priority to each input queue. An actor can consume a token from a queue if no other queue has a lower receiver time and if all queues that have an identical receiver time also have a lower priority. Each receiver has a completion time that is set during the initialization of a model. The completion time of the receiver specifies the time after which the receiver will no longer operate. If the time stamp of the oldest token in a receiver exceeds the completion time, then that receiver will become inactive.
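The read rule above can be sketched as a small selection function. This is a standalone simplification, not the Ptolemy II receiver classes; the parameter names are assumptions, and this sketch assumes that a smaller priority number means higher priority.

```java
// Sketch of the DDE read rule: an actor may consume from the input queue
// with the smallest receiver time, breaking ties by a unique per-queue
// priority; if that queue is empty the actor must block on it.
public class DDEReadRule {
    // receiverTimes[i]: receiver time of queue i
    // empty[i]:         whether queue i currently holds no token
    // priority[i]:      unique priority (lower number = higher priority)
    // Returns the index of the queue the actor must read from, or block
    // on if empty[best] is true.
    public static int selectQueue(double[] receiverTimes, boolean[] empty,
                                  int[] priority) {
        int best = 0;
        for (int i = 1; i < receiverTimes.length; i++) {
            if (receiverTimes[i] < receiverTimes[best]
                    || (receiverTimes[i] == receiverTimes[best]
                        && priority[i] < priority[best])) {
                best = i;
            }
        }
        return best;  // caller checks empty[best] to decide whether to block
    }

    public static void main(String[] args) {
        // Actor A from figure 18.1: receiver times 17.0, 12.0, 15.0; the
        // middle queue (index 1) has the minimum receiver time, and since
        // it is empty, A blocks on it.
        int q = selectQueue(new double[] { 17.0, 12.0, 15.0 },
                            new boolean[] { false, true, false },
                            new int[] { 0, 1, 2 });
        System.out.println(q);  // 1
    }
}
```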
18.2.2 Maintaining Communication: Null Tokens

Deadlocks can occur in a DDE model in a form that differs from the deadlocks described in the previous section. This alternative form of deadlock occurs when an actor read blocks on an input port even though it contains other ports with tokens. The topology of a DDE model can lead to deadlock as read blocked actors wait on each other for time stamped tokens that will never appear. Figure 18.2 illustrates this problem. In this topology, consider a situation in which actor A only creates tokens on its lower output queue. This will lead to tokens being created on actor C's output queue, but no tokens will be created on B's output queue (since B has no tokens to consume). This situation results in D read blocking indefinitely on its upper input queue even though it is clear that no tokens will ever flow through this queue. The result: timed deadlock!

The situation shown in figure 18.2 is only one example of timed deadlock. In fact there are two types of timed deadlock: feedforward and feedback. Figure 18.2 is an example of feedforward deadlock. Feedforward deadlock occurs when a set of connected actors are deadlocked such that all actors in the set are read blocked and at least one of the actors in the set is read blocked on an input queue that has a receiver time that is less than the local clock of the input queue's source actor. In the example shown above, the upper input queue of B has a receiver time of 0.0 even though the local clock of A has advanced to 8.0. Feedback deadlock occurs when a set of cyclically connected actors are deadlocked such that all actors in the set are read blocked and at least one actor in the set, say actor X, is read blocked on an input queue that can read tokens which are directly or indirectly a result of output from that same actor (actor X). Figure 18.3 is an example of feedback timed deadlock.
Note that B can not produce an output based on the consumption of the token timestamped at 5.0 because it must wait for a token on the upper input that depends on the output of B!

FIGURE 18.1. DDE actors and local time.

FIGURE 18.2. Timed deadlock (feedforward).

Preventing Feedforward Timed Deadlock. To address feedforward timed deadlock, null tokens are employed. A null token provides an actor with a means of communicating time advancement even though data (real tokens) are not being transmitted. Whenever an actor consumes a token, it places a null token on each of its output queues such that the time stamp of the null token is equal to the current time of the actor. Thus, if actor A of figure 18.2 produced a token on its lower output queue at time 5.0, it would also produce a null token on its upper output queue at time 5.0.

If an actor encounters a null token on one of its input queues, then the actor does the following. First it consumes the tokens of all other input queues it contains, given that the other input queues have receiver times that are less than or equal to the time stamp of the null token. Next the actor removes the null token from the input queue and sets its current time to equal the time stamp of the null token. The actor then places null tokens time stamped to the current time on all output queues that have a last time that is less than the actor's current time. As an example, if B in figure 18.2 consumes a null token on its input with a time stamp of 5.0, then it would also produce a null token on its output with a time stamp of 5.0.

The result of using null tokens is that time information is evenly propagated through a model's topology. The beauty of null tokens is that they inform actors of inactivity in other components of a model without requiring centralized dissemination of this information. Given the use of null tokens, feedforward timed deadlock is prevented in the execution of DDE models. It is important to recognize that null tokens are used solely for the purpose of avoiding deadlocks. Null tokens do not represent any actual components of the physical system being modeled. Hence, we do not think of a null token as a real token.
Furthermore, the production of a null token that is the direct result of the consumption of a null token is not considered computation from the standpoint of the system being modeled. The idea of null tokens was first espoused by Chandy and Misra [18].

Preventing Feedback Timed Deadlock. We address feedback timed deadlock as follows. All feedback loops are required to have a cumulative time stamp increment that is greater than zero. In other words, feedback loops are required to contain delay actors. Peacock, Wong and Manning [70] have shown that a necessary condition for feedback timed deadlock is that a feedback loop must contain no delay actors. The delay value (delay = output time - current time) of a delay actor must be chosen wisely; it must be less than the smallest delta time of all other actors contained in the same feedback loop. Delta time is the difference between the time stamps of a token that is consumed by an actor and the corresponding token that is produced in direct response. If a system being modeled has characteristics that prevent a fixed, positive lower bound on delta time from being specified, then our approach can not solve feedback timed deadlock. Such a situation is referred to as a Zeno condition. An application involving an approximated Zeno condition is discussed in section 18.3 below. The DDE software architecture provides one delay actor for use in preventing feedback timed deadlock: FeedBackDelay. See "Feedback Topologies" on page 18-345 for further details about this
actor.

FIGURE 18.3. Timed deadlock (feedback).
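The null-token forwarding rule described under "Preventing Feedforward Timed Deadlock" can be sketched as a small function. This is a standalone simplification of one part of the rule, not the Ptolemy II receiver code; the method and parameter names are assumptions.

```java
// Sketch of null-token propagation: after a null token with the given time
// stamp is consumed, the actor's current time advances to that stamp, and
// a null token is forwarded on every output queue whose last time lags
// behind the new current time.
import java.util.ArrayList;
import java.util.List;

public class NullTokenRule {
    // lastTimes[i] is the "last time" of output queue i. Returns the
    // indices of the queues that must receive a null token stamped with
    // the actor's new current time.
    public static List<Integer> forwardNullTokens(double nullStamp,
                                                  double[] lastTimes) {
        double currentTime = nullStamp;    // current time advances to the stamp
        List<Integer> targets = new ArrayList<>();
        for (int i = 0; i < lastTimes.length; i++) {
            if (lastTimes[i] < currentTime) {
                targets.add(i);            // this queue lags: forward a null
            }
        }
        return targets;
    }

    public static void main(String[] args) {
        // Actor B of figure 18.2 consumes a null token stamped 5.0; its
        // output last saw time 0.0, so a null token stamped 5.0 is forwarded.
        System.out.println(forwardNullTokens(5.0, new double[] { 0.0 }));
    }
}
```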
18.2.3 Alternative Distributed Discrete Event Methods

The field of distributed discrete event simulation, also referred to as parallel discrete event simulation (PDES), has been an active area of research since the late 1970s [18][24][40][61][70]. Recently there has been a resurgence of activity [5][6][10]. This is due in part to the wide availability of distributed frameworks for hosting simulations and the application of parallel simulation techniques to non-research-oriented domains. For example, several WWW search engines are based on network-of-workstations technology.

The field of distributed discrete event simulation can be cast into two camps that are distinguished by the blocking read approach taken by the actors. One camp was introduced by Chandy and Misra [18][24][61][70] and is known as the conservative approach. The second camp was introduced by David Jefferson through the Jet Propulsion Laboratory Time Warp system and is referred to as the optimistic approach [40][24]. In certain problems, the optimistic approach executes faster than the conservative approach; nevertheless, the gains in speed result in significant increases in program memory. The conservative approach does not perform faster than the optimistic approach, but it executes efficiently for all classes of discrete event systems. Given the modeling semantics emphasis of Ptolemy II, performance (speed) is not considered a premium. Furthermore, Ptolemy II's embedded systems emphasis suggests that memory constraints are likely. For these reasons, the implementation found in the DDE domain follows the conservative approach.
18.3 Example DDE Applications

To illustrate distributed discrete event execution, we have developed an applet that features a feedback topology and incorporates polymorphic as well as DDE-specific actors. The model, shown in figure 18.4, consists of a single source actor (ptolemy/actor/lib/Clock) and an upper and lower branch of four actors each. The upper and lower branches have identical topologies and are fed an identical stream of tokens from the Clock source, with the exception that in the lower branch ZenoDelay replaces FeedBackDelay.

As with all feedback topologies in DDE (and DE) models, a positive time delay is necessary in feedback loops to prevent deadlock. If the time delay of a given loop is lower bounded by zero but can not be guaranteed to be greater than a fixed positive value, then a Zeno condition occurs in which time
FIGURE 18.4. Localized Zeno condition topology.
will not advance beyond a certain point even though the actors of the feedback loop continue to execute without deadlocking. ZenoDelay extends FeedBackDelay and is designed so that a Zeno condition will be encountered. When execution of the model begins, both FeedBackDelay and ZenoDelay are used to feed back null tokens into Wire so that the model does not deadlock. After local time exceeds a preset value, ZenoDelay reduces its delay so that the lower branch approximates a Zeno condition.

In centralized discrete event systems, Zeno conditions prevent progress in the entire model. This is true because the feedback cycle experiencing the Zeno condition prevents time from advancing in the entire model. In contrast, distributed discrete event systems localize Zeno conditions as much as is possible based on the topology of the system. Thus, a Zeno condition can exist in the lower branch while the upper branch continues its execution unimpeded. Localizing Zeno conditions can be useful in large-scale modeling, in which a Zeno condition may not be discovered until a great deal of time has been invested in execution of the model. In such situations, partial data collection may proceed prior to correction of the delay error that resulted in the Zeno condition.
18.4 Building DDE Applications

To build a DDE application, use a DDEDirector. This ensures that each actor under control of the director is allocated DDEReceivers and that each actor is assigned a TimeKeeper to manage the actor's local notion of time. The DDE domain is typed, so actors used in a model must be derived from ptolemy/actor/TypedAtomicActor. The DDE domain is designed to use both DDE-specific actors as well as polymorphic actors. DDE-specific actors take advantage of DDEActor and DDEIOPort, which are designed to provide convenient support for specifying time in the production and consumption of tokens.
18.4.1 DDEActor

The DDE model of computation makes one very strong assumption about the execution of an actor: all input ports of an actor operating in a DDE model must be regularly polled to determine which input channel has the oldest pending event. Any actor that adheres to this assumption can operate in a DDE model. Thus, many polymorphic actors found in ptolemy/actor/[lib, gui] are suitable for operation in DDE models. For convenience, DDEActor was developed to simplify the construction of actors that have DDE semantics. Its key methods include the following:

getNextToken(). This method polls each input port of an actor and returns the (non-Null) token that represents the oldest event. This method blocks accordingly as outlined in section 18.2.1 (Communicating Time).

getLastPort(). This method returns the input IOPort from which the last (non-Null) token was consumed. This method presumes that getNextToken() is being used for token consumption.
18.4.2 DDEIOPort

DDEIOPort extends TypedIOPort with parameters for specifying time stamp values of tokens that are being sent to neighboring actors. Since DDEIOPort extends TypedIOPort, use of DDEIOPorts will not violate the type resolution protocol. DDEIOPort is not necessary to facilitate communication between actors executing in a DDE model; standard TypedIOPorts are sufficient in most communication. DDEIOPorts become useful when the time stamp to be associated with an outgoing token is
344
greater than the current time of the sending actor. Hence, DDEIOPorts are only useful in conjunction with delay actors (see "Enabling Communication: Advancing Time" on page 18-340, for a definition of delay actor). Most polymorphic actors available for Ptolemy II are not delay actors.
18.4.3 Feedback Topologies
In order to execute feedback topologies that will not deadlock, FeedBackDelay actors must be used. FeedBackDelay is found in the DDE kernel package. FeedBackDelay actors do not perform computation; instead, they increment the time stamps of tokens that flow through them by a specified delay. The delay value of a FeedBackDelay actor must be chosen to be less than the delta time of the feedback cycle in which the FeedBackDelay actor is contained. Elaborate delay values can be specified by overriding the getDelay() method in subclasses of FeedBackDelay. An example can be found in ptolemy/domains/dde/demo/LocalZeno/ZenoDelay.java.
A difficulty found in feedback cycles occurs in the initialization of a model's execution. In figure 18.5 we see that even if actor B is a FeedBackDelay actor, the system will deadlock if the first event is created by A, since C will block waiting for an event from B. To alleviate this problem, a special time stamp value has been reserved: TimedQueueReceiver.IGNORE. When an actor encounters an event with a time stamp of IGNORE (an ignore event), the actor ignores the event and the input channel it is associated with. The actor then considers the other input channels in determining the next available event. After a non-ignore event is encountered and consumed by the actor, all ignore events are cleared from the receivers. If all of an actor's input channels contain ignore events, then the actor clears all ignore events and proceeds with normal operation. The initialize method of FeedBackDelay produces an ignore event. Thus, in figure 18.5, if B is a FeedBackDelay actor, the ignore event it produces will be sent to C's upper input channel, allowing C to consume the first event of A. The production of null tokens and feedback delays is then sufficient to continue execution from that point on.
Note that the production of an ignore event by a FeedBackDelay actor serves as a major distinction between it and all other actors. If a delay is desired simply to represent the computational delay of a given model, a FeedBackDelay actor should not be used. The intricate operation of ignore events requires special consideration when determining the position of a FeedBackDelay actor in a feedback topology. A FeedBackDelay actor should be placed so that the ignore event it produces will be ignored in deference to the first real event that enters a feedback cycle. Thus, choosing actor D as a FeedBackDelay actor in figure 18.5 would not be useful given that the first real event entering the cycle is created by A.
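The time stamp arithmetic of a FeedBackDelay-style actor, including its special treatment of ignore events, can be sketched as follows. The class name and the -1.0 stand-in for TimedQueueReceiver.IGNORE are illustrative assumptions, not the kernel code.

```java
// Sketch of the two token kinds a FeedBackDelay-style actor handles:
// ordinary events get their time stamps incremented by the delay,
// while an IGNORE event (modeled here with a reserved stamp of -1.0,
// standing in for TimedQueueReceiver.IGNORE) passes through untouched
// so a downstream actor can discard it in favor of the first real event.
class FeedBackDelaySketch {
    static final double IGNORE = -1.0;  // illustrative reserved value
    final double delay;

    FeedBackDelaySketch(double delay) {
        this.delay = delay;
    }

    double stampOut(double stampIn) {
        if (stampIn == IGNORE) {
            return IGNORE;       // ignore events are not delayed
        }
        return stampIn + delay;  // real events advance by the delay
    }
}
```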
FIGURE 18.5. Initializing Feedback Topologies. [The figure shows a feedback cycle involving actors A, B, C and D, in which actor C reads events from both actor A and actor B.]

18.5 The DDE Software Architecture
For a model to have DDE semantics, it must have a DDEDirector controlling it. This ensures that the receivers in the ports are DDEReceivers. Each actor in a DDE model is under the control of a DDEThread. DDEThreads contain a TimeKeeper that manages the local notion of time that is associated with the DDEThread's actor.
18.5.1 Local Time Management
The UML diagram of the local time management system of the DDE domain is shown in figure 18.6; it consists of PrioritizedTimedQueue, DDEReceiver, DDEThread and TimeKeeper. Since time is localized, the DDEDirector does not have a direct role in this process. Note that DDEReceiver is derived from PrioritizedTimedQueue. The primary purpose of PrioritizedTimedQueue is to keep track of a receiver's local time information. DDEReceiver adds blocking read/write functionality to PrioritizedTimedQueue.
When a DDEDirector is initialized, it instantiates a DDEThread for each actor that the director manages. DDEThread is derived from ProcessThread. The ProcessThread class provides functionality that is common to all of the process domains (e.g., CSP, DDE and PN). The directors of all process domains (including DDE) assign a single actor to each ProcessThread. ProcessThreads take responsibility for their assigned actor's execution by invoking the iteration methods of the actor. The iteration
FIGURE 18.6. Key Classes for Locally Managing Time. [The UML diagram shows PrioritizedTimedQueue and its subclass DDEReceiver (which implements ProcessReceiver), DDEThread (which extends ProcessThread and holds a TimeKeeper), TimeKeeper (which manages time for an AtomicActor and contains a RcvrComparator implementing Comparator), and their associations with IOPort.]
methods are prefire(), fire(), and postfire(); ProcessThreads also invoke wrapup() on the actors they control. DDEThread extends the functionality of ProcessThread. Upon instantiation, a DDEThread creates a TimeKeeper object and assigns this object to the actor that it controls. The TimeKeeper gets access to each of the DDEReceivers that the actor contains. Each of the receivers can access the TimeKeeper, and through the TimeKeeper the receivers can determine their relative receiver times. With this information, the receivers are fully equipped to apply the appropriate blocking rules as they get and put time stamped tokens.
DDEReceivers use a dynamic approach to accessing the DDEThread and TimeKeeper. To ensure domain polymorphism, actors (DDE or otherwise) do not have static references to the TimeKeeper and DDEThread by which they are controlled. To ensure simplified mutability support, DDEReceivers do not have a static reference to TimeKeepers. Access to the local time management facilities is accomplished via the Java Thread.currentThread() method. Using this method, a DDEReceiver dynamically accesses the thread responsible for invoking it. Presumably the calling thread is a DDEThread, and appropriate steps are taken if it is not. Once the DDEThread is accessed, the corresponding TimeKeeper can be accessed as well. The DDE domain uses this approach extensively in DDEReceiver.put(Token) and DDEReceiver.get().
DDEReceiver.put(Token) is derived from the Receiver interface and is accessible by all actors and domains. To facilitate local time advancement, DDEReceiver has a second put() method that takes a time argument: DDEReceiver.put(Token, double). This second, DDE-specific version of put() is reached without extensive code by using Thread.currentThread(). DDEReceiver.put() is shown below:

    public void put(Token token) {
        Thread thread = Thread.currentThread();
        double time = _lastTime;
        if (thread instanceof DDEThread) {
            TimeKeeper timeKeeper = ((DDEThread)thread).getTimeKeeper();
            time = timeKeeper.getOutputTime();
        }
        put(token, time);
    }

Similar uses of Thread.currentThread() are found throughout DDEReceiver and DDEDirector. Note that while Thread.currentThread() can be quite advantageous, it means that problems may occur if some methods are called by an inappropriate thread. Such an issue makes code testing difficult.
18.5.2 Detecting Deadlock
The other kernel classes of the DDE domain are shown in figure 18.7. The purpose of the DDEDirector is to detect and (if possible) resolve timed and/or non-timed deadlock of the model it controls. Whenever a receiver blocks, it informs the director. The director keeps track of the number of active processes and the number of processes that are blocked on either a read or a write. Artificial deadlocks are resolved by increasing the queue capacity of write-blocked receivers.
Note the distinction between internal and external read blocks in DDEDirector's package-friendly methods. The current release of DDE assumes that actors that execute according to a DDE model of computation are atomic rather than composite. In a future Ptolemy II release, composite actors will be supported in the DDE domain. At that time, it will be important to distinguish internal and external read blocks. Until then, only internal read blocks are in use.
Notes: DDEReceiver.put(Token) matches the put() signature of the ptolemy.actor.Receiver interface. Polymorphic actors need not be aware of DDE-specific code such as DDEReceiver.put(Token, double).
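The director's deadlock bookkeeping can be reduced to a small counting sketch. DeadlockMonitor is a hypothetical class, not DDEDirector's API; it only shows when a model counts as deadlocked and when that deadlock is artificial (and hence resolvable by growing a queue).

```java
// Director-style deadlock bookkeeping, sketched with plain counters
// (illustrative only, not DDEDirector's actual fields or methods):
// the model is deadlocked when every active process is blocked, and
// the deadlock is "artificial" when at least one blocked process is
// blocked on a write.
class DeadlockMonitor {
    int active;        // number of active processes
    int readBlocked;   // processes blocked on a read
    int writeBlocked;  // processes blocked on a write

    DeadlockMonitor(int active) {
        this.active = active;
    }

    boolean isDeadlocked() {
        return readBlocked + writeBlocked >= active;
    }

    boolean isArtificial() {
        return isDeadlocked() && writeBlocked > 0;
    }
}
```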
18.5.3 Ending Execution
Execution of a model ends if an unresolvable deadlock occurs, if the director's completion time is exceeded by all of the actors it manages, or if early termination is requested (e.g., by a user interface button). The director's completion time is set via the public stopTime parameter of DDEDirector. The completion time is passed on to each DDEReceiver. If a receiver's receiver time exceeds the completion time, then the receiver becomes inactive. If all receivers of an actor become inactive and the actor is not a source actor, then the actor ends execution and its wrapup() method is called. In such a scenario, the actor is said to have terminated normally.
Early terminations and unresolvable deadlocks share a common mechanism for ending execution. Each DDEReceiver has a boolean _terminate flag. If the flag is set to true, then the receiver will throw a TerminateProcessException the next time any of its methods are invoked. TerminateProcessException is part of the ptolemy/actor/process package, and ProcessThreads know to end an actor's execution if this exception is caught. In the case of an unresolvable deadlock, the _terminate flag of all blocked receivers is set to true. The receivers are then awakened from blocking, and each throws the exception.
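The _terminate-flag protocol can be sketched with a minimal receiver. SketchReceiver and its plain RuntimeException are illustrative stand-ins for DDEReceiver and TerminateProcessException, not the real classes.

```java
// Sketch of the terminate-flag protocol (names are illustrative, not
// the real kernel signatures): once the flag is set, the next access
// to the receiver throws, and the owning thread catches the exception
// to end the actor's execution.
class SketchReceiver {
    private volatile boolean terminate = false;

    void requestFinish() {
        terminate = true;
    }

    Object get() {
        if (terminate) {
            // stands in for ptolemy.actor.process.TerminateProcessException
            throw new RuntimeException("terminated");
        }
        return "token";
    }
}
```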
FIGURE 18.7. Additional Classes in the DDE Kernel. [The UML diagram shows DDEDirector (extending ProcessDirector, with the stopTime parameter and read/write block bookkeeping), DDEIOPort (extending TypedIOPort, with broadcast(token, sendTime) and send(channel, token, sendTime)), and FeedBackDelay and DDEActor (both extending TypedAtomicActor; the former with nullDelay and realDelay parameters and getDelay()/setDelay(), the latter with getLastPort() and getNextToken()).]
18.6 Technical Details
18.6.1 Synchronization Hierarchy
Previously we discussed in great detail the notions of timed and non-timed deadlock. Separate from these is a different kind of deadlock that can be inherent in a modeling environment if the environment is not designed properly. This kind of deadlock can occur if a system is not thread safe. Given the extensive use of Java threads throughout Ptolemy II, great care has been taken to ensure thread safety; we want no bugs to exist that might lead to deadlock based on the structure of the Ptolemy II modeling environment.
Ptolemy II uses monitors to guarantee thread safety. A monitor is a mechanism for ensuring mutual exclusion between threads that have access to a given portion of code. To ensure mutual exclusion, a thread must acquire the monitor (or lock) in order to access a given portion of code. While a thread owns a lock, no other thread can access the corresponding code. There are several objects that serve as locks in Ptolemy II. In the process domains, there are four primary objects upon which locking occurs: Workspace, ProcessReceiver, ProcessDirector and AtomicActor. The danger of having multiple locks is that separate threads can acquire the locks in competing orders, and this can lead to deadlock. A simple illustration is shown in figure 18.8. Assume that both lock A and lock B are necessary to perform a given set of operations and that both thread 1 and thread 2 want to perform the operations. If thread 1 acquires A and then attempts to acquire B while thread 2 does the reverse, then deadlock can occur.
There are several ways to avoid this problem. One technique is to combine locks so that large sets of operations become atomic. Unfortunately, this approach is in direct conflict with the whole purpose of multi-threading: as larger and larger sets of operations share a single lock, the concurrent program approaches, in the limit, a sequential program.
Another approach is to adhere to a hierarchy of locks. A hierarchy of locks is an agreed-upon order in which locks are acquired. In the above case, it may be enforced that lock A is always acquired before lock B. A hierarchy of locks will guarantee thread safety [44]. The process domains have an unenforced hierarchy of locks. It is strongly suggested that users of Ptolemy II process domains adhere to this suggested locking hierarchy. The hierarchy specifies that locks be acquired in the following order:

    Workspace > ProcessReceiver > ProcessDirector > AtomicActor
The way to apply this rule is to prevent synchronized code in any of the above objects from making a call to code that is to the left of the object in question.

FIGURE 18.8. Deadlock Due to Unordered Locking. [Thread 1 and thread 2 each need both lock A and lock B, but acquire them in opposite orders.]
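The hierarchy rule can be demonstrated with two ordinary Java monitors. In this self-contained sketch (class and method names invented for illustration), both threads honor the same acquisition order, so the circular wait of figure 18.8 cannot occur.

```java
// Lock-ordering demo with plain Java monitors: both threads acquire
// lockA strictly before lockB, so the circular wait of figure 18.8 is
// impossible and both threads always finish. (Illustrative only; in
// Ptolemy II the objects of the hierarchy play the roles of the locks.)
class LockOrderDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int count = 0;

    static void work() {
        synchronized (lockA) {        // always acquired first
            synchronized (lockB) {    // always acquired second
                count++;
            }
        }
    }

    static int runDemo() {
        Thread t1 = new Thread(LockOrderDemo::work);
        Thread t2 = new Thread(LockOrderDemo::work);
        t1.start();
        t2.start();
        try {
            t1.join();                // both threads terminate: no deadlock
            t2.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return count;
    }
}
```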
PN Domain
Author: Mudit Goel
19.1 Introduction
The process networks (PN) domain in Ptolemy II models a system as a network of processes that communicate with each other by passing messages through unidirectional first-in-first-out (FIFO) channels. A process that tries to read from an empty channel blocks until a message becomes available on it. This model of computation is deterministic in the sense that the sequence of values communicated on the channels is completely determined by the model. Consequently, a process network can be evaluated using a completely parallel schedule, a completely sequential schedule, or any schedule in between, always yielding the same output results for a given input sequence.
PN is a natural model for describing signal processing systems, in which infinite streams of data samples are incrementally transformed by a collection of processes executing in parallel. Embedded signal processing systems are good examples of such systems. They are typically designed to operate indefinitely with limited resources. This behavior is naturally described as a process network that runs forever but with bounded buffering on the communication channels whenever possible. PN can also be used to model concurrency in the various hardware components of an embedded system. The original process networks model of computation can model the functional behavior of these systems and test them for functional correctness, but it cannot directly model their real-time behavior. To address this, we have extended the PN model to include a notion of time.
Some systems might display adaptive behavior, such as migrating code, agents, and arrivals and departures of processes. To support this adaptive behavior, we provide a mutation mechanism that supports addition, deletion, and changing of processes and channels. With untimed PN this might introduce nondeterminism, while with timed PN it remains deterministic.
The PN model of computation is a superset of the synchronous dataflow model of computation (see the SDF Domain chapter). Consequently, any SDF actor can be used within the PN domain. Similarly, any domain-polymorphic actor can be used in the PN domain. Unlike in SDF, however, a separate process is created for each of these actors. These processes are implemented as Java threads [66]. The software architecture for PN is described in section 19.3 and the finer technical details are explained in section 19.4.
19.2 Process Network Semantics
19.2.1 Asynchronous Communication
Kahn and MacQueen [41][42] describe a model of computation where processes are connected by communication channels to form a network. Processes produce data elements, or tokens, and send them along a unidirectional communication channel, where they are stored in a FIFO queue until the destination process consumes them. This is a form of asynchronous communication between processes. Communication channels are the only method processes may use to exchange information. A set of processes that communicate through a network of FIFO queues defines a program.
Kahn and MacQueen require that execution of a process be suspended when it attempts to get data from an empty input channel (blocking reads). Hence, a process may not poll a channel for the presence or absence of data. At any given point, a process is either doing some computation (enabled) or it is blocked waiting for data (read blocked) on exactly one of its input channels; it cannot wait for data from more than one channel simultaneously. Systems that obey this model are determinate: the history of tokens produced on the communication channels does not depend on the execution order. Therefore, the results produced by executing a program are not affected by the scheduling of the various processes.
If all the processes in a model are blocked while trying to read from some channel, then we have a real deadlock: none of the processes can proceed. Real deadlock is a program state that occurs irrespective of the schedule chosen for the processes in a model. This characteristic is guaranteed by the determinacy property of process networks.
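Blocking reads, and the resulting determinacy of the consumed sequence, can be sketched with a standard Java BlockingQueue standing in for a Kahn channel. This is an illustration of the semantics, not the PN kernel implementation; the class name is invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A Kahn-style channel sketched with a bounded BlockingQueue: take()
// blocks on an empty channel (the blocking read), and put() blocks on
// a full one. However the two threads are scheduled, the consumed
// sequence is always 0,1,2,3,4 -- the determinacy property.
class KahnSketch {
    static String run() {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(2);
        StringBuilder out = new StringBuilder();
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    channel.put(i);              // blocking write when full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            for (int i = 0; i < 5; i++) {
                out.append(channel.take());      // blocking read when empty
            }
            producer.join();
        } catch (InterruptedException e) {
            return "";
        }
        return out.toString();
    }
}
```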
19.2.2 Bounded Memory Execution
The high level of concurrency in process networks makes it an ideal match for embedded system software and for modeling hardware implementations. A characteristic of these embedded applications and hardware processes is that they are intended to run indefinitely with a limited amount of memory. A problem, then, is that the Kahn-MacQueen semantics do not guarantee bounded memory execution of process networks, even when it is possible for the application to execute in bounded memory. Hence, bounded memory execution of process networks becomes crucial to their usefulness for hardware and embedded software.
Parks [69] addresses this aspect of process networks and provides an algorithm to make a process network application execute in bounded memory whenever possible. He provides an implementation of the Kahn-MacQueen semantics using blocking writes that assigns a fixed capacity to each FIFO channel and forces processes to block temporarily if a channel is full. Thus a process now has three states: running (executing), read blocked, or write blocked, and a process may not poll a channel for either data or room.
The introduction of blocking writes can cause a new form of deadlock, in which all processes in a model are blocked either on a read or on a write to a channel, with at least one process blocked on a write. This is called artificial deadlock. Unlike a real deadlock, Parks has shown that a program can continue to make progress on detection of an artificial deadlock by increasing the capacity of the channels on which processes are blocked on a write. In particular, Parks chooses to increase only the capacity of the channel with the smallest capacity among the channels on which processes are write blocked, to keep the overall memory required by the channels to a minimum.
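Parks' channel-selection rule is simple enough to sketch directly. ParksRule is a hypothetical helper, not kernel code: given the capacities of the write-blocked channels, it picks the smallest one to enlarge.

```java
// Parks' rule, sketched (illustrative helper, not the PN kernel):
// among the channels whose writers are blocked, grow only the one
// with the smallest current capacity, keeping total buffer memory
// minimal.
class ParksRule {
    // capacities[i] is the capacity of the i-th write-blocked channel;
    // returns the index of the channel to enlarge.
    static int channelToGrow(int[] capacities) {
        int best = 0;
        for (int i = 1; i < capacities.length; i++) {
            if (capacities[i] < capacities[best]) {
                best = i;
            }
        }
        return best;
    }
}
```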
19.2.3 Time
In real-time systems and embedded applications, the real-time behavior of a system is as important as its functional correctness. Developers can use process networks to test the functional correctness of applications, but the model lacks a notion of time. One solution is for the developer to use some other, timed model of computation, such as DE, to test timing behavior. Another solution is to extend the process networks model of computation with time, as we have done in Ptolemy II. This extension is based on the Pamela model [27], which was originally developed for performance modeling of parallel systems using Dijkstra's semaphores.
In the timed PN domain, time is global. That is, all processes in a model share the same time, referred to as the current time or model time. A process can explicitly wait for time to advance by delaying itself for some period from the current time. When a process delays itself, it is suspended until time has sufficiently advanced, at which point it wakes up and continues to execute. If a process delays itself for zero time, it simply continues to execute.
In the timed PN domain, time changes only at specific moments and never during the execution of a process. The time a process observes can only advance when the process is in one of the following two states:
1. The process is delayed and is explicitly waiting for time to advance (delay block).
2. The process is waiting for data to arrive on one of its input channels (read block).
The global time advances when all processes are blocked on either a delay or a read from a channel, with at least one process delayed. This state of the program is called a timed deadlock. The fact that at least one process is delayed distinguishes a timed deadlock from other deadlocks. In case of a timed deadlock, the current time is advanced until at least one process is woken up.
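The time-advance rule at a timed deadlock reduces to taking a minimum. TimedPNSketch is illustrative only, not the TimedPNDirector API: given the wake-up times of the delayed processes, model time jumps to the earliest of them.

```java
// Timed-deadlock resolution, sketched (not the real TimedPNDirector
// code): when every process is either read blocked or delayed, and at
// least one is delayed, global model time jumps to the earliest
// wake-up time among the delayed processes.
class TimedPNSketch {
    static double nextModelTime(double now, double[] wakeUpTimes) {
        double next = Double.POSITIVE_INFINITY;
        for (double t : wakeUpTimes) {
            if (t < next) {
                next = t;
            }
        }
        return Math.max(now, next);  // time never moves backwards
    }
}
```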
19.2.4 Mutations
The PN domain tolerates mutations, which are run-time changes in the model structure. Normally, mutations are realized as change requests queued with the director or manager. In PN there is no determinate point where mutations can occur. The only determinate point is a real deadlock. However, performing mutations only at that point is impractical, as a real deadlock might never occur. For example, a model with even one non-terminating source never experiences a real deadlock. Therefore, mutations are performed as soon as they are requested (if they are queued with the director) or when a real deadlock occurs (if they are queued with the manager). Since we do not know when these requests are served, the process network can be in different schedule states when mutations are performed. This introduces non-determinism in PN. The details of the implementation are presented in section 19.3. In timed PN, however, requests for mutations are not processed until there is a timed deadlock. Because the occurrence of a timed deadlock is determinate, mutations in timed PN are determinate.
19.3 The PN Software Architecture
19.3.1 PN Domain
The PN domain kernel is realized in the package ptolemy.domains.pn.kernel. A UML static structure diagram of the architecture used to realize the PN domain is shown in figure 19.1 (see appendix A of chapter 1). In the following sections we highlight elements from the UML diagram, thereby explaining the implementation of the PN domain in detail.
19.3.2 The Execution Sequence
In process networks, each node is a separate process. In the PN domain in Ptolemy II, this is achieved by letting each actor have its own separate thread of execution, based on native Java threads [66][44]. Process network processes are instances of ptolemy.actor.process.ProcessThread.
BasePNDirector: This is the base class for directors that govern the execution of a CompositeActor with Kahn process networks (PN) semantics. This base class attaches the Kahn-MacQueen process networks semantics to a composite actor. This director does not support mutations or a notion of time. It provides only a mechanism to perform blocking reads and writes using bounded memory execution whenever possible. It is capable of handling both real and artificial deadlocks.
The execution sequence starts with a call to the initialize() method of the director. This method creates the receivers in the input ports of the actors for all the channels, creates a thread for each actor, and initializes these actors. It also sets the count of active actors in the model to the number of actors in the composite actor. This count is used in the detection of deadlocks and termination. The next step in the sequence is a call to the prefire() method of the director. This method starts up all the created threads. In PN, this method always returns true. The fire() method of the director is called next. At this stage, the director resolves artificial deadlocks as soon as they arise, using Parks' algorithm as explained in section 19.2.2. When a real deadlock is detected, the method returns. The last stage in the execution sequence is the call to the postfire() method of the director. This method returns false if the composite actor containing the director has no input ports; otherwise, it returns true. Returning true implies that if some new data is provided to the composite actor, its execution can resume.
Returning false implies that this composite actor will not be fired again. In that case, the executive director or the manager will call the wrapup() method of the top-level composite actor, which in turn calls the wrapup() method of the director. This causes the director to terminate the execution of the composite actor. Details of termination are discussed in section 19.3.4.
PNDirector: PNDirector is akin to BasePNDirector with one additional capability: it supports mutations of a process network graph. Mutations are processed as soon as they are requested. The point at which the mutations are processed depends on the schedule of the threads in the model. This might introduce non-determinism to the model.
TimedPNDirector: TimedPNDirector has two capabilities that distinguish it from BasePNDirector: it introduces the notion of global time to the model, and it allows for deterministic mutations. Mutations are performed at the earliest timed deadlock that occurs after they are queued. Since the occurrence of a timed deadlock is deterministic, performing mutations at this point makes mutations deterministic.
Execution of an Actor: A separate thread is responsible for the execution of each actor in PN. This thread is started in the prefire() method of the director. After starting, this thread repeatedly calls the prefire(), fire(), and postfire() methods of its actor.
FIGURE 19.1. The PN kernel classes. [The UML diagram shows ProcessThread (extending PtolemyThread and holding references to an Actor, a ProcessDirector and a Manager), BasePNDirector (extending ProcessDirector, with constructors and process-listener methods), and PNQueueReceiver (extending QueueReceiver and implementing ProcessReceiver), together with CompositeActor, Manager and the Actor interface.]
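The iteration contract that each thread applies to its actor can be sketched against a minimal actor interface. SketchActor and IterationLoop are illustrative simplifications of the real Actor interface and ProcessThread.run(), not the Ptolemy II code.

```java
// The process-thread iteration contract, sketched (illustrative
// simplification of ProcessThread.run()): prefire/fire/postfire are
// repeated until postfire returns false, then wrapup ends the actor.
interface SketchActor {
    boolean prefire();
    void fire();
    boolean postfire();
    void wrapup();
}

class IterationLoop {
    static int run(SketchActor a) {
        int iterations = 0;
        while (a.prefire()) {
            a.fire();
            iterations++;
            if (!a.postfire()) {
                break;          // the actor asks not to be fired again
            }
        }
        a.wrapup();
        return iterations;
    }
}
```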
token A unit of data that is communicated by actors. This was called a particle in Ptolemy Classic.
topology The structure of interconnections between entities (via relations) in a Ptolemy II model. See clustered graph.
transparent For an entity or port, not opaque. That is, deep traversals of the topology pass right through its boundaries.
transparent composite actor A composite actor with no local director.
transparent port The port of a transparent composite entity. Deep traversals of the topology see right through such a port.
type constraints The declared constraints on the token types that an actor can work with.
type resolution The process of reconciling type constraints prior to running a model.
undeclared type Capable of working with any type of token. This was called anytype in Ptolemy Classic.
universe The Ptolemy Classic name for a model.
width of a port The sum of the widths of the relations linked to it, or zero if there are none.
width of a relation The number of channels supported by the relation.
wormhole The Ptolemy Classic name for an opaque composite actor.
369
blndex - in UML 17
actor.util package 11, 163, 164 actors 4, 109, 155, 156 acyclic directed graphs 197 add() method Token class 177 addChangeListener() method NamedObj class 152 addExecutionListener() method Manager class 172 AddSubtract actor 27, 92, 97, 313 addToScope() method Variable class 184 ADL3 ADS 2 advancing time CSP domain 321 aggregation association 134 aggregation UML notation 19 allowLevelCrossingConnect() method CompositeEntity class 142 analog circuits 5 analog electronics 1 Andrews 317 animated plots 227 anonymous inner classes 293 ANYTYPE 207 anytype 369 anytype particle 15 applet 34, 369 applets 9, 33, 224 using plot package 221 appletviewer 77, 224 application 369 application framework 155 applications 9, 33 arc 36, 134 architecture 3 architecture description languages 3 architecture design language 3 archive 78 archive applet parameter 229 arithmetic operators 177 arithmetic operators in expressions 184
Symbols ! in CSP 321 # in UML 17 " 40 ""charts 6 + in UML 17 ? in CSP 321 @exception 128 @param 128 _createRunControls() method Ptolemy/Applet class 73 director member 69 _execute() method ChangeRequest class 152 _go() method DEApplet class 76 manager member 69 _newReceiver() method IOPort class 162 toplevel member 69
A absolute type constraint 113 Absolute Value actor 97 abstract class 19 abstract syntax 13, 36, 133, 369 abstract syntax tree 193 abstraction 37, 139 acquaintances 156 action methods 94, 118, 167, 369 active processes 330 actor 165, 369 Actor interface 13, 165, 166 actor libraries 78 actor library 24 actor package 9, 90, 156 actor.event package 9 actor.gui package 9, 68, 89, 91, 92, 102, 103 actor.lib package 11, 90, 100, 102, 113 actor.process package 11, 173, 174 actor.sched package 11, 173, 174 570
arraycopy method 315 ArrayFIFOQueue class 314, 315 arrays in expressions 185 Array Token class 177, 210 Array Type class 211 artificial deadlock 340, 352 associations 19 AST 193 ASTPtBitwiseNode class 195 ASTPtFunctionallfNode class 195 ASTPtFunctionNode class 194, 195 ASTPtLeafNode class 195 ASTPtLogicalNode class 195 ASTPtMethodCallNode class 195 ASTPtProductNode class 195 ASTPtRelationalNode class 196 ASTPtRootNode class 195 ASTPtSumNode class 195 ASTPtUnaryNode class 196 asynchronous communication 163, 352 asynchronous message passing 7, 157 atomic actions 4 atomic actor 369 atomic communication 318 AtomicActor class 13, 165, 166 ATTLIST in DTD 236 attribute 369 Attribute class 138, 180 attributeChanged() method NamedObj class 115, 183 Poisson actor 116 Scale actor 117 attributeListO method NamedObj class 138 attributes 12, 17, 180 attributes in XML 40 attributeTypeChanged() method NamedObj class 116, 183 audio 11 Average actor 81, 105, 119, 121, 122, 123
B Backus normal form 193 balance equations 309 bang in CSP 321 barGraph element PlotML 239 Bars command 242 base class 18 BaseType class 47 BaseType.NAT 211
BDF 7 begin() method Ptolemy 0 167 Bernoulli 121 Bernoulli actor 99, 102, 120, 121 bidirectional ports 159, 165 bin directory 33 bin element PlotML 239 binary format plot files 221 bison 193 bitwise operators in expressions 184 block 369 block diagrams 9 block-and-arrow diagrams 4 blocked processes 330 blocking reads 339, 352 blocking receive 317 blocking send 317 blocking writes 339, 352 BNF 193 body of an element in XML 40 boolean dataflow 7 BooleanMatrixToken class 176 BooleanToken class 176 bottom-up parsers 193 bounded buffering 351 bounded memory 305, 352 boundedness 7 broadcast() method 159 DEIOPort class 292, 295, 296 browser 34, 369, 371 bubble-and-arc diagrams 4 buffer 163 bus 157 bus contention 321 bus widths and transparent ports 162 busses, unspecified width 160
C C 2 C++ 2 calculus of communicating systems 4, 318 calendar queue 5, 11, 290 CalendarQueue class 164 CCS 4, 318 CDATA 44 CDO 319, 335 Chandy 339 change listeners 151
change request 59 change requests 353 changed() method QueryListener interface 75 changeExecuted() method ChangeListener interface 151 changeFailed() method ChangeListener interface 151 ChangeListener interface 152 ChangeRequest class 151, 152, 293 channel 156, 369 channels 110 check box entry 81 checkTypes() method TypedCompositeActor class 212 chooseBranch() method CSPActor class 325 CIF 319, 326, 335 circular buffer 315 class attribute in MoML 40 class diagrams 17 class element 49 class names 21, 126 CLASSPATH environment variable 70 clipboard 224 Clock actor 74, 99, 215 Clock class 69 clone() method NamedObj class 144 Object class 117, 177 Scale actor 118 cloning 143 cloning actors 117 clustered graph 369 clustered graphs 11, 13, 36, 133 code duplication 109 code generation 369 codebase applet parameter 229 coding conventions 124 coin flips 99, 102 Color command 241 comments 125 comments in expressions 185 communicating sequential processes 4, 13, 317 communication networks 287 communication protocol 156, 162 Commutator actor 105, 106 Comparable interface 292 compat package 222, 243 compile-time exception 126 compiling applets 70
complete partial orders 197 completion time 341 complex numbers 11 complex numbers in expressions 188 ComplexMatrixToken class 176 ComplexToken class 176 component interactions 3 component-based design 89, 109 ComponentEntity class 13, 139, 140 ComponentPort class 13, 139, 140 ComponentRelation class 13, 139, 140 components 2, 15 CompositeActor 27 composite actor 369 Composite design pattern 19, 139 composite opaque actor 168 CompositeActor class 13, 165, 166 CompositeEntity class 13, 41, 58, 139, 140 concrete class 19 concrete syntax 36, 133, 369 concurrency 2 concurrent computation 156 concurrent design 13 concurrent finite state machines 6 concurrent programming 322 conditional communication 325 conditional do 319 conditional if 319 ConditionalReceive class 325 ConditionalSend class 325 conditionals in expressions 185 configure element 43 Configure ports 28 connect() method CompositeEntity class 142 connection 36, 134, 369 connections making in Vergil 28 conservative blocking 343 consistency 135 Const actor 24, 100 constants expression language 185 constants in expressions 185, 196 constraints on parameter values 115 constructive models 1 constructors in UML 17 container 137, 369 containment 19 contention 321
context menu 27 continuous time modeling 4 continuous-time modeling 13 continuous-time systems 94 contract 210 control key 28 control-clicking 28 convert() method Token class 188 Token classes 180 CORBA 15, 33 cos() method Math class 76 cosine 99, 108 CPO interface 200 CPOs 197 CQComparator interface 164 CrossRefList class 139 CSP 4, 317 CSP domain 93, 163 CSPActor class 325 CSPDirector class 327 CSPReceiver class 327 CT 4 CT domain 94 current time 91, 287, 353 CurrentTime actor 100, 101 Cygwin 223
D DAG 288 dangling ports SDF domain 313 dangling relation 159, 369 data encapsulation 175 data package 11, 175 data polymorphic 178 data polymorphism 89, 109, 369 data rates 309 data.expr package 11, 189 data.type package 11 dataflow 163, 288, 305, 339 DataSet command 242 dataset element PlotML 237, 238 dataurl 221 dataurl applet parameter 229 dataurl parameter PlotApplet class 221 dB actor 97 DCOM 15
DDE 6, 339 DDE domain 300 DDES 339 DDF 7 DE 5, 287 DE domain 94 DEActor class 290, 291, 292 deadlock 7, 148, 150, 309, 352 CSP domain 320 DDE domain 339 DEApplet class 67, 69 DECQEventQueue class 291 DECQEventQueue.DECQComparator class 292 DEDirector class 290, 291 deep traversals 141, 369 deepContains() method NamedObj class 143 deepEntityList() method CompositeEntity class 141, 172 DEEvent class 291, 292 DEEventQueue interface 291 DEEventTag class 291 DefaultExecutionListener class 166, 172 defaultIterations parameter SDF applets 74 defaultStopTime parameter DE applets 73 DEIOPort class 290, 291, 292, 295, 296, 298 delay 288, 292 CSP domain 320 in SDF 306 PN domain 353 SDF domain 310 Delay actor 289 DE domain 292 delay actors DDE domain 340 delay() method CSPActor class 327 delayed processes 330 delayTo() method DEIOPort class 296, 298 deleteEntity element 56 deletePorts element 56 deleteProperty element 57 deleteRelations element 56 delta functions 5 delta time 5, 342 demultiplexer actor 157 dependency loops 194 depth for actors in DE 288 DEReceiver class 291
derived class 18 design 1 design patterns 15 determinacy 7, 163 determinism 317, 351 deterministic 289 DEThreadActor class 300 DETransformer class 292, 298 diamond in toolbar 28 digital electronics 1 digital hardware 5, 287 Dijkstra 322 dining philosophers 321, 322 Dirac delta functions 5 directed acyclic graph 288 directed graphs 36, 134, 197 DirectedAcyclicGraph class 198, 200 DirectedGraph class 198, 199 director 13, 162, 168, 370 Director class 13, 162, 165, 166 director element 51 director library 24 disableActor() method DEDirector class 295 disconnected graphs SDF domain 313 disconnected port 159, 370 discrete event domain 287 discrete-event domain 5 discrete-event model of computation 164 discrete-event modeling 13 discrete-time domain 6 Display actor 24 distributed discrete event 339 distributed discrete event systems 339 distributed discrete-event domain 6 distributed models 6 distributed time 339 Distributor actor 105, 157, 165 divide() method Token class 177, 195 doc element 44, 56 DOCTYPE keyword in XML 34, 39, 41, 49, 50, 233 document type definition 37, 233, 235 domain 155, 370 domain polymorphism 89, 109, 178, 370 domain-polymorphism 15 domains 9, 13 domains package 11, 13 domains.de.kernel package 290 domains.de.lib package 292
doneReading() method Workspace class 150 doneWriting() method Workspace class 150 DoubleCQComparator interface 164 DoubleMatrixToken class 176 doubles 185 DoubleToFix actor 107 DoubleToken class 176 DT 6 DTD 37, 233, 235 dynamic dataflow 7 dynamic networks 165
E E 185 e 185 edges 197 EDIF 36, 133 EditablePlot class 227 EditablePlotMLApplet class 229 EditablePlotMLApplication class 230 element 370 ELEMENT in DTD 234 element in XML 39 embedded systems 1 EMPTY in DTD 236 empty elements in XML 40 encapsulated PostScript 223 encapsulated postscript 224 entities 4, 36, 133 entity 370 Entity class 13, 134, 135 entity in XML 40 EntityLibrary class 58 EPS 223, 224 equals() method Token class 177, 196 Eratosthenes 322, 323 evaluateParseTree() method ASTPtRootNode class 194 evaluation of expressions 181 event 5 event queue 287 events 287 exceptions 126 exceptions in applets 70 executable entities 155 Executable interface 13, 165, 166 executable model 13
executable models 1 execute() method ChangeRequest class 151 execution 94, 167, 370 executionError() method ExecutionListener interface 172 ExecutionEvent class 166 executionFinished() method ExecutionListener interface 81, 172 ExecutionListener class 166 ExecutionListener interface 172 executive director 168, 172, 370 explicit integration algorithms 266 exporting MoML 60 exportMoML() method NamedObj class 59, 60 Expression actor 105 expression evaluation 193 expression language 6, 11, 184 extending 196 expression parser 193 expressions 76 extensible markup language 35, 232
F fail-stop behavior 206 fairness 322 false 185 FBDelay actor 342 FFT 30 FIFO 156, 315, 351 FIFO Queue 11 FIFOQueue class 156, 163, 164, 314 file format for plots 232 file formats 15 File->New menu 24 fill command in plots 223 fillOnWrapup parameter Plotter actor 102 finally keyword 150 finish() method Manager class 169 finished flag 331 finite buffer 163 finite state machines 9 finite-state machine domain 6 fire() method Actor interface 289 Average actor 123 CompositeActor class 172, 173
Director class 172, 173 Executable interface 94, 165 in actors 119 fireAt() method DEActor class 298 DEDirector class 296 Director class 99, 123, 287, 292, 295, 302 fired 30 firing vector 309 firingCountLimit parameter SequenceSource actor 99, 123 first-in-first-out 351 fix function in expression language 189 fixed point data type 189 fixed-point 4 fixed-point semantics 94 fixed-point simulations 16 FixPoint class 189 FixPointFunctions class 189 FixToDouble actor 107 FixToken class 189 floating-point simulations 16 formatting of code 124 fractions 11 FrameMaker 223 FSM 6 full name 137 functional actors 94 functions expression language 185
G galaxy 143, 370 Gaussian actor 28, 101 generalize 18 get() method IOPort class 157 Receiver interface 157 getAttribute() method NamedObj class 138 getColumnCount() method MatrixToken class 188 getContainer() method Nameable interface 137 getCurrentTime() method DEActor class 298 Director class 123 getDirector() method Actor interface 168 getElement() method ArrayToken class 186
getElementAt() method MatrixToken classes 177 getFullName() method Nameable interface 137 getInsideReceivers() method IOPort class 173 getOriginator() method ChangeRequest class 152 getReadAccess() method Workspace class 149 getReceivers() method IOPort class 173 getRemoteReceivers() method 165 IOPort class 162 getRowCount() method MatrixToken class 186 getState() method Manager class 172 getValue() method ObjectToken class 177 getWidth() method IORelation class 162 getWriteAccess() method Workspace class 150 Ghostview 223 global error for numerical ODE solution 266 grammar rules 193 Graph class 198, 199 graph package 11, 197 graphical elements 71 graphical user interface 89, 91 graphics 54 graphs 197 Grid command 241 group element 58 guarded communication 164, 318, 325 guards 6 GUI 89, 91 gui package 11
H hardware 1 hardware bus contention 321 Harel, David 6 Harrison, David 221 hashtable 5 hasRoom() method IOPort class 173 Hasse 200 Hasse diagram 200 hasToken() method
IOPort class 173 heterogeneity 15, 146, 172 Hewlett-Packard 2 hiding 37, 141 hierarchical concurrent finite state machines 9 hierarchical heterogeneity 146, 172 hierarchy 139 higher node 200 histogram 221, 222 Histogram class 227 histogram.bat 223 HistogramMLApplet class 229 HistogramMLApplication class 230 HistogramMLParser class 233 HistogramPlotter actor 103 history 163 Hoare 317, 321 HTML 35, 67, 127, 221, 233 HTTP 78 hybrid systems 6
I i 185 if...then...else... 185 IllegalActionException class 117 IllegalArgumentException class 194 image processing 11 immutability tokens 175 Immutable 149 immutable 137 immutable property 370 imperative semantics 2 implementation 33 implementing an interface 19 implicit integration algorithms 266 import 17 Impulses command 242 in CSP 319 incomparable 179 incomparable types 112 inconsistent models 310 incremental parsing 54, 58 indentation 125 index of links 135 index of links to a port 48 index of links to ports 57 Inequality class 199, 200, 214 InequalitySolver class 200 InequalityTerm interface 198, 200, 214 information-hiding 146
inheritance 18, 109 initial output tokens 119 initial token 310 initialize() method Actor interface 290 Average actor 119 Director class 169 Executable interface 94, 165 in actors 118, 119 input element 52 input port 156 input property of ports 55 inputs transparent ports 160 inside links 37, 139 inside receiver 173 inspection paradox 81 instantaneous reaction 289 integers 185 intellectual property 6 interface 19 interoperability 2, 15 interpreter 11 IntMatrixToken class 176 IntToken class 176 invalidateResolvedTypes() method Director class 116 invalidateSchedule() method DEDirector class 292 Director class 115 IOPort class 156 IORelation class 156, 157 isAtomic() method CompositeEntity class 139 isInput() method 165 isOpaque() method ComponentPort 147 CompositeActor class 168, 172 CompositeEntity class 139, 160 isOutput() method 165 isWidthFixed() method IORelation class 162 iteration 94, 167, 370 iterations 119 iterations parameter 24 SDF applets 74 SDFDirector class 311
J j 185 jar files 78
plot package 222 Java 2, 223 Java Archive File 78 Java Foundation Classes 229 Java Plug-In 67 Java RMI 15 Java Runtime Environment 67 java.lang.Math 196 JavaCC 193 Javadoc 113, 127 Jefferson 343 JFC 229 JFrame class 229 JIT 80 JJTree 193 JPanel class 229 JRE 67 just-in-time compiler 80
K Kahn process networks 7, 163, 339 kernel package 11 kernel.event package 11 kernel.util package 11, 126, 164
L LALR(1) 193 lattice 179 lattices 197 layout manager 71 LEDA 197 length() method ArrayToken class 186 level-crossing links 37, 141, 142 lexical analyzer 193 lexical tokens 193 liberalLink() method ComponentPort class 142 Lines command 241 lingering processes 80 link 36, 134, 135, 370 link element 47, 52 link element and channels 48 link index 48, 57, 135 link() method Port class 142 links in Vergil 28 literal constants 185 liveness 13, 322
LL(k) 193 local director 168, 172 local error for numerical ODE solution 266 Location class 61 lock 148, 332 logarithmic axes for plots 236, 240 logical boolean operators in expressions 184 long integers 185 long integers in expressions 188 LongMatrixToken class 176 LongToken class 176 Lorenz system 274 lossless type conversions 183 Lotos 4, 321 lower node 199
M M/M/1 Queue 322 mailbox 163 Mailbox class 156, 163 make install 79 makefiles 79 managed ownership 137 manager 168, 169, 370 Manager class 13, 166, 169 managerStateChanged() method ExecutionListener interface 172 Marks command 241 marks in ptplot 237 Math class 76 math functions 196 math package 11, 189 mathematical graphs 36, 134, 197 Matlab 2 matrices 11 matrices in expressions 185 matrix tokens 177 MatrixToken class 176, 186 MatrixViewer actor 104 Maximum actor 97, 98, 99, 106, 108 mechanical components 5 mechanical systems 5 media package 11 Mediator design pattern 36, 134 MEMS 1, 5, 276 Merge actor 292 Message class 228 message passing 156 methods expression language 186 microaccelerometer 276
microelectromechanical systems 1 Microstar XML parser 58 microstep 288 microwave circuits 5 Milner 318 Minimum actor 98 Misra 339 mixed signal modeling 5 ML 15 MoC 317 modal model 5 modal models 6 model 370 model element 41 model of computation 2, 155, 156, 370 model time 287, 320, 353 modeling 1 models of computation mixing 172 modulo() method Token class 177, 195 MoML 33, 370 exporting 60 moml package 12, 58, 59, 62 MoMLAttribute class 61 MoMLChangeRequest class 59, 152 monitor 148 monitors 13, 15, 332 monomorphic 113 monotonic functions 163 multiple containment 42 multiply() method Token class 114, 177, 195 MultiplyDivide actor 29, 98 multiport 27, 110, 157, 163, 370 multiport property of ports 55 multiports SDF domain 313 multiports in MoML 46 mutability CSP domain 322 mutation 11, 15 mutations 150, 351, 353 DE domain 292, 298 mutual exclusion 148, 332
N name 137 name attribute in MoML 40 name server 165 Nameable interface 12, 126, 135, 137
NamedList class 139 NamedObj class 12, 41, 60, 135, 137, 152 NameDuplicationException class 117 namespaces 58 naming conventions 21 NaT 219 newPort() method Entity class 46 newReceiver() method Director class 162 newRelation() method CompositeEntity class 48 noColor element PlotML 237 node classes (parser) 195 nodes 197 noGrid element PlotML 237 non-determinism 317 nondeterminism with rendezvous 164 nondeterministic choice 318 non-timed deadlock 340 notifyAll() method Object class 332 null messages 340 Numerical type 180
O object model 17 object modeling 15 object models 9 object-oriented concurrency 155 object-oriented design 89 ObjectToken class 175, 176, 177 OCCAM 321 Occam 4 ODE solvers 13 one() method Token class 178 oneRight() method MatrixToken classes 178 opaque 370 opaque actors 168, 172 opaque composite actor 168, 173, 370 opaque composite actors 15 opaque composite entities 146 opaque port 141 operator overloading 184 optimistic approach 343 originator in change requests 151
oscilloscope 237 output property of ports 55 overloaded 127 override 18
P package 370 package diagrams 17 package structure 9 packages 13 Pamela 353 Panel class 227 parallel discrete event simulation 343 parameter 181, 370 Parameter class 74, 181 parameters 11, 74, 114 constraints on values 115 Parks 352 parse tree 193 parse() method MoMLParser class 58 parsed character data 234 parser 193 partial order 15 partial orders 197 partial recursive functions 6 particle 370 pathTo attribute vertex element 53 pause() method CSPDirector class 331 Manager class 172 PCDATA in DTD 234 PDES 343 period parameter Clock actor 74 persistent file format 59 PI 185 pi 185 Placeable interface 71, 91 plot actors 221 Plot class 71, 227, 228 plot package 12, 221 plot public member Plotter class 71 PlotApplet class 228 PlotApplication class 228, 229 PlotBox class 227, 228, 229 PlotBoxMLParser class 233 PlotFrame class 228, 229 PlotLive class 227, 228
PlotLiveApplet class 228 PlotML 43, 222, 227, 232, 235 plotml package 227, 233 PlotMLApplet class 229 PlotMLApplication class 230 PlotMLFrame class 229 PlotMLParser class 233 PlotPoint class 227, 228 Plotter actors 30 Plotter class 91 plotting 12 Plug-In 67 plug-in 80 PN 7, 351 PN domain 93 Poisson actor 101, 115, 116 polymorphic actors 178 polymorphism 15, 89, 109, 207 data 178 domain 178 port 370 type of a port 47 Port class 13, 134, 135 port element 45 port toolbar button 27 ports 36, 110, 133 postfire() method Actor interface 289 Average actor 123 CompositeActor class 169 DE domain 296 DEDirector class 302 Executable interface 94, 165 in actors 119 Server actor 298 PostScript 223 precedences 5 precondition 126 prefire() method Actor interface 289 CompositeActor class 172 DE domain 295 Executable interface 94, 165 in actors 119 Server actor 298 prefix monotonic functions 7 prefix order 163 preview data in EPS 223 prime numbers 323 priorities 323
priority of events in DE 288 priority queue 5, 11 private methods 17 process algebras 37, 141 process domains 173 process level type system 15 Process Network Semantics 352 process networks 13, 163, 305, 351 process networks domain 7, 172 processing instruction 44, 45 process-oriented domains 94 ProcessThread class 328 production rules 193 property element 42, 56 protected members and methods 17 protocol 156 protocols 89 PTII environment variable 33, 222, 223, 229 Ptolemy Classic 13, 370 ptolemy executable 33 Ptolemy II 370 Ptolemy Project 371 ptolemy.data.expr package 189 PtolemyApplet class 68, 70 PtParser 193 ptplot 221, 222, 230 ptplot.bat 223 PUBLIC keyword in XML 34, 39, 41, 233 public members and methods 17 Pulse actor 101, 215 pure event 287 pure property 46 pure property in MoML 43 put() method Receiver interface 156 pxgraph 221, 222, 243 pxgraph.bat 223 PxgraphApplication class 243 pxgraphargs parameter PxgraphApplet class 222 PxgraphParser class 243
Q quantize() function in expression language 189 Quantizer actor 98 Query class 74 query in CSP 321 QueryListener interface 75 queue 163, 315, 322 queueing systems 287 QueueReceiver class 156, 157, 163
quotation marks in MoML attributes 40
R race conditions 148 Ramp actor 101, 102, 103 random() function in expressions 186 Rapide 3 read blocked 352 read blocks 340 read/write semaphores 15 readers and writers 149 read-only workspace 150 real deadlock 320, 340, 352 RealToComplex actor 106, 107, 108 receiver wormhole ports 173 Receiver interface 156 receiver time 340 record tokens in expressions 185 Recorder actor 81, 104 RecordToken 210 reduced-order modeling 15 reference implementation 58 reflection 194, 196 registerClass() method PtParser class 196 registerConstant() method PtParser class 196 registerFunctionClass() method PtParser class 195 relation 371 in Vergil 28 Relation class 13, 135 relation element 47 relational operators in expressions 184 relations 4, 36, 133 relative type constraint 112 reloading applets 80 removeChangeListener() method NamedObj class 152 removing entities 56 removing links 57 removing ports 56 removing relations 56 rename element 56 rendezvous 4, 93, 157, 163, 317, 333 rendition of entities 53 report() method PtolemyApplet class 70 reporting errors in applets 70 requestChange() method
NamedObj class 152 requestChange() method 59 Director class 151, 292 Manager class 292 REQUIRED in DTD 236 resolved type 207, 371 resolveTypes() method Manager class 214 resource contention 321 resource management 317 resume() method CSPDirector class 331 Manager class 172 re-use 89 ReuseDataSets command 242 right click 27 rollback 272 RTTI 210 Rumbaugh 137 Run Window 24 run() method Manager class 169 Runtime Environment 67 run-time exception 126 run-time type checking 206, 210 run-time type conversion 206 run-time type identification 210 RuntimeException interface 126
S Saber 2, 5 safety 13 SampleDelay actor 306 scalable vector graphics 54 Scalar type 180 ScalarToken class 176 Scale actor 98, 113, 115, 116, 117, 118 Scheduler class 313 schedulers 173 scheduling 167, 311, 313 scope 181 scope in expressions 185 Scriptics Inc. 144 scripting 184 SDF 7, 305 SDF scheduler 29 SDFAtomicActor class 315 SDFDirector class 311 SDFReceiver class 313, 314 SDFScheduler class 311, 313 SDL 7
semantics 2, 13 send() method DEIOPort class 292, 295, 296, 298 IOPort class 156 TypedIOPort class 214 SequenceActor interface 91, 99, 292 SequencePlotter actor 28, 30, 104 SequencePlotter class 91 SequenceSource actor 124 SequenceSource class 123 Server actor 292, 297 servlet 371 servlets 33 setConnected() method Plot class 77 setContainer() method kernel classes 135 Port class 105 setContext() method MoMLParser class 55 setCurrentTime() method Director class 123, 327 setExpression() method Parameter class 76 Variable class 181 setImpulses() method Plot class 77 setMarksStyle() method Plot class 77 setMultiport() method IOPort class 157 setPanel() method Placeable interface 71, 91 setReadOnly() method Workspace class 150 setSize() method Plot class 71 PlotBox class 232 setStopTime() method DEDirector class 73, 290 Settable interface 138 setTitle() method Plot class 77 setToken() method Variable class 181 setToplevel() method MoMLParser class 55, 59 setTypeAtLeast() method Variable class 183
setTypeEquals() method 117 setTypeEquals() method Variable class 181 setTypeSameAs() method Variable class 183 setWidth() method IORelation class 157, 162 setXLabel() method Plot class 77 SGML 35, 233 shallow copy 117 shell script 223 sieve of Eratosthenes 322, 323 signal processing 351 signal processing library 28 simulation 1, 33 simulation time 287 Simulink 2, 5 simultaneous events 5, 287, 288 sin() method Math class 76 Sine actor 99, 108, 121 Sinewave actor 28 Sinewave class 62 single port 110 Sink class 109 sinks library 24 size element PlotML 237 software 1 software architecture 3 software components 15 software engineering 15 source actors 100, 102, 103, 123 Source class 109 sources library 24 spaces 125 specialize 18 spectrum 30 Spice 5 spreadsheet 11 SR 7 star 143, 371 starcharts 6 Start menu 80 start tag in XML 40 start time 290 startRun() method Manager class 169 state 6, 371 Statecharts 6
stateless actors 94 static schedule 169 static schedulers 173 static scheduling 308 static structure diagram 12, 90, 91, 134 static structure diagrams 17 static typing 205 StaticSchedulingDirector class 311 stem plot 77 stem plots 238 stop time 290 stopFire() method Executable interface 165 stopTime parameter DE applets 73 TimedSource actor 99 stream 157 string constants 185 StringAttribute class 138 StringToken class 92, 176 stringValue() method Query class 76 StructuredType class 211 subclass 18 subclass UML notation 18 subdomains 15 subpackage 371 subtract() method Token class 177, 195 superclass 18 SVG 54 swing 229 symbol table 193 synchronized keyword 148, 332 synchronous communication 163 synchronous dataflow 7, 13, 305 synchronous dataflow domain 7 synchronous message passing 4, 157, 317 synchronous/reactive models 7 syntax 9 System control panel 223
T Tab character 125 tag 371 tag in XML 40 telecommunications systems 5 terminate() method Director class 331 Executable interface 167 Manager class 169
TerminateProcessException class 331 terminating processes CSP domain 331 testable precondition 126 thread actors DE domain 298 thread safety 137, 147, 148 threads 13, 163 thread-safety 15 threshold crossings 5 tick element PlotML 236 tick marks 226 time 2 CSP domain 320 DDE domain 339 PN domain 353 time deadlock 320 time stamp 5, 164, 287 DDE domain 340 Time Warp system 343 timed deadlock 341 TimedActor interface 91, 99, 123, 292 TimedPlotter actor 71, 104 TimedPlotter class 69, 91 TimedSource actor 123 TimedSource class 124 title element PlotML 234 TitleText command 240 toArray() method MatrixToken class 188 token 110, 371 Token class 92, 93, 121, 175, 176 tokenConsumptionRate parameter port classes 313 tokenInitProduction parameter port classes 313 tokenProductionRate parameter port classes 313 tokens 30, 90, 156 tokens, lexical 193 toolbar 27, 28 tooltips 45 top level composite actor 169 top-down parsers 193 topological sort 198, 288 topology 36, 133, 371 topology mutations 150 transferInputs() method DEDirector class 301
Director class 173 transferOutputs() method Director class 173 Transformer class 91, 109, 113, 114 transitions 6 transitive closure 198, 200 transparent 371 transparent composite actor 371 transparent entities 139 transparent port 371 transparent ports 141, 160 trapped errors 205 trigger input Source actor 99 true 185 tunneling entity 143 type changes for variables 183 type compatibility rule 206 type conflict 208 type constraint 112, 207 type constraints 112, 207, 214, 371 type conversion 210 type conversions 179 type hierarchy 178 type lattice 179 type of a port 47 type resolution 15, 167, 207, 371 type resolution algorithm 219 type system 112 process level 15 type variable 208 Typeable interface 183 typeConstraints() method 214 TypedCompositeActor 27 TypedActor class 211 TypedAtomicActor 211 TypedAtomicActor class 90, 165 TypedCompositeActor 211 TypedCompositeActor class 41, 165 TypedIOPort 211 setting the type in MoML 47 TypedIOPort class 46, 110, 156, 292 TypedIORelation class 48, 156 TypedIORelation 211 TypeLattice class 179 type-polymorphic actor 207 types of parameters 181
U UML 9, 12, 17, 90, 91, 134 package diagram 9
undeclared type 207, 371 undeclared types 211 undirected graphs 197 unified modeling language 17 uniqueness of names 137 universe 371 Unix 223 unlink element 57 untrapped errors 205 util subpackage of the kernel package 151 utilities library 27
V variable 180 Variable class 138 VariableClock actor 102 variables in expressions 185 vector graphics 54 vectors 11 Verilog 5, 9 vertex 36, 134 vertex attribute link element 52 Vertex class 53, 61 VHDL 5, 9 VHDL-AMS 2, 5 View menu 24 visual dataflow 9 visual rendition of entities 53 visual syntax 9
W wait() method Object class 332 Workspace class 150 waitForCompletion() method ChangeRequest class 153 waitForDeadlock() method CSPActor class 327 waitForNewInputs() method DEThreadActor class 300 web server 34, 369, 371 welcome window 24 width of a port 110, 157, 371 width of a relation 49, 57, 157, 371 width of a transparent relation 162 Windows 223 wireless communication systems 165 workspace 149 Workspace class 135, 138, 149 wormhole 15, 147, 168, 172, 371
wrap element PlotML 237 wrapup() method Actor interface 290 Executable interface 94, 165 Wright 3 write blocked 352 write blocks 340
X x ticks 226 xgraph 221, 243 XLabel command 240 XLog command 240 xLog element PlotML 236 XML 15, 33, 222, 232 XML parser 58 XMLIcon class 54 XRange command 240 xRange element PlotML 234 XTicks command 240 xTicks element PlotML 236 XYPlotter actor 104, 105 XYPlotter class 91 Y y ticks 226 yacc 193 YLabel command 240 YLog command 240 yLog element PlotML 236 YRange command 240 YTicks command 240 yTicks element PlotML 236 Z Zeno condition 342 zero delay actors 289 zero() method Token class 178 zero-delay loop 289 zoom in plots 223
U.S. GOVERNMENT PRINTING OFFICE: 2001-610-055-10
MISSION OF AFRL/INFORMATION DIRECTORATE (IF)
The advancement and application of Information Systems Science and Technology to meet Air Force unique requirements for Information Dominance and its transition to aerospace systems to meet Air Force needs.