Specifying Sequence Diagram Editors

- Prerequisites
- Mappings
- Tools
Introduction
The sequence diagram is a good diagram to use to document a system's requirements and to flesh out a system's design. The reason the sequence diagram is so useful is that it shows the interaction logic between the objects in the system in the time order in which the interactions take place.
This document describes how to specify sequence diagram modelers with Sirius. It has been written for software architects who want to specify sequence diagrams on their own meta-models.
This tutorial is based on an example: the specification of a UML sequence diagram editor. The resulting diagram is available in the Obeo UML Behavioral viewpoint (available for free at https://github.com/ObeoNetwork/UML-Modeling ), which is based on the Eclipse Foundation's UML2 meta-model.
Sequence Diagrams Semantics
As their name suggests, sequence diagrams are meant to represent ordered sequences of elements. Typically, they represent events sent and received between some entities over time. The canonical case is a UML Sequence Diagram (where the notation comes from), which represents the messages exchanged between objects in a software system.
The most important consequence of this is that, contrary to what happens on a classical diagram, the relative graphical positions of elements on a sequence diagram have a strong meaning. This is true both for the vertical placement of elements and for the left-to-right order of lifelines. Placing a message above or below another one has a strong implication on the ordering of the events they represent, and thus on the structure of the underlying semantic model which is represented. Sirius works hard to ensure that what you see on your sequence diagram (in terms of vertical ordering of elements and horizontal ordering of lifelines) always corresponds to the semantic ordering of the represented events.
This works both ways:
- Assuming a diagram is synchronized (i.e. you are in Automatic Refresh mode or you manually refreshed it since the last semantic changes), Sirius will always organize the elements on the diagram in a way which is compatible with the semantic ordering of the events: if you see an execution E1 placed above another execution E2, you can be sure the events corresponding to E1 happen before the events of E2 in the semantic model.
- Symmetrically, and perhaps more importantly, moving elements on a sequence diagram may trigger changes in the underlying semantic model to reflect the new event order implied by the positions you changed. This is very different from what happens on other diagrams, where most graphical repositionings of elements are purely cosmetic. Continuing the example above, moving execution E2 graphically above E1 will trigger changes in the semantic model to move the corresponding events of E2 before the events of E1.
Most of the specific features and restrictions of sequence diagrams compared to other diagrams derive from this strong guarantee: at all times, the graphical (vertical) order of the elements you see on the diagram matches exactly the semantic order of the events which exist in the underlying model, and the horizontal order of the instance roles you see on the diagram matches exactly the semantic order of the corresponding elements in the underlying model.
From the specifier's point of view, this means that sequence diagrams can only be defined on meta-models in which you can provide a total ordering of the represented events, and in which you can reorder these elements in a predictable way (see the descriptions of the Event Reorder Tool and Instance Role Reorder Tool for details).
Restrictions and Limitations
In order to provide the strong guarantee described above, some of the features present on normal diagrams are not supported, or even completely disabled, on sequence diagrams. Basically, anything which, on a normal diagram, would make it possible to have meaningful semantic elements not visible on the diagram is forbidden. Allowing this would make it impossible for Sirius to keep consistent track of the 'position' of these invisible elements relative to the ones which are visible.
- Layers: sequence diagrams may define optional layers, as long as they do not make graphical elements appear or disappear on the diagram when they are selected or de-selected. Layers which contribute new tools in the palette for example are fine.
- Filters: filters which may hide elements from a sequence diagram when enabled are not supported.
- Hide/Reveal: hiding elements explicitly is not supported. The actions are disabled in the UI.
- Pin/Unpin: pinning graphical elements has no effect on the automatic layout of sequence diagrams. Even if an element has been marked as pinned, Sirius must be able to move it graphically as needed in order to keep the graphical order of elements in sync with the semantic order. The actions are disabled in the UI.
Prerequisites
As with any Sirius diagram, the semantic model used for a sequence diagram defines some elements and relationships that must be mapped to graphical elements in order to be represented on the sequence diagrams. For the UML2 modeler, the semantic model is defined in .uml files and the mapping in the uml2.odesign Viewpoint Specification file.
The job of the architect is to map the UML2 interactions, life-lines, executions and messages. Even if the support for sequence diagrams in Sirius is not dedicated to UML2, these four kinds of elements (or similar ones) must be provided by the sequence meta-model in order to be represented as sequence diagrams in Sirius.
Interaction
The interaction is the semantic container for all the sequence diagram elements.
In UML2, the interaction is represented by an element of type Interaction.
Lifeline and Instance role
The instance role and the lifeline represent one participant in the interaction.
In UML2, the instance role and the lifeline are represented by a single element of type Lifeline.
Execution
The execution typically represents a period in the participant's lifetime when it is active. An execution is composed of three elements:
- Execution start: the start occurrence of the execution;
- Execution: the execution by itself (the duration of the execution);
- Execution finish: the finish occurrence of the execution.
In UML2, the execution is represented by an element of type Execution Specification; the execution start and finish are defined by the abstract type Occurrence Specification.
Message
The message represents a kind of communication between lifelines of an interaction. A message is composed of three elements:
- Message send (or source): the send occurrence of the message;
- Message: the message itself (the kind of communication, e.g. a synchronous or asynchronous call);
- Message receive (or target): the receive occurrence of the message.
In UML2, the message is represented by an element of type Message; the message send and receive are defined by the abstract type Occurrence Specification.
SingleEventEnd and CompoundEventEnd
Sirius internally manages a list of start/finish execution occurrences and sender/receiver message occurrences defined for each interaction. All these occurrences are event ends contained in the EventEnds list.
An eventEnd contains two properties:
- semanticEvent: points to the semantic event, which can be a message or an execution;
- semanticEnd: points to one connection end of the semantic event, i.e. the semantic element which can be a message sender, a message receiver, an execution start or an execution finish.
There are two kinds of EventEnds:
- SingleEventEnd: an element which is used only as the start/send or finish/receive of a single execution/message. In UML2, an execution can be started or finished by an Execution Occurrence Specification and a message can be sent or received by a Message Occurrence Specification.
- CompoundEventEnd: an element that represents the combination of a message and an execution. This kind of EventEnd exists in order to graphically associate a message with an execution. As the CompoundEventEnd is an EventEnd, it contains the two properties:
  - semanticEvent, which points to a message or an execution;
  - semanticEnd, which points to a MixEnd element.
Depending on how the meta-model is defined, the MixEnd can be represented by:
- one SingleEventEnd: this is the case in UML2, where the Execution Specification can be defined with a start/finish element of type Execution Occurrence Specification or Message Occurrence Specification;
- two SingleEventEnds: one SingleEventEnd which points to the message and one SingleEventEnd which points to the execution.
The Operation_0 message ends on the left border of the Operation_0() execution because Operation_0_receiver is associated to the start of the execution. Otherwise, the message would be associated to the lifeline. For example, Message_3 is a simple message not linked to an execution. Consequently, Message_3_receiver is a SingleEventEnd and is attached to the lifeline.
Hence, for an asynchronous call, we get:
- one SingleEventEnd for the message sending: Operation_0_sender
  - semanticEvent = the Operation_0 message
  - semanticEnd = the Operation_0_sender message occurrence specification
- one CompoundEventEnd for the Operation_0 message receiving and the Operation_0 execution starting, composed of one SingleEventEnd: Operation_0_receiver
  - semanticEvent = the Operation_0 message
  - semanticEnd = the Operation_0_receiver message occurrence specification
- one SingleEventEnd for the execution finish: Operation_0_finish
  - semanticEvent = the Operation_0 execution
  - semanticEnd = the Operation_0_finish execution occurrence specification
Ordering
In a Sirius sequence diagram, the elements are totally ordered.
Internally, Sirius maintains three ordered sets:
- two for the vertical ordering: a graphical set which orders the graphical elements, and a semantic set which orders the semantic elements.
- one for the horizontal ordering: a semantic set which orders the semantic instance roles (the graphical ordering of instance roles is available without heavy computation).
To provide a functional diagram, each semantic/graphical ordering pair must always be kept consistent. Creation and reordering tools must maintain the semantic orderings. More explanations are given in the next section.
Sequence Diagram Description
First, in an odesign file, from an existing viewpoint, you have to create a new kind of representation: Sequence Diagram Description.
As for other representations, you define mandatory properties:
- Id: unique identifier for the diagram type in Sirius
- Label: used to display information to end-user
- Domain class: type of the semantic element representing the sequence diagram container
For a complete description of each property, have a look at the Help > Sirius Specifier Guide > Reference Guide > Representation > Sequence Diagram.
The most important properties to understand are the Ends Ordering and Instance Roles Ordering.
In a sequence diagram, graphical elements are ordered chronologically, and this order is essential. Maintaining and updating the elements' global order will be the main purpose of all the tools that you will create later. Your tools must maintain the order of the semantic elements at all times; based on this order, Sirius manages the order of the graphical representation.
The ordered elements in a sequence diagram are defined with the Ends Ordering and Instance Roles Ordering fields. These properties will be used by Sirius to automatically order the graphical elements when you open a sequence diagram for an interaction.
The Ends Ordering handles the vertical order of events. It specifies with an expression how the semantic elements must be ordered. These elements should be execution start/finish and message send/receive occurrences.
A specific variable exists for this expression: eventEnds. The eventEnds variable contains the list of all EventEnds existing for the current interaction.
Pay attention: the evaluation of the Ends Ordering expression should return only elements contained in the eventEnds list.
The Instance Roles Ordering handles the horizontal order of instance roles / lifelines. It specifies with an expression how the semantic elements must be ordered. These elements should be the semantic elements which will be represented as instance roles.
If we have a look at the UML2 meta-model, the fragment reference defined in an Interaction contains all the execution occurrences and the message occurrences. Execution occurrences and message occurrences are EventEnd elements. But the fragment reference also contains other types of elements, such as execution specifications. In order for the Ends Ordering property to reference only EventEnd elements, we need to compute the intersection of the fragment elements and the eventEnds (using either Acceleo or a Java service called with the service: prefix).
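For illustration, here is a minimal sketch of such an ordering service. It is an assumption-laden example, not the actual Obeo UML Designer code: the service name compatibleEndsOrdering is invented for this sketch, and the EventEnd import path is the Sirius sequence ordering model as we understand it, to be checked against your Sirius version.

```java
// Hypothetical ordering service: keeps only the Interaction fragments that are
// referenced as semanticEnd by one of the event ends provided by Sirius,
// preserving the order of the 'fragment' containment reference.
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.sirius.diagram.sequence.ordering.EventEnd; // assumed import path
import org.eclipse.uml2.uml.Interaction;
import org.eclipse.uml2.uml.InteractionFragment;

public class OrderingServices {

    // Could be referenced from the Ends Ordering expression as:
    //   service:compatibleEndsOrdering(eventEnds)
    public List<EObject> compatibleEndsOrdering(Interaction interaction, List<EventEnd> eventEnds) {
        Set<EObject> semanticEnds = new LinkedHashSet<EObject>();
        for (EventEnd end : eventEnds) {
            semanticEnds.add(end.getSemanticEnd());
        }
        List<EObject> result = new ArrayList<EObject>();
        for (InteractionFragment fragment : interaction.getFragments()) {
            if (semanticEnds.contains(fragment)) {
                result.add(fragment);
            }
        }
        return result;
    }
}
```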
The lifeline reference defined in an Interaction, for its part, contains all the Lifeline elements, each representing both a lifeline and its instance role.
For the UML Modeler, the Ends Ordering expression will return, for the diagram below, the ordered list [Operation_0_sender, Operation_0_receiver, Operation_0_finish, test_sender, test_receiver, test_reply_sender, test_reply_receiver, Message_3_sender, Message_3_receiver].
And the Instance Roles Ordering expression will return, for the diagram below, the ordered list [producers, consumers].
Default Layer
When the sequence diagram description is complete, you can add a Default layer.
The next step is to define the mappings and all the tools to manage the interaction elements.
Mappings
We want to represent four different elements on a sequence diagram, and therefore associate a mapping to each element:
- Instance: instance role mapping
- Lifeline: execution mapping
- Execution: execution mapping
- Message: basic message mapping
Instance Role
Firstly, create the instance role mapping. It graphically corresponds to the box at the top of the lifeline.
Set the mandatory properties Id, Label and Domain class:
The Semantic Candidates Expression is an Acceleo expression returning the semantic elements for which the mapping will be evaluated; a graphical element will then represent each of these semantic elements on the diagram.
Don't forget to create a new Style for the instance role mapping.
Executions
Execution mappings are used when you have an element which is composed of a start, a duration and a finish element.
We will define the execution mappings:
First, create the execution mapping for the lifeline execution. This represents the dashed line of the lifeline.
Then create the execution mapping for the execution. This represents the execution rectangle on a lifeline or on another execution.
Set the mandatory properties:
- Id, Label, Domain class
- Semantic Candidates Expression: expression that returns the first-level executions associated to the current execution. Here, a Java service executionSemanticCandidates() is called (a sketch of such a service is shown below).
- Starting End Finder Expression: semantic element defining the execution start
- Finishing End Finder Expression: semantic element defining the execution finish
The end finder expressions are used by Sirius to graphically link the execution to its start and finish elements and to find during creation and reorder operations where to reattach the dragged element.
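As an illustration, here is a hedged sketch of what such a candidates service could look like for the lifeline case, assuming the Eclipse UML2 API. The method name executionSemanticCandidates matches the text above, but the body is deliberately simplified: the real service would also filter out executions that are already nested inside a child execution.

```java
// Simplified candidates service: every execution covering the lifeline, in fragment order.
import java.util.ArrayList;
import java.util.List;

import org.eclipse.uml2.uml.ExecutionSpecification;
import org.eclipse.uml2.uml.InteractionFragment;
import org.eclipse.uml2.uml.Lifeline;

public class ExecutionServices {

    public List<ExecutionSpecification> executionSemanticCandidates(Lifeline lifeline) {
        List<ExecutionSpecification> result = new ArrayList<ExecutionSpecification>();
        for (InteractionFragment fragment : lifeline.getInteraction().getFragments()) {
            // Keep only executions that cover this lifeline.
            if (fragment instanceof ExecutionSpecification
                    && fragment.getCovereds().contains(lifeline)) {
                result.add((ExecutionSpecification) fragment);
            }
        }
        return result;
    }
}
```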
As an execution could recursively contain other executions, don't forget to import the mapping on itself by setting the property Reused bordered node mappings:
For both execution mappings, don't forget to create a New Style:
Basic Messages
Now, we will define the basic message mapping:
Create the basic message mapping:
Set the mandatory properties:
- Id, Label, Domain class
- Semantic Candidates Expression: expression to get all the messages defined in an interaction
- Semantic Elements: associates a group of logical semantic elements to the graphical element. For example, here we associate to the graphical message the semantic element of type Message, the message send event of type Message Occurrence Specification and the message receive event of type Message Occurrence Specification. Sirius will use this information to:
  - show the associated semantic elements in the Properties view,
  - listen for changes to the associated elements in order to refresh the diagram if necessary,
  - delete the associated elements if there is no specific delete tool.
- Source/Target mapping: a list of graphical mappings that can be the source/target of the message. Several mappings can be defined as source or target mappings for a message. In UML2, the Lifeline mapping and the Execution mapping can be the source/target of a message. On the illustration below, the consumers lifeline and the compute() execution can be selected as the source for the get message. The producers lifeline and the get execution can be selected as the target for the get message.
- Source/Target finder expression: the expression which must return the source/target semantic element, i.e. the source/target context of the message. In UML2, the expression returns the lifeline or the execution, for example. On the illustration below, the compute execution is the semantic source element for the get message and the get execution is the semantic target element.
- Sending/Receiving End Finder Expression: expression which must return the semantic element which represents the message sender/receiver.
Lost and Found Messages
Standard node mappings, direct children of a layer of the current sequence diagram description, can be used to represent the unknown message end. Lost and found messages should be created using a generic tool.
Tools
Java Services
To use a Java service in an Acceleo expression, the service class must be declared in the odesign (as a Java Extension referencing the class by its qualified name); its methods can then be called with the service: prefix.
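For example, a service class is just a plain Java class with public methods whose first parameter receives the element on which the expression is evaluated. The class and method names below are illustrative, not part of the Obeo UML Designer:

```java
// Illustrative Java service class. Once declared as a Java Extension in the odesign,
// its methods can be called from interpreted expressions, e.g. service:defaultMessageName().
import org.eclipse.uml2.uml.Interaction;

public class SequenceServices {

    // First parameter = the current element of the expression (here an Interaction).
    public String defaultMessageName(Interaction interaction) {
        return "Message_" + interaction.getMessages().size();
    }
}
```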
Creation Tools
Define a new Section to add creation tools:
Create Lifeline
Lifelines should be created using an Instance Role Creation Tool associated to the instance role mapping.
The predecessor variable represents, in the global instance role ordering, the element preceding the new instance role.
Create Execution
An Execution can be created using Execution Creation Tool.
The following variables can be used from inside the tool definition:
- container: the element (lifeline or execution) that will graphically contain the new execution; in the example below, the container would be the get() execution.
- startingEndPredecessor and finishingEndPredecessor: represent, in the global event ends list, the element preceding the new execution start and the element preceding the new execution finish. To get the corresponding semantic end element associated to an event end, use startingEndPredecessor.semanticEnd or finishingEndPredecessor.semanticEnd.
In the example above, we want to create a new BehaviorExecution_2 on the existing get execution. Thus, the startingEndPredecessor and finishingEndPredecessor will point to the get_receiver message occurrence. These variables represent the semantic element (get_receiver) associated to the graphical element preceding the startingEnd (BehaviorExecution_2_start) and the finishingEnd (BehaviorExecution_2_finish) of the new element.
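To make the ordering bookkeeping concrete, here is a hedged sketch of a creation service an Execution Creation Tool could delegate to, assuming the Eclipse UML2 API. The service name createExecution and the insertion strategy are illustrative; a real tool may instead use model operations in the odesign.

```java
import org.eclipse.emf.common.util.EList;
import org.eclipse.uml2.uml.BehaviorExecutionSpecification;
import org.eclipse.uml2.uml.ExecutionOccurrenceSpecification;
import org.eclipse.uml2.uml.Interaction;
import org.eclipse.uml2.uml.InteractionFragment;
import org.eclipse.uml2.uml.Lifeline;
import org.eclipse.uml2.uml.UMLFactory;

public class ExecutionCreationServices {

    // Typically called with the target lifeline and startingEndPredecessor.semanticEnd.
    public BehaviorExecutionSpecification createExecution(Lifeline lifeline,
            InteractionFragment startingEndPredecessor) {
        Interaction interaction = lifeline.getInteraction();
        UMLFactory factory = UMLFactory.eINSTANCE;

        BehaviorExecutionSpecification execution = factory.createBehaviorExecutionSpecification();
        ExecutionOccurrenceSpecification start = factory.createExecutionOccurrenceSpecification();
        ExecutionOccurrenceSpecification finish = factory.createExecutionOccurrenceSpecification();
        start.setExecution(execution);
        finish.setExecution(execution);
        execution.setStart(start);
        execution.setFinish(finish);
        execution.getCovereds().add(lifeline);
        // NOTE: the occurrence specifications must also cover the lifeline; the exact API
        // (setCovered(...) vs getCovereds().add(...)) depends on the UML2 version used.

        // Insert start, execution and finish right after the predecessor so that the
        // semantic ordering matches the graphical position chosen by the user.
        EList<InteractionFragment> fragments = interaction.getFragments();
        int index = startingEndPredecessor == null ? 0 : fragments.indexOf(startingEndPredecessor) + 1;
        fragments.add(index, start);
        fragments.add(index + 1, execution);
        fragments.add(index + 2, finish);
        return execution;
    }
}
```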
Message Creation Tool
A Message can be created using Message Creation Tool.
The following variables can be used from inside the tool definition:
- source: the semantic element associated to the message send;
- target: the semantic element associated to the message receive;
- startingEndPredecessor and finishingEndPredecessor: represent, in the global event ends list, the element preceding the new message send and the element preceding the new message receive.
In this example, we want to create a new Message_1 from the existing compute execution to the producers lifeline. Thus, the startingEndPredecessor and finishingEndPredecessor will point to the get_finish execution occurrence. These variables represent the semantic element (get_finish) associated to the graphical element preceding the startingEnd (Message_1_sender) and the finishingEnd (Message_1_receiver) of the new element.
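Similarly, a Message Creation Tool could delegate to a service along these lines, again assuming the Eclipse UML2 API. The name createAsynchronousMessage is illustrative, and covering the source/target lifelines is only indicated as a comment because the exact API depends on the UML2 version.

```java
import org.eclipse.emf.common.util.EList;
import org.eclipse.uml2.uml.Interaction;
import org.eclipse.uml2.uml.InteractionFragment;
import org.eclipse.uml2.uml.Message;
import org.eclipse.uml2.uml.MessageOccurrenceSpecification;
import org.eclipse.uml2.uml.MessageSort;
import org.eclipse.uml2.uml.UMLFactory;

public class MessageCreationServices {

    // Typically called with the interaction and startingEndPredecessor.semanticEnd.
    public Message createAsynchronousMessage(Interaction interaction, String name,
            InteractionFragment startingEndPredecessor) {
        UMLFactory factory = UMLFactory.eINSTANCE;

        Message message = factory.createMessage();
        message.setName(name);
        message.setMessageSort(MessageSort.ASYNCH_CALL_LITERAL);

        MessageOccurrenceSpecification send = factory.createMessageOccurrenceSpecification();
        MessageOccurrenceSpecification receive = factory.createMessageOccurrenceSpecification();
        send.setMessage(message);
        receive.setMessage(message);
        message.setSendEvent(send);
        message.setReceiveEvent(receive);
        interaction.getMessages().add(message);
        // NOTE: each occurrence must also cover the source/target lifeline
        // (setCovered(...) or getCovereds().add(...) depending on the UML2 version).

        // Keep the semantic ordering in sync with the drop position on the diagram.
        EList<InteractionFragment> fragments = interaction.getFragments();
        int index = startingEndPredecessor == null ? 0 : fragments.indexOf(startingEndPredecessor) + 1;
        fragments.add(index, send);
        fragments.add(index + 1, receive);
        return message;
    }
}
```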
Precondition. As for many other tools, it is possible to define a precondition for message creation tools. Depending on the precondition expression, the tool allows the element creation only under certain conditions. The precondition is defined as an interpreted expression.
A variable is available for the Acceleo expression: $preTarget. This variable is the semantic element associated to the graphical element currently hovered by the mouse.
Event Reorder Tool
This tool is called when the user moves or changes the size of graphical elements on the diagram.
A single event reorder tool can and must be specified for the message and execution mappings. The purpose of the tool is to re-synchronize the semantic ordering with the graphical ordering. When the user reorders a graphical element, the global order of graphical elements changes and the tool must then reorder the semantic elements according to these changes.
This tool has access to the following two variables usable in expressions:
- startingEndPredecessorAfter: represents, in the global event end list, the element preceding the moved element's start/send end. It is the event which, after the move, will directly precede the starting end (top) of the moved element;
- finishingEndPredecessorAfter: represents, in the global event end list, the element preceding the moved element's finish/receive end. It is the event which, after the move, will directly precede the finishing end (bottom) of the moved element.
In this example, we want to move the get execution after Message_0. Thus, the startingEndPredecessorAfter variable will point to the compute_finish execution occurrence. This variable represents the semantic element (compute_finish) associated to the graphical element preceding the startingEnd (get_start) after the move of the get execution. The finishingEndPredecessorAfter variable will point to the get_start execution occurrence. This variable represents the semantic element (get_start) associated to the graphical element preceding the finishingEnd (get_finish) after the move of the get execution.
Now, let's have a look at a more complex reorder operation.
In this example, we want to move the get execution after Message_1. The get execution is linked to the get synchronous message, thus the get execution startingEnd is a CompoundEventEnd representing the get_receiver message occurrence. When we move the execution, the associated message must also be moved. In this case, the startingEndPredecessorAfter variable will point to the compute_finish execution occurrence. This variable represents the semantic element (compute_finish) associated to the graphical element preceding the startingEnd (get_send) after the move of the get execution. The finishingEndPredecessorAfter variable will point to the get_receiver message occurrence. This variable represents the semantic element (get_receiver) associated to the graphical element preceding the finishingEnd (get_finish) after the move of the get execution.
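Under the hood, the body of such a reorder tool typically boils down to moving the semantic ends inside the interaction's fragment list. Here is a minimal, hedged sketch assuming the Eclipse UML2 API; the service name reorderEnd is illustrative, and a real tool would call this kind of logic once for the starting end and once for the finishing end, also handling the ends of an attached message.

```java
import org.eclipse.emf.common.util.EList;
import org.eclipse.uml2.uml.Interaction;
import org.eclipse.uml2.uml.InteractionFragment;

public class ReorderServices {

    // Moves one end (an occurrence specification) just after 'predecessor'
    // in the interaction's fragments, or first if the predecessor is null.
    public void reorderEnd(Interaction interaction, InteractionFragment movedEnd,
            InteractionFragment predecessor) {
        EList<InteractionFragment> fragments = interaction.getFragments();
        fragments.remove(movedEnd);
        int index = predecessor == null ? 0 : fragments.indexOf(predecessor) + 1;
        fragments.add(index, movedEnd);
    }
}
```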
Instance Role Reorder Tool
This tool is called when the user horizontally moves an instance role on the diagram.
A single instance role reorder tool can and must be specified for the instance role mappings. The purpose of the tool is to re-synchronize the semantic ordering with the graphical ordering. When the user reorders a graphical instance role, the global order of graphical instance roles changes and the tool must then reorder the semantic instance roles according to these changes.
This tool has access to the following two variables usable in expressions:
- predecessorBefore: represents, in the global instance role ordering, the element which was directly preceding the moved instance role before the move;
- predecessorAfter: represents, in the global instance role ordering, the element which, after the move, will directly precede the moved instance role.
In this example, we want to move the consumers instance role after the producers instance role. Thus, the predecessorAfter variable will point to the producers instance role. This variable represents the semantic element (producers) associated to the graphical element preceding the consumers instance role after its move. The predecessorBefore variable will be null, because consumers was the first element of the ordering.
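As with the event reorder tool, the semantic side of the operation is usually a simple move in the interaction's lifeline list. A hedged sketch assuming the Eclipse UML2 API (the service name reorderLifeline is illustrative):

```java
import org.eclipse.emf.common.util.EList;
import org.eclipse.uml2.uml.Interaction;
import org.eclipse.uml2.uml.Lifeline;

public class LifelineReorderServices {

    // Moves 'moved' just after 'predecessorAfter', or first if the predecessor is null.
    public void reorderLifeline(Lifeline moved, Lifeline predecessorAfter) {
        Interaction interaction = moved.getInteraction();
        EList<Lifeline> lifelines = interaction.getLifelines();
        lifelines.remove(moved);
        int index = predecessorAfter == null ? 0 : lifelines.indexOf(predecessorAfter) + 1;
        lifelines.add(index, moved);
    }
}
```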
Other Tools
There is nothing specific about the deletion tool, the edit label tool, or the diagram creation and diagram navigation tools; have a look at the Sirius Specifier Guide.
This section describes how Webclient works in a custom Java application. While this section bears similarities to the Getting Started section of the Webclient Developer Guide, it goes into far more depth. Moreover, the intent here is to serve as a guide to the implementation of Webclient, not its use.
The discussion is based upon the test example located in org.mozilla.webclient.test. A sequence diagram follows showing classes and objects from:
- org.mozilla.webclient.test
- org.mozilla.webclient, the public package
- org.mozilla.webclient.wrapper_native, the private package
- java.awt
wrapper_native is intended for wrapping motif- or win32-based native web browsers. In our particular test application, a win32-based native browser is being wrapped.
The two objects in the test package, specifically em and aEMWindow, represent the custom application. As shown in the diagram, they are, respectively, instances of the following Java classes:
- EmbeddedMozilla: the Java class that contains the main() method of the custom application.
- EMWindow: the Java class that is the Frame containing the embedded browser window, as well as a menu bar, a URL textfield and browser control buttons.
Following the sequence diagram is a brief description of the code, then a summary of what is required for your application to use Webclient.
UML sequence diagram description
The Webclient test is essentially a small custom application that embeds the Mozilla web browser using Webclient. It consists of a number of files and is launched via EmbeddedMozilla.java. However, the main activity takes place in EMWindow.java.
Here is a description of the primary public classes and objects involved, the way objects are created, and the sequence of method invocations. Note that EmbeddedMozilla.java creates an instance of itself called em in its main() method, which should normally have two arguments:
- arg[0]: binDir, the location of the bin directory for Mozilla, and
- arg[1]: url, the url to be initially displayed.
The constructor of EMWindow.java handles much of the setup for embedding Mozilla. It sets up the menu bar for the frame. It creates a URL textfield and navigation buttons and adds them to a panel. Then it creates the browser.
There is one BrowserControlFactory class per Webclient application. It is the starting point for using Webclient to embed a browser. (It is a pre-existing class, not an interface.) Its setAppData() method, as shown above, takes a single argument, String absolutePathToNativeBrowserBinDir. (This is the absolute path to the bin, or binary executable, directory of the native web browser that we are embedding; in this case the directory with the platform-specific executable for Mozilla.)
setAppData() invokes the static method appInitialize() on the class BrowserControlImpl, which in turn invokes the createWrapperFactory() method in the same class. This generates the WrapperFactoryImpl object (wrapperFactory), whose initialize() method is then invoked. initialize() on wrapperFactory invokes nativeAppInitialize(), a native method called only once at the beginning of program execution. It allows native code to handle one-time initialization.
Next, EMWindow.java invokes newBrowserControl() on BrowserControlFactory to get an implementation of the BrowserControl interface. This is the core Webclient interface; all other interfaces are obtained from it.
newBrowserControl() generates newCanvas and returns a new instance of BrowserControlImpl, which is then set to browserControl. newCanvas is an instance of the browserControlCanvasClass, which is determined by the setAppData() method mentioned above. newCanvas is needed to generate browserControl. Next, browserControl is asked for the java.awt.Canvas subclass BrowserControlCanvas. This is the subclass that allows custom application developers to insert the web browser into their container hierarchy. It is important that browserCanvas
- is the first interface obtained from browserControl and
- is added to the container hierarchy soon after it is obtained.
add(controlPanel, BorderLayout.NORTH);
add(browserCanvas, BorderLayout.CENTER);
Next, the panel with the URL textfield and navigation buttons is added to the frame, as well as browserCanvas. When browserCanvas is added to the Frame, its addNotify() method is called, which, among other things, creates the WindowControlImpl object called wc, and the createWindow() method is invoked on it. This gets nativeWebshell, creates an event thread, and invokes starts().
Once this is done, other objects, such as navigation, currentPage, history, and eventRegistration, are generated via browserControl.queryInterface().
actionPerformed() implements EMWindow.java as an ActionListener and responds to events such as navigation buttons being pressed. makeItem() adds EMWindow as an ActionListener to each button component.
eventDispatched() implements EMWindow.java as a DocumentLoadListener. DocumentLoadListener extends WebEventListener and gets notice of events by registering via the eventRegistration object, which is created by the constructor for EMWindow.java. Following eventDispatched(), you will notice five methods that are implemented to make EMWindow.java a MouseListener: mouseClicked(), mouseEntered(), mouseExited(), mousePressed() and mouseReleased(). Note that following the creation of eventRegistration (mentioned previously), the EMWindow object was added as a DocumentLoadListener and a MouseListener via these statements:
eventRegistration.addDocumentLoadListener(this);
eventRegistration.addMouseListener(this);
The EventRegistration interface contains four methods:
- addDocumentLoadListener
- removeDocumentLoadListener
- addMouseListener
- removeMouseListener