After investigating the complex interactions that occur with the <a title="GEF Internals Part 1 – Mouse Interaction and the Selection Tool" href="https://www.vainolo.com/2012/01/01/gef-internals-part-1-mouse-interaction-and-the-selection-tool/">selection tool in my previous post</a>, we will now focus on a simpler case, the creation tool.
When the mouse moves over the canvas, the <code>LightweightSystem</code> catches the mouse move event and forwards it to the <code>DomainEventDispatcher</code>. If the event was not previously captured by the domain, the <code>DomainEventDispatcher</code> executes direct <code>draw2d</code> interaction. I am not completely sure what "captured by the domain" means, but here is an example: a mouse click is translated into a <code>mouseDown</code> event followed by a <code>mouseUp</code> event, and if the <code>mouseDown</code> event was handled by the domain, then the <code>mouseUp</code> event is "captured by the domain" and no direct <code>draw2d</code> interaction is allowed. In any case, after the <code>draw2d</code> interaction is executed (or skipped), the <code>DomainEventDispatcher</code> can forward the event to the domain as either a <code>mouseMove</code> or a <code>mouseDrag</code> event (depending on the state of the mouse buttons). This can be seen in the following sequence diagram:
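To make the dispatch logic above concrete, here is a minimal, self-contained sketch of the "captured by the domain" bookkeeping and of the move/drag distinction. All class and method names here are mine, not GEF's; the real <code>DomainEventDispatcher</code> does far more.

```java
// Hypothetical stand-in for the dispatcher behavior described above.
// Not GEF code: a toy model of capture and move/drag routing.
public class DispatchSketch {
    private boolean capturedByDomain = false;

    /** Mouse down: if the domain (active tool) handles it, the interaction is captured. */
    public String mouseDown(boolean domainHandlesIt) {
        capturedByDomain = domainHandlesIt;
        return domainHandlesIt ? "domain" : "draw2d";
    }

    /** Mouse up: direct draw2d interaction is skipped when the down was captured. */
    public String mouseUp() {
        String route = capturedByDomain ? "domain only" : "draw2d then domain";
        capturedByDomain = false; // the interaction ends, so the capture is released
        return route;
    }

    /** A move with a button held down is forwarded as a drag, otherwise as a plain move. */
    public static String routeMove(boolean anyButtonPressed) {
        return anyButtonPressed ? "mouseDrag" : "mouseMove";
    }

    public static void main(String[] args) {
        DispatchSketch d = new DispatchSketch();
        d.mouseDown(true);
        System.out.println(d.mouseUp());      // domain only
        System.out.println(routeMove(false)); // mouseMove
        System.out.println(routeMove(true));  // mouseDrag
    }
}
```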
<a href="https://www.vainolo.com/wp-content/uploads/2012/01/dispatchMouseMoved.png"><img class="aligncenter size-full wp-image-959" title="dispatchMouseMoved" src="https://www.vainolo.com/wp-content/uploads/2012/01/dispatchMouseMoved.png" alt="" width="422" height="497" /></a>
When the domain receives the <code>mouseMove</code>, it searches for the currently active tool and forwards it the <code>mouseMove</code> request. This is where some work is finally done. The domain calls <code>Tool.mouseMove</code>, which is implemented in the <code>AbstractTool</code> class. This method does some internal work that I could not understand but seems harmless, and then calls <code>handleMove</code>, an internal method that tools should override to implement their functionality. The <code>CreationTool.handleMove</code> method works as follows: first it creates or updates a <code>CreateRequest</code> instance that contains the information used by the <code>CreationTool</code>, which includes the mouse location, the factory that creates new domain instances, and other things. Since the mouse can move a lot over the canvas, it makes no sense to create a new request every time a mouse move is captured, so the <code>CreationTool</code> keeps a cached request which it updates every time a <code>mouseMove</code> event is accepted.
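The cached-request idea can be sketched as follows. This is a toy illustration with made-up names, not GEF's real <code>CreateRequest</code>: one request object is reused across mouse moves and only its location is refreshed.

```java
// Hypothetical sketch of CreationTool's cached request: the request is
// allocated once and then reused, avoiding an allocation per mouse move.
public class CachedRequestSketch {
    static class Request {  // stand-in for GEF's CreateRequest
        int x, y;           // mouse location
    }

    private Request cachedRequest;

    /** Called on every accepted mouseMove: create the request once, then just update it. */
    public Request getOrUpdateRequest(int x, int y) {
        if (cachedRequest == null) {
            cachedRequest = new Request();
        }
        cachedRequest.x = x;
        cachedRequest.y = y;
        return cachedRequest;
    }

    public static void main(String[] args) {
        CachedRequestSketch tool = new CachedRequestSketch();
        Request a = tool.getOrUpdateRequest(10, 10);
        Request b = tool.getOrUpdateRequest(25, 40);
        System.out.println(a == b);          // true: the same instance is reused
        System.out.println(b.x + "," + b.y); // 25,40
    }
}
```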
After the <code>CreateRequest</code> is created or updated, the <code>CreationTool</code> updates the target of the request… This happens in two steps: first the tool finds the topmost <code>EditPart</code> below the current mouse location that can handle a <code>CreateRequest</code>. Second, it asks that <code>EditPart</code> for the <em>target</em> of the request… what? Didn't we just fetch the target of the request from the diagram? Well, it seems that GEF allows the <code>EditPart</code> under the mouse to give us another <code>EditPart</code> as the real target of the request. Why? I can think of a number of reasons. For example, we can have an editor with squares, each with a button in its inner top left corner, where every click on the button adds a new circle inside the square. The <code>EditPart</code> that receives the <code>handleMove</code> is the button, but the <code>EditPart</code> that handles the request is the enclosing square, probably the parent of the button.
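A minimal sketch of this redirection, using the button-in-a-square example: the part under the mouse answers with a different part (here its parent) as the real target. The names below are illustrative; GEF's actual hook for this is <code>EditPart.getTargetEditPart(Request)</code>.

```java
// Toy model of target redirection: a part that does not handle creation
// delegates up its parent chain until a part that does is found.
public class TargetSketch {
    static class EditPart {
        final String name;
        final EditPart parent;
        final boolean understandsCreate;

        EditPart(String name, EditPart parent, boolean understandsCreate) {
            this.name = name;
            this.parent = parent;
            this.understandsCreate = understandsCreate;
        }

        /** Return this part if it handles creation, otherwise delegate to the parent. */
        EditPart getTargetEditPart() {
            if (understandsCreate) return this;
            return parent == null ? null : parent.getTargetEditPart();
        }
    }

    public static void main(String[] args) {
        EditPart square = new EditPart("square", null, true);
        EditPart button = new EditPart("button", square, false);
        // The button is under the mouse, but the square is the real target.
        System.out.println(button.getTargetEditPart().name); // square
    }
}
```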
Now that GEF has the real <code>EditPart</code> that will handle the request, and the <code>CreateRequest</code> itself, it asks the target <code>EditPart</code> to create the <code>Command</code> that will be executed if the tool is applied (mouse click). This command is created in the <code>EditPart</code> by running over all the <code>EditPolicy</code> instances that have been installed in the <code>EditPart</code> and asking them to create a <code>Command</code> for the provided request. If more than one <code>EditPolicy</code> returns a <code>Command</code>, they are chained inside a <code>CompoundCommand</code>. After the command is created, the <code>Tool</code> refreshes the cursor that is displayed on the editor – and this is where we get the functionality that changes the cursor to an X when the current <code>Tool</code> cannot be applied to the target under the mouse, because the command that was returned by the <code>EditPart</code> cannot be executed.
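The policy-iteration and cursor logic can be sketched like this. Again this is a simplified stand-in, not GEF's implementation: the interfaces are reduced to one method each, and the cursor is modeled as a string.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of command creation: iterate the installed edit policies,
// chain their commands, and use canExecute() to pick the cursor.
public class CommandSketch {
    interface Command { boolean canExecute(); }
    interface EditPolicy { Command getCommand(String requestType); }

    /** Simplified stand-in for GEF's CompoundCommand. */
    static class CompoundCommand implements Command {
        final List<Command> commands = new ArrayList<>();
        void add(Command c) { commands.add(c); }
        public boolean canExecute() {
            if (commands.isEmpty()) return false;          // nothing to do
            for (Command c : commands)
                if (!c.canExecute()) return false;         // one veto blocks all
            return true;
        }
    }

    /** Ask every installed policy for a command and chain the non-null results. */
    static Command getCommand(List<EditPolicy> policies, String requestType) {
        CompoundCommand result = new CompoundCommand();
        for (EditPolicy p : policies) {
            Command c = p.getCommand(requestType);
            if (c != null) result.add(c);
        }
        return result;
    }

    /** Cursor refresh: "NO" plays the role of the X cursor. */
    static String cursorFor(Command c) {
        return (c != null && c.canExecute()) ? "ADD" : "NO";
    }

    public static void main(String[] args) {
        EditPolicy layout = req -> () -> true;  // contributes an executable command
        EditPolicy veto   = req -> () -> false; // contributes a non-executable one
        System.out.println(cursorFor(getCommand(List.of(layout), "create")));       // ADD
        System.out.println(cursorFor(getCommand(List.of(layout, veto), "create"))); // NO
    }
}
```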
Finally, the tool requests feedback from the target <code>EditPart</code>. Note that here the target may not be exactly the <code>EditPart</code> below the cursor. This feedback is requested in the same way as the <code>Command</code>: by delegating it to the <code>EditPolicy</code> instances that are installed in the <code>EditPart</code>.
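The feedback delegation mirrors the command delegation, which a small sketch makes clear. The names and the string-based log are mine; GEF's actual method on the part and its policies is <code>showTargetFeedback(Request)</code>.

```java
import java.util.List;

// Toy model of feedback delegation: the edit part simply forwards the
// request to every installed edit policy, each of which may draw something.
public class FeedbackSketch {
    interface EditPolicy { void showTargetFeedback(String requestType); }

    static class EditPart {
        private final List<EditPolicy> policies;
        EditPart(List<EditPolicy> policies) { this.policies = policies; }

        void showTargetFeedback(String requestType) {
            for (EditPolicy p : policies) p.showTargetFeedback(requestType);
        }
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        EditPolicy ghost = req -> log.append("ghost-figure;");   // e.g. draw a ghost shape
        EditPolicy highlight = req -> log.append("highlight;");  // e.g. highlight the target
        new EditPart(List.of(ghost, highlight)).showTargetFeedback("create");
        System.out.println(log); // ghost-figure;highlight;
    }
}
```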
I have captured this interaction in the following sequence diagram:
<a href="https://www.vainolo.com/wp-content/uploads/2012/01/mouseMoveHandling.png"><img src="https://www.vainolo.com/wp-content/uploads/2012/01/mouseMoveHandling.png" alt="" title="mouseMoveHandling" width="879" height="723" class="aligncenter size-full wp-image-961" /></a>
Wasn't that interesting? Now you can see that having complex edit policies can be very hazardous for your editor, since they may be called on every mouse move! Furthermore, all the classes perform a LOT of functionality, and definitely do not conform to the <a href="http://en.wikipedia.org/wiki/SOLID_(object-oriented_design)">SOLID</a> principles, especially <a href="http://en.wikipedia.org/wiki/Single_responsibility_principle">Single responsibility</a>, which makes the code harder to understand. But I have to say that the architecture is beautiful. I'm still looking into GEF all the time, and will post my findings as I go, but if you have a special topic that you'd like to have answered, leave me a comment and I'll see what I can do.