When you start to design an application for producing drawings in Silverlight, there are some things you don’t have to worry too much about. You don’t have to come up with definitions for shapes, or figure out a fast way to render them. You don’t have to define a color space, and gradients, and geometries for clipping and filling. You don’t even have to define what a surface is and how shapes interact with it. Windows Presentation Foundation takes care of all of that stuff, and quite nicely too.
What you do have to think about is how to organize and present that capability to the user, and how to connect the user’s intentions to the underlying systems. You need to think about how to do all that in a robust and extensible manner, in an environment that is quite a bit more constrained than the context in which a normal WPF app runs. In other words, you need everything except the stuff I mentioned above, and that still leaves quite a lot to chew on.
I have a penchant for starting at the bottom of a hierarchy and designing my way upward. But in this case the bottom was pretty much defined, and I thought I could see how it would all fit together, so I began instead with the big picture. The first two ideas that popped into that picture were the concepts of a View and a Tool. A View would be the thing that created and managed surfaces, associated Tools with a surface, and provided a source of input events. A Tool would be the thing that knew how to take an input event and a surface and turn them into changes to a shape on that surface.
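To make that division of labor concrete, here is a rough first sketch of the two concepts. All of the names below are mine, purely for illustration; the real classes evolved quite a bit from this.

```csharp
using System.Windows;
using System.Windows.Controls;

// A first sketch of the View/Tool split (illustrative names only).
// A Tool turns an input event plus a surface into changes to shapes on that
// surface; a View creates and manages surfaces, associates Tools with a
// surface, and supplies the input events.
public enum DrawingEventType { Down, Move, Up }

public interface ITool
{
    void HandleEvent(DrawingEventType type, Point position, Canvas surface);
}

public interface IView
{
    Canvas Surface { get; }          // the surface this View creates and manages
    ITool ActiveTool { get; set; }   // the Tool currently receiving input events
}
```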
In WPF the Canvas class is the obvious choice for a surface. Canvas is derived from Panel, which makes it a container for UIElements. Shapes are UIElements, so they can be placed into a Canvas. Moreover, Canvas is the only Panel that allows direct x,y positioning of its child elements, something that will come in handy in a drawing tool. Canvas is also a UIElement itself, and can be styled and inserted into other Panels, so it is pretty versatile for this use. I started out with a UserControl that wrapped a Canvas and added some management behavior, but as I worked those behaviors kept percolating up to the View, so at this point a Canvas is just a property of the View. As an aside, one of the longer-range things I’m thinking about is how to move these drawings around. Nobody will want to create a drawing they can’t save. Rendering the Canvas objects out as XAML, as path data, or directly to bitmaps are all things I’m considering, and that I’ll probably write about later.
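As a concrete illustration of that direct positioning (the helper below is hypothetical, not part of the actual View), placing a shape is just a matter of setting the Canvas.Left and Canvas.Top attached properties before adding it to the Children collection:

```csharp
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

public static class SurfaceHelpers
{
    // Drop an ellipse onto the surface at an explicit x,y position.
    // Canvas.Left and Canvas.Top are attached properties, so the position is
    // set on the shape itself via Canvas.SetLeft/SetTop.
    public static void AddEllipse(Canvas surface, double x, double y)
    {
        var ellipse = new Ellipse
        {
            Width = 40,
            Height = 40,
            Fill = new SolidColorBrush(Colors.Blue)
        };
        Canvas.SetLeft(ellipse, x); // offset from the canvas's left edge
        Canvas.SetTop(ellipse, y);  // offset from the canvas's top edge
        surface.Children.Add(ellipse);
    }
}
```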
If the role of surface is easy to cast, that of Tool is less so. WPF defines almost all the stuff a user will want to create and manipulate, but it says nothing at all about how that will happen beyond defining some useful input events. When I started thinking about tools I knew right off that I wanted a few key characteristics: tools should be entirely generic, meaning nothing else in the application should have to know about any class below the base class for all tools; tools should define themselves in terms of their name, behavior, and appearance in a toolbox; and finally, tools should be dynamically loaded from an external assembly, so that I could add tool sets on the server side without any client changes. The concept of a Tool that I ended up with is flexible enough to support multiple toolboxes with tools that may draw, manipulate the canvas, drop text, or perform other actions.
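A base class along those lines might look something like the sketch below. The member names are mine and surely differ from the real thing, but they capture the three characteristics: a name, an appearance for the toolbox, and behavior driven purely by input events.

```csharp
using System.Windows;
using System.Windows.Controls;

// Hypothetical tool base class. Everything above this class deals only in
// ToolBase, which is what makes it possible to load concrete tools from an
// external assembly without any client-side changes.
public abstract class ToolBase
{
    // Display name shown in the toolbox.
    public abstract string Name { get; }

    // Visual the toolbox uses to represent this tool (an icon, a thumbnail, etc.).
    public abstract FrameworkElement CreateToolBoxFace();

    // Behavior: translate stylus events over a surface into shape changes.
    public abstract void StylusDown(Canvas surface, Point position);
    public abstract void StylusMove(Canvas surface, Point position);
    public abstract void StylusUp(Canvas surface, Point position);
}
```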
So I had a general idea of the objects in the system: a View, a surface (the Canvas), ToolBoxes full of Tools, and a Stylus to provide input.
For me, the most important thing to do after I get an initial concept of the objects in an app is to delve into their lifecycles: how and by what are they created? Where do they live while they are alive? How are events routed to them? As I went through this process, several things happened. Operations kept migrating away from the Stylus and into the View or Tool. I initially had this concept of the stylus as being attached to the tool, but I decided that was just a cute visualization, and at this point the stylus is just an abstract source of drawing events. Also, the View solidified as the arbiter of the relationships between the other classes. The View is the first thing created. It owns and manages the surface (a Canvas). It owns and manages the set of ToolBoxes that provide Tools. And it owns and manages the Stylus that serves as the source of input.
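Sketched in code, and reusing the hypothetical ToolBase from above, the wiring might look roughly like this: the Stylus translates raw mouse input over the Canvas into abstract drawing events, and the View forwards those events to whichever Tool is currently active. Again, these class names are illustrative, not the actual implementation.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;

// Hypothetical sketch of the wiring. The Stylus is just an abstract source of
// drawing events (built here on the Canvas's mouse events); the View owns the
// surface, the toolboxes, and the stylus, and routes events to the active Tool.
public class DrawingStylus
{
    public event Action<Point> Down = delegate { };
    public event Action<Point> Move = delegate { };
    public event Action<Point> Up = delegate { };

    public DrawingStylus(Canvas surface)
    {
        surface.MouseLeftButtonDown += (s, e) => Down(e.GetPosition(surface));
        surface.MouseMove += (s, e) => Move(e.GetPosition(surface));
        surface.MouseLeftButtonUp += (s, e) => Up(e.GetPosition(surface));
    }
}

public class DrawingView
{
    public Canvas Surface { get; private set; }
    public ToolBase ActiveTool { get; set; }

    public DrawingView()
    {
        Surface = new Canvas();
        var stylus = new DrawingStylus(Surface);
        stylus.Down += p => { if (ActiveTool != null) ActiveTool.StylusDown(Surface, p); };
        stylus.Move += p => { if (ActiveTool != null) ActiveTool.StylusMove(Surface, p); };
        stylus.Up += p => { if (ActiveTool != null) ActiveTool.StylusUp(Surface, p); };
    }
}
```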
A few things are still being worked out as I go, such as the exact sequence of events that occurs when a user selects a tool, moves the stylus out over the canvas, and presses the left button. Does the GeometryEditor create the object, select it, and then start dragging? Or does it create it, drag it out, and then select it after the button is released (the second option is sketched below)? Another area that needs a lot of thought is how to present style editing for the shapes. That will be the subject of a future post. But for now, enough design. The next two posts will discuss dynamically loading the tools, and a handy little class I created to enable onscreen geometry editing.
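For what it's worth, here is a rough sketch of what that second sequence could look like inside a hypothetical rectangle tool built on the ToolBase sketch from earlier. The GeometryEditor and selection are deliberately left out, since they are topics for later posts; everything here is illustrative, not the actual implementation.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Shapes;

// Hypothetical rectangle tool: create the shape on button-down, drag it out
// on move, and hand it off for selection only after the button is released.
public class RectangleTool : ToolBase
{
    private Rectangle _current;
    private Point _origin;

    public override string Name
    {
        get { return "Rectangle"; }
    }

    public override FrameworkElement CreateToolBoxFace()
    {
        return new Rectangle { Width = 24, Height = 16, Stroke = new SolidColorBrush(Colors.Black) };
    }

    public override void StylusDown(Canvas surface, Point position)
    {
        _origin = position;
        _current = new Rectangle { Stroke = new SolidColorBrush(Colors.Black) };
        Canvas.SetLeft(_current, position.X);
        Canvas.SetTop(_current, position.Y);
        surface.Children.Add(_current);
    }

    public override void StylusMove(Canvas surface, Point position)
    {
        if (_current == null) return;
        // Drag the rectangle out from the origin point in any direction.
        Canvas.SetLeft(_current, Math.Min(_origin.X, position.X));
        Canvas.SetTop(_current, Math.Min(_origin.Y, position.Y));
        _current.Width = Math.Abs(position.X - _origin.X);
        _current.Height = Math.Abs(position.Y - _origin.Y);
    }

    public override void StylusUp(Canvas surface, Point position)
    {
        // This is where selection (and eventually the GeometryEditor) would
        // take over; for now, just stop tracking the shape.
        _current = null;
    }
}
```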