Picky, Picky

Sometimes programming in .NET just drives me up a wall. Not that I would prefer to go back to the Win32 API and writing my own message loops, but when a framework is as all-encompassing and opaque as .NET is, it’s doubly important to make the internals work right and sensibly, and doubly likely that somewhere they will do neither. Case in point: I probably spent four hours today trying to inject some custom behavior into a WCF channel, only to find out that the problem was the lack of a space after a comma. Or, more accurately, four spaces after four commas.

My goal was to use message inspectors to sniff out an authentication cookie from an incoming WCF service reply, and inject it into subsequent requests, as part of a piggyback authentication scheme with an ASP.NET website. I won’t say much about message inspectors. They let you get your hands on various parts of an incoming or outgoing message before it is handed to the client or proxy, as the case may be. They work as advertised, and there are a lot of good posts on how to use them. Microsoft has one too… it’s just not good. If you want to read specifically about managing cookies using inspectors, this is the post that got me started.

If you read any of those posts you’ll notice that implementing a message inspector also requires creating a custom behavior that implements IEndpointBehavior. It’s a very simple class that just creates an instance of your inspector class on demand. If you’re willing to create the behavior for every service and every channel manually you can stop there, since there are imperative ways to attach it. What I wanted was to be able to specify the behavior declaratively, in the .config file where the endpoints are defined, so that it is added transparently to all channels when they are used. To do that you need one more class: a custom extension of BehaviorExtensionElement that gets registered in the system.serviceModel/extensions/behaviorExtensions configuration scope. This class creates the behavior, the behavior creates the inspector, and so on and so forth.
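To make the chain concrete, here is a minimal sketch of the three pieces. All of the class names (CookieInspector, CookieBehavior, CookieBehaviorExtension) are hypothetical stand-ins for my actual classes; only the WCF interfaces and base class are real.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Configuration;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// The inspector gets a crack at every request and reply on the channel.
public class CookieInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // e.g. stuff a stored cookie into the outgoing HTTP headers
        return null; // correlation state, handed back to AfterReceiveReply
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        // e.g. sniff Set-Cookie out of the reply's HTTP headers
    }
}

// The behavior just installs the inspector on the client runtime.
public class CookieBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new CookieInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher dispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// The extension element is what gets registered in the config file;
// it manufactures the behavior on demand.
public class CookieBehaviorExtension : BehaviorExtensionElement
{
    public override Type BehaviorType
    {
        get { return typeof(CookieBehavior); }
    }

    protected override object CreateBehavior()
    {
        return new CookieBehavior();
    }
}
```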

And that’s where the fun began. The syntax for declaring the extension element and behavior looks like this:

<system.serviceModel>
	<extensions>
		<behaviorExtensions>
			<add name="extensionName" type="extensionType" />
		</behaviorExtensions>
	</extensions>

	<behaviors>
		<endpointBehaviors>
			<behavior name="behaviorName">
				<extensionName />
			</behavior>
		</endpointBehaviors>
	</behaviors>
</system.serviceModel>

Looking at the behaviorExtensions/add element, the name attribute value is arbitrary, and is used later to declare the behavior itself. The type attribute value must be the fully qualified typename of the class that implements the behavior extension. When I first wrote the code I assumed that “fully qualified” meant: include the namespace. My declaration looked something like this:

<behaviorExtensions>
	<add name="myExtension" type="MyNamespace.MyExtension" />
</behaviorExtensions>

When I got everything written and fired it up I got a configuration error from the parser, indicating that it could not find the named assembly. Hmm… /facepalm. I had implemented the extension class in a separate assembly, so of course it couldn’t find it. I tried adding the assembly name:

<behaviorExtensions>
	<add name="myExtension" type="MyNamespace.MyExtension, MyAssemblyName" />
</behaviorExtensions>

That didn’t work either. A little more poking around on the web and I discovered that the parser absolutely required the complete strong name of the assembly. So I added the missing fields, and my declaration looked like this:

<behaviorExtensions>
	<add name="myExtension" type="MyNamespace.MyExtension, MyAssemblyName, version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</behaviorExtensions>

Still no go, but now the error changed. Instead of complaining that the assembly could not be found the parser now squeaked about the manifest data not matching the reference. Aha! Progress! I googled a bit more and discovered another post describing problems with breaking the string and having spaces after the commas. So I removed the spaces. I won’t bore you with another listing. Just look at the one above and imagine taking out the spaces after the commas in the type attribute string. It still did not farking work.

Damn, /facepalm, the sequel. My assembly is signed. Just because Microsoft’s example used null for the PublicKeyToken attribute doesn’t mean I can. I used sn.exe to generate a public key token and replaced the null with a string of letters and digits. Again, no listing, just use your imagination. It still didn’t work. But the error changed again! Instead of complaining about the mismatched manifest data the error now simply stated that the element couldn’t be loaded. I changed the key back to null – mismatched manifest data. Put back the token – can’t load element. What the hell? It was pretty obvious that I was getting past the assembly part… but what now?

Google provided no additional help, except for one or two posts that mentioned the unusual pickiness of the config parser with respect to this element. And it was while ruminating over one of these that I remembered taking those damn spaces out earlier in the process. I put them back in.

And it worked. Just why the presence of spaces in a comma-separated list of substrings should matter to the config parser I don’t know. I don’t even care. All I know is that it worked with spaces, and it didn’t work without them. Microsoft must have its reasons, because the issue has been brought up a number of times and they have refused to acknowledge it as a bug. To me it looks like a bug, or at least a massively anal design decision, but what do I know? All I can do is kick the damn thing until it works.
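For the record, here is the shape of the declaration that finally worked, spaces and all. The names are the same placeholders as before, and yourTokenHere stands in for the actual public key token sn.exe produced:

```xml
<behaviorExtensions>
	<add name="myExtension" type="MyNamespace.MyExtension, MyAssemblyName, version=1.0.0.0, Culture=neutral, PublicKeyToken=yourTokenHere" />
</behaviorExtensions>
```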

Where is the surround sound mixer in Vista?

Despite the fact that there is every possibility I am uniquely imperceptive, I thought I would throw this out there on the off chance that there is at least one other person who was stumbling around in the Vista user interface looking for a way to adjust channel levels in a 5.1 or 7.1 channel surround sound setup.

It used to be that in Windows XP you right-clicked the speaker tray icon and clicked Mixer, but after I upgraded to Vista I could no longer find the damn thing. Worse, something weird in the Creative drivers I was using for my sound card kept messing with the center channel level, forcing me to go through their audio console to reset the mixer defaults.

Today I finally stumbled on it, hidden three layers deep in the audio properties dialogs. Just to prove that I really did run it to ground, here is a pic.

vista_mixer

To find your way to the mixer right click the speaker tray icon, then click “Playback Devices” to open the Sound dialog. In the Sound dialog either double-click the Speakers device, or click it and then click the “Properties” button at the bottom. Either action will open the “Speakers Properties” dialog. Click the “Levels” tab, and then click the “Balance” button next to “Play Control”. That’s all there is to it.

When is a File There, But Not There?

If you’re using the System.IO.File.Exists method to determine whether a file is present in a particular location, then the answer to the question posed in the title of this post might be: when you don’t have permissions to see it. This might seem obvious, but actually it caught me a little bit off guard this morning. In general I’m conditioned to think that a permissions failure will throw an exception. After all, one way to interpret the rule might be: you don’t have permissions to access the file, therefore there is no way to answer your question. Another way is: you don’t have permissions to access the file, therefore as far as you’re concerned it doesn’t exist. Here’s the MSDN lowdown on the Exists method:

If the caller does not have sufficient permissions to read the specified file, no exception is thrown and the method returns false regardless of the existence of path.

Obviously Microsoft’s library designers agreed with the second interpretation. I’m not sure I like it, but at least now I know that the file might be there even when File.Exists says it isn’t.
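A small sketch of the pitfall and one way around it: since File.Exists never throws on a permissions failure, the only way to distinguish "not there" from "there but hidden from you" is to try the open and look at the exception. The path here is purely hypothetical.

```csharp
using System;
using System.IO;

class ExistsDemo
{
    static void Main()
    {
        string path = @"C:\Restricted\secret.dat"; // hypothetical path

        if (File.Exists(path))
        {
            Console.WriteLine("Visible and present.");
        }
        else
        {
            // false could mean "not there" OR "you can't see it"
            try
            {
                using (File.OpenRead(path)) { }
                // If we get here, the file was there all along.
            }
            catch (FileNotFoundException)
            {
                Console.WriteLine("Really not there.");
            }
            catch (DirectoryNotFoundException)
            {
                Console.WriteLine("The directory itself is missing.");
            }
            catch (UnauthorizedAccessException)
            {
                Console.WriteLine("There, but you lack permission to see it.");
            }
        }
    }
}
```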

The Right True End

It was with some degree of sadness that I closed the cover of Patrick O’Brian’s “Blue at the Mizzen” last evening. The act marked the end of my second trip through the twenty-volume series since I first had “Master and Commander” recommended to me by my brother ten years ago or more. I enjoyed this journey every bit as much as, and perhaps more than, the first. But even as I savored those final pages melancholy crept in, hard on the heels of the almost certain knowledge that it would be my last visit to O’Brian’s world. I was saying good bye to Captain (nay, Admiral!) Aubrey and the Doctor, for good. I will reread a book more than once, even many times. I have read the four books in the main trunk of Tolkien’s work something like eight or nine times, at least. But I have to read them all, and in order, and well, there are twenty of them in O’Brian’s tale.

It takes a significant portion of a person’s life to read twenty novels once. It must be a truly rare writer who could motivate a second helping, and perhaps no writer living or dead could prompt a third. O’Brian was every bit that writer: fit to inhabit the same exalted perch and breathe the same rarefied air as Conrad, London, Wells, Verne, and Forester. To be sure, O’Brian cannot claim their variety of subject matter and point of view, and there are some who might smirk at the literary pretensions of what we must all admit was a six-thousand-page serial adventure novel, but I don’t believe many who have read the books would share that view. O’Brian was in many ways a one-hit wonder, but what a prodigious great hit it was. Page after lyrical, poetical page, the tales of Jack Aubrey and Stephen Maturin are an antiquarian feast for the literary senses.

And if it requires a rare writer to prepare such a feast, maybe it needs a rare reader to take a seat at the table once it is served. Great length is not the only characteristic of this tale that might deter the faint-hearted amateur. One of the great distinguishing features of O’Brian’s work is the language. Among the things novelists strive for are style, voice, and a sense of place. In the Aubrey-Maturin novels O’Brian emerged in the eye of the reading world as a master craftsman, whose every sentence, every perfectly placed paragraph, was so thoroughly steeped in the time, or his sense of it, that while reading them you found yourself immersed without ever feeling the water around your ankles. But this language can be daunting to those accustomed to the modern novel, that often flies by with the breathless pace of a movie. I have relatives who have tried to read O’Brian, but simply have not been able to make headway, even though they love a good sea story.

And it is as a sea story, one immensely long glorious sea story, that O’Brian’s work truly shines. He is neither as dark as Conrad, as gritty as London, nor as fanciful as Verne, and his vision of life at sea is somewhat too idealized to be read as history, but for an authentic image of an eighteenth-century full-rigged ship and its working he is not to be matched. For this he drew on comprehensive research, and personal experience. I am probably in a minority of O’Brian readers who have spent a significant amount of time in square-rigged wooden ships. I spent a large part of 1984 and 1985 sailing as crew aboard first a barkentine (square-rigged foremast, fore-and-aft-rigged main and mizzen) of 180′, and then a brigantine (same rig, one less mast) of 140′. Like the characters in “Master and Commander” and its sequels, I have been aloft in a blow, and know what it is like to lay out on a yard a hundred feet above deck, with one hand for the ship, and one hand for yourself. There is not one detail of O’Brian’s descriptions, from chainplates to futtock shrouds, from wearing to clubhauling, that did not ring true for me.

As authentic as they are, the novels are not one unbroken success from front to back. Some are better than others, and I think that on my second time through I read them with a somewhat more critical eye. Beginning with “The Yellow Admiral” I began to get a sense that perhaps O’Brian knew that things were getting a little repetitive, and even more to the point, he himself acknowledged, in one of the few forewords that he wrote, that he was “running out of history.” If he had known the story would be so popular, he said, then he might have begun in the 1780s or even earlier. There was a great deal of smacking good Royal Navy history that had passed beyond the tale’s reach when he decided to bring the Doctor and the Lieutenant together in Minorca in the year 1800. I doubt, though, that the tale would have been better for an earlier start.

Nor is it improved by a later end. There is a 21st “book” in the series, but I won’t read it. Published after the author died in Dublin in 2000, it consists of a couple of typeset chapters and some handwritten treatments and notes. If they are an accurate guide to the plot, then it seems that O’Brian, at least, was not yet tired of his characters. But for me the perfect ending is “Blue at the Mizzen.” I am content to see Jack with the promise of his pennant, and Stephen with the hope of Christine to salve his wounds, and all of them frozen forever in memory, riding to anchor in the bay of Valparaiso and looking after home and hearth at Woolcombe. The twenty books O’Brian actually completed give us the grand arc of the characters’ lives from start to satisfying finish, from jobless Lieutenant to respected Admiral, from penniless surgeon to wealthy naturalist, and there is really nothing more that a reader can ask from an author. By any measurement, Patrick O’Brian gave this reader more pleasure to the page than he had any right to expect.

Fun in 80 Columns or Less

I like it when, every now and then, some ridiculously low-tech requirement comes along and I can spend a few minutes whipping up a class to make it easier and more flexible. Such things often go into the utility library and see a lot of reuse. An opportunity arose recently when I was working on an email notification and reporting engine strapped to a large database. Every day the data gets updated six ways from Sunday and we generate a bunch of naturally tabular data that has to get embedded into emails. The usual approach to this, for myself and everyone else on the team, is to hard code the output. Last night I was looking for something relaxing to work on, and I decided to whip up some stuff to make generating fixed-width text tables a lot easier. The result is a class called TextTable.
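For contrast, this is roughly the sort of hard-coded, fixed-width formatting that a helper like TextTable is meant to replace. The column widths are baked into the format strings, and every report repeats the same boilerplate. (The report data here is made up for illustration.)

```csharp
using System;
using System.Text;

class HardCodedReport
{
    static void Main()
    {
        // Widths hard-wired into the format strings: 20, 10, and 12 columns.
        var sb = new StringBuilder();
        sb.AppendLine(String.Format("{0,-20}{1,10}{2,12}", "Product", "Units", "Revenue"));
        sb.AppendLine(new string('-', 42));
        sb.AppendLine(String.Format("{0,-20}{1,10}{2,12:N2}", "Widgets", 1250, 10312.50));
        sb.AppendLine(String.Format("{0,-20}{1,10}{2,12:N2}", "Gadgets", 310, 4805.00));
        Console.Write(sb.ToString());
    }
}
```

Change one column width and you get to hunt down every format string that mentions it, which is exactly the kind of tedium worth a utility class.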

ttscreen

I don’t think TextTable is going to win me any ACM awards any time soon, but it’s one of those things that is pretty handy when you need it. I hope to add some extensions to it in the future to support proportional fonts. In the meantime, if you’re interested, you can download the source or read the project post.

Aliased

A couple of events over the last week have me thinking about the concept of aliasing. I first encountered this idea when I started messing with computer graphics back in the late 80’s, and indeed the first event to which I refer involved someone on a forum bringing up the use of “antialiasing” techniques in games and other graphical applications. The term aliasing in general means a loss of information. In graphics people use the word aliasing to mean the jagged lines that result from rendering continuous geometrical features to a pixelated frame buffer. The reason that you see jagged edges on lines and polygons is that information has been lost where there is no pixel to hold it.

The second case that arose also involved a loss of information, but it was less obvious to the people working on it. In this instance a system produces changes to a database, and each change triggers a message out to an enterprise service bus. The problem is that the messages are unordered, and often involve sequential changes to the same field in a record. The designers created a multi-threaded sender for the resulting messages, and cannot guarantee the order of receipt, nor did they provide for any closure message to signal the end of a transaction. Given such constraints, can the state of the entity from the view of message consumers ever be guaranteed to be consistent with the state as seen by the publisher?

The answer is no. In this case, as in the aliasing of edges in a rendered image, information has been lost that cannot be replaced by the target system. In the graphical image it was the color of the missing pixels. Here it is the order of changes to an entity. Once that data is gone, it’s gone. Graphical applications can compensate to some extent by making assumptions. For example, they might assume that the color of a pixel neighboring two other pixels in a jagged line should be set to an average of the surrounding pixels, thus filling in some of the missing information with a best-guess at what might have been there. This is in fact how graphical anti-aliasing works, and it can produce dramatic improvements.
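Here is a toy illustration of that best-guess averaging, assuming a grayscale image held as a byte array. This is a crude neighborhood average, not a production anti-aliasing filter (real renderers typically supersample instead), but it shows how jagged edge pixels take on intermediate values.

```csharp
using System;

class SmoothDemo
{
    // Replace each interior pixel with the mean of itself and its four
    // neighbors -- a stand-in for the "average of the surrounding pixels"
    // guess described above.
    static byte[,] Smooth(byte[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        var result = (byte[,])img.Clone();
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
                result[y, x] = (byte)((img[y, x] + img[y - 1, x] + img[y + 1, x]
                                       + img[y, x - 1] + img[y, x + 1]) / 5);
        return result;
    }

    static void Main()
    {
        // A hard black/white diagonal edge.
        var img = new byte[4, 4];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                img[y, x] = (byte)(x > y ? 255 : 0);

        var smooth = Smooth(img);
        Console.WriteLine(smooth[1, 2]); // prints 153: a gray, not 0 or 255
    }
}
```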

There are similar techniques for sound, although less successful because sound is less forgiving of error. Business information is less forgiving still. If a bank account is supposed to go through a series of transformations leaving it with a balance of x, there is no way to guess what x should be if some of those changes are applied in the wrong order. How could we fix the underlying problem in this particular design? The only way is to provide enough information to consumers to allow them to reconstruct the order of changes. But really the issue is the basic design: the object should not have been published to observers of the transaction on the bus until it was in a consistent and stable state. Lacking either of these remedies there is nothing clients can do to restore the missing information and guarantee integrity. Ultimately it’s no different than running a pristine digital image through a lossy compression algorithm: you’re throwing stuff away, and you can’t get it back.
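The first remedy can be sketched simply. Assuming a hypothetical change-message type, if the publisher stamps every change with a monotonic sequence number, a consumer can buffer whatever arrives out of order and apply changes only in publication order:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical change message: the publisher stamps each change to an
// entity with a monotonically increasing sequence number.
public class ChangeMessage
{
    public long Sequence;
    public string Field;
    public string Value;
}

// Consumer-side reassembly: buffer arrivals and only apply a change
// when it is the next one in publication order.
public class ChangeReassembler
{
    private readonly SortedDictionary<long, ChangeMessage> pending =
        new SortedDictionary<long, ChangeMessage>();
    private long nextExpected = 1;

    public void Receive(ChangeMessage msg, Action<ChangeMessage> apply)
    {
        pending[msg.Sequence] = msg;

        // Drain the buffer as long as the next expected change is present.
        ChangeMessage next;
        while (pending.TryGetValue(nextExpected, out next))
        {
            apply(next);
            pending.Remove(nextExpected);
            nextExpected++;
        }
    }
}

class Demo
{
    static void Main()
    {
        var r = new ChangeReassembler();
        Action<ChangeMessage> apply =
            m => Console.WriteLine(m.Sequence + ": " + m.Field + "=" + m.Value);

        // Messages arrive out of order: 2 before 1...
        r.Receive(new ChangeMessage { Sequence = 2, Field = "Balance", Value = "50" }, apply);
        r.Receive(new ChangeMessage { Sequence = 1, Field = "Balance", Value = "100" }, apply);
        // ...but are applied in order: 1, then 2.
    }
}
```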

GMemory – An Image Search and Match Game

GMemory is a Google-based image search and match game that takes its inspiration from the game of Memory that I used to play with my kids when they were young. The Memory game consists of a number of wooden tiles with images on one side. Each distinct image appears twice in the tile set. The tiles are spread out face down, and the object is to match the pairs by turning over first one tile, and then another. If they match, the tiles stay face up and you turn another pair. Otherwise they are turned face down again, and play passes to the other player if there is one.

GMemory works the same way, except that it searches Google Images for the pictures to use, based on search criteria supplied by the player. It’s a simple game that gave me a chance to learn Silverlight and the Google RESTful search API. (For more on that topic see my GSearch post).

You can play the game here, or download the source and project files here.

SilverDraw Library

The SilverDraw library is an update to my ColorTools and GradientEditor projects. It primarily contains the color pickers and gradient editors from those two assemblies, and provides a place for me to hang other classes as I develop them. I produced this in response to a request for access to the GradientEditor application, which I was happy to consent to since I had meant to get to it at some point anyway.

What you get in SilverDraw are the ColorPicker and SimpleColorPicker controls, as well as a utility control for displaying system colors, and of course the GradientEditor control, which you can see in action here. Ultimately my plan for this assembly was to add complete support for drawing features, and I may still get to that at some point, but it’s pretty much been on hold for a year now for various reasons. In any case, I hope you find the controls in SilverDraw useful.

Downloads: SilverDraw Assemblies, SilverDraw Source

GSearch for .NET and Silverlight 2

GSearch is a set of class libraries for searching Google from .NET 3.5 and Silverlight 2 managed code. GSearch uses the Google RESTful web service API to execute searches and receive results, and supports the following search types: blogs, books, images, locations, news, patents, video, and web.

GSearch consists of a core assembly, GSearch.Core.dll, that is approximately 21k in size, along with a set of eight related assemblies that implement the specific search types. These range in size from 9k to 15k, and since an application needs to reference and distribute just the assemblies for the search types it uses, the overhead for adding search to your application is fairly small.

Continue reading