# Friday, January 30, 2009

A while ago at PDC, a couple of my colleagues presented CHESS, but unfortunately it wasn’t ready for release. Well, you don’t have to wait anymore: CHESS is now available for download at DevLabs for Visual Studio 2008 Team Dev or Team Test. CHESS comes under the same pre-release license as Pex, or under an academic license.

CHESS = Unit Testing of Concurrent Programs

CHESS is a tool that finds and reproduces “Heisenbugs” in concurrent programs (both Win32 and .NET)… You know, those kinds of bugs that do not reproduce under the debugger, or only show up in a production environment; the kind of bugs where you scratch your head for days in front of a process dump before giving up.

I also recently recorded a ‘getting started’ tutorial for CHESS in Visual Studio Unit Test: you can take a unit test*** and turn it into a CHESS test by adding the [HostType("Chess")] attribute!


*** That unit test should involve threads, otherwise CHESS will have no effect.

CHESS and Pex

In order to control the scheduling of .NET programs, CHESS instruments the threading APIs using ExtendedReflection, the code instrumentation framework that was built for Pex. In the future, we would also like to combine both approaches to explore data inputs and thread schedules together.

Well enough said… go and try it out!

posted on Friday, January 30, 2009 12:52:08 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Tuesday, January 27, 2009

Ben Hall wrote a review of my Reflector Review addin. It’s a nice tutorial on how to use this little helper.

The Review addin lets you annotate and comment on any metadata inside of Reflector.


posted on Tuesday, January 27, 2009 1:24:08 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Thursday, January 15, 2009

Phil Haack has started an interesting discussion on the implementation of a named formatter whose format strings are of the form {name} instead of {0}. In fact, he started with 3 implementations, and his readers submitted 2 more! Since Phil was kind enough to package all the implementations (and the unit tests) in a solution, I took the liberty of running Pex on them to see if there were any issues lurking. The goal of this post is not really to show whether those implementations are correct, but to give you an idea of where Pex could be applicable in your code.

But wait, it’s already Unit Tested!

Indeed, Phil diligently wrote a unit test suite that covers many different use cases. Does that mean you are done with testing? The named format syntax can be tricky, since it may involve escaping curly braces… What is the named format syntax anyway? It’s pretty obvious that it should let you write something like “{Name} is {Status}” instead of “{0} is {1}”, but that is still pretty vague. In particular, what are the syntactic rules for escaping curly braces (what’s the meaning of {{}{}{{}}}, etc.)? Is there even a grammar describing named formats?
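To make the ambiguity concrete, here is a minimal Python sketch of one possible interpretation of the named-format syntax ({{ and }} escape literal braces, {key} looks up a value). The function name and the exact escaping rules are my own assumptions, not Phil’s specification:

```python
def named_format(fmt, values):
    """One possible named-format interpretation:
    '{{' and '}}' produce literal braces, '{key}' is replaced by values[key]."""
    out = []
    i = 0
    while i < len(fmt):
        c = fmt[i]
        if c == '{':
            if fmt.startswith('{{', i):      # escaped literal '{'
                out.append('{')
                i += 2
                continue
            end = fmt.find('}', i)
            if end < 0:
                raise ValueError("unclosed brace at %d" % i)
            out.append(str(values[fmt[i + 1:end]]))
            i = end + 1
        elif c == '}':
            if fmt.startswith('}}', i):      # escaped literal '}'
                out.append('}')
                i += 2
                continue
            raise ValueError("unmatched '}' at %d" % i)
        else:
            out.append(c)
            i += 1
    return ''.join(out)
```

Even this short sketch forces decisions (is a lone `}` an error? is `{{}` a literal brace followed by an error?) that the informal description leaves open, which is exactly why the implementations below disagree.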

As is often the case, there is no clear and definitive answer (at least I could not find one). In this post, I’ll show 2 techniques that can be used here, out of the many patterns for Pex which we documented.

Technique 1: Explore for runtime errors (pattern 2.12 – parameterized stub)

The most basic way of running Pex is to simply apply Pex to a method without adding any assertions. At this point, you are looking for NullReferenceException, IndexOutOfRangeException or other violations of lower-level APIs. Although this kind of testing won’t tell you whether your code is correct, it may give you back instances where it is wrong. For example, we can pass an arbitrary format string and object into the format method; the code below is a parameterized unit test for which Pex can generate relevant inputs:

[PexMethod]
public void HenriFormat(string format, object o)
{
    format.HenriFormat(o);
}

Note that it took a couple of runs of Pex to figure out the right set of assemblies to instrument. Fortunately, this process is mostly automated, and you simply have to apply the changes that Pex suggests.

The screenshot below shows the inputs generated by Pex. Intuitively, I would think that FormatException is expected, as part of properly rejecting invalid inputs. However, there are ArgumentOutOfRangeExceptions triggered inside of the DataBinder code that probably should be intercepted earlier: if this implementation uses the DataBinder code, it should make sure that it only passes acceptable values. Whether those issues are worth fixing is a tricky question: maybe the programmer intended to allow this behavior.
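Pex generates these inputs by constraint solving, but the underlying idea can be sketched with plain brute-force enumeration in Python (the helper below is a hypothetical illustration of mine, not a Pex API): feed many inputs to the method under test, and flag any exception that is not part of the documented contract:

```python
def explore(fn, inputs, expected=(ValueError,)):
    """Run fn over many (format, obj) inputs. Exceptions listed in `expected`
    count as proper input rejection; anything else is reported as a finding."""
    findings = []
    for fmt, obj in inputs:
        try:
            fn(fmt, obj)
        except expected:
            pass  # documented rejection of an invalid input
        except Exception as e:
            # a lower-level exception leaked out: a candidate bug
            findings.append((fmt, obj, type(e).__name__))
    return findings
```

The difference, of course, is that Pex derives the interesting inputs from the code paths themselves instead of relying on a fixed input set.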


Technique 2: Compare different implementations (pattern 2.7 – same observable behavior)

A second approach is to compare the outputs of the different implementations. Since all the implementations implement the same (underspecified) named format specification, their behavior should match with respect to any input. This property makes it easy to write very powerful parameterized unit tests. For example, the following test asserts that the Haack and Hanselman implementations have the same observable behavior, i.e. they return the same string or throw the same exception type (PexAssert.AreBehaviorEqual takes care of asserting this):

[PexMethod]
public void HaackVsHansel(string format, object o)
{
    PexAssert.AreBehaviorEqual(
        () => format.HaackFormat(o),
        () => format.HanselFormat(o));
}

Again, this kind of testing will not tell you whether the code is correct, but it will give you instances where the different implementations behave differently, which means one of the two (or even both) has a bug. This is a great technique for testing a new implementation against an old (fully tested) implementation, which can be used as an oracle. We run the test and Pex comes back with a list of test cases. For example, the format string “\0\0{{{{“ leads to different outputs for the two implementations. From the different outputs, “\0\0{{“ vs “\0\0{{{{“, it seems that curly braces are escaped differently. If I wanted to dig deeper, I could also simply debug the generated test case.
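The ‘same observable behavior’ check itself is easy to express in any language; here is a Python sketch of the idea behind PexAssert.AreBehaviorEqual (the helper names are mine):

```python
def behavior(thunk):
    """Observable behavior of a computation: either ('value', result)
    or ('raises', exception type)."""
    try:
        return ('value', thunk())
    except Exception as e:
        return ('raises', type(e))

def assert_behavior_equal(f, g):
    """Two computations agree if they return equal values
    or throw the same exception type."""
    assert behavior(f) == behavior(g), (behavior(f), behavior(g))
```

Capturing the exception type (rather than letting it propagate) is what lets ‘both implementations reject this input’ count as agreement.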


Comparing All Implementations

Now that we’ve seen that Phil and Scott do not agree, could we apply this to the other formatters? I quickly set up a T4 template to generate the code for the parameterized unit tests between each pair of formatters. Note that order matters for Pex: calling A then B might lead to a different test suite compared to B then A, just because Pex explores the code in different orders.

<#@ template language="C#" #>
<#
// the implementations to compare; add the remaining formatter names here
string[] formatters = new string[] { "Haack", "Hansel", "Henri" };
#>
using System;
using Microsoft.Pex.Framework;
using Microsoft.Pex.Framework.Validation;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UnitTests;
using StringLib;

namespace UnitTests
{
    [PexClass(MaxConstraintSolverTime = 2, MaxRunsWithoutNewTests = 200)]
    public partial class StringFormatterTestsTest
    {
<#  foreach(string left in formatters) {
        foreach(string right in formatters) {
            if (left == right) continue; #>
        [PexMethod]
        public void <#= left #>Vs<#= right #>(string format, object o)
        {
            PexAssert.AreBehaviorEqual(
                () => format.<#= left #>Format(o),
                () => format.<#= right #>Format(o));
        }
<#      }
    } #>
    }
}

The answer that Pex returns is interesting and scary: none of the implementations’ behaviors match. This is not too surprising, since they all use radically different approaches to solve this problem: regular expressions vs. the string API, etc.
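The T4-generated tests above enumerate ordered pairs of implementations; the cross-checking loop itself can be sketched directly in Python (the harness below is a hypothetical stand-in for the Pex exploration, using a fixed input set rather than constraint solving):

```python
import itertools

def compare_all(impls, inputs):
    """Cross-check every ordered pair of implementations.
    impls maps a name to a function(fmt, obj); returns the mismatches found."""
    def behavior(fn, fmt, obj):
        try:
            return ('value', fn(fmt, obj))
        except Exception as e:
            return ('raises', type(e).__name__)

    mismatches = []
    for (ln, lf), (rn, rf) in itertools.permutations(impls.items(), 2):
        for fmt, obj in inputs:
            if behavior(lf, fmt, obj) != behavior(rf, fmt, obj):
                mismatches.append((ln, rn, fmt))
    return mismatches
```

An empty result means the implementations agree on every tested input; any entry names the two implementations and the input that separates them.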

Try it for yourself.

The full modified source is available for download here. You will need to install Pex from http://research.microsoft.com/pex/downloads.aspx to re-execute the parameterized unit tests.

posted on Thursday, January 15, 2009 4:38:25 PM (Pacific Standard Time, UTC-08:00)  #    Comments [1]
# Wednesday, January 14, 2009

Today, we released the 0.9.40105.0 version of Pex. The major highlights of this release are: a pre-release license for Visual Studio 2008, better test framework integration, and partial support for Visual Basic.NET/F# (read the full release notes).

CHESS, another project from RiSE, also released its first public download today. CHESS finds and reproduces Heisenbugs in concurrent programs. CHESS uses the Pex code instrumentation technology to monitor and control the .NET threading APIs, and one day we hope to combine Pex and CHESS…

Enjoy and don’t forget to give us your feedback through the MSDN forums.

posted on Wednesday, January 14, 2009 12:21:29 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Saturday, January 10, 2009

Stubs is a framework for generating stubs (which we announced earlier with Pex). The framework generates stubs (not mocks) that are

  • lightweight, i.e. relying on delegates only,
  • strongly typed, i.e. no magic strings – refactoring friendly,
  • source code generated, i.e. no dynamic code or expression trees,
  • great debugging experience, i.e. break and step through stubs,
  • minimalistic API (there is almost none),
  • friendly for Pex!

More details about Stubs are also available in the stubs reference document.

Quick Start

Let’s start from a simple interface IFileSystem for which we want to write stubs:

public interface IFileSystem
{
    void WriteAllText(string fileName, string content);
    string ReadAllText(string fileName);
}

Stubs relies solely on delegates to attach behaviors, and leverages the lambda syntax from C# 3.0 to accomplish pretty much anything:

// SFileSystem was generated by Stubs and stubs IFileSystem
var stub = new SFileSystem();

// always returns "..."
stub.ReadAllText = (me, file) => "...";

// expectation: checks that file == "foo.txt"
stub.ReadAllText = (me, file) =>
{
    Assert.AreEqual("foo.txt", file);
    return "...";
};

// storing side effects in closures: written saves the content
string written = null;
stub.WriteAllText = (me, file, content) => written = content;

// hey, we can do whatever we want!
stub.ReadAllText = (me, file) =>
{
    if (file == null) throw new ArgumentNullException();
    return "...";
};

// cast to the interface to use it
IFileSystem fs = stub;

Anything is possible… as long as C# (or your favorite language) allows it.

Anatomy of a Stub

Each stubbed method has an associated delegate field that can be set freely (e.g. WriteAllText and ReadAllText). If this delegate field is set, it will be used when the method is called; otherwise a default action occurs. Let’s see this with a simplified stub of the IFileSystem interface, SFileSystem, which shows how IFileSystem.WriteAllText is implemented:

class SFileSystem
    : StubBase<SFileSystem>
    , IFileSystem
{
    public Action<SFileSystem, string, string> WriteAllText; // attach here

    void IFileSystem.WriteAllText(string fileName, string content)
    {
        var stub = this.WriteAllText;
        if (stub != null)
            stub(this, fileName, content); // your code executed here
    }
}

The actual generated code may look more complicated, because it contains custom attributes for debugging, comments, and globally qualified types to avoid name clashes:
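The delegate-field pattern is not C#-specific; here is a Python sketch of the same idea (the class and method names mirror the example above, but the code is my own illustration, not generated by Stubs):

```python
class SFileSystem:
    """Stub sketch: each method forwards to an attachable callable field;
    when the field is unset, a default action occurs (here: do nothing /
    return an empty string)."""
    def __init__(self):
        self.read_all_text = None    # attach: f(stub, file_name) -> str
        self.write_all_text = None   # attach: f(stub, file_name, content)

    def ReadAllText(self, file_name):
        stub = self.read_all_text
        return stub(self, file_name) if stub else ""

    def WriteAllText(self, file_name, content):
        stub = self.write_all_text
        if stub is not None:
            stub(self, file_name, content)  # your code executed here
```

Behaviors attach exactly as in the C# version, e.g. `fs.read_all_text = lambda me, f: "..."`, and closures capture side effects the same way.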


Code generation: How does it work?

Stubs is a single-file generator that pre-generates stubs as source code. Stubs also monitors the build and regenerates the source code when a change occurs.


The stub generation is configured through an XML file (a .stubx file) that describes which code should be generated and how. The generated code is saved in a ‘Designer’ file, similarly to other code generation tools (typed datasets, etc.).


Great debugging experience

A cool side effect of simply using delegates: you can step through and debug your stubs! This is usually not the case with mock frameworks that use dynamically generated code. Below, you can see that one can set breakpoints in the body of a stub and debug as usual.


Where do I get it?

The Stubs framework comes with Pex, but you can use it in any unit testing activity. It provides a simple, lightweight alternative for defining stubs for testing. Pex can be downloaded from http://research.microsoft.com/pex .

posted on Saturday, January 10, 2009 7:41:32 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Thursday, January 08, 2009

Following the implementation of the disjoint-set, another fun algorithm to implement was the offline least common ancestor problem. Tarjan proposed an algorithm based on the disjoint-set and a DFS traversal. Given that I now have all those ingredients in QuickGraph, it was just a matter of hooking up the implementation with the right events. The core of the implementation looks as follows (and looks quite elegant to me):

var gpair = GraphExtensions.ToAdjacencyGraph(this.pairs); // for fast lookup
var disjointSet = new ForestDisjointSet<TVertex>();
var vancestors = new Dictionary<TVertex, TVertex>();
var dfs = new DepthFirstSearchAlgorithm<TVertex, TEdge>(
    this, this.VisitedGraph,
    new Dictionary<TVertex, GraphColor>(this.VisitedGraph.VertexCount));

dfs.InitializeVertex += (sender, args) => disjointSet.MakeSet(args.Vertex);
dfs.DiscoverVertex += (sender, args) => vancestors[args.Vertex] = args.Vertex;
dfs.TreeEdge += (sender, args) =>
{
    var edge = args.Edge;
    disjointSet.Union(edge.Source, edge.Target);
    vancestors[disjointSet.FindSet(edge.Source)] = edge.Source;
};
dfs.FinishVertex += (sender, args) =>
{
    foreach (var e in gpair.OutEdges(args.Vertex))
        if (dfs.VertexColors[e.Target] == GraphColor.Black)
            this.ancestors[EdgeExtensions.ToVertexPair(e)] =
                vancestors[disjointSet.FindSet(e.Target)];
};

// go!
dfs.Compute();
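For readers who prefer a self-contained version, here is a sketch of Tarjan’s offline LCA in Python, following the same event structure (make-set on initialize, union on tree edge, answer queries when a vertex finishes). This is my own illustration, not the QuickGraph code:

```python
def offline_lca(children, root, pairs):
    """Tarjan's offline LCA. children: {v: [child, ...]} describing a rooted
    tree, pairs: iterable of (u, v) queries. Returns {frozenset({u, v}): lca}."""
    parent = {}
    def make_set(v): parent[v] = v
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    def union(a, b): parent[find(b)] = find(a)

    queries = {}
    for u, v in pairs:
        queries.setdefault(u, []).append(v)
        queries.setdefault(v, []).append(u)
    ancestor, finished, answers = {}, set(), {}

    def dfs(u):
        make_set(u)                         # InitializeVertex
        ancestor[u] = u                     # DiscoverVertex
        for c in children.get(u, []):
            dfs(c)
            union(u, c)                     # TreeEdge
            ancestor[find(u)] = u
        finished.add(u)                     # FinishVertex
        for v in queries.get(u, []):
            if v in finished:               # the 'black' check above
                answers[frozenset((u, v))] = ancestor[find(v)]

    dfs(root)
    return answers
```

The key invariant is the same as in the event-based version: when a vertex finishes, `ancestor[find(v)]` is the LCA of `v` and the current vertex for every already-finished `v`.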


posted on Thursday, January 08, 2009 12:32:54 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Sunday, January 04, 2009

The k-shortest-path problem is to compute the first k shortest paths between two vertices in a directed graph. If you put this in the context of route planning, it gives you k alternative routes in case the shortest path is blocked by snow :) While the single-source shortest path problem is well known and implemented in many languages, there are not many implementations available for this problem, although it has been extensively studied. After looking around, I stumbled on a nice article comparing various approaches. The authors pointed out a 1959 algorithm by Hoffman and Pavley that solves this problem (there are actually many others). This algorithm looked like a good fit:

  • it requires a single call to a single-source shortest path algorithm. Other approaches require as many as kn calls to a shortest path algorithm.
  • it sounded simple and did not require new specialized data structures.

I want to try it!

The algorithm is available in QuickGraph 3.1.40104.00 and up. You can take a look at it at http://www.codeplex.com/quickgraph/SourceControl/changeset/view/29858#377982. To use it on a BidirectionalGraph:

IBidirectionalGraph<TVertex, TEdge> g = …;
foreach (IEnumerable<TEdge> path in g.RankedShortestPathHoffmanPavley(weights, source, goal, 4))
{
    // each path is a sequence of edges from source towards goal
}

A glimpse at the Hoffman-Pavley algorithm

The algorithm works in 2 phases.

In the first phase, we build a minimum ‘successor’ tree towards the goal vertex. This tree can be used to build a shortest path from any (reachable) vertex to the goal vertex. To build this tree, we can simply apply Dijkstra’s shortest path algorithm to the reversed graph. This can be done in a couple of lines with QuickGraph.
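The first phase can be sketched in a few lines of Python: run Dijkstra from the goal over reversed edges, recording for each vertex its distance to the goal and its successor on a shortest path (the names and graph representation below are my own):

```python
import heapq

def successor_tree(graph, weights, goal):
    """Dijkstra on the reversed graph. graph: {u: [v, ...]},
    weights: {(u, v): w}. Returns (dist, succ) where dist[v] is the
    shortest v -> goal distance and succ[v] the next vertex on that path."""
    rev = {}
    for u, outs in graph.items():
        for v in outs:
            rev.setdefault(v, []).append(u)

    dist, succ = {goal: 0.0}, {}
    heap = [(0.0, goal)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float('inf')):
            continue  # stale heap entry
        for u in rev.get(v, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(u, float('inf')):
                dist[u], succ[u] = nd, v
                heapq.heappush(heap, (nd, u))
    return dist, succ
```

Following `succ` from any reachable vertex walks a shortest path to the goal, which is exactly what the second phase needs to complete deviation paths cheaply.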

As with many other k-shortest-path algorithms, the second phase works by building deviation paths and picking the best one. In the case of the Hoffman-Pavley algorithm, it works as follows: pick the latest shortest path and, for each vertex of this path, build a deviation path (more on that later) for each out-edge and add it to a priority queue. Then start again:

var queue = new PriorityQueue<Path>();
while (queue.Count > 0)
{
    var path = queue.Dequeue();
    foreach (var vertex in path)
        foreach (var edge in graph.OutEdges(vertex))
            queue.Enqueue(CreateDeviation(path, vertex, edge));
}

A deviation path is composed of three parts:

  1. the initial part of the ‘seeding’ path, i.e. the edges before the deviation edge,
  2. the deviation edge,
  3. the remaining shortest path to the goal.

When we build the deviation path, we also compute its weight. A nice property of deviation paths is that they can be ‘built’ lazily, when needed. This saves a lot of space and computation, as most deviation paths will probably not end up in the winning set of paths: instead of storing a full path, we store a path index and an edge index.

That’s it! The real implementation contains more code to deal with self-edges and loops, but the main idea is there. This is definitely a very elegant algorithm!

What about testing!

Algorithm authors usually illustrate their approach with an example. This is a good way to get started on a small graph and to ensure that the algorithm works as the original author expected. This is the kind of unit test I start with.

The next step is to apply the algorithm to a large number of graph instances. Unfortunately, I do not have any other k-shortest path implementation, so an oracle is harder to build here. Nevertheless, the result of the algorithm, i.e. the shortest path collection, has a couple of properties that should always hold:

  • the algorithm produces loopless paths,
  • the weight of path k is less than or equal to the weight of path k + 1.
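These two properties are cheap to check mechanically; a Python sketch of such a result validator (representing a path as a list of (u, v) edges, with weights keyed by edge) might look like:

```python
def check_ranked_paths(paths, weights):
    """Sanity checks for a ranked shortest path result:
    every path is loopless, and path weights are nondecreasing."""
    for path in paths:
        # a loopless path visits each vertex at most once
        vertices = [path[0][0]] + [v for (_, v) in path]
        assert len(set(vertices)) == len(vertices), "path contains a loop"
    totals = [sum(weights[e] for e in path) for path in paths]
    assert all(a <= b for a, b in zip(totals, totals[1:])), "paths out of order"
```

Such a validator can run against randomly generated graphs, which is exactly the kind of partial oracle the paragraph below worries about: it catches malformed output, but not a missed path.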

The problem with this test is that it does not guarantee that no shortest paths have been missed. At this point, I’m a bit puzzled about how to test that.

posted on Sunday, January 04, 2009 11:05:14 PM (Pacific Standard Time, UTC-08:00)  #    Comments [1]
# Thursday, January 01, 2009

When you implement an algorithm that computes an optimal solution (i.e. minimum/maximum with respect to a cost function), how do you test that the solution is actually optimal?

This is the kind of question I face when implementing algorithms in QuickGraph. For example, I was recently looking at the minimum spanning tree (MST) of a graph. While checking that the result is a spanning tree is easy, checking that it is minimum is not obvious: nobody has written Assert.IsMinimum yet :). Here are a couple of techniques that I found useful along the way:

Input-Output table

The most obvious approach is to pre-compute the results for a number of problems and assert that the solution of the algorithm matches. In the MST case, use a small set of graphs for which the MST is well known, and check that the computed MST has the correct weight. This approach will take you only so far, and it requires a lot of manual work, since you need to solve the problem (or find a known solution) for a number of representative cases.

Solution Perturbation

If the algorithm computes a solution that is minimal with respect to a cost function, one can try to perturb the solution to see if there’s a smaller one. If there is, it clearly violates the fact that the solution should be minimal, so you just found a bug. In the case of the MST, this means randomly picking edges that are not in the minimum spanning tree, swapping them with tree edges, and evaluating whether the resulting tree is smaller.
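One systematic way to do this perturbation is the exchange argument: for every non-tree edge (u, v, w), no tree edge on the unique tree path between u and v may be heavier than w, otherwise swapping the two would produce a cheaper spanning tree. A Python sketch of such a check (my own helper, not QuickGraph code):

```python
def mst_is_locally_minimal(edges, tree):
    """Exchange check. edges and tree are collections of (u, v, w) tuples,
    tree being the claimed MST. Returns False if some non-tree edge could
    replace a heavier edge on its tree cycle."""
    tree = set(tree)
    adj = {}
    for u, v, w in tree:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))

    def heaviest_on_path(s, t, seen):
        # walk the unique tree path from s to t; heaviest edge weight on it
        if s == t:
            return 0
        for v, w in adj.get(s, []):
            if v not in seen:
                seen.add(v)
                sub = heaviest_on_path(v, t, seen)
                if sub is not None:
                    return max(w, sub)
        return None

    for u, v, w in edges:
        if (u, v, w) in tree or (v, u, w) in tree:
            continue
        h = heaviest_on_path(u, v, {u})
        if h is not None and h > w:
            return False  # swapping (u, v, w) in would lower the tree weight
    return True
```

A tree that passes this check for every non-tree edge is in fact globally minimal, so the ‘perturbation’ here doubles as a real oracle.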

This kind of approach is actually used in optimization, where the search space might have local minima (see simulated annealing).

Multiple Implementations

A nice thing about graph problems is that there are usually many (vastly) different ways to solve them. If multiple implementations are available, we can simply compare the results of the algorithms and make sure that they match. Each algorithm might have bugs, but it is unlikely that they share common ones. Since we now have a good oracle, we can apply this approach to a large number of inputs to increase our coverage.

In the MST case, two popular algorithms are Prim’s and Kruskal’s. These are 2 very different approaches: Prim’s is built on top of Dijkstra’s single-source shortest path algorithm, while Kruskal’s is built on top of the disjoint-set data structure. By carefully picking the weights of the edges, we can assert different things:

  • if the edge weights are all equal, any spanning tree is minimal. So we can compare the result to a depth-first-search algorithm (which can easily compute a spanning tree).
  • if some edge weights are different, there may be many minimum spanning trees. In this case, we can still assert that the weight of the tree is minimal.
  • if all the edge weights are different, then the MST is unique. This fact can be used to precisely pinpoint differences in solutions during testing.
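A minimal cross-check of the two algorithms can be written in a few dozen lines of Python; both functions below are simplified sketches (connected graph assumed, total weight only), not the QuickGraph implementations:

```python
import heapq

def prim(n, edges):
    """Prim's MST total weight. Vertices are 0..n-1, edges are (w, u, v)."""
    adj = {i: [] for i in range(n)}
    for w, u, v in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen, total = {0}, 0
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap and len(seen) < n:
        w, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        total += w
        for e in adj[v]:
            heapq.heappush(heap, e)
    return total

def kruskal(n, edges):
    """Kruskal's MST total weight via a disjoint-set."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total
```

On any connected graph the two totals must agree; running both over many random graphs and asserting `prim(n, edges) == kruskal(n, edges)` is the cross-implementation oracle described above.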

There is a corner case that needs to be checked: if all the algorithms are no-ops, i.e. they don’t do anything, their solutions will always match!

Happy New Year!

posted on Thursday, January 01, 2009 9:03:14 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]