# Monday, May 31, 2004

With the kind help of Lutz Roeder, I have recompiled the IL Grapher into a Reflector Add-in...

posted on Monday, May 31, 2004 11:37:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [15]
# Sunday, May 30, 2004

In the near future, MbUnit will support storing test results in a SQL database (SQL Server supported). The database structure is done and the BLL/DAL is finished. Here's a snapshot of the database schema.

 

posted on Sunday, May 30, 2004 1:21:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Saturday, May 29, 2004

I have added the possibility to sort fixtures by importance (or severity): Critical, Severe, Default, NoOneReallyCaresAbout.

[TestFixture]
[Importance(TestImportance.Critical)]
public class SomeFixture
{...}

posted on Saturday, May 29, 2004 10:45:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]

A number of new assertion classes have been added to MbUnit since the last post on this topic. The new helper classes cover arrays, collections, compilation, serialization, the web...

ArrayAssert

This helper class contains methods to compare arrays:

byte[] expected = ...;
byte[] actual = ...;
ArrayAssert.AreEqual(expected,actual);

The method compares the rank and the length of the arrays, then performs an element-wise comparison.
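For illustration, here is a minimal sketch of what such an element-wise check might look like; this is an assumption about the implementation, not the actual MbUnit code (rank-1 indexing is shown for simplicity):

```csharp
using System;
using MbUnit.Framework;

public sealed class ArrayAssertSketch
{
    // Hypothetical re-implementation: compare rank, then length,
    // then each element in turn (rank-1 arrays only in this sketch).
    public static void AreEqual(Array expected, Array actual)
    {
        Assert.AreEqual(expected.Rank, actual.Rank, "rank mismatch");
        Assert.AreEqual(expected.Length, actual.Length, "length mismatch");
        for (int i = 0; i < expected.Length; ++i)
            Assert.AreEqual(expected.GetValue(i), actual.GetValue(i),
                "elements differ at index " + i);
    }
}
```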

ColAssert

This class provides several methods to compare two ICollection instances:

ICollection expected = ...;
ICollection actual = ...;

ColAssert.IsSynchronized(actual);
ColAssert.AreCountEqual(expected,actual);
ColAssert.AreEqual(expected,actual);

The class also provides methods to test the collection count, SyncRoot, synchronization, etc.

SerialAssert

This class contains various methods to test the "serializability" of objects.

SerialAssert.IsXmlSerializable(typeof(MyClass));
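As a rough sketch (an assumption, not the actual SerialAssert code), such a check can be built on XmlSerializer, which throws if the type cannot be XML-serialized:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public sealed class SerialAssertSketch
{
    // Hypothetical check: constructing the serializer and round-tripping
    // a default instance both throw if the type is not XML-serializable.
    public static void IsXmlSerializable(Type type)
    {
        XmlSerializer serializer = new XmlSerializer(type); // throws on non-serializable types
        object instance = Activator.CreateInstance(type);   // assumes a default constructor
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, instance);
    }
}
```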

WebAssert

This class contains assertions on the properties of web controls and web pages.

CompilerAssert

This class contains assertions to check that snippets are compilable:

String source = ...; // C# code to compile
// verify that source compiles
CompilerAssert.Compiles(CompilerAssert.CSharpCompiler, source);

What about your assertions?

There is also a CodeSmith template that lets you build "strongly-typed" assertion classes out of existing types. See the templates directory.

posted on Saturday, May 29, 2004 9:48:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [8]
# Friday, May 28, 2004

I just came back from Microsoft in Redmond, where I attended interviews for an SDE/T position on the CLR team. (Interviewing at Microsoft is a unique experience in its own right.) It looks like it worked, because I'm moving to Redmond in October. :) I would like to thank Michael Corning, Harry Robinson and Holly Barbacovi for their support in this adventure.

Michael Corning and me at Building 44 in Redmond.

Don't be fooled by the poor quality of the photo: it was pouring rain (the famous Redmond weather).

 

posted on Friday, May 28, 2004 1:25:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [9]
# Saturday, May 22, 2004

I've recently become interested in Mutation Testing, a fun way of measuring the quality of tests. This post presents the first snapshot of a toy application that mutates any .NET program.

Mutation testing is the practice of inserting "artificial" faults into the Instance Under Test and checking whether the tests catch them. The idea is that the tests are adequate if they detect all the faults. For example, a typical mutation is to negate the condition expression of an if statement:

//original
if (condition)
   DoSomething();
// mutated
if (!condition)
   DoSomething();

Jester implements mutation testing for JUnit (there is a nice article here about Jester). In his thesis (which Lutz Roeder kindly pointed out to me), A Mutation Testing Tool for Java Programs, Matthias Bybro defines an entire framework for generating and executing mutants. In this post, I will not focus on the theory of mutation testing; instead, I'll show how it can be implemented in .NET.

The tools we need

As usual, before attacking the problem, let's review what functionality we need and what we have in our toolset. In this case, we need the ability to load an assembly, explore and alter its IL, and execute or write back the mutated assembly. Any ideas?

RAIL! The Runtime Assembly Instrumentation Library is exactly what we need. With RAIL, you can load an assembly, explore and alter the IL, and execute or write the mutated assembly. You can even substitute types or entire functions. In fact, in the PowerPoint presentation of RAIL, the author shows how to play with IL.

Let's code

The AssemblyScrambler application is designed as follows: a ScramblerEngine instance contains a collection of IScrambler instances. An IScrambler instance contains a method to scramble IL code (I started this application before learning about mutation testing, so scramblers should really be named mutators, etc.):

public interface IScrambler
{
    void Scramble(ScrambleTrace trace,RMethodDef method);
}

where trace is used to log mutations and method is an instance of Rail.Reflect.RMethodDef, which represents a method. The scramblers are used as follows in ScramblerEngine:

public void Scramble(string fileName)
{
    this.assembly = RAssemblyDef.LoadAssembly(fileName);
    foreach(RTypeDef t in this.assembly.RModuleDef.GetTypes())
    {
        foreach(RMethodDef method in t.GetMethods())
        {
            foreach(IScrambler scrambler in this.Scramblers)
            {
                scrambler.Scramble(trace,method);
            }
        }
    }
}

We are now ready to start implementing scramblers. There is currently only one implemented: it switches brtrue -> brfalse and brfalse -> brtrue. RMethodDef contains a MethodBody that contains a Code instance. Code is a mutable collection of instructions:

for(int i = 0; i < method.MethodBody.Code.InstructionCount; ++i)
{
    Instruction il = method.MethodBody.Code[i];
    // select branching instructions only
    if (il.OpCode.OperandType != OperandType.InlineBrTarget 
        && il.OpCode.OperandType != OperandType.ShortInlineBrTarget)
        continue;
    // il is an ILBranch
    ILBranch branch = (ILBranch)il;
    if (il.OpCode.Name == OpCodes.Brfalse.Name)
    {
        // substitute with brtrue
        method.MethodBody.Code[i] = new ILBranch(OpCodes.Brtrue, branch.Target);
    }
    else if (il.OpCode.Name == OpCodes.Brtrue.Name)
    {
        // substitute with brfalse
        method.MethodBody.Code[i] = new ILBranch(OpCodes.Brfalse, branch.Target);
    }
}

Switching brtrue and brfalse is as simple as that. Note that 99% of the work here is done by the excellent RAIL library.

Small example

Let's apply the scrambler to a small method:

public void IsTrue(bool isTrue)
{
    Console.Write("Expected: {0}, ",isTrue);
    if (isTrue)
        Console.WriteLine("Actual: true");
    else
        Console.WriteLine("Actual: false");
}

The IL code for this method is the following (using Reflector):

.method public hidebysig instance void IsTrue(bool isTrue) cil managed
{
// Code Size: 42 byte(s)
.maxstack 2
L_0000: ldstr "Expected: {0}, "
L_0005: ldarg.1 
L_0006: box bool
L_000b: call void [mscorlib]System.Console::Write(string, object)
L_0010: ldarg.1 
L_0011: brfalse.s L_001f
L_0013: ldstr "Actual: true"
L_0018: call void [mscorlib]System.Console::WriteLine(string)
L_001d: br.s L_0029
L_001f: ldstr "Actual: false"
L_0024: call void [mscorlib]System.Console::WriteLine(string)
L_0029: ret 
}

You can see that the instruction at offset L_0011 is what we target. We have a small console application that calls this method. The code and results are:

Sandbox sandbox = new Sandbox();
sandbox.IsTrue(true);
sandbox.IsTrue(false);
-- output
Expected: True, Actual: true
Expected: False, Actual: false

After mutation

After the above method is passed through the AssemblyScrambler, the IL code of the mutated application looks like this:

.method public hidebysig instance void IsTrue(bool isTrue) cil managed
{
// Code Size: 42 byte(s)
.maxstack 3
L_0000: ldstr "Expected: {0}, "
L_0005: ldarg.1 
L_0006: box bool
L_000b: call void [mscorlib]System.Console::Write(string, object)
L_0010: ldarg.1 
L_0011: brtrue.s L_001f
L_0013: ldstr "Actual: true"
L_0018: call void [mscorlib]System.Console::WriteLine(string)
L_001d: br.s L_0029
L_001f: ldstr "Actual: false"
L_0024: call void [mscorlib]System.Console::WriteLine(string)
L_0029: ret 
}

Take a look at L_0011 now: it is brtrue.s... the method has been mutated. Indeed, the output of the snippet now gives:

Expected: True, Actual: false
Expected: False, Actual: true

You can download the source at http://www.dotnetwiki.org/DesktopDefault.aspx?tabid=121. Don't forget that you need the RAIL assemblies.
posted on Saturday, May 22, 2004 12:25:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [5]
# Friday, May 21, 2004

In the post Fun with Graphs (3): Creating the graph of a database structure, I presented a small application that creates the graph of a database. We are now going to improve the output by adding the different fields, primary keys, etc., to the graph.

GraphvizRecordCell

Graphviz supports a vertex shape that is drawn as nested tables. This shape is called Record. NGraphviz comes with a wrapper class (GraphvizRecordCell) that lets you easily create such records. Some remarks on cells:

  • A GraphvizRecordCell can also contain other nested cells,
  • By default, Graphviz arranges the cells horizontally and switches direction (vertical/horizontal) at each nesting level.

Let's take the formatVertex event handler and adapt it to create records:

private void formatVertex(Object sender, FormatVertexEventArgs e)
{
    TableSchemaVertex v = (TableSchemaVertex)e.Vertex;
    GraphvizRecord record = new GraphvizRecord();
    e.VertexFormatter.Shape = GraphvizVertexShape.Record;
    e.VertexFormatter.Record = record;
    GraphvizRecordCell table = new GraphvizRecordCell();
    record.Cells.Add(table);

    GraphvizRecordCell name = new GraphvizRecordCell();
    name.Text = v.Table.Name;
    table.Cells.Add(name);
    ...

Here's a sample result on the MbUnit database:

posted on Friday, May 21, 2004 12:56:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]

Over the last few days, I have started to prepare MbUnit to support loading test assemblies into a separate AppDomain. This feature is very important for a number of reasons:

  • test assemblies are shadow copied,
  • test assemblies can be unloaded. This means that MbUnit can detect when you have recompiled the test assembly and reload it. The assembly unloading feature is very important if you plan to do Test Driven Development (test, code, test, code...),
  • it is easier to control the AssemblyResolve event,
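The setup can be sketched as follows (directory and domain names are made up for the example):

```csharp
using System;

// Create a separate, shadow-copying AppDomain for the test assembly.
AppDomainSetup setup = new AppDomainSetup();
setup.ApplicationBase = @"c:\MyTests";  // hypothetical test directory
setup.ShadowCopyFiles = "true";         // copies assemblies so the originals stay unlocked

AppDomain testDomain = AppDomain.CreateDomain("Tests", null, setup);
// ... load the test assembly and execute the fixtures inside testDomain ...

// when the assembly is recompiled, discard the whole domain and start over
AppDomain.Unload(testDomain);
```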

Of course, executing the tests in a separate AppDomain has a big drawback: test results and notifications are transmitted over Remoting, and this costs CPU cycles. Currently there is a big performance hit (about twice as slow) when using a separate AppDomain. A possible explanation is that too many event notifications need to cross the Remoting channel.

To be continued...

posted on Friday, May 21, 2004 7:02:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [4]
# Thursday, May 20, 2004

Sorting fixtures by namespace/type is nice... but as always, there are situations where you would like to sort fixtures using other criteria. For example, you might want to sort tests by author, category, importance, etc.

While preparing for AppDomain remoting, I have completely refactored the way MbUnit populates the tree to make it fully extensible: now you can populate the tree any way you like!

FixtureCategoryAttribute

This is a new attribute that tags a fixture to sort it by category. You can describe a nested category by separating the names with dots (like a namespace), and a single fixture can be tagged with (and thus be part of) multiple categories. For example:

[CompositeFixture(typeof(EnumerableTest))]
[ProviderFactory(typeof(ArrayListFactory),typeof(IEnumerable))]
[ProviderFactory(typeof(HashtableFactory),typeof(IEnumerable))]
[Pelikhan] // author
[FixtureCategory("Important.Tests.Should.Be.Here")] // categories
[FixtureCategory("A.Test.Can.Be.In.Multiple.Categories")]
[FixtureCategory("A.Test.Can.Be.In.Multiple.Categories2")]
public class CompositeTest
{
}

Screenshot

Here's a screenshot of the latest MbUnit build: as you can see, the tests are sorted by namespace, author and category.

posted on Thursday, May 20, 2004 10:34:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Wednesday, May 19, 2004

This article gives a rough overview of the MbUnit vision of tests and, consequently, its architecture. It contains some material from a previous CodeProject article.

Why MbUnit ?

Unit testing is a great tool for ensuring application quality, and frameworks like NUnit or csUnit have made it very simple to implement. However, as the number of tests begins to grow, the need for more functionality begins to show up. The above frameworks are based on the Simple Test Pattern, which is basically the sequence of SetUp, Test, TearDown actions. Although highly generic, this solution leaves a lot of work to the test writer. Sadly, there is no easy way to derive and include a new "fixture" type in those frameworks.

MbUnit was simply born from the fact that I wanted a new fixture and integrating it into existing frameworks was nearly impossible (I was also recovering from knee surgery in hospital, with nothing to do but code).

Illustrating example

In order to make things clear, I will refer to an example while explaining how MbUnit works. Let me consider the Simple Test Pattern, which is implemented by most unit test frameworks available. This is the classic way of writing unit tests, as described in the figure below: a TestFixture attribute tags the test class, a SetUp method prepares the state, tests are performed in Test-tagged methods, and clean-up happens in the TearDown-tagged method. This is illustrated on the left of the figure.
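As a reminder, a classic fixture following this pattern looks like this (a made-up example):

```csharp
using System.Collections;
using MbUnit.Framework;

[TestFixture]
public class SimpleFixture
{
    private ArrayList list;

    [SetUp]
    public void SetUp()            // runs before each test
    { this.list = new ArrayList(); }

    [Test]
    public void AddIncreasesCount()
    {
        this.list.Add("hello");
        Assert.AreEqual(1, this.list.Count);
    }

    [TearDown]
    public void TearDown()         // runs after each test
    { this.list = null; }
}
```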

Attribute -> Run -> Invoker

The kernel of MbUnit is composed of different components that work serially. The first component is the fixture attribute.

The fixture attribute is used to tag the classes that contain unit tests (TestFixtureAttribute is a fixture attribute). What is new in MbUnit is that each fixture attribute contains the execution logic of the fixture, which is returned at run-time in the form of a Run (the IRun interface). In the case of the example, the TestFixtureAttribute is defined as a sequence of SetUp, Test and TearDown:

public class TestFixtureAttribute : TestFixturePatternAttribute 
{
     public override IRun GetRun()
     {
          SequenceRun runs = new SequenceRun();
            
          // setup
          OptionalMethodRun setup = new
                              OptionalMethodRun(typeof(SetUpAttribute),false);
          runs.Runs.Add( setup );
            
          //tests
          MethodRun test =new MethodRun(typeof(TestPatternAttribute),true,true);
          runs.Runs.Add(test);
            
          // tear down
          OptionalMethodRun tearDown = new
                           OptionalMethodRun(typeof(TearDownAttribute),false);
          runs.Runs.Add(tearDown);
            
          return runs;                        
     }
}

where

  • TestFixturePatternAttribute is the abstract base class for all new fixture attributes in MbUnit,
  • the GetRun method is called by the MbUnit core to determine the execution path of the fixture. The fixture can use built-in basic attributes to build its execution path,
  • an IRun instance can represent the call to a method, a sequence of methods, etc.,
  • SequenceRun is a sequence of IRuns,
  • MethodRun is an IRun implementation that wraps a call to a method tagged by a predefined attribute,
  • OptionalMethodRun inherits from MethodRun and describes optional methods.

The IRun object creates an execution tree by exploring the tagged type. Each node of the tree contains a RunInvoker (the IRunInvoker interface). The RunInvoker is in charge of calling the method, guarding against exceptions, loading data, etc. On our sample fixture, the Run will extract two tests:

Once the tree is built, we enumerate all possible paths from the root node to the leaves to obtain the different tests. Each of these paths is called a Pipe (the RunPipe class).

In the GUI, the RunPipe instances are attached to the TreeNode nodes so you can easily select and execute tests separately. This ensures that test executions are isolated.

This architecture brings a lot of flexibility (and complexity) to the kinds of fixtures that can be defined. Any user can define their own fixture type and use MbUnit to execute it.

posted on Wednesday, May 19, 2004 11:22:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [4]

As most of you should know by now, PageRank is the ranking system used by Google to estimate the importance of a page (you can see it in the Google toolbar). Of course, since the basic idea of the algorithm was published, there may have been some significant modifications. In this post, I'll show how we can use QuickGraph to compute the PageRank of a graph...

PageRank

The idea behind PageRank is simple and intuitive: important pages are referenced by other important pages, and a page's importance is distributed along its out-edges. There is an extensive literature on the web that explains PageRank.

The PageRank is computed by using the following iterative formula:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)) 

where PR is the PageRank, d is a damping factor usually set to 0.85, T1...Tn are the pages linking to A, and C(v) is the number of out-edges of v.
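To make the formula concrete, here is a tiny hard-coded sketch (independent of QuickGraph; all names are invented) that iterates it on a three-page graph A -> B, A -> C, B -> C, C -> A:

```csharp
using System;
using System.Collections;

string[] pages = { "A", "B", "C" };
// pages linking *to* each page, and the out-degrees C(v)
string[][] inLinks = {
    new string[] { "C" },       // links into A
    new string[] { "A" },       // links into B
    new string[] { "A", "B" }   // links into C
};
int[] outDegree = { 2, 1, 1 };  // C(A), C(B), C(C)

double d = 0.85;
Hashtable pr = new Hashtable();
foreach (string p in pages) pr[p] = 1.0;

for (int iter = 0; iter < 50; ++iter)
{
    Hashtable next = new Hashtable();
    for (int i = 0; i < pages.Length; ++i)
    {
        double r = 0;
        foreach (string t in inLinks[i])
            r += (double)pr[t] / outDegree[Array.IndexOf(pages, t)];
        next[pages[i]] = (1 - d) + d * r;   // PR(A) = (1-d) + d * sum
    }
    pr = next;  // the ranks converge after a few dozen iterations
}
// C ends up with the highest rank: it receives the most incoming importance
```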

PageRank can also be expressed in terms of matrix algebra, where it can be shown to be equivalent to finding the eigenvalues of a sparse matrix (see http://citeseer.ist.psu.edu/kamvar03extrapolation.html). In fact, the above formula is equivalent to the Power Method (a slow method for finding eigenvalues).

Where's my LAPACK?

Until C# (and .NET) has a real and free wrapper around LAPACK, we cannot attack PageRank using matrix algebra. As mentioned above, the formula is equivalent to the Power Method, which is a slow (potentially very slow) method for computing eigenvalues (the convergence rate is |λ2/λ1|, where λ1 is the largest eigenvalue and λ2 the second largest). There are other, faster methods (like model reduction) that we could use to speed things up if we had LAPACK (I'm throwing a bottle into the .NET sea here).

Implementation in QuickGraph

Since we have no matrix algebra, the implementation is very basic and inefficient (I'm almost ashamed). This is almost a disclaimer: do not use this algorithm on big graphs, it is potentially slow. The main loop looks like this:

// temporary rank dictionary
VertexDoubleDictionary tempRanks = new VertexDoubleDictionary();
// create filtered graph that removes dangling links
FilteredBidirectionalGraph fg = new FilteredBidirectionalGraph(
    this.VisitedGraph,
    Preds.KeepAllEdges(),
    new InDictionaryVertexPredicate(this.ranks)
    ); 

int iter = 0;
double error = 0;
do
{
    // compute page ranks
    error = 0;
    foreach(DictionaryEntry de in this.Ranks) 
    {
        IVertex v = (IVertex)de.Key;
        double rank = (double)de.Value;

        double r = 0;
        foreach(IEdge e in fg.InEdges(v))
        {
            r += this.ranks[e.Source] / fg.OutDegree(e.Source);
        }
        // add sourceRank and store
        double newRank = (1-this.damping) + this.damping * r;
        tempRanks[v] = newRank;
        // compute deviation
        error += Math.Abs(rank - newRank);
    } 
    // swap ranks
    VertexDoubleDictionary temp = ranks;
    ranks = tempRanks;
    tempRanks = temp; 
    iter++;
// iterate until convergence, or max iteration reached
}while( error > this.tolerance && iter < this.maxIterations);

where ranks holds the PR values and damping is the d factor. Note that because we use enumerators, we cannot modify the ranks while iterating over the dictionary (the enumerator would be invalidated), so we use two dictionaries and swap them as we go along.

The results

As usual, we use GraphvizAlgorithm to output the results. Here are some sample graphs with their corresponding PageRanks:

posted on Wednesday, May 19, 2004 9:22:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [5]
# Tuesday, May 18, 2004

This is the first episode of an article series on graphs and databases. Today's menu:

  1. extract the structure of a database,
  2. create a graph representation using QuickGraph,
  3. draw it using NGraphviz.

To make things user-friendly, I will use the PropertyGrid to set things up.

Extracting the database schema

At first sight this seemed to be a tedious and boring task, but luckily a light popped up in the back of my head saying "you have already seen that in CodeSmith". In fact, CodeSmith comes with an assembly, SchemaExplorer, whose purpose is to extract database schemas. Even better, the main class, DatabaseSchema, comes with a custom type editor (DatabaseSchemaTypeEditor), so integration into the PropertyGrid is straightforward.

public class DataGraphProperties
{
    private DatabaseSchema schema = null;
    [Category("Data")]
    [TypeConverter(typeof(DatabaseSchemaTypeConverter))]
    public DatabaseSchema Schema
    {
        get
        {
            return this.schema;
        }
        set
        {
            this.schema = value;
        }
    }
}

In the PropertyGrid, the Schema property lets the user select a data source.

Database and graphs

It is straightforward to see that a database is a graph where the tables are the vertices and the foreign keys are the edges. The DatabaseSchema class contains the collection of tables (TableSchema instances), each table containing a collection of foreign keys (TableKeySchema instances). So we have all we need to populate the graph.

Custom Vertex and Edges

The first step in creating a representation of the database as a QuickGraph graph is to create the custom vertex (implementing IVertex) and edge (implementing IEdge) classes. This task is straightforward thanks to two default classes, Vertex and Edge, available in the QuickGraph assembly. This is illustrated for TableSchemaVertex:

public class TableSchemaVertex : Vertex
{
    private TableSchema table = null;
    public TableSchemaVertex(int id)
    :base(id)
    {}

    public TableSchema Table
    {
        get
        {
            if (this.table==null)
                throw new InvalidOperationException("table not initialized");
            return this.table;
        }
        set
        {
            this.table = value;
        }
    }
}

TableSchemaVertex instances are created by a vertex provider:

public class TableSchemaVertexProvider : TypedVertexProvider
{
    public TableSchemaVertexProvider()
    :base(typeof(TableSchemaVertex))
    {}
}

The same is done for the edges, with a class called TableKeySchemaEdge.

Custom Graph

The custom graph is generated using the CodeSmith template AdjacencyGraph.cst. The class is called DatabaseSchemaGraph.

Populating the graph

Once the data structure is ready, populating the graph with the tables and the keys is straightforward:

DatabaseSchema schema = ...;
DatabaseSchemaGraph graph = ...;
// add tables;
foreach(TableSchema table in schema.Tables)
{
    graph.AddVertex(table);
}
// foreach table, add all relations (out-edges)
foreach(TableSchema table in schema.Tables)
{
    foreach(TableKeySchema key in table.ForeignKeys)
    {
        graph.AddEdge(key);
    }
}

That's it :)

Let's do some drawing

Now that we have a graph of the database, the Graphviz "machinery" can be used to output a number of different drawings (refer to this post for a detailed tutorial on using Graphviz). Bundle that with the PropertyGrid and we get a nice and simple database grapher. I have applied DbGrapher to the database that MbUnit uses to store test results:

Next episode

In the next episode, we will see how to improve the (poor) quality of the drawing and how to detect cascade cycles (ON DELETE cycles, etc.).

posted on Tuesday, May 18, 2004 9:37:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Monday, May 17, 2004

The Abstract Test Pattern (ATP)

I have received a few comments on my blog entry on Composite Unit Testing (CUT) arguing that this was the Abstract Test Pattern. Here's a snapshot of the definition from http://c2.com/cgi/:

A Testing Pattern describing a way to reuse test cases for multiple implementations of an Interface.
Problem
How to write a Test Suite against an Interface (or Abstract Class) that can be used to test all implementations of the interface.

Solution

  • Write an AbstractTest for every Interface (and Abstract Class). The AbstractTest should have an abstract FactoryMethod that creates an object with the type of the Interface.
  • Write a ConcreteTest for every implementation of the Interface. The ConcreteTest should be a descendant of the AbstractTest and override the FactoryMethod to construct an instance of the implementation class.

Functional Compliance

Eric George's article gives a more detailed description of the pattern and describes it as functional compliance. It is easy enough for the compiler to tell whether a class is syntactically compliant with an interface: it checks that all required methods have been implemented with the correct signatures (syntactic compliance). But the compiler cannot check functional compliance of a class with its interface. Here's the formal definition given by Eric George:

Functional Compliance is a module's compliance with some documented or published functional specification. The specification can be purely documentational, or it can be partially enforced through Interfaces or Abstract Classes. Interfaces and Abstract Classes along with their associated documentation represent a contract between the implementation code and the client (or user) code. It is this contract that needs to be fully tested. The Liskov Substitution Principle (LSP) tells us that all modules that honor a contract (usually by implementing an interface), should behave the same from the perspective of the client code. A module's functional compliance is really the degree to which it obeys the LSP.

So what about Composite Unit Testing?

The remarks from the readers were right. Composite Unit Testing is

  • an enhanced form of the Abstract Test Pattern,
  • a tool to test functional compliance.

There is, however, a major difference between ATP and CUT: the separation of the test code from the factory methods. In ATP, you create a ConcreteTest that inherits AbstractTest and implements a factory method, so the code that generates the tested entity is "hard-coded" into the concrete test. In CUT, the framework takes care of retrieving and feeding your AbstractTest using user-specified factories (you can easily have multiple factories):

// ATP
// abstract test with an abstract factory method
public abstract class AbstractEnumerableTest
{
    public abstract IEnumerable Create();

    [Test]
    public void GetEnumeratorTest()
    {
        IEnumerable en = this.Create();
        ...
    }
}

// concrete implementation
[TestFixture]
public class ArrayListEnumerableTest : AbstractEnumerableTest
{
    public override IEnumerable Create()
    { return new ArrayList(); }
}

The same test as above, using CUT:

// the fixture
public class EnumerableFixture
{
    public void GetEnumeratorTest(IEnumerable en)
    {
        ...
    }
}

// the factories
public class ArrayListFactory
{
    public ArrayList Empty
    { get{ return new ArrayList();}}
}

// link the fixture with the factories
[CompositeFixture(typeof(EnumerableFixture), typeof(IEnumerable))]
[ProviderFactory(typeof(ArrayListFactory),typeof(IEnumerable))]
public class EnumerableTest
{}
posted on Monday, May 17, 2004 7:42:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Friday, May 14, 2004

The following article on CodeProject talks about Squarified Treemaps, an interesting tree visualization. I wonder what it would look like in MbUnit...

Demo application - treemaps.png

posted on Friday, May 14, 2004 2:17:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, May 13, 2004

Here's a preview of a new fixture that will be available in the next release of MbUnit. The fixture loads XML data from files and feeds it to the different test methods: hence, Data Driven Unit Testing. I'll write a real article on this when the fixture stabilizes, but here's a first example.

[DataFixture]
[XmlDataProvider("../../sample.xml","//User")]
[XmlDataProvider("../../sample.xml","DataFixture/Customers")]
public class DataDrivenTests
{
    [ForEachTest("//User")]
    public void ForEachTest(XmlNode node)
    {
        Assert.IsNotNull(node);
        Assert.AreEqual("User",node.Name);
        Console.WriteLine(node.OuterXml);
    }

    [ForEachTest("//User",DataType = typeof(User))]
    public void ForEachTestWithSerialization(User user)
    {
        Assert.IsNotNull(user);
        Console.WriteLine(user.ToString());
    }
}

The file sample.xml looks like this:

<DataFixture>
  <Employees>
    <User Name="Mickey" LastName="Mouse" />
  </Employees>
  <Customers>
    <User Name="Jonathan" LastName="de Halleux" />
    <User Name="Voldo" LastName="Unkown" />
  </Customers>
</DataFixture>

and the User class like this:

[XmlRoot("User")]
public class User
{
    private string name;
    private string lastName;
    public User()
    {}
    [XmlAttribute("Name")]
    public String Name
    {
    get{ return this.name;}
    set{ this.name = value;}
    }
    [XmlAttribute("LastName")]
    public String LastName
    {
    get{ return this.lastName;}
    set{ this.lastName = value;}
    }
}

How does it work ?

  • The XmlDataProvider attributes are used to specify an XML file that contains the data (of course, you can add several of them).
  • A first XPath expression is then applied to the data.
  • Each selected node is then fed to the ForEachTest method, which applies a second XPath expression to the node. This allows fine-grained selection of the data.
  • You can also specify the desired output data type. XmlSerializer is then used to convert the XmlNode into the desired object.
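The last step can be sketched with plain framework calls (using the User class above and the sample file name; this is my reading of what the DataType option does, not the fixture's actual code):

```csharp
using System;
using System.Xml;
using System.Xml.Serialization;

XmlDocument doc = new XmlDocument();
doc.Load("sample.xml");

// first selection: pick a User node
XmlNode node = doc.SelectSingleNode("//User");

// deserialize the node into a typed object via XmlSerializer
XmlSerializer serializer = new XmlSerializer(typeof(User));
User user = (User)serializer.Deserialize(new XmlNodeReader(node));
Console.WriteLine(user.Name);
```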

Screenshot

Here's a screenshot that shows how MbUnit loads the XML and creates the different test cases.

posted on Thursday, May 13, 2004 10:28:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [13]

This is a copy of an upcoming CodeProject article (number 36)

Introduction

This article presents a new way of creating unit tests. Rather than creating a fixture for each class, we split the testing effort by class functionality. Taking advantage of interface composition, we split the unit tests by interface and feed those fixtures using factories. This is why I call this technique Composite Unit Testing.

There are several advantages of using this approach:

  1. Expressing requirements: interfaces are a natural place for expressing requirements, which later translate into unit tests,
  2. Test reusability: once you have designed a test suite for an interface, you can apply it to any class that implements this interface,
  3. Test Driven Development: this approach fits nicely and naturally into the TDD paradigm, since the execution path is interface -> interface fixture -> implementation(s),
  4. Separation of tests from tested-instance generation: the code that generates the tested instances is located in factories that can be reused for each fixture.

In the rest of the article, I will illustrate this technique on ArrayList and Hashtable.

Test Case

Let us illustrate the process with two classes from the System.Collections namespace: ArrayList and Hashtable.

The two classes belong to different families of containers: ArrayList is a sequential container, while Hashtable is an associative container. However, as the interface diagram below shows, they share a lot of functionality (enumeration, cloning, serialization, etc.). This shared functionality is usually represented by interface composition: ICloneable, ISerializable, etc. The interfaces define functionality and requirements on that functionality.

ArrayList and Hashtable

If you follow the usual unit testing methodology, you will need to write two (huge) fixtures to test the two classes. Since they share functionality, you will end up duplicating testing code, maintenance problems will increase, etc...

Composite unit testing provides a flexible solution to those problems. In the following, I will illustrate how it is implemented in MbUnit.

Composite Unit Testing Methodology

As mentioned in the introduction, composite unit testing fits naturally into the TDD idea. The principal steps of the process are:

  1. create the interface and express requirements,
  2. create the interface fixture and translate the requirements into unit tests,
  3. implement the interface,
  4. create a class factory that provides instances of the interface,
  5. link fixture to factories and run...

Step 1: Create the interface

This is where you define the functionalities and the requirements. If the documentation is clear enough, it should translate naturally into unit tests. (In this example, the job is already done).

Step 2: Create the interface fixture (for IEnumerable)

MbUnit defines a new custom attribute, TypeFixture, that is used to create fixtures for types (classes, structs or interfaces). The TypeFixture constructor takes the tested type as an argument. Let us start with the fixture for IEnumerable:

EnumerableTest

using System;
using System.Collections;
using MbUnit.Core.Framework;
using MbUnit.Framework;
[TypeFixture(typeof(IEnumerable))]
public class EnumerableTest
{}

The test cases in EnumerableTest will receive an instance of the tested type (IEnumerable here) as an argument. Therefore, the correct signature of those methods is as follows:

[TypeFixture(typeof(IEnumerable))]
public class EnumerableTest
{
    [Test]
    public void EmptyTest(IEnumerable en)
    {...}
}

The argument is the only difference from the "classic" unit test. You can use test decorators like ExpectedException, Ignore, etc... as usual. IEnumerable defines one method, GetEnumerator. The only requirement is that the returned IEnumerator instance is not a null reference:

[TypeFixture(typeof(IEnumerable))]
public class EnumerableTest
{
    [Test]
    public void GetEnumeratorNotNull(IEnumerable en)
    {
        Assert.IsNotNull(en.GetEnumerator());
    }
}

That's pretty short, but there is nothing else to test. If you want to test the enumeration itself, you need to write another fixture for IEnumerator. By defining fixtures for each interface, you quickly increase the coverage of the tested code.
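As a sketch of what such an IEnumerator fixture might look like (the requirements below are my own reading of the IEnumerator contract, not taken from this article):

```csharp
using System;
using System.Collections;
using MbUnit.Core.Framework;
using MbUnit.Framework;

// Hypothetical fixture for IEnumerator, following the same TypeFixture
// pattern as EnumerableTest. The tested requirements are assumptions
// drawn from the IEnumerator documentation.
[TypeFixture(typeof(IEnumerator))]
public class EnumeratorTest
{
    [Test]
    public void ResetDoesNotThrow(IEnumerator en)
    {
        en.Reset();
    }

    [Test]
    public void CurrentThrowsBeforeMoveNext(IEnumerator en)
    {
        // before the first MoveNext, Current is undefined and
        // should throw InvalidOperationException
        en.Reset();
        try
        {
            object o = en.Current;
            Assert.Fail("expected InvalidOperationException");
        }
        catch(InvalidOperationException)
        {}
    }
}
```

Any factory whose properties or methods return an IEnumerator (for example by calling GetEnumerator() on the collections above) can then feed this fixture.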

Composition of tests

Step 4: Create the factories

(We have skipped step 3, the implementation step)

A factory is simply a class that defines public properties or methods (with no arguments) that return an object to be tested. You can use factories to provide different flavors of the same class: an empty ArrayList, a randomly filled one, an ordered one, etc... For example, a possible factory for ArrayList is:

public class ArrayListFactory
{
    public ArrayList Empty
    {
        get
        {
            return new ArrayList();
        }
    } 
    public ArrayList RandomFilled()
    {
        ArrayList list = new ArrayList();
        Random rnd = ...;
        for(int i=0;i<15;++i) 
            list.Add(rnd.Next());
        return list;
    } 
}

Note that a factory does not need any particular attribute. Similarly, we can define a HashtableFactory.
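A sketch of such a HashtableFactory, mirroring the ArrayList flavors above (the exact flavors are my choice, not part of the article):

```csharp
using System;
using System.Collections;

// Hypothetical HashtableFactory following the same convention as
// ArrayListFactory: public properties or parameterless methods
// returning instances to test.
public class HashtableFactory
{
    public Hashtable Empty
    {
        get
        {
            return new Hashtable();
        }
    }
    public Hashtable RandomFilled()
    {
        Hashtable table = new Hashtable();
        Random rnd = new Random();
        for(int i=0;i<15;++i)
            table[i] = rnd.Next();
        return table;
    }
}
```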

Step 5: Linking the fixtures to the factories

Linking the factories to the fixtures is simply done by using another custom attribute: ProviderFactory.

[TypeFixture(typeof(IEnumerable))]
[ProviderFactory(typeof(ArrayListFactory),typeof(IEnumerable))]
[ProviderFactory(typeof(HashtableFactory),typeof(IEnumerable))]
public class EnumerableTest
{...}

ProviderFactory takes the type of the factory and the tested type as arguments. The framework will take care of exploring the factories by reflection, selecting the suitable properties and methods, and feeding the fixtures with the created test instances.
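The reflection step can be sketched as follows (an illustration of the idea, not MbUnit's actual implementation; the FactoryExplorer name is mine):

```csharp
using System;
using System.Collections;
using System.Reflection;

// Sketch: collect the factory members whose return type is assignable
// to the tested type. MbUnit's real exploration is more involved.
public sealed class FactoryExplorer
{
    public static ArrayList FindProviders(Type factoryType, Type testedType)
    {
        ArrayList providers = new ArrayList();
        // public properties returning the tested type
        foreach(PropertyInfo p in factoryType.GetProperties())
            if (testedType.IsAssignableFrom(p.PropertyType))
                providers.Add(p.Name);
        // public parameterless methods returning the tested type
        foreach(MethodInfo m in factoryType.GetMethods())
            if (testedType.IsAssignableFrom(m.ReturnType)
                && m.GetParameters().Length == 0
                && !m.IsSpecialName) // skip property getters
                providers.Add(m.Name);
        return providers;
    }
}
```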

Factories

Step 6: Running the tests

The full source of the example is available in the demo project. You need to create a new C# assembly project and add references to MbUnit.Core.dll and MbUnit.Framework.dll, which you can download from the MbUnit web site: http://mbunit.tigris.org/.

Launch MbUnit.GUI and load the assembly (right click -> Assemblies -> Add Assemblies...). Here are some screenshots of the application:

MbUnit GUI

MbUnit report

Conclusion

This article has presented composite unit testing, a new strategy for designing and implementing unit testing. Awaiting comments :)

 

posted on Thursday, May 13, 2004 11:52:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [16]
# Wednesday, May 12, 2004

Just finished a new CodeSmith template for generating a MockObject out of an interface. The template is pretty basic: it loads the type and creates the methods/properties of the different interfaces. You can also specify a number of expected values, and the template will generate the SetXXX methods.

Located in the MbUnit CVS (Templates folder).

posted on Wednesday, May 12, 2004 5:05:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

Here's a sample C# application that implements structured numbers. (The practical use of these numbers is unknown :) )

The maths

The StructuredNumbers sample contains an implementation of the binary tree operations developed by V. Blondel in Structured Numbers: Properties of a hierarchy of operations on binary trees, Acta Informatica, 35, 1-15, 1998.

We introduce a hierarchy of operations on (finite and infinite) binary trees. The operations are obtained by successive repetition of one initial operation. The first three operations are generalizations of the operations of addition, multiplication and exponentiation for positive integers.

In the paper, the author defines countably many internal operations on binary trees. The first operation, which he denotes by .1. , is obtained by forming the binary tree whose left and right subtrees are equal to the operands. This operation is not associative. The second operation .2. is defined as follows: from the binary trees a and b, we construct the binary tree a.2.b by repeating the operation .1. on the tree a with the structure dictated by b. In the same way, we define an operation .3. by repeating .2. , an operation .4. by repeating .3. , etc. We eventually obtain countably many internal operations (.k. for k >= 1) with the definition

a .k. b = a .(k-1). a .(k-1). a .(k-1). ... a .(k-1). a,

where there are b factors of a.
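To make the first operation concrete, here is a minimal sketch of .1. on bare binary-tree nodes (my own illustration, not the sample's BtNode class). Counting leaves hints at why .1. generalizes addition: the tree a.1.b has leaves(a)+leaves(b) leaves.

```csharp
// Minimal sketch of the operation .1. ; assumes a bare node type,
// not the BtNode implementation from the sample.
public class TreeNode
{
    public readonly TreeNode Left;
    public readonly TreeNode Right;
    public TreeNode() {}                                  // a leaf
    public TreeNode(TreeNode l, TreeNode r) { Left = l; Right = r; }

    // a .1. b : the binary tree whose left and right subtrees
    // are the operands
    public static TreeNode Op1(TreeNode a, TreeNode b)
    {
        return new TreeNode(a, b);
    }

    // leaf count: Op1 adds leaf counts, mirroring addition
    // on positive integers
    public int Leaves()
    {
        if (Left == null && Right == null)
            return 1;
        return Left.Leaves() + Right.Leaves();
    }
}
```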

The code

The code is mainly composed of the BtNode class, which implements the algebra defined above, and some valuators for the trees. Finally, NGraphviz is used to render the trees :)

The results

Suppose that we have

x = ..
y = ..
, then
  • x:
  • x.1.y:
  • x.2.y:
  • x.3.y:
  • x.4.y:
posted on Wednesday, May 12, 2004 12:16:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Tuesday, May 11, 2004

In this "episode", we are going to build an application that generates and renders the execution graph of a method using QuickGraph and Lutz Roeder's IlReader. The execution graph is a directed graph where each vertex is an IL instruction and each edge represents a transition between two instructions, which can be a jump or a simple move to the next instruction. The graph represents the different paths that execution can take through your method. If you have access to the content of a method, you can potentially build its execution graph.

In a future blog, I will use the execution graph to build a "smart" test fixture generator by applying basic graph algorithms to it. In the rest of this blog, I assume you have some basic knowledge of IL, Reflection and Reflection.Emit.

Step 1: creating the graph structures

We begin by creating a specialized type of vertex, InstructionVertex, that will hold a Reflector.Disassembler.Instruction instance:

using System.Reflection.Emit;
public class InstructionVertex : QuickGraph.Vertex
{
    private Reflector.Disassembler.Instruction instruction=null;
    public InstructionVertex(int id):base(id)
    {}
    public Reflector.Disassembler.Instruction Instruction
    {
        get
        {
            if (this.instruction==null)
                throw new InvalidOperationException();
            return this.instruction;
        }
        set
        {
            this.instruction = value;
        }
    }
    public override string ToString()
    {...}
}

Note that the ToString method uses the code from Example.cs in the IlReader source to render an Instruction to a string.

To generate a strongly-typed BidirectionalGraph, which we call InstructionGraph, we use the CodeSmith template called AdjacencyGraph.cst located in the Templates directory.

Step 2: Loading the IL instructions of the method

The building of the graph is done by the IlGraphBuilder class, which takes the type that declares the method as a constructor argument. It contains one public method, BuildGraph, that takes a MethodInfo instance as an argument and outputs an InstructionGraph instance containing the execution graph.

The code to extract the MethodBody from the method is almost entirely copy-pasted from the IlReader sample:

public class IlGraphBuilder
{
    private Type visitedType;
    private ModuleReader reader;
    public IlGraphBuilder(Type visitedType)
    {
        this.visitedType = visitedType;
        this.reader = new ModuleReader(this.visitedType.Module, new AssemblyProvider());
    }
    public InstructionGraph BuildGraph(MethodInfo mi)
    { 
        // get method body using IlReader
        MethodBody methodBody = reader.GetMethodBody(mi);
        ...
    }

    private sealed class AssemblyProvider : IAssemblyProvider
    {
        public Assembly Load(string assemblyName)
        {
            return Assembly.Load(assemblyName); 
        }
        public Assembly[] GetAssemblies()
        {
            throw new NotImplementedException(); 
        }
    }
}

Step 3: Building the graph

The MethodBody instance contains the list of Instruction instances and the list of exception handlers. The building of the graph is done in 3 steps:

  1. iterate the Instruction list and create the vertices in the graph. We also build a dictionary that associates the instruction offset to the corresponding vertex,
    // new field in the class
    Hashtable instructionVertices;
    ...
    foreach(Instruction i in methodBody.GetInstructions())
    {
        // avoid certain instructions
        if (i.Code.FlowControl == FlowControl.Phi || i.Code.FlowControl == FlowControl.Meta)
            continue;
        // add vertex
        InstructionVertex iv = g.AddVertex();
        iv.Instruction = i;
        // store in hashtable
        this.instructionVertices.Add(i.Offset,iv);
    }
    
  2. iterate again over the instructions and add the edges that represent the transitions. This is the tricky part: I use a recursive exploration of the instructions (this part of the code is a bit heavy for the blog),
  3. iterate over the exception handlers to link the different sections: create the links to the catch/finally handlers, etc...

Step 4: Drawing the graph

Drawing the graph is straightforward using the GraphvizAlgorithm class:

GraphvizAlgorithm gv = new GraphvizAlgorithm(g); 
gv.Write(mi.Name);

Analysing some basic flow constructions

As an application, I am going to show the graphs of some basic instruction flows like for, while, if, foreach, etc...

public void HelloWorld()
{
    Console.WriteLine("Hello World");
}

public void IfAlone(bool value)
{
    if (value)
        Console.Write("value is true");
}

public void IfThenElse(bool value)
{
    if (value)
        Console.Write("value is true");
    else
        Console.Write("value is false");
}

public void For()
{
    for(int i = 0;i!=10;++i)
    {
        Console.Write(i);
    }
}

public void While()
{
    int i = 0;
    while(i!=10)
    {
        i++;
    }
}

public void TryCatchFinally()
{
    try
    {
        Console.WriteLine("try");
    }
    catch(Exception)
    {
        Console.WriteLine("catch"); 
    }
    finally
    {
        Console.WriteLine("finally");
    }
}

public void TryMultiCatchFinally()
{
    try
    {
        Console.WriteLine("try");
    }
    catch(ArgumentException)
    {
        Console.WriteLine("catch(arg)"); 
    }
    catch(Exception)
    {
        Console.WriteLine("catch"); 
    }
    finally
    {
        Console.WriteLine("finally");
    }
}

public void ForEach(ICollection col)
{
    foreach(Object o in col)
    {
        Console.WriteLine(o);
    }
}

public void ForEachContinue(int[] col)
{
    foreach(int o in col)
    {
        Console.WriteLine(o);
        if (o == 0)
            continue;
    }
}

public void ForEachBreak(int[] col)
{
    foreach(int o in col)
    {
        Console.WriteLine(o);
        if (o == 0)
            break;
    }
}

public void SwitchIt(int value)
{
    switch(value)
    {
    case 0:
        Console.Write("0");
        break;
    case 1:
        Console.Write("1");
        break;
    default:
        Console.Write("default");
        break;
    }
}

posted on Tuesday, May 11, 2004 10:51:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [9]
# Monday, May 10, 2004

I have just finished a CodeSmith template that "automatically" generates empty test cases for a given type.

This template does a little bit more than just enumerate the available methods: it explores the MSIL using RAIL and creates test cases following these rules:

  • if the IL instruction is a conditional branch, create a test case for both possibilities (true/false case),
  • if the IL instruction throws, create a test case expecting the exception,
  • if the IL instruction returns, create a test case that checks the returned value.

Those rules are quite simplistic but can already generate a "huge" number of test cases automatically. Each test case comes with a piece of documentation and the method's IL code, where the target instruction has been set in bold. Currently, the documentation contains IL code, but in the future it would be nicer to output real code, using Reflector for example.

In order to make the template work, you need to put the RAIL assemblies and the AssemblyHelper assembly in the CodeSmith directory.

Here's a quick example. The method:

public void ABitMoreComplext(bool goForIt)
{
    if (goForIt)
        throw new Exception();
    else
        Console.Write("did not throw");
        // other instruction 
    Console.Write("hello");
}

and the resulting output in the generated class:

        
        /// <summary>
        /// Tests ABitMoreComplext method when condition is executed as true
        /// (see remarks).
        /// </summary>
        /// <remark>
        /// <para>
        /// Not implemented
        /// </para>
        /// <code>
        /// ldarg.1
        /// <b>brfalse.s</b>
        /// newobj
        /// throw
        /// ldstr
        /// call
        /// ldstr
        /// call
        /// ret
        /// </code>
        /// </remarks>
        [Test]
        [Ignore]
        public void ABitMoreComplextIfTrue1()
        {
            throw new NotImplementedException();
        }
        /// <summary>
        /// Tests ABitMoreComplext method when condition is executed as true
        /// (see remarks).
        /// </summary>
        /// <remark>
        /// <para>
        /// Not implemented
        /// </para>
        /// <code>
        /// ldarg.1
        /// <b>brfalse.s</b>
        /// newobj
        /// throw
        /// ldstr
        /// call
        /// ldstr
        /// call
        /// ret
        /// </code>
        /// </remarks>
        [Test]
        [Ignore]
        public void ABitMoreComplextIfFalse1()
        {
            throw new NotImplementedException();
        }
        /// <summary>
        /// Tests that the ABitMoreComplext method throws (see remarks)
        /// </summary>
        /// <remark>
        /// <para>
        /// Not implemented
        /// </para>
        /// <code>
        /// ldarg.1
        /// brfalse.s
        /// newobj
        /// <b>throw</b>
        /// ldstr
        /// call
        /// ldstr
        /// call
        /// ret
        /// </code>
        /// </remarks>
        [Test]
        [Ignore]
        [ExpectedException(typeof(Exception))]
        public void ABitMoreComplextThrow3()
        {
            // don't forget to update the exception type.
            throw new NotImplementedException();
        }

ps: The template is called TestFixture and is in the Templates directory in the MbUnit CVS.

 

posted on Monday, May 10, 2004 2:25:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [13]

A co-worker, Jacques Theys, sent me this link:

"GPGPU stands for General-Purpose computation on GPUs. With the increasing programmability of commodity graphics processing units (GPUs), these chips are capable of performing more than the specific graphics computations for which they were designed. They are now capable coprocessors, and their high speed makes them useful for a variety of applications. The goal of this page is to catalog the current and historical use of GPUs for general-purpose computation."

 

posted on Monday, May 10, 2004 10:17:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Sunday, May 09, 2004

MbUnit has a Visual Studio Add-In using the NUnitAddIn framework. Setting the framework is done through the following steps:

  1. Get the latest NUnitAddIn installed on your machine,
  2. Copy the MbUnit binaries to the NUnitAddIn folder,
  3. Edit the NUnitAddIn.config file as follows:
    <?xml version="1.0"?>
    <configuration>
      <nunitaddin>
        <frameworktestrunners>
          <testRunner name="MbUnit"
                         typeName="MbUnit.AddIn.MbUnitTestRunner" 
                         assemblyPath="MbUnit.AddIn.dll"  />
          <testRunner name="NUnit"
                         typeName="NUnitAddIn.NUnit.TestRunner.SimpleNUnitTestRunner" 
                         assemblyPath="NUnitAddIn.NUnit.dll"  />
          ...
        </frameworktestrunners>
      </nunitaddin>
    </configuration>

That's it. You can now right click on an assembly, namespace or fixture and execute it using "Run Tests...".

posted on Sunday, May 09, 2004 10:54:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]

The major addition of this release is the implementation of a Visual Studio Add-in.

Download the files.

posted on Sunday, May 09, 2004 10:40:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]

As GPU power goes "exponentially" up, using it as a number cruncher becomes a reality.

http://www.cs.washington.edu/homes/oskin/thompson-micro2002.pdf

posted on Sunday, May 09, 2004 10:36:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]

The last few days have been very exciting for MbUnit. I have been working (remotely) with Jamie Cansdale, creator of NUnitAddIn, to create a Visual Studio Add-in for MbUnit. It turned out to be surprisingly simple, thanks to the extensible architecture of NUnitAddIn and the expertise of Jamie.

A quick NUnitAddIn introduction

NUnitAddIn is not just an Add-in for NUnit, as its name seems to suggest. It is far more than that. It is an extensible framework for building Add-ins. All the words are important here: extensible, because it is designed to accept any type of Add-in (at least test runners), and framework, because it takes care of all the complicated/technical tasks of launching processes, attaching debuggers, etc. In fact, writing an Add-in with NUnitAddIn is as simple as implementing an interface!

Add-in How-to

I will give here a detailed how-to on the Add-in creation. This is the summary of a few hours of coding and dozens of MSN messages with Jamie Cansdale (very active support!). Now let's go for the fun (I will assume you have NUnitAddIn installed in c:\Program Files\NUnitAddIn).

Setup the project

  1. Create a new Assembly project and name it as you like. In this example, it will be named MbUnit.AddIn
  2. Add the following references
    • NUnitAddIn.TestRunner.dll
    • NUnitAddIn.TestRunner.Framework.dll
  3. Change the output directory to the NUnitAddIn directory: Project -> Properties -> Common Properties -> Output Path -> Select NUnitAddIn directory
  4. Make the assembly strongly named

Setup NUnitAddIn

You need to edit NUnitAddIn.config in the NUnitAddIn directory (do not edit NUnitAddIn.exe.config). The file looks like this:

<?xml version="1.0"?>
<configuration>
  <nunitaddin>
    <frameworktestrunners>
      <testrunner name="NUnit"
                     typeName="NUnitAddin.NUnit.TestRunner.SimpleNUnitTestRunner" 
                     assemblyPath="NUnitAddin.NUnit.dll"  />
      ...
    </frameworktestrunners>
  </nunitaddin>
</configuration>

Add your Add-in on top of the "food chain":

<?xml version="1.0"?>
<configuration>
  <nunitaddin>
    <frameworktestrunners>
      <testrunner name="MbUnit"
                     typeName="MbUnit.AddIn.MbUnitTestRunner" 
                     assemblyPath="MbUnit.AddIn.dll"  />
      <testrunner name="NUnit"
                     typeName="NUnitAddin.NUnit.TestRunner.SimpleNUnitTestRunner" 
                     assemblyPath="NUnitAddin.NUnit.dll"  />
      ...
    </frameworktestrunners>
  </nunitaddin>
</configuration>

Everything is now set up. NUnitAddIn will use the MbUnit.AddIn.MbUnitTestRunner class as the test runner.

Getting started with ITestRunner

The last step of the job is to implement ITestRunner. We will name our runner according to the name we put in the config file.

  1. Create the class
    using NUnitAddIn.TestRunner.Framework;
    
    public class MbUnitTestRunner : ITestRunner, MarshalByRefObject
    {...}
  2. Tag the class with the assemblies used by your test runner using the DependentAssembly attribute. For MbUnit there are quite a few:
    [
    DependentAssembly("NUnitAddin.TestRunner"),
    DependentAssembly("NUnitAddin.TestRunner.Framework"),
    DependentAssembly("QuickGraph.Exceptions"), 
    DependentAssembly("QuickGraph.Concepts"),
    DependentAssembly("QuickGraph.Predicates"),
    DependentAssembly("QuickGraph.Collections"),
    DependentAssembly("QuickGraph.Representations"),
    DependentAssembly("QuickGraph.Algorithms"),
    DependentAssembly("QuickGraph.Serialization"),
    DependentAssembly("QuickGraph"),
    DependentAssembly("MbUnit.Core")
    ]
    public class MbUnitTestRunner : MarshalByRefObject, ITestRunner
    
    The two references to NUnitAddin.TestRunner and NUnitAddIn.TestRunner.Framework are mandatory.
  3. Make the object "long living" by making InitializeLifetimeService return null:
    public class MbUnitTestRunner : ITestRunner, MarshalByRefObject
    {
        public override Object InitializeLifetimeService()
        {
            return null; 
        }
    }
  4. ITestRunner defines two methods, Abort and Run. Abort does not need to be implemented, so you can throw a NotImplementedException in it. Run is where the job is done.
    public void Abort() 
    { 
        throw new NotImplementedException(); 
    } 
    
    public TestResultSummary Run( 
        ITestListener testListener, 
        ITraceListener traceListener, 
        string assemblyPath, 
        string testPath 
    ) 
    {
        ...
    }
    In the Run method, testListener is used to send test results to NUnitAddIn, traceListener is used to output messages to the console window, assemblyPath is the path to the test assembly, and testPath is a string describing what has to be tested, formatted as follows:
    ('N' | 'T' | 'M') : Type
    For example, values of testPath can be
    • N:MyTests.Tests, the namespace (and sub-namespaces) MyTests.Tests has to be run,
    • T:MyTests.Tests.SimpleFixture, the class SimpleFixture has to be run,
    • M:MyTests.Tests.SimpleFixture.SomeTest, the method SimpleFixture.SomeTest has to be run
    • null, the entire assembly is run
  5. Let's start with a "hello world" test runner:
    public TestResultSummary Run( 
        ITestListener testListener, 
        ITraceListener traceListener, 
        string assemblyPath, 
        string testPath 
    ) 
    {
        traceListener.WriteLine("Hello World!");
    }
    

We are ready to make the first run of the Add-in. Open your dummy test project and right click either on the assembly, a namespace, a type or a method, and hit the "Run Tests..." rocket. If everything goes to plan, you should see something like this appear in the output window:

------ Test started: Assembly: MbUnit.Tests.dll ------
Hello World!
---------------------- Done ----------------------

If you are lucky it worked out of the box; otherwise, the next section deals with debugging the Add-in.

Debugging the Add-in

Failures can appear at different levels. I have encountered several, but fortunately I had Jamie at my back helping me all the way. Here's a simple procedure:

  1. Add a break point on the "Hello World" line inside the Run method,
  2. Right-click on a test class, choose Test With... -> Debugger. If you are lucky, the debugger will hit the break point and you can do "classic" debugging.
  3. If the debugger did not hit the break point, it is likely that NUnitAddIn failed to load the Add-In. You need to make the debugger break on exceptions:
    1. go to Debug -> Exceptions
    2. choose Common Language Runtime
    3. When exception is thrown, break into debugger
  4. Run the tests again; this should give you the exception that makes the Add-in loading fail.

Important note: when you recompile your project, you may get an error saying the assembly file cannot be copied because it is used by another process. At this point, you need to restart NUnitAddIn. To do this, right click on the rocket icon in the taskbar and click "Close". Recompile and yes, it works!

Implementing the Run method

The rest of the work is mainly up to you. You need to load the assembly, look for the test fixtures according to the testPath and execute them. The important thing is that the notification is done through ITestListener.
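A minimal sketch of parsing testPath, using the format described above (TestPathParser is a hypothetical helper name, not part of NUnitAddIn):

```csharp
using System;

// Sketch: split a testPath such as "T:MyTests.Tests.SimpleFixture"
// into its kind ('N', 'T' or 'M') and target name; a null path
// means the whole assembly.
public sealed class TestPathParser
{
    public static void Parse(string testPath, out char kind, out string target)
    {
        if (testPath == null)
        {
            kind = 'A'; // convention chosen here: 'A' for whole assembly
            target = null;
            return;
        }
        int colon = testPath.IndexOf(':');
        if (colon < 0)
            throw new ArgumentException("expected <kind>:<name>", "testPath");
        kind = testPath[0];
        target = testPath.Substring(colon + 1);
    }
}
```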

Just for the fun, here's the output of the Add-in on the TestFixtureTest in MbUnit.Tests assembly:

------ Test started: Assembly: MbUnit.Tests.dll ------
C:\Documents and Settings\Peli\Mes documents\Tigris\mbunit\src\MbUnit.Tests\bin\StrongDebug\MbUnit.Tests.dll: [mbunit] Test Execution
[mbunit][setup] Load C:\Documents and Settings\Peli\Mes documents\Tigris\mbunit\src\MbUnit.Tests\bin\StrongDebug\MbUnit.Tests.dll assembly.
[mbunit][setup] Exploring types for fixtures.
[mbunit][setup] Setup Successfull, starting 1 tests.
[fixture] TestFixtureAttributeTest
[start] TestFixtureAttributeTest.SetUpMethod.TestMethod.TearDownMethod
[success] TestFixtureAttributeTest.SetUpMethod.TestMethod.TearDownMethod
[start] TestFixtureAttributeTest.SetUpMethod.FailedTest.TearDownMethod
[failure] TestFixtureAttributeTest.SetUpMethod.FailedTest.TearDownMethod
TestCase 'TestFixtureAttributeTest.SetUpMethod.FailedTest.TearDownMethod' failed: 
Equal assertion failed.
[[0]]!=[[1]]
MbUnit.Core.Exceptions.NotEqualAssertionException
C:\Documents and Settings\Peli\Mes documents\Tigris\mbunit\src\MbUnit.Core\Framework\Assert.cs(649,0): at MbUnit.Core.Framework.Assert.FailNotEquals(Object expected, Object actual, String format, Object[] args)
C:\Documents and Settings\Peli\Mes documents\Tigris\mbunit\src\MbUnit.Core\Framework\Assert.cs(208,0): at MbUnit.Core.Framework.Assert.AreEqual(Int32 expected, Int32 actual, String format, Object[] args)
C:\Documents and Settings\Peli\Mes documents\Tigris\mbunit\src\MbUnit.Core\Framework\Assert.cs(220,0): at MbUnit.Core.Framework.Assert.AreEqual(Int32 expected, Int32 actual)
c:\documents and settings\peli\mes documents\tigris\mbunit\src\mbunit.tests\testfixtureattributetest.cs(33,0): at MbUnit.Tests.TestFixtureAttributeTest.FailedTest()

1 succeeded, 1 failed, 0 skipped, took 0,00 seconds.


---------------------- Done ----------------------
posted on Sunday, May 09, 2004 9:13:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Friday, May 07, 2004

Drawing a graph, although it sounds intuitive, is a tricky task. It usually involves a number of sophisticated algorithms and even more numerous parameters to set up the layout.

QuickGraph comes with a Managed C++ wrapper of Graphviz, a famous graph drawing library from AT&T. In this blog, I will show how to draw a graph in a few lines.

Let's build a dependency graph between some good old C files:

// create a new adjacency graph
AdjacencyGraph g = new AdjacencyGraph(false);
// adding files and storing names
IVertex boz_h = g.AddVertex(); 
IVertex zag_cpp = g.AddVertex(); 
IVertex yow_h = g.AddVertex();
...

// adding dependencies
g.AddEdge(dax_h, foo_cpp); 
g.AddEdge(dax_h, bar_cpp); 
...

At this point, the graph structure is built and ready to be drawn by Graphviz. You need to create an instance of QuickGraph.Algorithms.Graphviz.GraphvizAlgorithm and call its Write method:

// creating graphviz algorithm
GraphvizAlgorithm gw = new GraphvizAlgorithm(
    g, // graph to draw
    ".", // output file path
    GraphvizImageType.Png // output file type
    );
// outputing to graph.
gw.Write("filedependency");

And the result is:

That's not so bad, but we would like to see the names of the files. The first thing to do is to tell QuickGraph to produce NamedVertex vertices instead of Vertex. This is done by feeding the AdjacencyGraph constructor with two class factories, one for the vertices and one for the edges:

AdjacencyGraph g = new AdjacencyGraph(
    new NamedVertexProvider(),
    new EdgeVertexProvider(),
    false);

The NamedVertex class has a Name property that we can use to store the name of the file:

// adding files and storing names
NamedVertex boz_h = (NamedVertex)g.AddVertex();     boz_h.Name = "boz.h"; 
NamedVertex zag_cpp = (NamedVertex)g.AddVertex();   zag_cpp.Name = "zag.cpp";
NamedVertex yow_h = (NamedVertex)g.AddVertex();     yow_h.Name = "yow.h";

The last step is to attach a handler to the FormatVertex event of GraphvizAlgorithm, which is raised when rendering the vertices, and set the name as the label:

// outputing graph to png
GraphvizAlgorithm gw = new GraphvizAlgorithm(
    g, // graph to draw
    ".", // output file path
    GraphvizImageType.Png // output file type
    );
gw.FormatVertex +=new FormatVertexEventHandler(gw_FormatVertex);
gw.Write("filedependency");
}
private void gw_FormatVertex(object sender, FormatVertexEventArgs e)
{
    NamedVertex v = (NamedVertex)e.Vertex;
    e.VertexFormatter.Label = v.Name;
}

and now the result is much better:

posted on Friday, May 07, 2004 8:10:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]

Let's have some fun with graph theory. Today, I'm showing how to generate a random maze with QuickGraph. Before going into the code, let's see how we are going to do it.

The maths

Consider a regular lattice like the one in the image below. If you consider each edge as a path, the lattice represents a highly connected network. The question is: which edges should you delete to make it a maze?

The answer to this question is trivial in terms of graph theory: you just need to extract a tree out of the graph, and this tree will be a maze. In fact, in a tree there is only one way from the root to each of the leaves, just like in mazes. This leaves us with the problem of extracting a tree from a graph.

The maze generation example was originally brought up by David Wilson (a researcher from Microsoft). He has designed an efficient algorithm, known as the Cycle-Popping Tree Generator, for extracting random trees out of graphs. This algorithm is implemented in QuickGraph, so we have all the tools at hand to start solving the problem.

The code

Let's start by building the graph and the lattice. For the cycle popping algorithm, we need a BidirectionalGraph (a graph where you can iterate the out-edges and the in-edges of the vertices). For the lattice, we create a two dimensional array of vertices and finally, we store the vertex -> lattice index relation into a Hashtable:

// We need a few namespaces for QuickGraph 
using QuickGraph.Concepts;
using QuickGraph.Concepts.Traversals;
using QuickGraph.Algorithms.RandomWalks;
using QuickGraph;
using QuickGraph.Providers;
using QuickGraph.Collections;
using QuickGraph.Representations;

public class MazeGenerator
{
    // the bidirectional graph
    private BidirectionalGraph graph = null;
    // 2D array of vertices
    private IVertex[,] latice;
    // v -> (i,j)
    private Hashtable vertexIndices = null;

Next, we write the method that will populate the graph. If the lattice has m rows and n columns, the method will create m*n vertices and (m-1)*(n-1)*2 edges to link those vertices:

public void GenerateGraph(int rows, int columns)
{
    this.latice=new IVertex[rows,columns];
    this.vertexIndices = new Hashtable();
    this.graph = new BidirectionalGraph(false); // false = does not allow parallel edges

    // adding vertices
    for(int i=0;i<rows;++i)
    {
        for(int j=0;j<columns;++j)
        {
            IVertex v =this.graph.AddVertex();
            this.latice[i,j] = v;
            this.vertexIndices[v]=new DictionaryEntry(i,j);
        }
    }

    // adding edges
    for(int i=0;i<rows-1;++i)
    {
        for(int j=0;j<columns-1;++j)
        {
            this.graph.AddEdge(latice[i,j], latice[i,j+1]);
            this.graph.AddEdge(latice[i,j+1], latice[i,j]);
            this.graph.AddEdge(latice[i,j], latice[i+1,j]);
            this.graph.AddEdge(latice[i+1,j], latice[i,j]);
        }
    }
    ...

There is a bit of index gymnastics in the code above, but drawing it on a sheet of paper should make things clear. I will not focus on the drawing code in this post: I'm using basic features of the System.Drawing namespace, nothing really "extreme". So, back to our maze problem: we need to apply the cycle-popping algorithm to generate a maze.
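As one illustration of that index gymnastics, here is how the vertexIndices table can map an edge back to lattice coordinates, for example to decide which wall to knock down when drawing. This helper is hypothetical, not part of the sample source:

```csharp
// Hypothetical helper (not from the sample source): recover the lattice
// coordinates of an edge's endpoints via the vertexIndices Hashtable,
// which stores a DictionaryEntry(i,j) per vertex (see GenerateGraph).
private void KnockDownWall(IEdge e)
{
    DictionaryEntry source = (DictionaryEntry)this.vertexIndices[e.Source];
    DictionaryEntry target = (DictionaryEntry)this.vertexIndices[e.Target];
    int si = (int)source.Key, sj = (int)source.Value;
    int ti = (int)target.Key, tj = (int)target.Value;

    if (si == ti)
    {
        // same row: a horizontal passage, so remove the vertical
        // wall between columns min(sj,tj) and max(sj,tj)
    }
    else
    {
        // same column: a vertical passage, so remove the horizontal
        // wall between rows min(si,ti) and max(si,ti)
    }
}
```

Edges of the spanning tree become passages; every lattice edge *not* in the tree stays a wall.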

Cycle-popping algorithm

The cycle-popping algorithm is implemented in the QuickGraph.Algorithms.RandomWalks.CyclePoppingRandomTreeAlgorithm class. Using this algorithm is quite straightforward:

// (outi,outj) is the coordinate of the exit vertex in the lattice
// (ini,inj) is the coordinate of the entry vertex in the lattice
public void GenerateWalls(int outi, int outj, int ini, int inj)
{
    // we build the algo and attach it to the graph
    CyclePoppingRandomTreeAlgorithm pop = new CyclePoppingRandomTreeAlgorithm(this.graph);

    // finding the root vertex
    IVertex root = this.latice[outi,outj];
    if (root==null)
        throw new ArgumentException("outi,outj vertex not found");

    // this line launches the tree generation 
    pop.RandomTreeWithRoot(root);

At this point, pop contains a dictionary which associates each vertex with its successor edge in the tree. Remember that the tree is directed towards the root here. This dictionary plays two roles: it contains the edges that are part of the maze (that's what we needed), and better yet, you can build the way out by following the successors until you hit the root:

    // we add two fields to the class:
    private VertexEdgeDictionary successors = null;
    private EdgeCollection outPath = null;
    ...
    ///////////////////////////////////
    // GenerateWalls continued
    // storing the successors
    this.successors = pop.Successors;

    // build the path to ini, inj
    IVertex v = this.latice[ini,inj];
    if (v==null)
        throw new ArgumentException("ini,inj vertex not found");

    // building the solution path
    this.outPath = new EdgeCollection();
    while (v!=root)
    {
        IEdge e = this.successors[v];
        this.outPath.Add(e);
        v = e.Target;
    }
}
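To tie the pieces together, here is a hypothetical usage sketch. The class and method names come from the snippets above; the lattice size and coordinates are made up, and the successors dictionary and solution path are assumed to be exposed (in the snippets they live in private fields):

```csharp
// Illustrative usage of the MazeGenerator built in this post
// (drawing code omitted, as in the post itself).
MazeGenerator maze = new MazeGenerator();
maze.GenerateGraph(20, 30);       // build a 20x30 lattice graph
maze.GenerateWalls(0, 0, 19, 29); // exit at (0,0), entry at (19,29)
// the successors dictionary now holds the maze edges, and
// outPath holds the solution path from the entry back to the exit
```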

That's it. We can now draw the walls of the maze and draw the solution on top. Thank you Mister Propp and Mister Wilson! (The source of this example is available in the QuickGraph CVS)

posted on Friday, May 07, 2004 9:13:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [8]
# Wednesday, May 05, 2004

MbUnit features a specialized fixture for testing IEnumerable/IEnumerator implementations. This fixture runs a number of tests automatically to ensure that the implementation follows the enumerator specification.

In order to use the fixture, you must

  1. Tag your class with the EnumerationFixture attribute,
  2. Tag some methods with the DataProvider attribute. Those methods will provide the data to be added to the collections,
  3. Tag some methods with the CopyToProvider attribute. Those methods will copy the data fed in as argument into a collection. The returned collection will be the tested collection.
  4. That's it!

In this example, we use the Data method to provide the data. The enumeration of ArrayList and int[] is tested.

[EnumerationFixture]
public class EnumerationFixtureAttributeAttributeTest
{
    private Random rnd = new Random();
    private int count = 100;
    [DataProvider(typeof(ArrayList))]
    public ArrayList Data()
    {
        ArrayList list = new ArrayList();
        for(int i=0;i<count;++i)
            list.Add(rnd.Next());
        return list;
    }
    [CopyToProvider(typeof(ArrayList))]
    public ArrayList ArrayListProvider(IList source)
    {
        ArrayList list = new ArrayList(source);
        return list;
    }
    [CopyToProvider(typeof(int[]))]
    public int[] IntArrayProvider(IList source)
    {
        int[] list = new int[source.Count];
        source.CopyTo(list,0);
        return list;
    }
}
posted on Wednesday, May 05, 2004 7:45:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Monday, May 03, 2004

MbUnit comes with several CodeSmith templates. The AssertWrapper template creates a strongly-typed assertion wrapper using Reflection. This enables the tester to quickly build "specialized" assertion wrappers for his classes.

The template generates an assertion method for each public property of the class, and it adapts the assertion's name and behavior to the type of the property. Below is the wrapper generated for System.Collections.ICollection:

using MbUnit.Core.Framework;
using System.Collections;
namespace MbUnit.Core.Framework
{
 /// <summary>
 /// Assertion helper for the <see cref="ICollection"/> class.
 /// </summary>
 /// <remarks>
 /// <para>
 /// This class contains static helper methods to verify assertions on the
 /// <see cref="ICollection"/> class.
 /// </para>
 /// <para>
 /// This class was automatically generated. Do not edit (or edit the template).
 /// </para>
 /// </remarks>
 public sealed class ICollectionAssert
 {
  #region Private constructor
  private ICollectionAssert()
  {}  
  #endregion
  /// <summary>
  /// Verifies that the property value <see cref="ICollection.Count"/>
  /// of <paramref name="expected"/> and <paramref name="actual"/> are equal.
  /// </summary>
  /// <param name="expected">
  /// Instance containing the expected value.
  /// </param>
  /// <param name="actual">
  /// Instance containing the tested value.
  /// </param>
  public static void AreCountEqual(
   ICollection expected,
   ICollection actual
   )
  {
   Assert.IsNotNull(expected);
   Assert.IsNotNull(actual);
   AreCountEqual(expected.Count,actual);
  }
  ...
  
  /// <summary>
  /// Verifies that the property value <see cref="ICollection.IsSynchronized"/>
  /// is true.
  /// </summary>
  /// <param name="actual">
  /// Instance containing the tested value.
  /// </param>
  public static void IsSynchronized(
   ICollection actual
   )
  {  
   Assert.IsNotNull(actual);
   Assert.IsTrue(actual.IsSynchronized,
        "Property IsSynchronized is false");
  }
  ...
 }
}
posted on Monday, May 03, 2004 12:58:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Sunday, May 02, 2004

MbUnit has a decorator that enables you to run your tests against different cultures, i.e. to test that your code is correctly globalized. This is done by tagging the test methods with a MultipleCultureAttribute decorator. This attribute takes a comma-separated list of culture names, and it will run the test once for each culture. 

The example below shows how this decorator can catch methods that are not correctly globalized.

[TestFixture]
public class MultipleCultureAttributeTest
{
    [Test]
    [ExpectedException(typeof(AssertionException))]
    [MultipleCulture("en-US,de-DE")]
    public void TestConvertFromString()
    {
        string input="2.2";
        double output = double.Parse(input);
        Assert.AreEqual(2.2, output);
    }
    [Test]
    [ExpectedException(typeof(AssertionException))]
    [MultipleCulture("en-US,de-DE")]
    public void TestConvertToString()
    {
        double input = 2.2;
        string output= string.Format("{0}",input);
        Assert.AreEqual("2.2",output); 
    }
}
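The failures above come from culture-sensitive number formatting: the decimal separator differs between cultures. The BCL makes this explicit when you pass a CultureInfo yourself, as this standalone sketch shows (the values and culture names here are just for illustration):

```csharp
// Demonstrates why the tests above fail under de-DE:
// en-US uses '.' as the decimal separator, de-DE uses ','.
using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        double d = 2.2;
        Console.WriteLine(d.ToString(new CultureInfo("en-US"))); // "2.2"
        Console.WriteLine(d.ToString(new CultureInfo("de-DE"))); // "2,2"
        // parsing "2.2" under de-DE does not throw: '.' is treated
        // as a group separator, so the result is 22, not 2.2
        Console.WriteLine(double.Parse("2.2", new CultureInfo("de-DE")));
    }
}
```

This is exactly the class of bug the MultipleCulture decorator is designed to flush out before your users in other locales do.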
posted on Sunday, May 02, 2004 12:44:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]