# Friday, July 30, 2004

As MSBuild is getting more and more attention from the community, it was time for MbUnit to provide an MSBuild task for executing its tests. This is now done (in CVS) and the task will be available in the next release.

MSBuild Task Creation Process

I used Christophe Nasarre's MSDN articles on MSBuild as the main source of information for building the MbUnit task (see part 1 and part 2). More specifically, part 2 gives a detailed how-to on custom task creation. I won't repeat the article here.

First impression: the MSBuild way of defining custom task properties is rather limited. I would have thought they could have used XML serialization so that any serializable object could be used in the task definition. That's sad...

Ok, we start by creating a new project, adding references to Microsoft.Build.Framework.dll and Microsoft.Build.Utilities.dll, and adding the MbUnit task class.

using System;
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using MbUnit....; // MbUnit usings

namespace MbUnit.MSBuild.Tasks
{
    public class MbUnit : Task
    {
        public override bool Execute()
        {
            ...
        }
    }
}

As you can see, the MbUnit class derives from Task, which implements ITask. The Execute method is invoked by MSBuild. The next step is to add some parameters to the task in order to set up the test execution. There are a bunch of them; I will focus on the test assembly paths:

public class MbUnit : Task
{
    private string[] assemblies;
    
    [Required]
    public string[] Assemblies
    {
        get { return this.assemblies; }
        set { this.assemblies = value; }
    }

Note the Required attribute, which is used to mark required properties. We can now implement the main loop: the Execute method. Its outline is rather simple:

  • create an empty test report,
  • for each test assembly file path,
    • load the assembly in a separate AppDomain,
    • run tests,
    • merge results in the report
  • output the report to the desired formats
public override bool Execute()
{
    this.result = new ReportResult();
    try
    {
        foreach (string testFilePath in this.Assemblies)
        {
            string path = GetFilePath(testFilePath);
            if (path==null)
                return false;
            using (TestDomain domain = new TestDomain(path))
            {
                domain.ShadowCopyFiles = false;
                domain.Load();
                domain.TestTree.RunPipes();
                result.Merge(domain.TestTree.Report.Result);
            }
        }
        this.GenerateReports();
    }
    catch (Exception ex)
    {
        this.Log.LogError("Unexpected failure during MbUnit execution: {0}", ex.Message);
        return false;
    }
    return true;
}

It is time to prepare the project for debugging the task. As advised in the article, set the output path of the project to the .NET 2.0 folder and use MSBuild.exe as the start program. Pass the name of the XML file containing the MbUnit project as the command line argument.

Sample Project

This is a sample MSBuild project that executes MbUnit tests:


<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <UsingTask AssemblyName="MbUnit.MSBuild.Tasks" TaskName="MbUnit.MSBuild.Tasks.MbUnit"/>

    <ItemGroup>
        <TestAssemblies Include="..\MbUnit.Demo\bin\Debug\MbUnit.Demo.exe" />
    </ItemGroup>

    <Target Name="Tests"> 
        <MbUnit
            Assemblies ="@(TestAssemblies)"
            HaltOnFailure="false"
            ReportTypes="Xml;Text;Html;Dox"
        /> 
    </Target>
</Project>

The output of this task is as follows:

Microsoft (R) Build Engine Version 2.0.40607.16
[Microsoft .NET Framework, Version 2.0.40607.16]
Copyright (C) Microsoft Corporation 2004. All rights reserved.
Target "Tests" in project "ProjectSample.xml"
   Task "MbUnit"
      Loading C:\Documents and Settings\dehalleux\My Documents\Projects\mbunit\src\MbUnit.Demo\bin\Debug\MbUnit.Demo.exe
      Found  3 tests
      Running fixtures.
      Tests finished: 3 tests, 1 success, 2 failures, 0 ignored
      Unloading AppDomain
      All Tests finished: 3 tests, 1 success, 2 failures, 0 ignored in 1,56469663043767 seconds
      Generated Xml report at MbUnit.15_26_26.15_26_26.xml
      Generated Text report at MbUnit.15_26_26.15_26_26.txt
      Generated Html report at MbUnit.15_26_26.15_26_26.html
      Generated Dox report at MbUnit.15_26_26.15_26_26.dox.txt
I'll have a couple beers to this tonight :)
posted on Friday, July 30, 2004 12:57:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]
# Thursday, July 29, 2004

I sent the manuscript of my PhD to the jury today. The thesis is about stability and control of quasilinear hyperbolic partial differential equations (for example, stabilization of open channels). I won't go into details here but I can give you the title:

Boundary Stabilization Techniques of Quasi-Linear Hyperbolic Initial-Boundary Value Problem.

Of course, the relation to software testing is not obvious... because there is none.

posted on Thursday, July 29, 2004 1:19:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Wednesday, July 28, 2004

The next version of MbUnit (2.20) will have a new custom attribute that will let you assert on the values of PerformanceCounters: PerfCounterAttribute (idea suggested by ISerializable). This means that you can make an assertion on every performance counter on your machine!

Here's a sample fixture that shows how the attribute can be used:

using System;
using MbUnit.Core.Framework;
using MbUnit.Framework;
namespace MbUnit.Demo
{
  [TestFixture]
  public class PerfCounterDemo
  {
    [Test]
    [PerfCounter(".NET CLR Memory", "% Time in GC", 10)]
    public void AllocateALotOfObjects()
    {
      ...
    }

    [Test]
    [PerfCounter(".NET CLR Loading", "% Time Loading", 10)]
    [PerfCounter(".NET CLR Security", "% Time in RT checks", 10000)]
    [PerfCounter(".NET CLR Security", "% Time Sig. Authenticating", 10)]
    [PerfCounter(".NET CLR Memory", "# Bytes in all Heaps", 5000000, Relative =true)]
    [PerfCounter(".NET CLR Jit", "% Time in Jit", 10)]
    public void MonitorMultipleCounters()
    {
      ...
    }
  }
}

Note that if you are too lazy to remember the names of the counters, I have written a CodeSmith template that creates a class filled with static helper methods for retrieving the counters. For example, the following class lets you write things like PerfCounterInfo.NetClrExceptions.NbofExcepsThrown.NextValue() using IntelliSense and without typing errors :)

using System;
using System.Diagnostics;
namespace MbUnit.Core.Framework
{
 public class PerfCounterInfo
 {  
  public sealed class NetClrExceptions
  {
   const string categoryName = @".NET CLR Exceptions";

   public sealed class NbofExcepsThrown
   {
    const string counterName = @"# of Exceps Thrown";

    public static float NextValue()
    {
     return NextValue(Process.GetCurrentProcess().ProcessName);
    }    
   }
   ...
posted on Wednesday, July 28, 2004 3:29:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [8]

The next version of TestFu will feature a DataSet graph object (DataGraph class), i.e. a bidirectional directed graph G(V,E) where

  • V is the set of vertices, one vertex per DataTable in the DataSet, (DataTableVertex class)
  • E is the set of edges, one edge per DataRelation in the DataSet (DataRelationEdge class). The source vertex of the edge is the ParentTable, while the target is the ChildTable.

Why do we need a graph?

Once you have the graph structure of the database, you can use all the algorithms contained in the QuickGraph.Algorithms project: shortest path, topological sort, etc. In fact, you can even draw the graph using NGraphviz. There are of course other very interesting applications of this structure. For example, you can use the graph to compute a table creation order that will not violate the constraints, or estimate the complexity of a join, etc.

Let's see some examples..

Creating a graph and basic usage

DataGraph lives in the TestFu.Data.Graph namespace and is built from a DataSet instance:

using TestFu.Data.Graph;
...
DataSet ds = ...;
DataGraph graph = DataGraph.Create(ds);

Now that we have a graph, we can iterate over its vertices and edges:

// iterating the vertices (DataTableVertex)
foreach(DataTableVertex v in graph.Vertices)
{
    DataTable table = v.Table;
    ...
}

// iterating the edges
foreach(DataRelationEdge e in graph.Edges)
{
   DataRelation relation = e.Relation;
   ...
}

Let's see what other things we could do with the graph.

Drawing the table structure:

This is an obvious usage of the graph and can be done quite easily using the QuickGraph.Algorithms.Graphviz.GraphvizAlgorithm class:

GraphvizAlgorithm gv = new GraphvizAlgorithm(this.graph);
gv.FormatVertex+=new FormatVertexEventHandler(gv_FormatVertex);
gv.FormatEdge+=new FormatEdgeEventHandler(gv_FormatEdge);
System.Diagnostics.Process.Start(gv.Write(dataSource.DataSetName));

...

void gv_FormatVertex(object sender, FormatVertexEventArgs e)
{
    DataTableVertex v = (DataTableVertex)e.Vertex;
    e.VertexFormatter.Shape = NGraphviz.Helpers.GraphvizVertexShape.Box;
    e.VertexFormatter.Label = v.Table.TableName;
}
void gv_FormatEdge(object sender, FormatEdgeEventArgs e)
{
    DataRelationEdge edge = (DataRelationEdge)e.Edge;
    e.EdgeFormatter.Label.Value = edge.Relation.RelationName;
}

The result on a sample DataSet is displayed below:

DataTable ordering

Another interesting application is to create an ordering of the DataTables such that, if you fill the tables in this order, you will not break any constraint. This solves the old problem of "which table should I populate first?". For the example above, the ordering result would be:

  1. Users,
  2. Orders,
  3. Categories,
  4. Products,
  5. OrderProducts

Since this feature is of major interest to TestFu, it is built into the DataGraph class:

DataTable[] tables = graph.GetSortedTables();

In the background, DataGraph uses the QuickGraph.Algorithms.SourceFirstTopologicalSortAlgorithm class, which creates a topological ordering of the vertices of a graph by repeatedly choosing the vertices with the least in-degree (a very simple, home-made algorithm):

// sort tables
SourceFirstTopologicalSortAlgorithm topo = new SourceFirstTopologicalSortAlgorithm(graph);
topo.Compute();

// output results
DataTable[] result = new DataTable[topo.SortedVertices.Count];
int i = 0;
foreach (DataTableVertex v in topo.SortedVertices)
{
    result[i++] = v.Table;
}
return result;
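
For reference, here is a minimal, generic sketch of that underlying idea (repeatedly emit vertices whose remaining in-degree is zero). This is my own illustration of the technique, not the QuickGraph code:

using System;
using System.Collections;

// Illustrative source-first topological sort over a tiny adjacency-list graph:
// repeatedly pick a vertex with in-degree 0, output it, and decrement the
// in-degree of its targets.
public class SourceFirstSortSketch
{
    // edges[i] is an int[] of target vertex indices for vertex i
    public static int[] Sort(int vertexCount, int[][] edges)
    {
        int[] inDegree = new int[vertexCount];
        for (int i = 0; i < vertexCount; i++)
            foreach (int target in edges[i])
                inDegree[target]++;

        Queue ready = new Queue();
        for (int i = 0; i < vertexCount; i++)
            if (inDegree[i] == 0)
                ready.Enqueue(i);   // a "source": no parent left

        int[] order = new int[vertexCount];
        int count = 0;
        while (ready.Count > 0)
        {
            int v = (int)ready.Dequeue();
            order[count++] = v;
            foreach (int target in edges[v])
                if (--inDegree[target] == 0)
                    ready.Enqueue(target);
        }

        if (count != vertexCount)
            throw new InvalidOperationException("The graph has a cycle.");
        return order;
    }
}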

The SourceFirstTopologicalSortAlgorithm takes an IVertexAndEdgeListGraph instance, which DataGraph implements. As you can see, having a graph representation of your DataSet that is compatible with QuickGraph opens up a realm of interesting applications of existing graph theory algorithms.

Next step: "Smart" Random Data generation

Once the table ordering is computed, you can safely generate "smart" random data and feed your DataSet...
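
As a rough illustration of where this is going (a hypothetical sketch, not the TestFu implementation), a generator could simply walk the tables returned by GetSortedTables() so that parent rows always exist before the child rows that reference them; PopulateRow below is an imaginary helper:

using System;
using System.Data;

// Hypothetical sketch: fill a DataSet in constraint-safe order.
public class NaiveDataSetFiller
{
    public void Fill(DataTable[] sortedTables, int rowsPerTable)
    {
        foreach (DataTable table in sortedTables)
        {
            for (int i = 0; i < rowsPerTable; i++)
            {
                DataRow row = table.NewRow();
                PopulateRow(row);      // imaginary helper: pick values per column type,
                                       // reusing keys from already-populated parent tables
                table.Rows.Add(row);   // parents are already filled, so constraints can hold
            }
        }
    }

    protected virtual void PopulateRow(DataRow row)
    {
        // left to the generator: random values that respect column types,
        // uniqueness and foreign key constraints
    }
}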

posted on Wednesday, July 28, 2004 9:29:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]
# Tuesday, July 27, 2004

While writing the tests for the NUnit fixture loader, I ended up with some twisted tests. So twisted, they are worth mentioning in the blog. For instance, the steps I take to test the NUnit loading are:

  1. Load a NUnit fixture from the Assembly resources (as source code),
  2. Compile it and output an assembly using CodeDom.Compiler namespace,
  3. Load the newly created assembly using MbUnit, look for the tests and execute,
  4. Verify that the tests have been found, are executed and behaved as expected.
  5. Execute all those steps in a fixture

Twisted...

 

using System;
using System.IO;
using System.Configuration;
using System.Reflection;
using MbUnit.Core.Remoting;
using MbUnit.Core.Framework;
using MbUnit.Framework;
using MbUnit.Framework.Utils;
using System.CodeDom.Compiler;
using MbUnit.Core.Reports;
using MbUnit.Core.Reports.Serialization;
namespace MbUnit.Tests.Core.FrameworkBridges
{
  [TestFixture]
  [CurrentFixture]
  public class NUnitBridgeTest
  {
    private SnippetCompiler compiler;
    private ReportCounter counter;
    private ReportResult result;

    [SetUp]
    public void SetUp()
    {
      this.compiler = new SnippetCompiler();
      string nunitFolder = ConfigurationSettings.AppSettings["NUnitFolder"];
      string nunitFrameworkDll = Path.Combine(nunitFolder,@"NUnit.Framework.dll");
      this.compiler.Parameters.ReferencedAssemblies.Add(nunitFrameworkDll);
      this.compiler.LoadFromResource(
         "MbUnit.Tests.Core.FrameworkBridges.NUnitFixture.cs",
         Assembly.GetExecutingAssembly()
        );
      string path = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
      this.compiler.Parameters.OutputAssembly =
          Path.Combine(path, "NUnitBridgeTest.dll");
    }

    [Test]
    public void TestCaseCount()
    {
      LoadAndRunFixture();
      Assert.AreEqual(4, counter.RunCount);
    }

    [Test]
    public void SuccessCount()
    {
      LoadAndRunFixture();
      Assert.AreEqual(2, counter.SuccessCount);
    }

    ...

    private void LoadAndRunFixture()
    {
      this.compiler.Parameters.GenerateInMemory = false;
      this.compiler.Compile();
      this.compiler.ShowErrors(Console.Out);
      Assert.IsFalse(this.compiler.Results.Errors.HasErrors);

      // load assembly using MbUnit
      using (TestDomain domain = new TestDomain(this.compiler.Parameters.OutputAssembly))
      {
        domain.ShadowCopyFiles = false;
        domain.Load();

        // running tests
        domain.TestTree.RunPipes();
        
        result = domain.TestTree.Report.Result;
        counter = domain.TestTree.GetTestCount();
      }
    }
  }
}

And the loaded fixture is as follows:

using System;
using System.IO;
using NUnit.Framework;
namespace MbUnit.Tests.Core.FrameworkBridges
{
  [TestFixture]
  public class NUnitFixture
  {
    [TestFixtureSetUp]
    public void TestFixtureSetUp()
    {
      Console.Out.Write("TestFixtureSetUp");
    }
    [SetUp]
    public void SetUp()
    {
      Console.Out.Write("SetUp");
    }
    [Test]
    public void Success()
    {
      Console.Out.Write("Success");
    }
    [Test]
    public void Failure()
    {
      Console.Out.Write("Failure");
      Assert.Fail();
    }
    [Test]
    [ExpectedException(typeof(ArgumentNullException))]
    public void ExpectedException()
    {
      Console.Out.Write("ExpectedException");
      throw new ArgumentNullException("boom");
    }
    [Test]
    [Ignore("Because I want")]
    public void Ignore()
    {
      Console.Out.Write("Ignore");
      throw new Exception("Ignored test");
    }
    [TearDown]
    public void TearDown()
    {
      Console.Out.Write("TearDown");
    }
    [TestFixtureTearDown]
    public void FixtureTearDown()
    {
      Console.Out.Write("FixtureTearDown");
    }
  }
}
posted on Tuesday, July 27, 2004 6:17:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Sunday, July 25, 2004
Download available at www.dotnetwiki.org download page.
posted on Sunday, July 25, 2004 9:24:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [4]
# Saturday, July 24, 2004

Reflector.Graph has been recompiled for Reflector 4.0.15.0 along with some bug fixes (unit test generation is working again).

Download it in the www.dotnetwiki.org download section.

posted on Saturday, July 24, 2004 1:33:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [8]

This release brings more user-friendliness to the GUI; generally, it gives more information on what MbUnit is doing. See the latest release page on this blog for the download links.

New features:

  • New progress bar displaying the number of tests, successes, failures, ignored tests and the test duration,
  • Status bar displaying more information,
  • The console application now pops up the report by default,
  • Added the TestDox report type,
  • The MbUnit GUI can load and save projects (assemblies + tree view state are serialized)

Screenshot of the "new" gui

posted on Saturday, July 24, 2004 10:39:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Friday, July 23, 2004

MbUnit supports a new report type, similar to TestDox. This report creates a simple text file, with fixture and test case names made more human readable.

For example, for a test fixture like

[TestFixture]
public class FooTest
{
    [Test]
    public void IsASingletonTest() {}
    [Test]
    public void AReallyLongNameIsAGoodThing() {}
}

MbUnit generates

-- MbUnit.Demo
MbUnit
MbUnit.Demo
    Foo
        - is a singleton
        - a really long name is a good thing
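
The transformation behind those names is essentially a camel-case splitter plus trimming of the "Test" suffix; here is a minimal sketch of the idea, not MbUnit's actual implementation:

using System;
using System.Text;

// Sketch of a TestDox-style name humanizer:
// "AReallyLongNameIsAGoodThing" -> "a really long name is a good thing".
public class TestDoxNamer
{
    public static string Humanize(string name)
    {
        // trailing "Test" is trimmed, as in "IsASingletonTest" -> "is a singleton"
        if (name.EndsWith("Test"))
            name = name.Substring(0, name.Length - 4);

        StringBuilder sb = new StringBuilder();
        foreach (char c in name)
        {
            if (char.IsUpper(c) && sb.Length > 0)
                sb.Append(' ');
            sb.Append(char.ToLower(c));
        }
        return sb.ToString();
    }
}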

Available in 2.18.1

posted on Friday, July 23, 2004 1:04:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

MbUnit now supports a simple way of running a single fixture (the one you are working on) without the need for a GUI or NUnitAddIn. The method is simple: tag your fixture class with the CurrentFixtureAttribute and launch it with the auto-runner.

Consider this little example:

[TestFixture]
public class MyFixture
{
   ...
}
  1. Tag MyFixture with the CurrentFixtureAttribute,
  2. Convert the test assembly to a console application and add the following code to your main function:
    using System;
    namespace MbUnit.Tests
    {
        using MbUnit.Core;
        using MbUnit.Core.Filters;
        public class AutoRunTest
        {
            public static void Main(string[] args)
            {
                using(AutoRunner auto = new AutoRunner()) 
                {
                    auto.Domain.Filter = FixtureFilters.Current;
                    auto.Run();
                    auto.ReportToHtml();
                }
            }
        }
    }
    
  3. Launch the console application. The fixture will be executed and an HTML report of the tests will pop up automatically.
(available in 2.18.1)
posted on Friday, July 23, 2004 7:42:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, July 22, 2004

This is a question that I received a few times: Peli is my nickname, which was given to me by my parents. Their memory of the reason why is somewhat vague:

  • It could come from the pelican (the bird), because of the movie "Jonathan Livingston Seagull", which somehow became a pelican,
  • because pelicans have a big mouth,
  • because my mom did not like "Jonathan" with the French pronunciation

Cheers,

Peli...

posted on Thursday, July 22, 2004 8:31:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]

New features:

 See latest release page on this blog for the download links

posted on Thursday, July 22, 2004 9:02:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [5]
# Wednesday, July 21, 2004

The API Design Guidelines encourage developers to check all their arguments and thereby avoid throwing a NullReferenceException. If an argument is null and the contract of the method forbids null arguments, an ArgumentNullException should be thrown. Brad Abrams

So you agree with Brad (I do) and you always check that arguments are not null before using them. This means a little more code, but it is worth it. It also means a lot more test code because, ideally, you should test that all your methods check all their arguments. That means writing hundreds of boring, repetitive test cases... and you don't want to do that.

At least I don't, so I added a new feature to MbUnit that does it for me.

Test for ArgumentNullException, first iteration:

 Let's see how it works with an example:

public class ArgumentNullDummyClass
{
    public object ClassicMethod(Object nullable, Object notNullable, int valueType)
    {
        if (notNullable == null)
            throw new ArgumentNullException("notNullable");
        return String.Format("{0}{1}{2}",nullable,notNullable,valueType);
    }
}

As you can see, the nullable parameter can be null, while the notNullable parameter is checked. Now let's create a fixture that tests this method. We will be using a TestSuiteFixture because we will build a TestSuite:

[TestSuiteFixture]
public class MethodTestSuiteDemo
{
    public delegate object MultiArgumentDelegate(Object o,Object b, int i);
    
    [TestSuite]
    public ITestSuite AutomaticClassicMethodSuite()
    {
        ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();

        MethodTester suite = new MethodTester(
            "ClassicMethod",
            new MultiArgumentDelegate(dummy.ClassicMethod),
            "hello",
            "world",
            1
            );
            suite.AddAllThrowArgumentNull();
        return suite.Suite;
    }
}

The MethodTester class takes the following arguments: a name, a delegate and valid parameters for the delegate. By valid I mean parameters that should not make the delegate invocation fail. AddAllThrowArgumentNull looks for nullable parameters and creates a TestCase that will invoke the delegate with the corresponding parameter nulled. In this example, it means that ClassicMethod will be called with:

  • null, "world", 1
  • "hello", null, 1

Test for ArgumentNullException, second iteration:

There are things I don't like in the example above:

  • you need to create a delegate (tedious),
  • you need to create one MethodTester per method (tedious),

Ok, so let's build a ClassTester class that does that for us... The test code now looks as follows:

[TestSuiteFixture]
public class ClassTesterDemo
{
    [TestSuite]
    public ITestSuite AutomaticClassSuite()
    {
        ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();
        ClassTester suite = new ClassTester("DummyClassTest",dummy);
        suite.Add("ClassicMethod","hello","world",1);
        return suite.Suite;
    }
}

That's much better: the delegate is gone and we can add more methods to be tested, each with a single call.

Test for ArgumentNullException, third iteration:

There is still one problem with this technique: there is no way to tell that an argument is allowed to be null! In the example, the nullable parameter can be null, so the corresponding TestCase will always fail because the method does not throw ArgumentNullException.

The solution to this problem comes in two steps: first, you, the developer, tag the parameters that can be null with a "nullable" attribute (it can be any attribute of your own). In the example, we add a SmartMethod method and the MyNullableAttribute:

[AttributeUsage(AttributeTargets.Parameter,AllowMultiple=false,Inherited=true)]
public class MyNullableAttribute : Attribute
{}

public class ArgumentNullDummyClass
{
    public object ClassicMethod(Object nullable, Object notNullable, int valueType)
    {...}
    public object SmartMethod([MyNullable]Object nullable, Object notNullable, int valueType)
    {...}
}

Next, you must tell MbUnit which attribute is used to tag nullable parameters. This is done with the NullableAttributeAttribute at the assembly level:

[assembly: NullableAttribute(typeof(MbUnit.Demo.MyNullableAttribute))]

Ok, now we just need to update our test case to load the SmartMethod:

[TestSuite]
public ITestSuite AutomaticClassSuite()
{
    ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();
    ClassTester suite = new ClassTester("DummyClassTest",dummy);

    suite.Add("ClassicMethod","hello","world",1);
    suite.Add("SmartMethod","hello","world",1);

    return suite.Suite;
}

The result in the MbUnit GUI is as follows: the parameters of ClassicMethod were all tested, nullable included, which is what we wanted to avoid. The parameters of SmartMethod were all tested except nullable, because it was tagged. :)

Test for ArgumentNullException, fourth iteration:

 

The more I think about this problem, the more I think FxCop should do that for us...

posted on Wednesday, July 21, 2004 10:37:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

# Tuesday, July 20, 2004

MbUnit now supports the new attributes from Roy Osherove (ISerializable) to solve the database rollback problem:

  • SqlRestoreInfoAttribute contains the information necessary to perform database restore (connection string, etc...),
  • RollBackAttribute uses EnterpriseServices to roll back the transactions done in the test case (note that this attribute does not rely on SqlRestoreInfo and can live on its own),
  • RestoreDatabaseFirstAttribute, restores the database before starting the test (using DbAdministrator from TestFu).

I will assume that you have read Roy's article, so I can skip the explanation and show an example. Consider the following test fixture:

[TestFixture]
[SqlRestoreInfo("connectionstring","databasename",@"c:\backups\nw.mbk")]
public class NorthWindTest
{
    [Test, RollBack]
    public void TestWithRollBack()
    {...}

    [Test, RestoreDatabaseFirst]
    public void TestWithRestoreFirst()
    {...}
}

This example, which runs in MbUnit, is similar to what Roy has proposed: SqlRestoreInfo gives the information that can be used to restore the database. TestWithRollBack is rolled back using Enterprise Services, and the database is restored before TestWithRestoreFirst is executed.
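
To give an idea of the EnterpriseServices rollback trick itself, here is a minimal sketch of the general recipe (my own illustration, not MbUnit's RollBackAttribute implementation; in practice the ServicedComponent also needs COM+ registration and a strong-named assembly):

using System;
using System.EnterpriseServices;

// The test body runs inside a COM+ transaction that is always aborted,
// so any enlisted database work is rolled back when the test finishes.
[Transaction(TransactionOption.RequiresNew)]
public class RollbackScope : ServicedComponent
{
    public delegate void TestBody();

    public void Run(TestBody body)
    {
        try
        {
            body();
        }
        finally
        {
            // vote to abort: the distributed transaction is rolled back
            ContextUtil.SetAbort();
        }
    }
}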

What about data abstraction?

We would like to create a fixture and apply it to different database providers (Oracle, MySql, etc.). Is this possible? It is (will be**) possible with a minimum of work; it is just a matter of changing SqlRestoreInfo to OracleRestoreInfo:

[TestFixture]
public abstract class DbNorthwindTest
{
    [Test, RollBack]
    public void TestWithRollBack()
    {...}

    [Test, RestoreDatabaseFirst]
    public void TestWithRestoreFirst()
    {...}
}

[SqlRestoreInfo("connectionstring","databasename",@"c:\backups\nw.mbk")]
public class SqlNorthwindTest : DbNorthwindTest
{}

[OracleRestoreInfo("connectionstring","databasename",@"c:\backups\nwporacle.mbk")]
public class OracleNorthwindTest : DbNorthwindTest
{}

This example is quite neat and self-explanatory: SqlNorthwindTest will run the fixture against an MS SQL server using the System.Data.SqlClient classes, while OracleNorthwindTest will test against Oracle.

**Currently, only SqlRestoreInfoAttribute is implemented.

posted on Tuesday, July 20, 2004 1:51:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [22]
# Monday, July 19, 2004

TestSuites are dynamically created suites of tests. This is a feature that was requested long ago and it finally popped up on my todo list. A TestSuite provides a way of dynamically creating TestCases with tightly controlled naming. For example, in data-driven testing, you might want to create test cases based on some external data source.

How-to in MbUnit

To use suites, tag your class with the TestSuiteFixtureAttribute attribute. Each method that creates a suite must be tagged with TestSuiteAttribute and return a TestSuite; there can be multiple methods returning suites.

using System;
using MbUnit.Core.Framework;
using MbUnit.Framework;
namespace MyNamespace
{
    [TestSuiteFixture]
    public class MyClass
    {
        public delegate void TestDelegate(Object context);
        [TestSuite] 
        public TestSuite GetSuite()
        {
            TestSuite suite = new TestSuite("Suite1");
            suite.Add( "Test1", new TestDelegate( this.Test ), "hello" );
            suite.Add( "Test2", new TestDelegate( this.AnotherTest), "another test" );
            return suite;
        }
        public void Test( object testContext )
        {
            Console.WriteLine("Test");
            Assert.AreEqual("hello", testContext);
        }
        public void AnotherTest( object testContext )
        {
            Console.WriteLine("AnotherTest");
            Assert.AreEqual("another test", testContext);
        }
    }
}

The resulting naming of the fixture will be as follows:

MyNamespace.MyClass.Suite1.Test1
MyNamespace.MyClass.Suite1.Test2

Note that the namespace name and the class name are used to create the test case name; the name of the method creating the suite is not used. The resulting output of the tests in MbUnit will look as follows:

posted on Monday, July 19, 2004 2:43:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]

MbUnit (2.16.1) now has the ability to load and run NUnit (and csUnit) test assemblies without recompilation.

How it works

Jamie Cansdale (NUnitAddin) sent me some classes he was using in his addin to execute NUnit assemblies. Thank you Jamie :). When a type is explored, MbUnit looks in the referenced assemblies for NUnit.Framework.dll or csUnit.Framework.dll. If one is found, it then extracts the TestFixtureAttribute type from it and tests whether the type is tagged with that attribute. A couple more details and there you have NUnit tests run by MbUnit...
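
The detection idea is easy to picture with a small reflection sketch (my own illustration of the mechanism, not the actual code Jamie sent; csUnit would be handled the same way with its own framework assembly and attribute name):

using System;
using System.Reflection;

// Find nunit.framework among the referenced assemblies, grab its
// TestFixtureAttribute type, and check whether the candidate type carries it.
public class FrameworkBridgeSketch
{
    public static bool IsNUnitFixture(Type candidate)
    {
        foreach (AssemblyName reference in candidate.Assembly.GetReferencedAssemblies())
        {
            if (string.Compare(reference.Name, "nunit.framework", true) != 0)
                continue;

            Assembly nunit = Assembly.Load(reference);
            Type fixtureAttribute = nunit.GetType("NUnit.Framework.TestFixtureAttribute");
            return fixtureAttribute != null && candidate.IsDefined(fixtureAttribute, true);
        }
        return false;
    }
}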

Sample:

The following sample fixture is a standard NUnit fixture:

using System;
using NUnit.Framework;
namespace MbUnit.Tests.Core.FrameworkBridges
{
    [TestFixture]
    public class NUnitFrameworkTest
    {
        [TestFixtureSetUp]
        public void TestFixtureSetUp()
        {
            Console.WriteLine("TestFixtureSetUp");
        }
        [SetUp]
        public void SetUp()
        {
            Console.WriteLine("SetUp");
        }
        [Test]
        public void Test()
        {
            Console.WriteLine("Test");
        }
        [Test]
        public void AnotherTest()
        {
            Console.WriteLine("Another test");
        }
        [Test]
        [ExpectedException(typeof(AssertionException))]
        public void ExpectedException()
        {
            Assert.Fail("Should be intercepted");
        }
        [Test]
        [Ignore("Testing ignore")]
        public void Ignored()
        {
            Assert.Fail("Must be ignored");
        }
        [TearDown]
        public void TearDown()
        {
            Console.WriteLine("TearDown");
        }
        [TestFixtureTearDown]
        public void TestFixtureTearDown()
        {
            Console.WriteLine("TestFixtureTearDown");
        }
    }
}

The fixture is loaded and executed by MbUnit, and the report shows:

posted on Monday, July 19, 2004 12:10:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [12]

The MbUnit.Core.AutoRunner class is a new lightweight class that explores and executes the tests contained in the entry assembly. It can be used to create self-contained executable test assemblies.

public class AutoRunTest
{
    public static void Main(string[] args)
    {
        using(MbUnit.Core.AutoRunner auto = new MbUnit.Core.AutoRunner())
        {
            auto.Run();
            auto.ReportToHtml();
        }
    }
}

The AutoRunner class also supports filtering and will be available in 2.16.1.

posted on Monday, July 19, 2004 7:38:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [4]
# Sunday, July 18, 2004

At last, MbUnit loads the test assemblies in separate AppDomains and does not lock them...

This release brings the "much awaited" shadow copying feature to MbUnit, which means that you can do real TDD development with the GUI. This part of the code has been strongly inspired by the NUnit implementation; the copyright notices have been kept in the source code.
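
For the curious, the general recipe for this kind of isolated, shadow-copied test domain looks roughly like this (a sketch using the standard AppDomain APIs, not MbUnit's actual TestDomain code):

using System;
using System.IO;

// Create an isolated AppDomain with shadow copying enabled and the
// "<assembly>.config" convention for the configuration file.
public class IsolatedDomainSketch
{
    public static AppDomain CreateTestDomain(string testAssemblyPath)
    {
        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = Path.GetDirectoryName(testAssemblyPath);
        setup.ConfigurationFile = testAssemblyPath + ".config"; // NUnit-style config handling
        setup.ShadowCopyFiles = "true";   // yes, this property really is a string
        setup.ShadowCopyDirectories = setup.ApplicationBase;

        // The original assembly files stay unlocked, so they can be rebuilt
        // while the GUI keeps the domain loaded.
        return AppDomain.CreateDomain("MbUnit.TestDomain", null, setup);
    }
}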

Release Details

  • Separate AppDomain for each test assembly,
  • Handling configuration files as in NUnit (test assembly name + ".config"),
  • Test assembly monitoring and reloading when changes are detected,
  • Filtering in the console application,

Many other bugs have also been fixed...

Download MbUnit 2.16beta at www.dotnetwiki.org

 

posted on Sunday, July 18, 2004 9:10:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [10]
# Saturday, July 17, 2004

More and more ways of visualizing the coverage: ThreadTree and XML:

The ThreadTree is a control available in the Data Visualization Components suite from Microsoft Research; it displays a tree where the nodes are sorted by a ranking. In this case, the tree is the namespace/type hierarchy and the ranking is the coverage. Not very useful, but very pretty.

 

The XML view shows the actual output of NCover.

posted on Saturday, July 17, 2004 9:48:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]

Just added a Tree and the Microsoft TreeMap to visualize the coverage results:

posted on Saturday, July 17, 2004 8:55:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [4]

I have freshened up the NCover project to build a simple GUI around it. Not much work, just some Category attributes to put here and there...

posted on Saturday, July 17, 2004 12:00:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Friday, July 16, 2004

This blog presents an implementation of an Ant Colony Optimization (ACO) framework using Visual C# 2005 Express.

What is ACO?

ACO is the evolution of the Ant Algorithms, algorithms that were based on observations of ants. It is a metaheuristic used to solve complex (NP-hard) problems such as the Travelling Salesman Problem (in this problem, a salesman has to visit each city while travelling a minimum distance). There is a lot of literature on ACO and TSP on the web...

Where does it come from?

My implementation is based on the book Ant Colony Optimization by Marco Dorigo. I must say that the authors have taken care to give clear and well-detailed pseudo-code, which makes an implementation easy.
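
To give a flavor of the algorithm, here is a minimal sketch of the classic tour-construction step, where an ant picks its next city with a probability proportional to pheromone and heuristic desirability (my own illustrative code, not the framework's API):

using System;

// Ant System transition rule: the probability of moving from city i to city j
// is proportional to pheromone[i,j]^alpha * (1/distance[i,j])^beta.
public class AntStepSketch
{
    private readonly Random random = new Random();

    public int ChooseNextCity(int current, bool[] visited,
                              double[,] pheromone, double[,] distance,
                              double alpha, double beta)
    {
        int cityCount = visited.Length;
        double[] weight = new double[cityCount];
        double total = 0;
        for (int j = 0; j < cityCount; j++)
        {
            if (visited[j]) continue;
            weight[j] = Math.Pow(pheromone[current, j], alpha)
                      * Math.Pow(1.0 / distance[current, j], beta);
            total += weight[j];
        }

        // roulette-wheel selection over the unvisited cities
        double pick = random.NextDouble() * total;
        for (int j = 0; j < cityCount; j++)
        {
            if (visited[j]) continue;
            pick -= weight[j];
            if (pick <= 0) return j;
        }
        return -1; // all cities visited
    }
}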

The first results

Here are some outputs of the TSP computation on burma14 using my framework. The red line represents the best-so-far solution; the other lines are colored according to their pheromone intensity.

  • Iteration 1:
  • Iteration 7:
  • Iteration 28:
  • Iteration 33:
  • Iteration 54:

Download

The download called MetaHeuristics is available at http://www.codeplex.com/metaheuristics .

posted on Friday, July 16, 2004 3:37:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [8]
# Thursday, July 15, 2004

This tutorial shows the basic usage of NCover from GotDotNet. It starts by creating a toy solution, then shows how to set up the NCover execution, analyze the results and improve their presentation.

A toy solution

  1. Create a new solution,
  2. Add a C# assembly project named UnderCover,
  3. Add a class JamesBond defined as follows:
    using System;
    namespace UnderCover
    {
        public class JamesBond
        {
            public void Covered()
            {
                Console.WriteLine("Covered");
            }
            public void UnCovered()
            {
                Console.WriteLine("UnCovered");
            }
        }
    }
  4. Add a C# console application project named UnderCover.Console.exe
  5. Edit the Main entry method as follows:
    using System;
    namespace UnderCover.Cons
    {
        class Class1
        {
            [STAThread]
            static void Main(string[] args)
            {
                JamesBond james = new JamesBond();
                james.Covered();
            }
        }
    }
  6. Compile in debug mode
    Note: NCover needs the symbol files (.pdb) in order to work, so you need to work with the debug version.

Our objective is now to compute the coverage of UnderCover.dll when UnderCover.Console.exe is executed. It is straightforward to see that JamesBond.Covered will be fully covered while JamesBond.UnCovered will not be covered.

Setting a NCover batch file

In this step, we create a simple batch file in the bin/Debug directory to execute NCover.Console.exe. The NCover command line takes the command to execute plus the assembly to cover as parameters:

"C:\Program Files\NCover\NCover.Console.exe" /c "UnderCover.Cons.exe" "UnderCover.dll" /v

The output of NCover is as follows:

"C:\Program Files\NCover\NCover.Console.exe" /c "UnderCover.Cons.exe" "Under
Cover.dll" /v
NCover.Console v1.3.3 - Code Coverage Analysis for .NET - http://ncover.org

Command: UnderCover.Cons.exe
Command Args: UnderCover.dll
Working Directory:
Assemblies:
Coverage File:
Coverage Log:
******************* Program Output *******************
Covered
***************** End Program Output *****************
Copied 'C:\Program Files\NCover\Coverage.xsl' to '...\Coverage.xsl'

If everything executed correctly, a Coverage.xml and a Coverage.xsl have appeared in the directory.

First look at the results

Open Coverage.xml in your browser and you will get something like this:

The coverage has worked; we have the expected results.

Better XSLT template

Now, this works fine for a simple assembly, but the file becomes huge on a normal project, so we need a better XSLT template. MbUnit has its own NCover Coverage.xsl template (which you can get here) that supports expand/collapse and computes coverage percentages. Let us copy the new Coverage.xsl to the directory:

Now this looks much better :)

Results in Reflector

As I showed in a previous post, the Reflector Code Coverage Addin can help you visualize the coverage in a more intuitive way. The steps to follow are:

  1. Add Reflector.TreeMap.dll as an addin of Reflector,
  2. Load UnderCover.dll,
  3. Right-click on the UnderCover assembly and choose Coverage TreeMap,
  4. Right-click on the TreeMap and load the Coverage.xml file,
  5. Enjoy the results:

Downloads:

The solution of this tutorial is available in the download section of www.dotnetwiki.org .

posted on Thursday, July 15, 2004 10:54:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [13]

Just loaded the MbUnit solution in Visual C# Express 2005 and.... it worked without a single problem! Looks like we will have MbUnit for .NET 2.0 in the next release :)


posted on Thursday, July 15, 2004 6:04:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]

The MbUnit Console Runner Application has undergone major "plastic surgery" in preparation for the next release of MbUnit. The new features are:

  • Report output to Xml, Html or Text, or a combination of those. The output directory of the reports can also be specified in the command line,
  • Filtering of the fixture by
    • Namespace,
    • Type full name,
    • Category,
    • Author name,
    • Importance
  • Each filter can take multiple values, i.e. you can filter on multiple namespaces. The different filters are combined using an AND operator.
  • The help is shown each time the runner is called (I have copy/pasted its output below),
  • The parsed parameters are displayed before testing starts to give you more information, 

The output of the console runner as it is in CVS:

MbUnit 2.15.1657.17890 Console Application
Author: Jonathan de Halleux
Get the latest at   http://mbunit.tigris.org
------------------------------------------
    /report-folder:                     short form /rf  Target output folder for the reports
    /report-type:{Xml|Html|Text}        short form /rt  Report types supported: Xml, Html, Text
    /filter-category:                   short form /fc  Name of the filtered category
    /filter-author:                     short form /fa  Name of the filtered author name
    /filter-type:                       short form /ft  Name of the filtered type
    /filter-namespace:                  short form /fn  Name of the filtered namespace
    /verbose[+|-]                       short form /v   Return a lot of information or not...
    @file                             Read response file for more options
    files
------------------------------------------
-- Parsed Arguments
Files:
        MbUnit.Tests.dll
Report folder:
Report types:
Filter Category:
Filter Author:
Filter Namespace:
Filter Type:
Verbose: False

ps: Filtering is a brand new feature of MbUnit that I will discuss in another thread.

posted on Thursday, July 15, 2004 10:28:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Wednesday, July 14, 2004

An installation tutorial "for Dummies" is available at http://www.dotnetwiki.org/Default.aspx?tabid=55

posted on Wednesday, July 14, 2004 1:09:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

With all those addins popping up on the blog, maintaining the files was becoming quite messy. In order to simplify a lot of things, I have upgraded my old www.dotnetwiki.org web site to DotNetNuke 2.1.2 with the following components:

  • Issue tracker for the Reflector addins; please use it instead of sending the bugs to Lutz,
  • Forums, where threads can be watched by email and news feed,
  • Download section, where the files are stored and links remain permanent

See you soon on www.dotnetwiki.org .

posted on Wednesday, July 14, 2004 12:30:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Tuesday, July 13, 2004

After this afternoon's crash of the blog, it is time to get back to some fun activities... and today code coverage is on the program. I will be using NCover (GotDotNet), but other tools would apply as well.

There is one thing that strikes me with code coverage: there is almost no efficient tool to visualize it (I'm leaving aside Team System for now). Usually the frameworks that compute the coverage already have a big job computing it, and the visualization is somewhat "left to the user". It seems to me that this part of the job is also not trivial because of the quantity of information that has to be processed: an assembly can have hundreds of types and thousands of methods. This motivates the introduction of some new... Reflector addins for code coverage visualization.

Coverage in a TreeMap

A natural way of visualizing coverage is to use the TreeMap that I was previously using for the Type TreeMap addin. With some minor modifications, we have the following result: the figure shows the test coverage of the NCollection project (green is fully covered, red is not covered).
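
The coloring itself is simple to reproduce; a minimal sketch (assuming a coverage ratio between 0 and 1 per node, not the addin's actual code) just interpolates between red and green:

using System;
using System.Drawing;

// Map a 0..1 coverage ratio to a red-to-green color for a treemap node.
public class CoverageColorSketch
{
    public static Color FromCoverage(double ratio)
    {
        if (ratio < 0) ratio = 0;
        if (ratio > 1) ratio = 1;
        int red = (int)(255 * (1 - ratio));   // uncovered code pulls toward red
        int green = (int)(255 * ratio);       // covered code pulls toward green
        return Color.FromArgb(red, green, 0);
    }
}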

NCover and Reflector

The most difficult/annoying part of this work was to create the bridge between NCover and Reflector, because of slight differences in the way assembly, module and member names are written.

posted on Tuesday, July 13, 2004 9:02:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [10]

Following the PropertyGrid and XmlSerialization addins, this addin displays control classes inside Reflector.

The addin is desperately simple: if the selected class inherits from System.Windows.Forms.Control, the addin instantiates it and adds it to the window.
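
In sketch form, the idea boils down to something like this (illustrative code only, not the actual addin):

using System;
using System.Windows.Forms;

// If the selected type is a Control with a parameterless constructor,
// instantiate it and host it in a panel.
public class ControlPreviewSketch
{
    public static void Preview(Type selectedType, Panel host)
    {
        if (!typeof(Control).IsAssignableFrom(selectedType))
            return; // not a control, nothing to show

        Control instance = (Control)Activator.CreateInstance(selectedType);
        instance.Dock = DockStyle.Fill;
        host.Controls.Clear();
        host.Controls.Add(instance);
    }
}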

posted on Tuesday, July 13, 2004 1:39:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

This is a preview of a new Reflector addin to, well, let you search for program patterns in your assemblies. The addin is mainly an adaptation to .NET of the article by Santanu Paul and Atul Prakash, "A Framework for Source Code Search Using Program Patterns".

What are Program Patterns?

In their article, the authors use a pattern language to specify high-level patterns for making queries on the source code. It smells like regular expressions, but it operates on statements and expressions instead of strings. The basic pieces of the pattern language are:

  • # describes any expression,
  • @ describes any statement,
  • more to come here...

More specific patterns, like if, while and throw, are also defined. In fact, for each interface of the Reflector.CodeModel namespace, a corresponding pattern class is needed. Let's illustrate this with a simple example of a query: find all occurrences of an if followed by a throw statement.
In the original paper's notation, the pattern is defined as follows:

if (#) throw #;

Of course this will have to change a bit for C#.

C# implementation

In my implementation, the IPattern interface defines a program pattern:

public interface IPattern
{
    bool Match(Object target);
}

The interface is intentionally simple to enable great extensibility. It is then specialized for IStatement, IExpression, IBlockStatement, etc. For example, the pattern that matches a throw statement is implemented as follows:

public class ThrowExceptionStatementPattern : StatementPattern
{
    // pattern for the expression that is thrown. Pats is a helper class
    private ExpressionPattern expression = Pats.AnyExpression;
    ...

    public override bool Match(IStatement statement)
    {
        // trying to cast statement to IThrowExceptionStatement
        IThrowExceptionStatement th = statement as IThrowExceptionStatement;
        // did not cast, no match
        if (th==null)
            return false;
        // expression did not match, no match
        if (!this.Expression.Match(th.Expression))
            return false;
        // we have a match!
        return true;
    }
}

To ease things up, a helper class containing static methods (Pats) takes care of creating those objects.

Implementing the example

Implementing the example is now just a matter of putting the pieces together. We need a pattern for the if, one for the throw and a pattern that will recurse through all the statements:

// provided by Reflector
IMethodDeclaration visitedMethod = ...
// if pattern
ConditionStatementPattern ifthen = Pats.If();
// setting the Then pattern
// StatementInBlock says the pattern should be in the IBlockStatement
// Pats.Throw() returns a ThrowExceptionStatementPattern 
ifthen.Then = Pats.StatementInBlock(Pats.Throw());
// a pattern that will recurse all the statements
RecursiveStatementPattern rec = Pats.Recursive(ifthen);

// launching the search
rec.Match(visitedMethod.Body);

Testing the example

The above pattern has been applied to the following class where 2 methods match the pattern, and 2 do not match it:

public class CodeMatchingTest
{
    public void If(bool arg)
    {
        Console.WriteLine("This method has a if");
        if (arg)
            Console.WriteLine("arg is true");
        else
            Console.WriteLine("arg is false");
    }
    public void Throw()
    {
        throw new Exception();
    }
    public void IfThrow(bool arg)
    {
        if (arg)
            throw new Exception();
    }
    public void IfThrowHiddenInsideWhileLoop(bool arg)
    {
        int i = 0;
        while(i<10)
        {
            if (i>5)
                throw new Exception();
            Console.WriteLine(i.ToString());
            i++;
        }
    }
}

And the result in Reflector is displayed below. As expected, IfThrow and IfThrowHiddenInsideWhileLoop have matched the pattern.

posted on Tuesday, July 13, 2004 1:38:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]

Jamie Cansdale (NUnitAddin) has recently started to use the SSCLI tests to verify the output of Reflector.

"The Shared Source Common Language Infrastructure 1.0  is a compressed archive of the source code to a working implementation of the ECMA CLI and the ECMA C# language specification. This implementation builds and runs on Windows XP, the FreeBSD operating system, and Mac OS X 10.2." says the description. If you download this file, you will see that it ships with tons of tests that are used to test the CLI implementation. Each one of the test classes is to be compiled into a console application that returns 0 if successful, 1 otherwise.... Could we harness those test using a test framework ?

First try: all tests in one assembly

Since each fixture is designed to be compiled as a console application, we just need to look for classes with a static Main method, load them and execute that method. This is rather optimistic because it assumes that you can compile all the tests into a single assembly and then load it, which is not true because a lot of classes are duplicated and clash. Anyway, I managed to load some of them "just for fun":
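
The naive harness idea looks roughly like this (an illustrative sketch of the approach described above, not the actual code; it treats a 0 exit code as a pass, as the SSCLI description states):

using System;
using System.Reflection;

// Find each type with a static Main, invoke it, and report pass/fail.
public class SscliMainHarnessSketch
{
    public static void RunAll(Assembly testAssembly)
    {
        foreach (Type type in testAssembly.GetTypes())
        {
            MethodInfo main = type.GetMethod(
                "Main",
                BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Static);
            if (main == null)
                continue;

            object[] arguments =
                main.GetParameters().Length == 1 ? new object[] { new string[0] } : new object[0];
            object exitCode = main.Invoke(null, arguments);

            bool passed = exitCode == null || Convert.ToInt32(exitCode) == 0;
            Console.WriteLine("{0}: {1}", type.FullName, passed ? "passed" : "FAILED");
        }
    }
}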

Second try: dynamic compilation

Jamie was ahead of me and already had the solution. He designed a bunch of helper classes that dynamically load the files, compile them, and execute them. Combined with a CodeSmith template that generates a huge fixture (one case per file), he had this harnessed for NUnit. Jamie is kind of "shy" (sorry Jamie) and does not want to write about the amazing stuff he is doing with testing. So if you want to know how he managed to wrap up the SSCLI tests in NUnit in record time, give him a shout!

That's all for today; I'll have to modify MbUnit to integrate Jamie's work...

posted on Tuesday, July 13, 2004 1:37:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

How do you handle the inclusion of examples in your C# documentation? If you've been asking yourself this question, here's a quick tip for automatic example inclusion.

Let's start with an example. Consider the following dummy class MyClass:

public class MyClass
{
 ...
}

We happily write an example for the MyClass class in a separate XML file (say Doc.xml):

<doc>
  <examples>
    <example name="example1">
       public class MyLibrary
       {
           public static Main(string[] args)
           {
               MyClass mc = new MyClass();
           }
       }
    </example>
  </examples>
</doc>

Now, the way Visual Studio suggests you include the example is to use the include tag with an XPath expression similar to this:

/// <include file='Doc.xml' path='//example[@name="example1"]' />

This is fine but it can break easily. Moreover, assuming that you have a lot of examples, you may miss some that could be integrated into the documentation.

A much better solution is to use the power of XPath. In fact, what we want to include is all the examples that involve the MyClass class, which in XPath translates to: contains(descendant-or-self::*,'MyClass'):

/// <include 
///     file='MyLibrary.Doc.xml' 
///     path='//example[contains(descendant-or-self::*,"MyClass")]'
/// />

That's it. Instead of using name=... you stop worrying and let XPath find the proper examples for you.

posted on Tuesday, July 13, 2004 1:36:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

I was waiting for this moment for a long time: we have a matrix algebra package for .NET!

dnAnalytics home page: http://www.dnanalytics.net/

posted on Tuesday, July 13, 2004 1:36:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]

From Benjamin's blog:

BOF001: James Newkirk on Integrating Unit Testing into the Software Development Lifecycle.

James Newkirk led a Birds of a Feather session on Test Driven Development. The room was packed to the rafters, showing that unit testing is starting to reach critical mass. Here are my notes on the discussion from the session, which covered how to write tests, how to use tests against legacy systems, how to test against the database and many other topics.

Should we write test code against interfaces or something more abstract than the implementation?
James mentioned that MbUnit is a tool that allows you to test against an interface. The question was whether you should create interfaces that enable tests to be written against them in case further implementations were created in the future. James' attitude was that this might result in wasted work ('you ain't gonna need it') since you may not need it, or may not need it now. Instead, abstract things out when you need them - don't create an interface just to test it.

James also said that an interface is not a good example of the contract of what is being done - it is the name of the method with inputs and outputs, but does not reflect how the method reacts to the input. James writes tests that show the real interaction between someone that calls the code and what it produces.

I must say I'm disappointed to have missed TechEd Amsterdam and this session (it's only a couple of hours from Brussels). Aaaaargghhh!

posted on Tuesday, July 13, 2004 1:35:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

TestFu is a collection of frameworks that help the tester build data generators, state machines, grammars, etc. TestFu v0.1 contains:

I have decided to separate the Production Grammar Framework and the Database Populator Framework from MbUnit so that any user (not only MbUnit users) can benefit from those two frameworks.

Download TestFu

posted on Tuesday, July 13, 2004 1:35:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]

Download addins

The following post gives a list and a quick description of the Reflector addins I have written. The addins are organized by assembly.

Configuring Reflector for the addins

All addins are targeted for .Net v1.0.5 while Reflector is compiled against 1.0.3. This means that you must add/modify the Reflector.exe.config file to match the one contained in the zip file. Please do not send bug reports to Lutz, but directly to me or through the MbUnit issue tracking system.

Reflector.Graph.dll
Addins

Comments
This addin uses QuickGraph and Refly in the background.

Reflector.Graph.Drawing.dll
Addins

Comments
It might still be a little buggy, and it is intensive on system resources.

Reflector.Goodies.dll
Addins

Comments
Be aware that this addin actually loads the assemblies with Reflector, so it will lock those assemblies.

Reflector.TreeMap.dll
Addins

Comments
This addin depends on the TreeMap control from Microsoft Research. Make sure you put TreeMapControl.dll and TreeMapGenerator.dll (from Microsoft TreeMap) in the directory of Reflector.

Reflector.Rules.dll (experimental)
Addins

Comments
Does not do much, still under experimentation.

posted on Tuesday, July 13, 2004 1:34:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [18]

I have added a bunch of classes for database testing in the TestFu project (Download Now):

  • SqlAdministrator: backup, restore, create, or drop databases, plus some helper methods for tables, constraints, etc.
  • SqlExplorer: extracts the schema of a database into a DataSet
  • SqlFixture: a mini, TestFixture-oriented data abstraction layer

Let's see them in action...

SqlAdministrator

SqlAdministrator can be used to:

  • back up a database to a file,
  • restore a database from a file,
  • drop a database,
  • drop a table,
  • drop a constraint,
  • etc.

using System;
using TestFu.Data;

public class Demo
{
    public static void Main(string[] args)
    {
        DbAdministrator admin = new SqlAdministrator("...");    
        
        // backup Northwind
        admin.Backup("Northwind",SqlBackupDevice.Disk,@"c:\Backups\Northwind.bkp");
        
        // drop Northwind
        admin.Drop("Northwind");
        
        // restore Northwind
        admin.Restore("Northwind",SqlBackupDevice.Disk,@"c:\Backups\Northwind.bkp");
    }
}

SqlFixture

DbFixture (SqlFixture for MS SQL Server) can be used as a base class for fixtures involving database testing:

[TestFixture]
public class DatabaseTest : SqlFixture
{
    public DatabaseTest()
    :base("Data Source=testserver;...","MyTestDatabase")
    {}
    
    [SetUp]
    public void SetUp()
    {
        this.Open();
        this.BeginTransaction();
    }
    
    [Test]
    public void Select()
    {
        IDbCommand cmd = this.Connection.CreateCommand("select * from anytable",this.Transaction);
        ...
    }
    
    [TearDown]
    public void TearDown()
    {
        this.Close();
    }
}
posted on Tuesday, July 13, 2004 1:34:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]

Roy Osherove (ISerializable) has come up with a new RollBackAttribute to automatically roll back the transaction for each test case. This is pretty neat because you can write things like:

[TestFixture]
public class Test
{
   [Test,RollBack]
   public void SomeTestWithDatabase()
   {
       // everything we do here
       // is automatically rolled back
   }
}

MbUnit implementation

Roy was kind enough to send me the source code of this attribute, so I integrated it into MbUnit. Roy plans to write an article about the details of the implementation, so I will not discuss it further. The interesting point is that the integration was done in less than 5 minutes, thanks to the extensibility of MbUnit.

Here is part of the implementation of the attribute (I have stripped out the service-related code):

[AttributeUsage(AttributeTargets.Method,AllowMultiple=false,Inherited=true)] 
public sealed class RollBackAttribute : DecoratorPatternAttribute
{
    public override IRunInvoker GetInvoker(IRunInvoker invoker)
    {
        return new RollbackRunInvoker(invoker);
    }

    private class RollbackRunInvoker : DecoratorRunInvoker
    {
        public RollbackRunInvoker(IRunInvoker invoker)
        :base(invoker){}


        public override object Execute(object o, System.Collections.IList args)
        {
            EnterServicedDomain();
            try
            {
                Object result = this.Invoker.Execute(o,args);
                return result;
            }
            finally
            {
                AbortRunningTransaction();
                LeaveServicedDomain();
            }
        }
    
        #region Service stuff
        private void AbortRunningTransaction()
        {...}

        private void LeaveServicedDomain()
        {...}

        private void EnterServicedDomain()
        {...}
        #endregion
    }
}

There are a few things to note about the above implementation:

  • DecoratorPatternAttribute is an abstract attribute for test case decorators, such as ExpectedException, Ignore, etc. The purpose of the attribute is to return an IRunInvoker implementation that acts as a wrapper around another IRunInvoker,
  • this decorator effect can be seen in the RollbackRunInvoker.Execute method, where the actual test case method is called inside a try/finally block; the finally clause aborts the running transaction and leaves the serviced domain, which is what rolls the test's changes back (a sketch of another decorator built on the same extension point follows below).
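
To show how light this extension point is, here is a minimal sketch of a hypothetical decorator built the same way: it simply wraps the invoker to print how long the decorated test case takes. TraceDurationAttribute is my own illustration and is not part of MbUnit; only the DecoratorPatternAttribute, DecoratorRunInvoker and IRunInvoker types shown above are assumed.

using System;

// Hypothetical example, not part of MbUnit: a decorator that times the wrapped test case.
[AttributeUsage(AttributeTargets.Method,AllowMultiple=false,Inherited=true)]
public sealed class TraceDurationAttribute : DecoratorPatternAttribute
{
    public override IRunInvoker GetInvoker(IRunInvoker invoker)
    {
        // wrap whatever invoker is already in place
        return new TraceDurationRunInvoker(invoker);
    }

    private class TraceDurationRunInvoker : DecoratorRunInvoker
    {
        public TraceDurationRunInvoker(IRunInvoker invoker)
        :base(invoker){}

        public override object Execute(object o, System.Collections.IList args)
        {
            DateTime start = DateTime.Now;
            try
            {
                // run the decorated test case
                return this.Invoker.Execute(o,args);
            }
            finally
            {
                Console.WriteLine("Test duration: {0}", DateTime.Now - start);
            }
        }
    }
}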
posted on Tuesday, July 13, 2004 1:33:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

While working on my DotNetNuke upgrade, I screwed up my blog database. I've recovered a backup from June 24; the rest I hope to find in the Google cache...

posted on Tuesday, July 13, 2004 1:32:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]