Monday, November 22, 2010

Ever wanted to host a constraint solver in your web page? It is now possible to host the http://rise4fun.com web site in an iframe (without the chrome). It just looks like this:

posted on Monday, November 22, 2010 3:45:12 PM (Pacific Standard Time, UTC-08:00)      Comments [0]
Monday, June 28, 2010

Is there anything more to say than… try it out now at http://www.pexforfun.com !

posted on Monday, June 28, 2010 2:35:18 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Monday, June 07, 2010

We’ve just released Pex and Moles v0.92. This version brings Rex integration (smarter reasoning about regular expressions), Silverlight support (alpha), and a number of bug fixes and improvements here and there.

Read all about the new stuff on the release notes page. Happy Pexing!

posted on Monday, June 07, 2010 12:13:52 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Saturday, April 24, 2010

We just uploaded a new release 0.91 on MSDN, Visual Studio gallery and our research web site. Learn more about the changes at http://research.microsoft.com/en-us/projects/pex/releasenotes.aspx#0_91

posted on Saturday, April 24, 2010 6:51:44 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Friday, January 15, 2010

We just released Pex 0.21.50115.2. This release brings bug fixes, a big renaming from “Stubs” to “Moles” and improved infrastructure to build behaved types (formerly known as beavers).

Bug Fixes

• Fixed: the Moles VsHost failed to execute unit tests spread across multiple assemblies.
• Fixed: Pex deleted the report folder it was currently writing to.
• Added support for named indexers in Stubs.
• Fixed bugs in how Pex reasons about System.Convert.ToBase64 and DateTime.
• Fixed invalid handling of protected members in stub generation.

Breaking changes

• The Stubs framework was renamed to the Moles framework. We have decided to make Moles the center of the framework and, as a consequence, renamed ‘Stubs’ to ‘Moles’. (This does not mean that we encourage writing untestable code just because Moles helps make it testable. You should still refactor your code to make it testable whenever possible, and only use Moles when that is the only choice.) The impact is the following:
• Microsoft.Stubs.Framework was renamed to Microsoft.Moles.Framework
• The moles and stubs get generated in subnamespaces ‘.Moles’ rather than ‘.Stubs’.
• See below for the list of steps to upgrade your applications.
• BaseMembers in Moles has been deprecated: this helper is not needed, as the same effect can be achieved more cleanly through a constructor, so we removed it to reduce code size. The second reason is that BaseMembers only worked for types inside the same assembly, which could seem inconsistent.
• PexGoal.Reached is replaced by PexAssert.ReachEventually(). The PexGoal class has been integrated into PexAssert through the ReachEventually method which should be used with the [PexAssertReachEventually] attribute.
• PexChoose simplified: we’ve simplified the PexChoose API; you can now get auxiliary test inputs with a single method call: PexChoose.Value<T>(“foo”).
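To illustrate the simplified API, here is a minimal sketch (the label strings are arbitrary; they only name the choices in the generated test code):

```csharp
// Hedged sketch: inside an exploration, PexChoose.Value<T> asks the
// Pex engine to supply an auxiliary test input of type T. The string
// label identifies the choice in the generated tests.
string name = PexChoose.Value<string>("foo");
int    size = PexChoose.Value<int>("size");
```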

Migrating from previous version of Pex

Since we’ve renamed Stubs to Moles, any existing .stubx files will no longer work.

Take a deep breath, and apply the following steps to adapt your projects:

• change the project reference from Microsoft.Stubs.Framework.dll to Microsoft.Moles.Framework.dll
• rename all .stubx files to .moles, and
• rename the top-level <Stubs> XML element to <Moles>.
• Change the XSD namespace to http://schemas.microsoft.com/moles/2010/
• Right click on the .moles file in the Solution Explorer and change the Custom Tool Name to ‘MolesGenerator’.
• Delete all the nested files under the .moles files
• Remove references to any compiled .Stubs.dll files in your project
• In general, remove all .Stubs.dll, .Stubs.xml files from your projects.
• Rename .Stubs namespace suffixes to .Moles.
• replace all [HostType(“Pex”)] attributes with [HostType(“Moles”)]
• in PexAssemblyInfo.cs,
• rename using Microsoft.Pex.Framework.Stubs to Microsoft.Pex.Framework.Moles
• rename [assembly: PexChooseAsStubFallbackBehavior] to [assembly: PexChooseAsBehavedCurrentBehavior]
• rename [assembly: PexChooseAsStubFallbackBehavior] to [assembly: PexChooseAsMoleCurrentBehavior]
• In general, the ‘Fallback’ prefix has been dropped in the following methods:
• rename FallbackAsNotImplemented() to BehaveAsNotImplemented()
• rename class MoleFallbackBehavior to MoleBehaviors
• rename class StubFallbackBehavior to BehavedBehaviors
posted on Friday, January 15, 2010 8:14:07 PM (Pacific Standard Time, UTC-08:00)      Comments [0]
Wednesday, December 30, 2009

It is holiday time, so I got to spend some time playing with CCI. The result is a bunch of interesting assembly mutators. Let’s start with the first one: automatic rich assertion messages.

The problem with Assertions

When an assertion fails, the error message is usually insufficient to understand the failure. The problem is that we usually lack the values of the sub-expressions needed to immediately diagnose the problem. Consider this example:

Assert.True(x == 123);

When this assertion fails, one would like to know what the value of ‘x’ was. Of course, that value is not serialized in the test output, since the assertion method only sees the ‘false’ value. What we would really like to get is something like ‘x (64) != 123’. A number of techniques have been developed to work around this issue.

Solution #1: Specialized Assertions

A common approach is to provide specialized assertion methods with enhanced logging. For example, a special method to test equality:

Assert.Equal(x, 123);

When this assertion fails, the Equal method has the value of both sides of the equality and can render it in the message: ‘expected 123, actual 64’. Unfortunately, plain boolean expressions are more readable to write, and specialized assertions cannot cover every scenario. This takes us to the next solution.

Solution #2: Expression Trees at Run time

Jafar Husain wrote a very interesting post on how to use Expression Trees to generate rich assertion messages (this is also supported in MbUnit).

Assert.True(() => x == 123);

Instead of taking a bool, the assert method takes an expression tree (Expression<Func<bool>>). Thus, when the expression evaluates to false, the expression tree can be traversed to extract interesting values (such as x) and generate a rich log of the failure.
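A minimal sketch of the technique (the helper class is mine, not Jafar's): evaluate the compiled lambda, and on failure render the expression tree itself so the message shows what was being tested rather than just ‘false’. A real implementation would also walk the tree with an ExpressionVisitor to print the values of captured locals such as x.

```csharp
using System;
using System.Linq.Expressions;

static class RichAssert
{
    // Sketch only: compile and evaluate the expression; on failure,
    // include the tree's textual form in the message. Note that
    // captured locals appear as closure-field accesses in the text,
    // so a production version would clean these up while extracting
    // their values.
    public static void True(Expression<Func<bool>> condition)
    {
        if (!condition.Compile()())
            throw new InvalidOperationException(
                "Assertion failed: " + condition.Body);
    }
}

// Usage: int x = 64; RichAssert.True(() => x == 123);
// fails with a message containing the "x == 123" comparison.
```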

Unfortunately, there is a major drawback to this technique: what used to be 3 MSIL instructions (ldloc.1, ldc.i4 123, ceq) to evaluate the condition becomes hundreds of method calls and millions of instructions executed through the System.Linq namespace. This overhead weighs on the Pex whitebox analysis.

Jafar considers unit test frameworks an extension of the compiler and uses expression trees to access what the compiler knows: expressions and statements. Following his idea takes us to the CCI-based solution.

Solution #3: Assembly Rewriter at Compile time

In the previous approach, expression trees were used to build a little compiler at runtime. A better solution would be to rewrite the expressions at compile time. In other words, we want to build an assembly mutator that takes the original method call and appends code that generates the logging:

Assert.True(x == 123, String.Format("x == 123 where x = '{0}'", x));

The assembly mutator extracts the expression source from the PDB (‘x == 123’), collects the local/variable/field references by traversing the expression tree, generates a String.Format-friendly message, and replaces the assertion call with the overload that takes the additional string.

Hello Common Compiler Infrastructure (CCI) and CciSharp

Of course, to rewrite an assembly at compile time, we need a framework that can read and write MSIL, decompile expressions, etc… This is where CCI (from http://ccimetadata.codeplex.com ) comes in. It lets you manipulate .NET assemblies through an object model and save them back to disk. The assertion message mutator is just one of several mutations that have been or will be implemented as part of CciSharp, a post-compiler for .NET built on top of CCI.

(There’s already a bunch of useful operators in CciSharp that I’ve been coding over the holidays: assigning the value of auto-properties, making auto-properties readonly or lazy (or weakly lazy), or even implementing the DependencyProperty boilerplate automatically. We’ll talk about this later.)

Where can I find it?

CciSharp binaries and sources are available on http://ccisamples.codeplex.com/wikipage?title=CciSharp. Grab it!

posted on Wednesday, December 30, 2009 8:45:27 PM (Pacific Standard Time, UTC-08:00)      Comments [0]
Thursday, December 03, 2009

I’ll be presenting the latest development on using Moles and Pex to unit test SharePoint 2010 (or 2007) Services at SharePoint Connections in Amsterdam, 18th-19th of January.

MSC26: Pex - Unit Testing of SharePoint Services that Rocks!
SharePoint Services are challenging for unit testing because it is not possible to execute a SharePoint Service without being connected to a live SharePoint site. For that reason, most unit tests written for SharePoint are actually integration tests, as they need a live system to run. In this session, we show how to use Pex, an automated test generation tool for .NET, to test SharePoint Services in isolation. From a parameterized unit test, Pex generates a suite of closed unit tests with high code coverage. Pex also contains a stubbing framework, Moles, that allows detouring any .NET method to a user-defined delegate, e.g. replacing any call to the SharePoint Object Model with a user-defined delegate.

posted on Thursday, December 03, 2009 10:45:36 PM (Pacific Standard Time, UTC-08:00)      Comments [0]
Wednesday, October 07, 2009

We have just released a new version of Pex (v0.17.xxx), which brings new features for Moles and more bug fixes.

Per-Instance Moles

Instance methods may be moled differently per object instance. This is particularly useful when multiple instances of the same type need to behave differently in the same test. For example, let’s consider a simple class Bar whose ‘Run’ method checks that two values are not the same. The important point is that we could not handle such cases if we could not mole the behavior on a per-instance basis.

In the test case for Run, we create two instances of MBar (the mole type of Bar) and assign two different delegates to the ‘Value’ getter. Moles are active as soon as they are attached (unlike in the previous version), so we can just call the ‘Run’ method once the moles are set up. In the code below, we use the C# object-initializer syntax, which lets us set properties of a newly created object within curly braces after the ‘new’ call.
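The original listing was a screenshot; the following is a hedged reconstruction, with MBar, ValueGet and the Run signature assumed from the Moles naming conventions:

```csharp
[PexMethod, HostType("Moles")]
public void Run(int left, int right)
{
    // Two moles of the same type, each with its own 'Value' getter;
    // the object-initializer syntax attaches the delegates at creation,
    // and the moles are active as soon as they are attached.
    var b1 = new MBar { ValueGet = () => left };
    var b2 = new MBar { ValueGet = () => right };

    // The mole type converts to the moled type; exact call shape of
    // Run is assumed here.
    ((Bar)b1).Run((Bar)b2); // checks that the two values differ
}
```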

As you may have noticed, we just wrote a parameterized unit test that takes arbitrary ‘left’ and ‘right’ values. We then simply execute Pex to find the relevant values to cover the ‘Run’ method:

Moles for Constructors

In some cases, objects created inside other methods need to be moled to test the actual program logic. With Moles you can replace any constructor with your own delegate, which you can use to attach a mole to the newly created instance. Let’s see this with another simple example, where a ‘Run’ method creates and uses an instance of a ‘Foo’ class.

Let’s say we would like to mole the call to the ‘Value’ property to return any other value. To do so, we would need to attach a mole to a future instance of Foo, and then mole the Value property getter. This is exactly what is done in the method below.
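A reconstruction of that method, from the description above (member names such as MFoo.Constructor and ValueGet follow the Moles conventions but may differ in detail from the original screenshot):

```csharp
[PexMethod, HostType("Moles")]
public void Run(int value)
{
    // Detour Foo's constructor: whenever the code under test executes
    // 'new Foo()', attach a mole to the freshly created instance and
    // detour its 'Value' property getter.
    MFoo.Constructor = foo =>
    {
        new MFoo(foo) { ValueGet = () => value };
    };

    Program.Run(); // internally creates a Foo and reads foo.Value
}
```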

We then run Pex to find that 2 different values are needed to trigger all code paths.

Mole interface binding

All the member implementations of an interface may be moled at once using the new ‘Bind’ method. This smells like duck typing, but with type safety, as the Bind methods are strongly typed. For example, say we want to mole a collection type (Foes) to return a custom list of Foo elements (which need to be moled too). The goal is to test the Sum method…

In the parameterized unit test case, we create an instance of Vector, which implements IEnumerable<int>. Then, we can simply bind the ‘values’ array to the mole to make Vector iterate over the array. The call to Bind will mole all methods of MVector that are implemented by the ‘values’ parameters, effectively redirecting all the enumeration requests to the ‘values’ array.
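A sketch of that parameterized unit test (Vector, MVector and the method under test are assumed names reconstructed from the description):

```csharp
[PexMethod, HostType("Moles")]
public void SumTest(int[] values)
{
    PexAssume.IsNotNull(values);

    // Bind the array to the mole: every IEnumerable<int> member that
    // Vector implements is redirected to the corresponding member of
    // 'values', so enumerating the vector walks the array.
    var vector = new MVector();
    vector.Bind((System.Collections.Generic.IEnumerable<int>)values);

    int sum = MathHelper.Sum(vector); // method under test (assumed name)
    PexAssert.IsTrue(sum >= 0);       // Pex finds inputs that break this
}
```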

When Pex runs through the sample, it correctly understands the data flow of the parameters through the moles and finds inputs that break the ‘sum >= 0’ assertion:

Compilation of Stubs assemblies

When one needs stubs or moles for a system assembly, it does not really make sense to regenerate the stubs each time the solution is loaded. To that end, Stubs ships with a command-line tool that lets you easily compile a stubs assembly into a .dll:

stubs.exe mscorlib

Once the stubs assembly is compiled, i.e. mscorlib.Stubs.dll is created, you simply have to add a reference to it in your test project to start using it. In future versions of Pex, we will provide a better experience to support this scenario.

Bug Fixes

Breaking Changes

• The code generation of Moles was significantly changed. This might mean that you will have to recompile your solution, and adapt all existing uses of Moles.
posted on Wednesday, October 07, 2009 3:02:46 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Wednesday, September 16, 2009

We just released v0.16.40915.5 of Pex. The major highlights of this release are Moles, a lightweight detour framework, and better support for test frameworks.

• Moles, a lightweight detour framework: Moles is a new extension of the Stubs framework: it lets you replace any .NET method (including static methods) with your own delegate.
The Moles framework is not yet feature complete; Moles does not support constructors and external methods, and some types in mscorlib cannot be moled as they interact too deeply with the CLR.
• Pex gets its own HostType: A HostType is a feature of the Visual Studio Unit Test framework that lets specific unit tests run under specialized hosts such as ASP.NET or Visual Studio itself. In order to create reproducible test cases using Moles, we had to implement a HostType that lets tests run under the Pex profiler. This is very exciting because it also opens the door for many uses of Pex such as fault injection, dynamic checking, etc… in future versions of Pex. When generating test cases with Pex, all the necessary annotations are created automatically. To turn the Pex HostType on for hand-written (non-parameterized) unit tests, simply add [HostType(“Pex”)] to your test case.

This feature only works with Visual Studio Unit Test 2008.
• Select your test framework: the first time you invoke ‘Pex’ on your code, Pex pops up a dialog to select your test framework of choice. You can select which test framework should be used by default and, more importantly, where it can be found on disk.

If you do not use any test framework, Pex ships with what we call the “Direct test framework”: in this “framework”, all methods are considered as unit tests without any annotation or dependencies.

These settings are stored in the registry and Pex should not bug you again. If you want to clear these settings, go to ‘Tools –> Options –> Pex –> General’ and clear the TestFramework and TestFrameworkDirectory fields:

• Thumbs up and down: We’ve added thumbs up and down buttons in the Pex result view. We are always looking for feedback on Pex, so don’t hesitate to click them when Pex deserves a thumbs up or a thumbs down.
• Performance: many other performance improvements under the hood, which should keep Pex from hogging too much memory in long-running scenarios.
• Miscellaneous improvements and bug fixes:
• Support for String.GetHashCode(): we now faithfully encode the hashing function of strings, which means Pex can deal with dictionaries where the key is a string.
• Fewer “Object Creation” messages that are not actually relevant.
• In VS 2010, the package used to trigger an exception when the solution loaded. This issue is now fixed.
• In Visual Studio when an assembly had to be resolved manually (i.e. that little dialog asking you where an assembly is), Pex remembers that choice and does not bug you anymore.
• And many other bugs that were reported through the forums.
posted on Wednesday, September 16, 2009 2:16:33 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Tuesday, June 02, 2009

Inversion of Control (IoC) is a very important practice for making code testable. It usually relies on defining interfaces for each external dependency. Various frameworks and techniques, e.g. Dependency Injection (DI) and Service Locator, exist to bind interface implementations to components. In this blog post, we will *not* focus on these aspects of IoC but rather on its main building block: interfaces. This article describes how contracts for interfaces improve IoC… regardless of which DI framework/technique you are using. Contracts for interfaces are a feature of the Code Contracts for .NET tool.

Interfaces are not Contracts

Interfaces are often referred to as contracts in the context of IoC: they define the methods and properties a service should implement, and they are a key factor in achieving loose coupling: one can build code against the interface and plug in various implementations without risk.

Unfortunately, interfaces are not contracts: while they specify what the method signatures should be, they say nothing about functional behavior. To illustrate this, let’s take a look at a well-known, simple interface of the BCL:
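The interface in question is System.IServiceProvider:

```csharp
namespace System
{
    // The well-known BCL service locator interface.
    public interface IServiceProvider
    {
        object GetService(Type serviceType);
    }
}
```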

For this interface, I could naively implement as follows:
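Something along these lines (the class name is mine):

```csharp
public class NaiveServiceProvider : IServiceProvider
{
    public object GetService(Type serviceType)
    {
        // Compiles fine and satisfies the interface signature,
        // yet ignores the requested serviceType entirely.
        return new object();
    }
}
```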

While my implementation is plainly wrong, it fulfills all the requirements of the IServiceProvider interface: a method GetService that returns an object. Of course, had I read the MSDN documentation of GetService, I would have known that the returned object should implement serviceType. Unfortunately, the interface did not tell me anything about that, and compilers don’t understand MSDN documentation either.

Contracts for interfaces

Code Contracts provides an API for design-by-contract (preconditions, postconditions, etc.). It also supports defining contracts for interfaces and abstract classes. Contracts for interfaces can be used to specify the functional behavior of interface members. While they serve as documentation, these contracts can also be turned into runtime checks or leveraged by static checkers.

• Since interface members cannot have method bodies, the contracts are stored in a ‘buddy’ type.
• The [ContractClass]/[ContractClassFor] attributes bind the interface type and the contract type together.
• Contract.Result<object>() is a helper method to refer to the return value, since C# has no syntax for it.

Let’s take a closer look at the body of IServiceProviderContract.GetService. It contains a precondition (Requires) that serviceType should not be null, and a postcondition (Ensures) that the return value should be null or assignable to serviceType:
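The contract class looks roughly like this (a sketch: since attributes cannot be added to the BCL type itself, assume a local declaration of the interface or the Code Contracts reference assemblies):

```csharp
using System;
using System.Diagnostics.Contracts;

[ContractClass(typeof(ServiceProviderContract))]
public interface IServiceProvider
{
    object GetService(Type serviceType);
}

[ContractClassFor(typeof(IServiceProvider))]
abstract class ServiceProviderContract : IServiceProvider
{
    object IServiceProvider.GetService(Type serviceType)
    {
        // Precondition: the requested type must be supplied.
        Contract.Requires(serviceType != null);
        // Postcondition: the result is null or assignable to serviceType.
        Contract.Ensures(Contract.Result<object>() == null ||
            serviceType.IsInstanceOfType(Contract.Result<object>()));
        return null; // dummy body; never executed
    }
}
```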

In this simple case, the contracts capture precisely the specification we found in MSDN. The critical difference is that they are stored in a format compilers and runtimes know very well: MSIL byte code. The benefits are huge:

• documentation: the contracts are stored in a programming-language-agnostic format that can be mined and rendered for your favorite programming language,
• static checking: static analysis tools can (and will) use contracts to find bugs before you execute the code, or to prove it correct with respect to the contracts,
• runtime checking: the runtime checker automatically instruments all implementations of the interface with the interface contracts. Once you’ve specified how an interface should behave, you do not have to repeat yourself when re-implementing it; the rewriter takes care of that,
• automated whitebox testing: tools like Pex can leverage the runtime contract checks to try to find inputs that violate the contracts,
• IoC/DI framework agnostic: it does not matter which DI framework you use; as soon as you use an interface, you can also provide contracts for it.
posted on Tuesday, June 02, 2009 5:14:03 PM (Pacific Daylight Time, UTC-07:00)      Comments [1]
Monday, May 25, 2009

If you’ve been thinking about presenting Pex to your co-workers or your local .NET community, you can use our slide decks at http://research.microsoft.com/en-us/projects/pex/documentation.aspx . The slide decks are there to help you, don’t hesitate to shuffle them, cut them or pick whatever you need in them (and of course, tell us about it).

Cheers, Peli.

posted on Monday, May 25, 2009 10:54:47 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Saturday, May 02, 2009

This is a cool new feature that builds on Stubs, Pex and Code Contracts (in fact, it is really just a consequence of the interaction of the three technologies): stubs that act as parameterized models while complying with the interface contracts.

• Code Contracts provide contracts for interfaces. All implementations of that particular interface will be instrumented with the runtime checks by the rewriter at compile time.
• Stubs is really just a code generator that provides a minimal implementation of interfaces and relies on the C# compiler to build the code. Therefore, if a stub implements an interface with contracts, it will automatically be instrumented by the runtime rewriter as well.
• Pex choices (i.e. dynamic parameter generation) may be used as the fallback behavior of stubs. In other words, whenever a stubbed method without user-defined behavior is called, Pex gets to pick the return value. Since the stub implementation itself may have been instrumented with contracts, we’ve added special handling so that postcondition violations coming from a stub are treated as assumptions. This effectively forces Pex to generate values that comply with the interface contracts.

Let’s see how this works with an example. The IFoo interface contains a single ‘GetName’ method. This method has one precondition (i should be positive) and one postcondition (the result is non-null). Since interfaces cannot have method bodies, we use a ‘buddy’ class to express those contracts and bind the two types together with the [ContractClass] and [ContractClassFor] attributes:
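A sketch matching that description (the original listing was a screenshot, so the exact names are reconstructed):

```csharp
using System.Diagnostics.Contracts;

[ContractClass(typeof(FooContract))]
public interface IFoo
{
    string GetName(int i);
}

[ContractClassFor(typeof(IFoo))]
abstract class FooContract : IFoo
{
    string IFoo.GetName(int i)
    {
        Contract.Requires(i > 0);                            // i positive
        Contract.Ensures(Contract.Result<string>() != null); // non-null result
        return null; // dummy body; never executed
    }
}
```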

We then turn on runtime rewriting for both the project under test and the test project (Project properties –> Code Contracts –> Runtime checks). As a result, the stub of IFoo automatically gets instrumented with the contracts. We can clearly see this in the generated code of GetName in Reflector below:

The last step is to write a test for IFoo. In this case, we write a parameterized unit test that takes an IFoo instance and an integer, then simply calls GetName.
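Such a parameterized unit test might look like this (a sketch; the class and method names are assumed):

```csharp
[PexClass, TestClass]
public partial class FooTest
{
    [PexMethod]
    public void GetName(IFoo foo, int i)
    {
        PexAssume.IsNotNull(foo);
        string name = foo.GetName(i);
        // No explicit assertion needed: the interface postcondition
        // (result != null) is enforced by the contract instrumentation,
        // and a violation would surface as a contract failure.
    }
}
```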

Since the contract of IFoo.GetName specifies that the result should not be null, we should not see any assertion violation in this test. Moreover, we should see a test exercising the precondition, reported as an expected exception:

Note that all those behaviors rely on extensions to Pex that the wizard automatically inserted in the PexAssemblyInfo.cs file:

where

• [PexAssumeContractEnsuresFailureAtStubSurface] filters postcondition violations in stubs as assumptions,
• [PexChooseAsStubFallbackBehavior] installs Pex choices as the fallback behavior of stubs,
• [PexStubsPackage] loads the Stubs package (this is a standard procedure for Pex packages),
• [PexUseStubsFromTestAssembly] tells Pex to consider stub types when trying to test interfaces or abstract classes.
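Put together, the wizard-inserted lines of PexAssemblyInfo.cs amount to the following (namespaces of the attributes omitted here):

```csharp
// PexAssemblyInfo.cs (assembly-level Pex extension registrations)
[assembly: PexAssumeContractEnsuresFailureAtStubSurface]
[assembly: PexChooseAsStubFallbackBehavior]
[assembly: PexStubsPackage]
[assembly: PexUseStubsFromTestAssembly]
```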
posted on Saturday, May 02, 2009 10:19:16 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Friday, May 01, 2009

We just made a quick release to fix another installer issue related to missing packages. Along the way, we’ve added an Exploration Tree View and Partial Stubs support.

Exploration Tree View

The exploration tree view displays the list of explorations to be executed, running, and finished. It serves as a progress indicator but also as a smooth result explorer. As you browse through the tree, Pex synchronizes the exploration result view with the tree view.

The tree view populates each namespace with the fixture types and exploration methods, and provides visual feedback on the progress of Pex.

When you browse through the exploration and generated test nodes, Pex automatically synchronizes the exploration result display. This makes it really easy to start from a high-level view of the failures and drill into a particular generated test, with stack trace and parameter table.

Partial Stubs

Stubs lets you call the base class implementation of methods as a fallback behavior. This functionality is commonly referred to as Partial Mocks or Partial Stubs and is useful for testing abstract classes in isolation. Stubs inheriting from classes have a “CallBase” property that turns this mode on and off.

Let’s see this with the RhinoMocks example on partial mocks. Given an abstract ProcessorBase class,

we write a test for the Inc method. To do so, we provide a stub implementation of Add that simply increments a counter.
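A reconstructed sketch (the original listings were screenshots; SProcessorBase and the delegate field name follow the Stubs naming conventions):

```csharp
public abstract class ProcessorBase
{
    public abstract int Add(int a, int b);

    // Concrete logic we want to test: Inc delegates to the abstract Add.
    public int Inc(int x)
    {
        return this.Add(x, 1);
    }
}

// In the test: stub only Add; CallBase lets any unstubbed member
// fall back to the base class implementation.
int calls = 0;
var processor = new SProcessorBase
{
    CallBase = true,
    AddInt32Int32 = (a, b) => { calls++; return a + b; }
};
Assert.AreEqual(5, processor.Inc(4));
Assert.AreEqual(1, calls);
```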

Miscellaneous

• PexAssume/PexAssert.EnumIsDefined checks that a particular value is defined in an enum.
• Fixed: missing OpenMP files in the installer broke Pex.

Poll: should we skip 0.13 and go straight for 0.14? :)

posted on Friday, May 01, 2009 1:28:25 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Saturday, April 25, 2009

I recorded a Channel9 video last week with Manuel Fahndrich on the interaction of Code Contracts and Pex. The demo gives a glimpse at the nice interaction between Contracts (design by contracts) and Pex (automated white box).

Code Contracts gives you a great way to specify what your code is supposed to do. These contracts can be leveraged for documentation and static checking, but also – tada – by Pex! Contracts can be turned into runtime checks which Pex will try to explore. Pex will try to cover the postconditions and assertions – in other words, find inputs that violate your contracts. By adding contracts to your code, you give Pex a ‘direction’ to search in (and a test oracle).

Drop me a note if you want more of those movies.

posted on Saturday, April 25, 2009 9:09:57 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Wednesday, April 22, 2009

Update: Andrew Kazyrevich does the final performance analysis on his blog.
Update II: Stubs v0.12.40430.3 has Partial Stubs support.

Stubs is a lightweight stub framework that is entirely based on delegates. We designed it to bring as little overhead as possible; as a side effect, it works very well with Pex. In this post, we’ll see how Stubs ‘looks’ compared to other frameworks.

Mocking-framework-compare

Andrew Kazyrevich started a project, mocking-framework-compare, back in February, comparing Moq, Rhino Mocks, NMock2 and Isolator across various ‘mocking scenarios’. I added Stubs to the mix today (go check it out).

What’s a stub in Stubs anyway?

Before we dig in, let’s recap what stubs look like when you use Stubs. Stubs uses delegates to ‘hook’ behavior to interface members. For every stubbed interface and class, code is generated at compile time: one stub class per interface or class. For each stubbed method, the stub class contains a field whose delegate type matches the stubbed method. As a user, you simply attach delegates (or lambdas) to these fields to assign *any* behavior to each member. If no user delegate is set, Stubs falls back to a default behavior, i.e. throw or return the default value.

Let’s take one of the examples of mocking-framework-compare. The little snippet below ‘stubs’ the TouchIron() method (which is implemented explicitly) by attaching a lambda to the TouchIron field.
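The snippet in question, reconstructed (SIHand, TouchIron, Iron.IsHot and BurnException are names taken from the sample and the Stubs naming convention; details may differ):

```csharp
var hand = new SIHand
{
    // Attach a lambda to the delegate field generated for
    // IHand.TouchIron; this becomes the stub's behavior.
    TouchIron = iron =>
    {
        if (iron.IsHot)
            throw new BurnException();
    }
};
```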

SIHand is the stub type of the IHand interface; it was generated at compile time by the Stubs framework. Its implementation looks more or less like this:
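Roughly (a hand-written approximation of the generated code; the fallback shown here is the ‘throw’ flavor):

```csharp
public class SIHand : IHand
{
    // One delegate field per stubbed method.
    public Action<Iron> TouchIron;

    // Explicit interface implementation: run the user-attached
    // delegate if there is one, otherwise fall back.
    void IHand.TouchIron(Iron iron)
    {
        var stub = this.TouchIron;
        if (stub != null)
            stub(iron);
        else
            throw new NotImplementedException(); // fallback behavior
    }
}
```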

The TouchIron method really shows the main idea behind Stubs: delegates are all we need to move behavior around.

Stubs: relying on the language rather than an API

One of the differences between Stubs and other mocking frameworks is that Stubs does not have an API of its own: delegates, lambdas and closures are features of the programming language, while assertions are part of the test framework. This is quite obvious when we compare the BrainTest using Moq and using Stubs. In this test, we need to ensure that the brain coordinates the hand and the mouth so that when the hand touches a hot iron, the mouth yells.

• Moq: we set up the hand to throw a burn exception when touched with a hot iron and verify that the mouth yelled.

Moq, like Rhino or Isolator, makes smart use of expression trees and/or Reflection.Emit to define the mock behavior. The expectations and behavior are set through nice, fluent APIs which really make the developer’s life easier.

• Stubs: same scenario, we attach a lambda that throws if iron is hot and use a local to verify that Yell is called (this local is pushed to the heap by the compiler).
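In code, the Stubs version looks roughly like this (reconstructed; stub types, the Brain constructor, and the triggering call are assumed from the description):

```csharp
bool yelled = false; // captured local, lifted to the heap by the compiler

var hand = new SIHand
{
    TouchIron = iron => { if (iron.IsHot) throw new BurnException(); }
};
var mouth = new SIMouth
{
    Yell = () => yelled = true // closure records the side effect
};

var brain = new Brain(hand, mouth);
brain.TouchIron(new Iron { IsHot = true }); // exact Brain API assumed
Assert.IsTrue(yelled);
```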

Stubs does not have any API of its own. It relies on lambdas (to define new behaviors) and closures (to track side effects), which come for free with the C# 3.0 language.

Performance numbers

Does performance matter when it comes to mocking? That’s not really the question we are trying to answer here; we’re just looking at the benchmark results from the mocking-framework-compare project. You’ll notice that Stubs can be 100x to 1000x faster than other frameworks. This is no surprise, since a stub call boils down to a virtual method call, while other frameworks do much more work (in fact, we expected worse numbers).

Data units of msec resolution = 0.394937 usec

Mocking methods.
Moq      : 100 repeats:  37.471 +- 12%   msec
Rhino    : 100 repeats:  38.030 +- 7%    msec
NMock2   : 100 repeats:  24.035 +- 4%    msec
Stubs    : 100 repeats:   0.115 +- 8%    msec

Mocking events.
Moq      : 100 repeats:  86.913 +- 7%    msec
Rhino    : 100 repeats:  61.142 +- 6%    msec
NMock2   : 100 repeats:  27.378 +- 6%    msec
Stubs    : 100 repeats:   0.071 +- 6%    msec

Mocking properties.
Moq      : 100 repeats:  82.434 +- 6%    msec
Rhino    : 100 repeats:  47.471 +- 5%    msec
NMock2   : 100 repeats:  11.334 +- 10%   msec
Stubs    : 100 repeats:   0.042 +- 15%   msec

Mocking method arguments.
Moq      : 100 repeats: 142.668 +- 4%    msec
Rhino    : 100 repeats:  45.118 +- 5%    msec
NMock2   : 100 repeats:  22.344 +- 7%    msec
Stubs    : 100 repeats:   0.078 +- 4%    msec

Partial mocks.
Moq      : 100 repeats: 117.581 +- 5%    msec
Rhino    : 100 repeats:  58.827 +- 6%    msec
Stubs    : 100 repeats:   0.054 +- 6%    msec

Recursive mocks.
Moq      : 100 repeats:  92.482 +- 4%    msec
Rhino    : 100 repeats:  40.921 +- 3%    msec
Stubs    : 100 repeats:   0.493 +- 18%   msec
Press any key to continue . . .

Stubs limitations

There are many areas where Stubs falls short or simply has no support. Currently, stub types are only emitted in C#, and partial mocks will be supported in the next version.

Getting started with Stubs

Stubs comes with the Pex installer. If you’re interested in using it, check out the getting-started page on the Stubs project page, where you can also download the full Stubs primer.

Cheers, Peli

posted on Wednesday, April 22, 2009 9:58:03 PM (Pacific Daylight Time, UTC-07:00)      Comments [3]
Friday, April 10, 2009

Today we’ve published a new release of Pex on DevLabs and on our academic downloads page. The highlights of this release are: NUnit, MbUnit and xUnit.net support out of the box, writing parameterized unit tests in VisualBasic.NET and F#, and better Code Contracts support. As always, we encourage you to send us feedback, bugs and stories on our forums at http://social.msdn.microsoft.com/Forums/en-US/pex/threads/ .

NUnit, MbUnit and xUnit.net supported out of the box

Pex now supports MSTest, NUnit, MbUnit and xUnit.net out of the box. Pex will automatically detect which framework you are using by inspecting the assembly reference list, and automatically save the generated tests decorated with the correct attributes for that framework.

The default test framework can also be specified through the global options (Tools –> Options –> Pex –> enter the test framework name in TestFramework).

Writing Parameterized Unit Tests in VisualBasic.NET

While the Pex white box analysis engine works at the MSIL level, Pex only emits C# code for now. In previous releases, this limitation made it impossible to use Pex parameterized unit tests from non-C# code. In this release, we have worked around this problem by automatically saving the generated tests in a ‘satellite’ C# project.

Let’s see this with an example. The screenshot below shows a single VisualBasic.NET test project with a Pex parameterized unit test:

We can right-click in the HelloTest.Hello method and select “Run Pex Explorations”:

At this point, Pex will start exploring the test in the background as usual. This is where the new support comes in: When a generated test comes back to Visual Studio, Pex will save it in a separate C# project automatically (after asking you where to drop the new project):

The generated tests are now ready to be run just as any other unit tests!

Writing Parameterized Unit Tests from F#

Similarly to VisualBasic.NET, we’ve made improvements in our infrastructure to enable writing parameterized unit tests in F#. Let’s see this with a familiar example. We have a single F# library that has xUnit.net unit tests and references Microsoft.Pex.Framework (project Library2 below). In that project, we add a parameterized unit test (hello_test):

We can right-click on the test method name and Pex will start the exploration of that test in the background. Because of the limitations of the F# project system, you absolutely need to right-click on the method name in F# if you want contextual test selection to work. Because the project already references xunit.dll, Pex will also automatically detect that you are using xUnit.net and use that framework. When the first test case comes back to Visual Studio, Pex saves it in a separate C# project:

The tests are saved in the generated test project and ready to be run by your favorite test runner!

PexObserve: Observing values, Asserting values

We’ve completely re-factored the way values can be logged on the table or saved as assertions in the generated tests. The following example shows various ways to log and assert values:
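The code screenshot from the original post is missing here. A minimal sketch, reconstructed from the description below, might look like this; the exact signatures of the PexObserve helpers shown are assumptions:

```csharp
// Hedged sketch: PexObserve helper names/signatures are assumptions.
[PexMethod]
public string Observe(int input, out int output)
{
    output = input + 1;                                // logged and asserted via the out parameter
    PexObserve.ValueForViewing("view input", input);   // shows up in the parameter table
    PexObserve.ValueAtEndOfTest("check input", input); // asserted in the generated test
    return "result";                                   // logged and asserted via the return value
}
```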

In the Observe method, we use the return value and out parameter output to automatically log and assert those values. Additionally, we add “view input” on the fly to the parameter table through the ValueForViewing method, and we add “check input” to be asserted through the ValueAtEndOfTest method. After running Pex, we get the following results:

As expected, input, ‘view input’, output and result show up in the parameter table.

In the generated test, we see assertions for the return value, out parameters and other values passed through the ValueAtEndOfTest method.

Code Contracts : Reproducible generated tests

When Pex generates a unit test that relied on a runtime contract, Pex also adds a check to the unit test which validates that the contracts have been injected into the code by the contracts rewriter. If the code is not rewritten when re-executing the unit test, it is marked as inconclusive. You will appreciate this behavior when you run your unit tests both in Release and in Debug builds, which usually differ in how contracts get injected.

Code Contracts:  Automatic filtering of the contract violations

When Pex generates a test that violates a Code Contract pre-condition (i.e. Contract.Requires), there are basically two scenarios: the precondition was on top of the stack and should be considered as an expected exception; or it is a nested exception and should be considered as a bug. Pex provides a default exception filtering that implements this behavior.

Stubs: simplified syntax

We’ve considerably simplified the syntax of stubs by removing the ‘this’ parameter from the stub delegate definition. Let’s illustrate this with a test that stubs the ‘ReadAllText’ method of a fictitious ‘IFileSystem’ interface.
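The illustration did not survive the conversion of the original post; a minimal sketch of the simplified syntax, assuming SFileSystem is the stub type generated for IFileSystem, could look like this:

```csharp
var stub = new SFileSystem();
// old syntax: stub.ReadAllText = (me, file) => "hello world";
// new syntax, without the 'this' parameter:
stub.ReadAllText = file => "hello world";

IFileSystem fs = stub;
Assert.AreEqual("hello world", fs.ReadAllText("foo.txt"));
```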

Stubs: generic methods

The Stubs framework now supports stubbing generic methods by providing particular instantiations of that method. In the following example, the generic Bar<T> method is stubbed for the particular Bar<int> instantiation:
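The original example is missing; the sketch below conveys the idea, but the exact member exposed by the generated stub for instantiating a generic method is an assumption:

```csharp
// Given a hypothetical interface with a generic method T Bar<T>(T value),
// and SFoo, its generated stub type:
var stub = new SFoo();
// attach a behavior for the Bar<int> instantiation only
// (the 'BarOf<int>' member name is an assumption)
stub.BarOf<int>(value => value + 1);

IFoo foo = stub;
Assert.AreEqual(2, foo.Bar<int>(1)); // stubbed instantiation
// foo.Bar<string>(...) would fall back to the default behavior
```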

Stubs and Pex: Pex will choose the stubs behavior by default

We provide a new custom attribute, PexChooseAsStubFallbackBehaviorAttribute, that hooks Pex choices to the Stub fallback behavior. To illustrate what this means, let’s modify slightly the example above by removing the stub of ReadAllText:

If this test was to be run without the PexChooseAsStubFallbackBehavior attribute, it would throw a StubNotImplementedException. However, with the PexChooseAsStubFallbackBehavior attribute, the fallback behavior calls into PexChoose to ask Pex for a new string. In this example in particular, on each call to ReadAllText, Pex will generate a new string for the result. You can see this string as a new parameter to the parameterized unit test. Therefore, when we run this test under Pex, we see different behavior happening, including the “hello world” file:
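A sketch of what such a test could look like; the attribute placement shown is an assumption (the Pex Wizard normally adds the necessary attributes at the assembly level):

```csharp
// assembly-level attribute, normally added by the Pex Wizard:
[assembly: PexChooseAsStubFallbackBehavior]

public partial class FileSystemTests
{
    [PexMethod]
    public void ReadsFile()
    {
        var stub = new SFileSystem(); // ReadAllText deliberately left unstubbed
        IFileSystem fs = stub;
        // instead of throwing StubNotImplementedException, the fallback calls
        // into PexChoose, which asks Pex for a new string on each call:
        string content = fs.ReadAllText("foo.txt");
    }
}
```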

Note that all the necessary attributes are added at the assembly level by the Pex Wizard.

Miscellaneous bug fixes and improvements

• [fixed] Dialogs do not render correctly under high DPI
• When a generic parameterized unit test does not have any generic argument instantiations, Pex makes a guess for you.
• When a test parameter is an interface or an abstract class, Pex now searches the known assemblies for implementations and concrete classes. In particular, that means that Pex will often automatically use the automatically generated Stubs implementations for interfaces or abstract classes.
• Static parameterized unit tests are supported (if static tests are supported by your test framework)
• Better solving of decimal and floating point constraints. We will report on the details later.

Breaking Changes

• The PexFactoryClassAttribute is no longer needed and has been removed. Now, Pex will pick up object factory methods marked with the PexFactoryMethodAttribute from any static class in the test project containing the parameterized unit tests. If the generated tests are stored in a separate project, that project is not searched.
• The PexStore API has been renamed to PexObserve.
• Pex is compatible with Code Contracts versions strictly newer than v1.1.20309.13. Unfortunately, v1.1.20309.13 is the currently available version of Code Contracts. The Code Contracts team is planning on a release soon.

Happy Pexing!

posted on Friday, April 10, 2009 12:06:31 PM (Pacific Daylight Time, UTC-07:00)      Comments [12]
Tuesday, February 03, 2009

Ben presented a talk on Pex a couple months ago at DDD7. The video of the talk is now up on the web. Check it out!

posted on Tuesday, February 03, 2009 8:38:53 AM (Pacific Standard Time, UTC-08:00)      Comments [0]
Thursday, January 15, 2009

Phil Haack has started an interesting discussion on the implementation of a named formatter whose format strings are of the form {name} instead of {0}. In fact, he started with 3 implementations and his readers submitted 2 more! Since Phil was kind enough to package all the implementations (and the unit tests) in a solution, I took the liberty of running Pex on them to see if there were any issues lurking. The goal of this post is not really to show whether those implementations are correct, but to give you an idea of where Pex could be applicable in your code.

But wait, it’s already Unit Tested!

Indeed, Phil diligently wrote a unit test suite that covers many different use cases. Does that mean you are done with testing? The named format syntax can be tricky since it may involve escaping curly braces… What is the named format syntax anyway? It’s pretty obvious that it should let you write something like “{Name} is {Status}” instead of “{0} is {1}”, but that is still pretty vague. In particular, what are the syntactic rules for escaping curly braces (what’s the meaning of {{}{}{{}}}, etc.), and is there even a grammar describing named formats?
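For reference, .NET’s standard composite formatting escapes a literal curly brace by doubling it, which is the behavior the named formatters have to reconcile with:

```csharp
// "{{" produces "{" and "}}" produces "}" in a standard format string
string s = string.Format("{{{0}}}", 42); // s == "{42}"
```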

As is often the case, there is no clear and definitive answer – at least I could not find one. In this post, I’ll show 2 techniques, out of the many patterns for Pex which we documented, that can be used here.

Technique 1: Explore for runtime errors (pattern 2.12 – parameterized stub)

The most basic way of running Pex is to simply apply Pex to a method without adding any assertions. At this point, you are looking for NullReferenceException, IndexOutOfRangeException or other violations of lower-level APIs. Although this kind of testing won’t tell you whether your code is correct, it may give you back instances where it is wrong. For example, we can pass an arbitrary format string and object into the format method; the code below is a parameterized unit test for which Pex can generate relevant inputs:

[PexMethod]
public void HenriFormat(string format, object o)
{
    format.HenriFormat(o);
}

Note that it took a couple of runs of Pex to figure out the right set of assemblies to instrument. Fortunately, this process is mostly automated and you simply have to apply the changes that Pex suggests.

The screenshot below shows the inputs generated by Pex. Intuitively, I would think that FormatException is expected and part of properly rejecting invalid inputs. However, there are ArgumentOutOfRangeExceptions triggered inside the DataBinder code that should probably be intercepted earlier: if this implementation uses the DataBinder code, it should make sure that it only passes acceptable values. Whether those issues are worth fixing is a tricky question: maybe the programmer intended to let this behavior happen.

Technique 2: Compare different implementations (pattern 2.7 – same observable behavior)

A second approach is to compare the output of the different implementations. Since all the implementations implement the same (underspecified) named format specification, their behavior should match with respect to any input. This property makes it really easy to write powerful parameterized unit tests. For example, the following test asserts that the Haack and Hanselman implementations have the same observable behavior, i.e. return the same string or throw the same exception type (PexAssert.AreBehaviorsEqual takes care of asserting this):

[PexMethod]
public void HaackVsHansel(string format, object o)
{
    PexAssert.AreBehaviorsEqual(
        () => format.HaackFormat(o),
        () => format.HanselFormat(o)
    );
}

Again, this kind of testing will not tell you whether the code is correct, but it will give you instances where the implementations behave differently, which means one of the two (or even both) has a bug. This is a great technique for testing a new implementation against an old (fully tested) implementation which can be used as an oracle. We ran the test and Pex came back with a list of test cases. For example, the format string “\0\0{{{{“ leads to different outputs for the two implementations. From the different outputs, “\0\0{{“ vs “\0\0{{{{“, it seems that curly braces are escaped differently. If I wanted to dig deeper, I could also simply debug the generated test case.

Comparing All Implementations

Now that we’ve seen that Phil and Scott do not agree, could we apply this to the other formatters? I quickly set up a T4 template to generate the code for all the parameterized unit tests between each pair of formatters. Note that order matters for Pex: calling A then B might lead to a different test suite compared to B then A, just because Pex explores the code in different orders.

<#@ template language="C#" #>
<#
string[] formatters = new string[] {
    "Hansel",
    "Henri",
    "James",
    "Oskar",
    "Haack"
};
#>
using System;
using Microsoft.Pex.Framework;
using Microsoft.Pex.Framework.Validation;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UnitTests;
using StringLib;

namespace UnitTests
{
    [TestClass]
    [PexClass(MaxConstraintSolverTime = 2, MaxRunsWithoutNewTests = 200)]
    public partial class StringFormatterTestsTest
    {
<# foreach (string left in formatters) {
       foreach (string right in formatters) {
           if (left == right) continue;
#>
        [PexMethod]
        public void <#= left #>Vs<#= right #>(string format, object o)
        {
            PexAssert.AreBehaviorsEqual(
                () => format.<#= left #>Format(o),
                () => format.<#= right #>Format(o)
            );
        }
<#
       }
   } #>
    }
}

The answer that Pex returns is interesting and scary: none of the implementations’ behaviors match. This is not too surprising, since they all use radically different approaches to solve the problem: regex vs. string APIs, etc.

Try it for yourself.

The full modified source is available for download here. You will need to install Pex from http://research.microsoft.com/pex/downloads.aspx to re-execute the parameterized unit tests.

posted on Thursday, January 15, 2009 4:38:25 PM (Pacific Standard Time, UTC-08:00)      Comments [1]
Saturday, January 10, 2009

Stubs is a framework for generating stubs (that we announced earlier with Pex). The framework generates stubs (not mocks) that are

• lightweight, i.e. relying on delegates only,
• strongly typed, i.e. no magic strings – refactoring friendly,
• source code generated, i.e. no dynamic code or expression trees,
• great debugging experience, i.e. break and step through stubs,
• minimalistic API (there is almost none),
• friendly for Pex!

More details about Stubs are also available in the stubs reference document.

Quick Start

Let’s start from a simple interface IFileSystem for which we want to write stubs:

public interface IFileSystem
{
    string ReadAllText(string fileName);
    void WriteAllText(string fileName, string content);
}

Stubs relies solely on delegates to attach behaviors, and leverages the lambda syntax from C# 3.0 to accomplish pretty much anything:

// SFileSystem was generated by Stubs and stubs IFileSystem
var stub = new SFileSystem();

// always returns "..."
stub.ReadAllText = (me, file) => "...";

// expectations: checks file == "foo.txt"
stub.ReadAllText = (me, file) =>
{
    Assert.AreEqual("foo.txt", file);
    return "...";
};

// storing side effects in closures: 'written' saves content
string written = null;
stub.WriteAllText = (me, file, content) => written = content;

// hey, we can do whatever we want!
stub.ReadAllText = (me, file) =>
{
    if (file == null) throw new ArgumentNullException();
    return "...";
};

// downcast to the interface to use it
IFileSystem fs = stub;

Anything is possible… as long as C# (or your favorite language) allows it.

Anatomy of a Stub

Each stubbed method has an associated delegate field that can be set freely (e.g. WriteAllText and ReadAllText). If this delegate field is set, it will be used when the method is called; otherwise a default action occurs. Let’s see this with a simplified stub of the IFileSystem interface, SFileSystem, which shows how IFileSystem.WriteAllText is implemented:

class SFileSystem
    : StubBase<SFileSystem>
    , IFileSystem
{
    public Action<SFileSystem, string, string> WriteAllText; // attach here

    void IFileSystem.WriteAllText(string fileName, string content)
    {
        var stub = this.WriteAllText;
        if (stub != null)
            stub(this, fileName, content); // your code executed here
        else
            this.DefaultStub.VoidResult(this);
    }
}

The actual generated code may look more complicated because it contains custom attributes for debugging, comments, and globally qualified types to avoid name clashes.

Code generation: How does it work?

Stubs is a single-file generator that pre-generates stubs as source code. Stubs also monitors the build and regenerates the source code when a change occurs.

The stub generation is configured through an XML file (a .stubx file) that specifies which stubs should be generated and how. The generated code is saved in a ‘Designer’ file, similarly to other code generation tools (typed datasets, etc.).

Great debugging experience

A cool side effect of simply using delegates: you can step through and debug your stubs! This is usually not the case with mock framework using dynamic code. Below, you can see that one can set breakpoints in the body of a stub and debug as usual.

Where do I get it?

The Stubs framework comes with Pex, but you can use it in any unit testing activity. It provides a simple, lightweight way to define stubs for testing. Pex can be downloaded from http://research.microsoft.com/pex .

posted on Saturday, January 10, 2009 7:41:32 AM (Pacific Standard Time, UTC-08:00)      Comments [2]
Sunday, January 04, 2009

The k-shortest paths problem is to compute the first k shortest paths between two vertices in a directed graph. If you put this in the context of route planning, it gives you k alternative routes in case the shortest path is blocked by snow :) While the single-source shortest path problem is well known and implemented in many languages, there are not many implementations available for this problem, although it has been extensively studied. After looking around, I stumbled on a nice article comparing various approaches. The authors pointed out an algorithm from 1959 by Hoffman and Pavley that solves this problem (there are actually many others). This algorithm looked like a good fit:

• it requires a single call to a single-source shortest path algorithm, whereas other approaches require as many as kn calls to a shortest path algorithm,
• it sounded simple and did not require new specialized data structures.

I want to try it!

The algorithm is available in QuickGraph 3.1.40104.00 and up. You can take a look at it at http://www.codeplex.com/quickgraph/SourceControl/changeset/view/29858#377982. To use it on a BidirectionalGraph,

IBidirectionalGraph<TVertex, TEdge> g = …;
foreach(IEnumerable<TEdge> path in g.RankedShortestPathHoffmanPavley(weights, source, goal, 4))
…

A glimpse at the Hoffman-Pavley algorithm

The algorithm works in 2 phases.

In the first phase, we build a minimum ‘successor’ tree towards the goal vertex. This tree can be used to build a shortest path from any (reachable) vertex to the goal vertex. To build it, we simply apply Dijkstra’s shortest path algorithm on the reversed graph, which can be done in a couple of lines with QuickGraph.

As with many other k-shortest path algorithms, the second phase works by building deviation paths and picking the best one. In the case of the Hoffman-Pavley algorithm, it works as follows: pick the latest shortest path; for each vertex of this path, build a deviation path (more on this later) for each out-edge and add it to a priority queue. Then start again:

var queue = new PriorityQueue<Path>();
queue.Enqueue(shortestPath);
while (queue.Count > 0)
{
    var path = queue.Dequeue();
    foreach (var vertex in path)
        foreach (var edge in graph.OutEdges(vertex))
            queue.Enqueue(CreateDeviation(path, vertex, edge));
}

A deviation path is composed of three parts:

1. the initial part of the ‘seeding’ path, i.e. the edges before the deviation edge,
2. the deviation edge,
3. the remaining shortest path to the goal.

When we build the deviation path, we also compute its weight. A nice property of deviation paths is that they can be ‘built’ lazily, when needed. This saves a lot of space and computation, as most deviation paths will probably not end up in the winning set of paths – instead of storing a full path, we store a path index and an edge index.
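A compact representation along these lines could look like the sketch below (not QuickGraph’s actual types):

```csharp
// Sketch: a deviation path is stored as indices plus a precomputed weight;
// the actual edge sequence is only materialized if the path is selected.
struct DeviationPath
{
    public int ParentPathIndex;    // which previously emitted path we deviate from
    public int DeviationEdgeIndex; // position of the deviation edge on that path
    public double Weight;          // prefix weight + deviation edge weight
                                   // + shortest-path weight from its target to the goal
}
```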

That’s it! The details contain more code to deal with self-edges and loops but the main idea is there. This is definitely a very elegant algorithm!

Algorithm authors usually illustrate their approach with an example. This is a good way to get started on a small graph example and ensure that the algorithm works as the original author expected. This is the kind of unit test I get started with.

The next step is to apply the algorithm to a large number of graph instances. Unfortunately, I do not have any other k-shortest path algorithm, so the oracle is harder to build here. Nevertheless, the result of the algorithm, i.e. the shortest path collection, has a couple of properties that should always be true:

• the algorithm produces loopless paths,
• the weight of path k is lower than or equal to the weight of path k + 1.

The problem with this test is that it does not guarantee that no shortest path has been missed. At this point, I’m a bit puzzled about how to test that.

posted on Sunday, January 04, 2009 11:05:14 PM (Pacific Standard Time, UTC-08:00)      Comments [1]
Thursday, January 01, 2009

When you implement an algorithm that computes an optimal solution (i.e. minimum/maximum with respect to a cost function), how do you test that the solution is actually optimal?

This is the kind of question I face when implementing algorithms in QuickGraph. For example, I was recently looking at the minimum spanning tree (MST) of a graph. While checking that the result tree is a spanning tree is easy, checking that it is minimum is not obvious: nobody has written Assert.IsMinimum yet :). Here are a couple of techniques that I found useful along the way:

Input-Output table

The most obvious approach is to pre-compute the result for a number of problems and assert the solution of the algorithm matches. In this MST case, use a small set of graphs for which the MST is well known, and check that the computed MST has the correct weight. This approach will take you only so far and requires a lot of manual work since you need to solve the problem (or find a known solution) for a number of representative cases.

Solution Perturbation

If the algorithm computes a solution that is minimal with respect to a cost function, one can try to perturb the solution to see if there’s a smaller one. If so, this clearly violates the fact that the solution should be minimal, and you just found a bug. In the case of MST, this means randomly picking edges that are not in the minimum spanning tree, swapping them with tree edges, and evaluating whether the resulting tree is smaller.

This kind of approach is actually used in optimization where the search space might have local minima (see simulated annealing).
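For the MST case, the perturbation check can be sketched as follows; `CycleEdges` and `Weight` are hypothetical helpers for illustration, not QuickGraph APIs:

```csharp
// For every non-tree edge, exchanging it with any tree edge on the cycle it
// closes must never produce a lighter spanning tree; otherwise the MST is wrong.
foreach (var candidate in graph.Edges.Except(mst))
{
    foreach (var treeEdge in CycleEdges(mst, candidate)) // hypothetical helper
    {
        var perturbed = mst.Except(new[] { treeEdge })
                           .Concat(new[] { candidate });
        Assert.IsTrue(Weight(perturbed) >= Weight(mst)); // hypothetical Weight helper
    }
}
```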

Multiple Implementations

A nice thing about graph problems is that there are usually many (vastly) different ways to solve them. If multiple implementations are available, we can simply compare their results against each other and make sure that they match. Each algorithm might have bugs, but it is unlikely that they share common ones. Since we now have a good oracle, we can apply this approach to a large number of inputs to increase our coverage.

In the MST case, two popular algorithms are Prim’s and Kruskal’s. These are 2 very different approaches: Prim’s is built on top of Dijkstra’s single-source shortest path, while Kruskal’s is built on top of the disjoint-set data structure. By carefully picking the weights of the edges, we can assert different things:

• if the edge weights are all equal, any spanning tree is minimal, so we can compare the result to a depth-first-search algorithm (which can easily compute a spanning tree).
• if some edge weights are different, there may be many minimum spanning trees. In this case, we can still assert that the weight of the tree is minimal.
• if all the edge weights are different, then the MST is unique. This fact can be used to precisely pinpoint differences in solutions during testing.
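The cross-check itself can be as simple as comparing total weights; the extension-method names below are assumptions sketched after QuickGraph’s style, not its verified API:

```csharp
// Both algorithms must agree on the minimum total weight, even when the
// minimum spanning tree itself is not unique.
double primWeight    = graph.MinimumSpanningTreePrim(weights).Sum(e => weights(e));
double kruskalWeight = graph.MinimumSpanningTreeKruskal(weights).Sum(e => weights(e));
Assert.AreEqual(primWeight, kruskalWeight);
```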

There is a corner case that needs to be checked: if all the algorithms are no-ops, i.e. they don’t do anything, their solutions will always match!

Happy New Year!

posted on Thursday, January 01, 2009 9:03:14 AM (Pacific Standard Time, UTC-08:00)      Comments [0]
Tuesday, December 30, 2008

I recently implemented a disjoint-set data structure in QuickGraph; it is the main building block of Kruskal’s minimum spanning tree algorithm. This is a fun data structure to look at, as its purpose is quite different from the ‘main’ BCL collections. The disjoint-set is useful to partition elements into sets, and defines 2 main operations for that purpose: Find finds the set an element belongs to (and can be used to check whether 2 elements are in the same set); Union merges two sets.
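A minimal usage sketch of the two operations, using ForestDisjointSet<int>, the QuickGraph implementation exercised by the tests below:

```csharp
var ds = new ForestDisjointSet<int>();
ds.MakeSet(1);
ds.MakeSet(2); // two singleton sets: {1}, {2}

bool same = ds.AreInSameSet(1, 2); // false: separate sets
ds.Union(1, 2);                    // merge into {1, 2}
same = ds.AreInSameSet(1, 2);      // true
```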

The source of the disjoint-set is in the QuickGraph source, if you are curious about the details.

Testing the disjoint-set

There is not much left to TDD when it comes to writing a data structure. Such data structures are usually described in detail in an article, and you ‘just’ have to follow the author’s instructions to implement them. Nevertheless, unit testing is really critical to ensure that the implementation is correct – but there is a risk of having to re-implement the algorithm to test it.

For the disjoint-set, I used 2 tools developed at RiSE: Code Contracts and… Pex. When implementing a data structure from the literature, I usually start by ‘dumping’ the code and the contracts. As much as possible, any invariant or property that the author describes should be translated to contracts. This gives Pex more opportunities to find bugs in my code.

Contracts First

For example, the contracts for the Union method look as follows:

private bool Union(Element left, Element right)
{
    Contract.Requires(left != null);
    Contract.Requires(right != null);
    Contract.Ensures(Contract.Result<bool>()
        ? Contract.OldValue(this.SetCount) - 1 == this.SetCount
        : Contract.OldValue(this.SetCount) == this.SetCount
    );
    Contract.Ensures(this.FindNoCompression(left) == this.FindNoCompression(right));

    // ... implementation ...
}

The first two Requires clauses do the usual null checks. The first Ensures clause checks that if Union returns true, a merge has been done and the number of sets has decreased by 1. The last Ensures checks that left and right belong to the same set at the end of the union.

Note that so far, I did not have to provide any kind of implementation details. The other methods in the implementation receive the same treatment.

A Parameterized Unit Test

I wrote a single parameterized unit test while writing/debugging the implementation of the forest. It could probably have been refactored into many smaller tests, but for the sake of laziness, I used a single one.

The parameterized unit test implements a common scenario: add elements to the disjoint-set, then apply a bunch of union operations. Along the way, we can rely on test assertions and code contracts to check the correctness of the implementation.

[PexMethod]
public void Unions(int elementCount, [PexAssumeNotNull]KeyValuePair<int, int>[] unions)
{

The test takes the number of elements to add and a sequence of unions to apply. The input data needs to be refined to be useful. To that end, we add assumptions, in the form of calls to PexAssume, to tell Pex how it should shape the input data. In this case, we want to ensure that elementCount is positive and relatively small, and that the values in unions are within the [0..elementCount) range.

    PexAssume.IsTrue(0 < elementCount);
    PexSymbolicValue.Minimize(elementCount);
    PexAssume.TrueForAll(
        unions,
        u => 0 <= u.Key && u.Key < elementCount &&
             0 <= u.Value && u.Value < elementCount
    );

Now that we have some data, we can start writing the first part of the scenario: filling the disjoint-set. To do so, we simply add the integers from [0..elementCount). Along the way, we check that the Contains, ElementCount, SetCount all behave as expected:

    var target = new ForestDisjointSet<int>();
    // fill up with 0..elementCount - 1
    for (int i = 0; i < elementCount; i++)
    {
        target.MakeSet(i);
        Assert.IsTrue(target.Contains(i));
        Assert.AreEqual(i + 1, target.ElementCount);
        Assert.AreEqual(i + 1, target.SetCount);
    }

The second part gets more interesting. Each element of the unions array is a ‘union’ action between 2 elements:

    // apply Union for each pair unions[i].Key, unions[i].Value
    for (int i = 0; i < unions.Length; i++)
    {
        var left = unions[i].Key;
        var right = unions[i].Value;

        var setCount = target.SetCount;
        bool unioned = target.Union(left, right);
There are 2 things we want to assert here: first, that left and right now belong to the same set; second, that SetCount has been updated correctly: if unioned is true, it should have decreased by one.

        // should be in the same set now
        Assert.IsTrue(target.AreInSameSet(left, right));
        // if unioned, the count decreased by 1
        PexAssert.ImpliesIsTrue(unioned, () => setCount - 1 == target.SetCount);
    }
}

From this parameterized unit test, I could work on the implementation, refining again and again until all tests passed (I did not write any other test code).

Happy Ending

The resulting test suite generated by Pex is summarized in the screenshot below: the number of elements does not really matter, what is interesting is the sequence of unions performed. This test suite achieves 100% coverage of the methods under test :).

In fact, the Union method involved some tricky branches to cover, due to some optimization occurring in the disjoint-set. Pex managed to generate unit tests for each of the branches.

Thanks to the contracts, the test assertions, and the high code coverage of the generated test suite, I have now a good confidence that my code is properly tested. The End!

(Next time, we will talk about testing minimum spanning tree implementations…)

posted on Tuesday, December 30, 2008 11:41:45 AM (Pacific Standard Time, UTC-08:00)      Comments [0]
Wednesday, November 05, 2008

Check out the session on Code Contracts and Pex on Channel 9. You will learn about the new cool API to express pre-conditions, post-conditions and invariants in your favorite language – i.e. design by contracts (DbC) for .NET and the new Code Digger experience in Pex, and most importantly how DbC and Pex play well together.

http://channel9.msdn.com/pdc2008/tl51/

posted on Wednesday, November 05, 2008 11:10:22 PM (Pacific Standard Time, UTC-08:00)      Comments [4]
Tuesday, October 21, 2008

Wondering what we’ve been up to for the last few months? We’ve been building a very cool development experience on top of Pex that we call Code Digging. Check out Nikolai’s blog post on what it means to you!

posted on Tuesday, October 21, 2008 9:49:02 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]

Have you ever written code that directly uses the .NET file APIs? We probably all have, although we knew it would make the code less testable and dependent on the file system state. As bad as it sounds, it really requires a lot of discipline and work to avoid this: one would need to create an abstraction layer over the file system, which is not a small task (think long/tedious).

// in how many ways can this break?
public static void CleanDirectory(string path)
{
    if (Directory.Exists(path))
        Directory.Delete(path, true);
    Directory.CreateDirectory(path);
}


Abstraction

Fortunately, there is always someone else who got motivated at some point. Ade Miller dug up an abstraction of the file system, the IFileSystem interface, that Brad Wilson had written for the CodePlex client project. Very nice, since it provides a solid foundation for cleanly abstracting away the file system, and thus increases the testability of our code.

// a little better, testable code at least
public static void CleanDirectory(IFileSystem fs, string path)
{
    if (fs.DirectoryExists(path))
        fs.DeleteDirectory(path, false);
    fs.CreateDirectory(path);
}


Mocking

So with this interface we can write code that we’ll be able to test in isolation from the physical file system. That’s great, but there is still a lot of work on the shoulders of the developer: he will have to write intricate scenarios involving mocks to simulate the different possible configurations of the file system. No matter which mock framework he’ll be using (Moq, Rhino, Isolator, …), (1) it’s going to be painful, and (2) he’ll miss cases. It’s probably easy to write a single “happy path”, but especially with the file system there are quite a few realistic “unhappy paths”.

This test case uses Moq to create the scenario where there is a directory already. Although Moq has a very slick API to set expectations, it is still a lot of work to write this basic scenario. (And what exactly is the meaning of “Expect”, the delegate or expression inside, “Returns” and “Callback”?)

[TestMethod]
public void DeletesAndCreateNewDirectory()
{
    var fs = new Mock<IFileSystem>();
    string path = "foo";

    fs.Expect(f => f.DirectoryExists(path)).Returns(true);
    fs.Expect(f => f.DeleteDirectory(path, false)).Callback(() => Console.WriteLine("deleted"));
    fs.Expect(f => f.CreateDirectory(path)).Callback(() => Console.WriteLine("created"));

    DirectoryExtensions.CleanDirectory(fs.Object, path);
}

Modeling

We had our intern, Soonho Kong, work on a Parameterized Model of the File System, built on top of the IFileSystem interface (yes that same interface Brad Wilson published on CodePlex). We say that the model is parameterized because it uses the Pex choices API to create arbitrary initial File System states; Pex “chooses” each such state (actually, Pex carefully computes the state using a constraint solver) to trigger different code paths in the code. You can think of each choice as a new parameter to the test. Or to put this with an example: if your code checks that the file “foo.txt” exists, then the parameterized model would choose a file system state that would contain a “foo.txt” file (or not, in another state, to cover both branches of the program).

So what does it mean for you? Well, the way you write tests that involve the file system changes radically. You simply need to pass the file system model to your implementation. The model is an under-approximation of the real file system (which means that we didn’t model every single nastiness that can occur when the moon is full), but it definitely captures more practically relevant corner cases than we (humans) usually think about. Let’s see this in the following test:

[PexMethod]
public void CleanDirectory()
{
    var fs = new PFileSystem();
    string path = @"\foo";
    try
    {
        DirectoryExtensions.CleanDirectory(fs, path);

        // assert: the directory exists and is empty
        Assert.IsTrue(fs.DirectoryExists(path));
        Assert.AreEqual(0, fs.GetFiles(path).Length);
    }
    finally
    {
        fs.Dir();
    }
}


When we run Pex, we get 7 generated tests. In fact, Pex finds an interesting bug that occurs when a file with the name of the directory to clean already exists. In the Pex Exploration Results window, you can see a ‘dir’-like output of the file system model associated with a particular test case (the fs.Dir() method call outputs that text to the console which Pex captures).

This bug is the kind of corner-case that makes testing the file system so fun/hard. Thanks to the parameterized model (and Soonho), we got it for free. Note also that the assertion in our test is pretty powerful since it must be true for any configuration of the file system (it almost smells like a functional specification to me):

// assert: the directory exists and is empty
Assert.IsTrue(fs.DirectoryExists(path));
Assert.AreEqual(0, fs.GetFiles(path).Length);

Happy modeling!

The full source of PFileSystem will be available in the next version of Pex (0.8).

posted on Tuesday, October 21, 2008 12:15:09 PM (Pacific Daylight Time, UTC-07:00)      Comments [6]
Thursday, October 02, 2008

We are very excited to announce that Pex has a session at PDC 2008. We will be talking about code contracts and Pex, and how they play nicely together. Book it now in your conference agenda!!! (look for ‘Research’ or ‘Pex’ in the session list).

See you there and don’t forget to swing by our booth.

posted on Thursday, October 02, 2008 10:42:09 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Monday, September 22, 2008

How do you write good parameterized unit tests?
Where do they work the best?
Are there some Test Patterns ? Anti Patterns?

This is the kind of question that we have received many times from Pex users. We just released Pex 0.7, which contains a list of patterns and anti-patterns for parameterized unit testing (this is still a draft, but we feel that we already have a number of good patterns that would be helpful for anyone giving Pex a shot):

Note that most of the patterns in this document are not Pex specific and apply to parameterized unit tests in general; including MbUnit RowTest/CombinatorialTest/DataTest, NUnit RowTest, MSTest Data Test, etc…

The ‘triple A’ pattern is a common way of writing a unit test: Arrange, Act, Assert. Even more ‘A’crobatic, we propose the ‘quadruple A’, where we add one more ‘A’ for Assume:
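Stripped of the test framework, the ‘quadruple A’ shape can be sketched in plain C# as follows. This is an illustrative example only (the method and names are made up); in a real Pex test the early return would be a PexAssume call and the check an Assert:

```csharp
// Hypothetical 'quadruple A' sketch in plain C#; with Pex, the early return
// would be PexAssume.IsTrue(divisor != 0) and the check an Assert.
public static class QuadrupleA
{
    // method under test (illustrative)
    public static int Divide(int dividend, int divisor)
    {
        return dividend / divisor;
    }

    public static void DivideRoundTrips(int dividend, int divisor)
    {
        // Assume: rule out inputs this test is not about
        if (divisor == 0) return;
        // Arrange: nothing to set up for such a simple method
        // Act
        int quotient = QuadrupleA.Divide(dividend, divisor);
        // Assert: quotient and remainder reconstruct the dividend
        if (quotient * divisor + dividend % divisor != dividend)
            throw new System.Exception("round-trip failed");
    }
}
```

The extra ‘A’ is what lets a tool instantiate the test: any (dividend, divisor) pair that survives the assumption must satisfy the assertion.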

Pex is an automated white box testing tool from Microsoft Research.

posted on Monday, September 22, 2008 3:41:15 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Wednesday, September 17, 2008

In the previous post, we implemented the insertion method of a binary heap using Test Driven Development (TDD) and parameterized unit tests (I'll leave the full implementation of the insertion method as an exercise).

In this post, we will take a closer look at the development flow that we used and show how it relates to traditional TDD. For many people, combining TDD and automated test generation makes no sense. I believe this is not true anymore, and this is what this post is about.

Test Driven Development Flow

TDD has a well-defined flow where developers

1. write a unit test,
2. run the test and watch it fail,
3. fix the code,
4. run the test and watch the test pass. Start again (I'll skip refactoring in this discussion)

During this flow, practitioners also refer to the green state, when all tests are passing, and the red state, when some tests are failing. The picture below depicts this little state machine.

A key aspect of this approach is that the design of the API is inferred by writing the 'scenarios', i.e. tests. Therefore, unit tests are a critical building block of the TDD flow.

Note also that xUnit-like test frameworks (pick your favorite framework here) provide the automation tools so that the execution of the test and the investigation of a failure is painless for the programmer.

Test Driven Development Flow With Parameterized Unit Tests

Parameterized unit tests and Pex change the TDD flow while retaining its essence: building the design from tests. Here are the steps, we'll discuss them in detail later on:

1. Write a parameterized unit test,
2. Run Pex and watch some generated unit tests fail,
3. Fix the code,
4. Run Pex again:
   a. some previously generated unit tests now pass, and at least one new failing unit test gets generated (go to 3),
   b. all generated tests are passing; start again (go to 1)

The key difference is the shortcut from step 4 (generating unit tests) to 3 (fix the code), without passing through step 1 (write a new test). This is illustrated by the yellow feedback loop in the diagram below:

At the risk of repeating myself, let me emphasize some important points here:

• you still need to write unit tests, it's just that they can have parameters: Pex generates unit tests from parameterized unit tests by instantiating them. The person who writes the parameterized unit test is you, not the tool!
• it is still about design: using parameterized unit tests is as much about design as closed (i.e. parameterless) unit tests. In fact, one can argue that parameterized unit tests are way closer to a specification than closed unit tests.
• it is test first: in case it was not obvious by now :)
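To make the first two points concrete, here is an illustrative side-by-side sketch (the names are made up): the closed test pins one concrete scenario, while the parameterized test states a property that must hold for every input a tool generates.

```csharp
// Illustrative only: a closed unit test vs. a parameterized one for the same API.
public static class AdditionTests
{
    // closed unit test: one concrete scenario, no parameters
    public static void TwoPlusThree()
    {
        if (2 + 3 != 5) throw new System.Exception("closed test failed");
    }

    // parameterized unit test: a property over a whole equivalence class;
    // with Pex this would be tagged [PexMethod] and instantiated by the tool
    public static void AdditionIsCommutative(int a, int b)
    {
        if (a + b != b + a) throw new System.Exception("commutativity failed");
    }
}
```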

The shortcut from 4 to 3

As mentioned above, the main difference in the flow is the jump from running the tool (step 4) to fixing the code (step 3), without writing new tests (step 1). This happens because a parameterized unit test captures equivalence classes rather than a single scenario like closed unit tests. As a result,

• you spend more time fixing/implementing the code: the nice thing about the shortcut is that you can spend more time writing the code, rather than writing tests.
• you can leverage automated white-box testing tools: Pex tries to get the maximum coverage out of your parameterized unit test (remember that getting coverage also means covering the throwing branches in Assert), using an automated white-box code analysis. Now that you have all those manycore CPUs on your motherboard, you can finally make good use of them :)

To Pex and not to Pex

An important aspect of parameterized unit tests (and tools like Pex) is that you do not have to (completely) drop your existing habits: in many cases, it is easier to write closed unit tests! In fact, you can always start from a closed unit test and refactor it later. We do not expect users to write parameterized unit tests exclusively (another nice read here), but when you do write them, we expect you'll get much more 'bang for your buck'.

In future posts, we'll discuss different ways to write parameterized unit tests: from refactoring existing unit tests to using test patterns.

To be continued...

Next time, we'll go back to the heap and look at implementing the 'remove minimum' method...

(Pex is an automated structural testing tool from Microsoft Research. More information at http://research.microsoft.com/pex.)

posted on Wednesday, September 17, 2008 5:15:26 PM (Pacific Daylight Time, UTC-07:00)      Comments [6]
Monday, September 08, 2008

The other day I stumbled on a first draft of a new book on algorithms (Data Structure and Algorithms). After taking a peek at the draft, I found some (hidden) motivation to finally write a decent binary heap for QuickGraph. A heap is a critical data structure for Dijkstra's shortest path or Prim's minimum spanning tree algorithms, since it is used to build efficient priority queues.

In this post, we'll start building a binary heap using Test-Driven Development (write the tests first, etc...) and parameterized unit tests.

BinaryHeap?

The heap is a tree in which each parent node has a value smaller than or equal to those of its child nodes. The binary heap is a heap implemented over a binary tree, and to make things more interesting (and fast), the tree is usually mapped to an array using indexing magic:

• parent node index: (index - 1) / 2
• left child node: 2 * index + 1
• right child node: 2 * index + 2

The indexing magic is typically the kind of thing that introduces bugs.
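The mapping can be made explicit with three tiny helpers (hypothetical names, not part of the BinaryHeap below); note how integer division makes the parent formula invert both child formulas:

```csharp
// Illustrative helpers for the array <-> binary tree index mapping.
public static class HeapIndex
{
    public static int Parent(int index) { return (index - 1) / 2; }
    public static int Left(int index)   { return 2 * index + 1; }
    public static int Right(int index)  { return 2 * index + 2; }
}
```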

Let's write that first test

We start by writing a test that simply fills any binary heap with a number of entries. A possible test for this is written as follows:

[PexMethod]
public void Insert<TPriority, TValue>(
    [PexAssumeUnderTest]BinaryHeap<TPriority, TValue> target,
    [PexAssumeNotNull] KeyValuePair<TPriority, TValue>[] kvs)
{
    var count = target.Count;
    foreach (var kv in kvs)
    {
        target.Add(kv.Key, kv.Value);
        AssertInvariant<TPriority, TValue>(target);
    }
    Assert.IsTrue(count + kvs.Length == target.Count);
}

There are a number of unusual annotations in this test. Let's review all of them:

• The test is generic: Pex supports generic unit tests. This is really convenient when testing a generic type.
• PexAssumeUnderTest, PexAssumeNotNull: these basically tell Pex that we don't care about the cases where target or kvs is null.
• We've added an Add(TPriority priority, TValue value) method and a Count property to the BinaryHeap.

There are also two assertions in the test. Inlined in the loop, we check the invariant of the heap (AssertInvariant); we'll fill in this method as we go. At the end of the test, we check the Count property.

BinaryHeap version #0

public class BinaryHeap<TPriority, TValue>
{
    public void Add(TPriority priority, TValue value) { }
    public int Count
    {
        get { return 0; }
    }
}

Now that the code compiles, we run Pex which quickly finds that a non-empty array breaks the count assertion.

BinaryHeap version #1

We have a failing test so let's fix the code by storing the items in a list:

public class BinaryHeap<TPriority, TValue>
{
    List<KeyValuePair<TPriority, TValue>> items = new List<KeyValuePair<TPriority, TValue>>();
    public void Add(TPriority priority, TValue value)
    {
        this.items.Add(new KeyValuePair<TPriority, TValue>(priority, value));
    }
    public int Count
    {
        get { return this.items.Count; }
    }
}

We run Pex again and all the previous failing tests are now passing :)

There's a new object creation event that happened (bold events should be looked at). Remember that the test takes a binary heap as argument, well we probably need to tell Pex how to instantiate that class. In fact, this is exactly what happens when I click on the object creation button:

Pex tells me that it guessed how to create the binary heap instance and gives me the opportunity to save and edit the factory. The factory looks like this and get included automatically in the test project:

[PexFactoryClass]
public partial class BinaryHeapFactory
{
    [PexFactoryMethod(typeof(BinaryHeap<int, int>))]
    public static BinaryHeap<int, int> Create()
    {
        BinaryHeap<int, int> binaryHeap = new BinaryHeap<int, int>();
        return binaryHeap;
    }
}

All our tests are passing, we can write the next test... but wait!

We have an invariant!

The nice thing about data structures is that they have fairly well defined invariants. These are very useful for testing!

In the case of the heap, we know that a parent node's priority should always be less than or equal to the priorities of both its left and right children. Therefore, we can add a method to BinaryHeap that walks the array and checks this property on each node:

[Conditional("DEBUG")]
public void ObjectInvariant()
{
    for (int index = 0; index < this.items.Count; ++index)
    {
        var left = 2 * index + 1;
        Debug.Assert(left >= this.items.Count || this.Less(index, left));
        var right = 2 * index + 2;
        Debug.Assert(right >= this.items.Count || this.Less(index, right));
    }
}

private bool Less(int i, int j)
{
    return false;
}

Remember that AssertInvariant method? Let's call ObjectInvariant from it and run Pex again.

void AssertInvariant<TPriority, TValue>(BinaryHeap<TPriority, TValue> target)
{
    target.ObjectInvariant();
}

Pex immediately finds an issue:

This assertion failure is due to our overly simplified implementation of Less, which always returns false.

Fixing tests, and finding new failing tests

We have failing tests, so it's time to fix the code again. Let's start by fixing the Less method using a comparer:

readonly Comparison<TPriority> comparison =
    Comparer<TPriority>.Default.Compare;

private bool Less(int i, int j)
{
    return this.comparison(this.items[i].Key, this.items[j].Key) >= 0;
}

We run Pex and it comes back with the following tests:

Two interesting things happened here:

• the previous failing test (with [0,0], [0,0]) was fixed by fixing Less
• Pex found a new issue where the input array involves a small key (3584), then a larger key (4098). A correct heap implementation would have kept the smaller key at the first position.

The coolest part is that we did not have to write a single additional line of code to get to this point: Pex updated the previous tests and generated the new failure for us.

This is a new kind of flow that occurs when using Pex in TDD: a code update has fixed some issues but created new ones. We are still moving towards our goal but we did not have to pay the price of writing a new unit test.

In fact, to fulfill the invariant and make this test pass, we will have to write a correct implementation of the Add method... without writing a single additional line of test code :)

to be continued...

posted on Monday, September 08, 2008 10:39:35 PM (Pacific Daylight Time, UTC-07:00)      Comments [1]
Tuesday, August 26, 2008

Update: renamed project to YUnit to avoid clashing with other frameworks.

I've been playing with custom test types for Team Test lately, and the result of this experiment is YUnit: a microscopic test framework that lets you write tests anywhere, since it only uses the Conditional attribute. That's right, any public static parameterless method tagged with [Conditional("TEST")] becomes a test :)

If you've always dreamed of implementing your own custom test type, this sample could be helpful to you. Remember that this is only a sample and comes with no guarantees of support.

Sources and installer available at: http://code.msdn.microsoft.com/yunit.

posted on Tuesday, August 26, 2008 8:26:36 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Wednesday, August 06, 2008

Read on Nikolai's announcement on the latest drop of Pex.

posted on Wednesday, August 06, 2008 11:06:59 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Thursday, July 31, 2008

Alexander Nowak has started a blog post chronicle on Pex and already has 6 episodes to it!

• Pex - test Case 5 (regular expressions)
• Pex - test case 4 (strings and parameter validation)
• Pex - Test case 3 (enums and business rules validation)
• Pex - test case 2
• Pex - test case 1
• Starting with Pex (Program Exploration)

The posts give a nice point of view of Pex from a user's perspective, and compare it against classic testing techniques such as equivalence classes.

posted on Thursday, July 31, 2008 7:25:49 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Tuesday, July 29, 2008

Linear programming problems are usually solved using the simplex algorithm. While it is easy to encode a constraint system of linear equalities and inequalities as a Parameterized Unit Test for Pex, there is currently no way to tell Pex that we want test inputs that are “minimal” according to a custom objective function. However, Pex can still generate *surprising* feasible solutions.

Let's start with a simple set of linear inequalities that define our problem.

[PexMethod]
public int Test(int x, int y)
{
    // PexAssume is used to add 'constraints' on the input;
    // in this case, we simply encode the inequalities in a boolean formula
    PexAssume.IsTrue(
        x + y < 10 & // using bitwise & to avoid introducing branches
        5 * x + 2 * y > 20 &
        -x + 2 * y > 0 &
        x > 0 & y > 0);
    // the profit is returned so that it is automatically logged by Pex
    return x + 4 * y;
}

After running Pex, we get one feasible solution. It is not optimal as expected since we don't apply the simplex algorithm.

Enter overflows

Remember that .NET arithmetic operations will silently overflow unless you execute them in a checked context? Let's push our luck and try to force an overflow by changing the x > 0 constraint to x > 1000:

[PexMethod]
public int Test(int x, int y)
{
    PexAssume.IsTrue(
        x + y < 10 &
        5 * x + 2 * y > 20 &
        -x + 2 * y > 0 &
        x > 1000 & y > 0
        );
    return x + 4 * y;
}

Z3, the constraint solver that Pex uses to compute new test inputs, uses bitvector arithmetic to find a surprising solution that fulfills all the inequalities (our profit has just gone through the roof :)).
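To see why such a solution can exist at all, here is a plain C# re-encoding of the constraint system (unchecked, as .NET Int32 arithmetic is by default) together with a hand-computed witness; the actual values Z3 produced may well differ.

```csharp
// The inequalities from the test, as a plain predicate. In the default
// unchecked context, Int32 arithmetic wraps around, so a huge x can still
// make x + y < 10 hold.
public static class Profit
{
    public static bool Satisfies(int x, int y)
    {
        return x + y < 10 &
            5 * x + 2 * y > 20 &
            -x + 2 * y > 0 &
            x > 1000 & y > 0;
    }
}
```

For example, x = y = int.MaxValue satisfies all five inequalities: x + y wraps to -2, 5x + 2y wraps to a large positive number, and -x + 2y wraps back to int.MaxValue.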

Z3 is truly an astonishing tool!

Checked context

In order to avoid overflow, one should use a checked context. Let's update the parameterized unit test:

[PexMethod]
public int Test(int x, int y)
{
    checked
    {
        PexAssume.IsTrue(
            x + y < 10 &
            5 * x + 2 * y > 20 &
            -x + 2 * y > 0 &
            x > 0 & y > 0
            );
    }
    return x + 4 * y;
}

In fact, in that case Pex generates 2 test cases: one that passes, and one that triggers an OverflowException (an implicit branch).

Stay tuned for more surprising discoveries using Pex.

posted on Tuesday, July 29, 2008 6:35:31 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Wednesday, February 13, 2008

This is a general recommendation if you're planning to use a tool like Pex in the future: make sure that preconditions (i.e. parameter validation) fail in a different fashion than other assertions.

Here's a snippet that shows the problem:

// don't do this
void Clone(ICloneable o)
{
    Debug.Assert(o != null); // pre-condition
    ...
    object clone = o.Clone();
    Debug.Assert(clone != null); // assertion
}

A tool like Pex will explore your code and try to trigger every Debug.Assert it finds on its way. When the assertion is a precondition, the failure is likely expected, and one would like to emit a negative test case (i.e. an 'expected exception' test).

The problem in the snippet above is that both failures yield the same assertion exception, and it will be very difficult to *automatically* triage the failure as expected or not.

How do I fix this?

Make sure different classes of assertions can be differentiated automatically, through different exception types, tags in the message, etc...
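A minimal sketch of that advice (names illustrative): the precondition throws an ArgumentNullException, which a tool can classify as an expected negative test case, while the internal assertion throws a distinct type that flags a genuine bug.

```csharp
// Precondition failures and assertion failures use distinct exception types,
// so a tool can triage them automatically.
public static class Cloner
{
    public static object Clone(System.ICloneable o)
    {
        if (o == null) // precondition: expected failure, negative test case
            throw new System.ArgumentNullException("o");
        object clone = o.Clone();
        if (clone == null) // internal assertion: a real bug in Clone()
            throw new System.InvalidOperationException("Clone returned null");
        return clone;
    }
}
```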

posted on Wednesday, February 13, 2008 9:51:00 AM (Pacific Standard Time, UTC-08:00)      Comments [3]
Thursday, December 06, 2007

Update: this talk has been cancelled.

I'll be giving a talk about Pex in Diegem on January 3.

posted on Thursday, December 06, 2007 8:33:33 AM (Pacific Standard Time, UTC-08:00)      Comments [0]
Wednesday, December 05, 2007

In the previous post, we went through the exploration testing process to exercise a simple method, CheckPositive. In this post, we'll try the same exploration testing, but will let Pex do it.

// method under test
void CheckPositive(int i, bool @throw)
{
    if (i < 0)
    {
        Console.WriteLine("not ok");
        if (@throw)
            throw new ArgumentException();
    }
    else
        Console.WriteLine("ok");
}

// hand-crafted unit tests
[TestMethod]
void Zero()
{
    CheckPositive(0, false);
}

[TestMethod]
void MinusOne()
{
    CheckPositive(-1, false);
}

[TestMethod]
void MinusOneAndThrow()
{
    CheckPositive(-1, true);
}

Exploration testing with Pex

To let Pex explore the CheckPositive, we write a little test wrapper around that method:

[TestClass, PexClass]
public partial class ExplorationTesting
{
    [PexMethod]
    public void Test(int i, bool @throw)
    {
        CheckPositive(i, @throw);
    }
}

We also instrumented the original method with additional method calls to track down the path conditions that Pex computes along the execution traces. Pex generates 3 pairs of values, which are equivalent to the tests we created manually:

• 0, false
• int.MinValue, false
• int.MinValue, true (throws)
posted on Wednesday, December 05, 2007 8:54:15 AM (Pacific Standard Time, UTC-08:00)      Comments [1]
Friday, November 30, 2007

I realized that I had not talked much about how Pex computes the values for the test parameters... the most important part of the tool!

It's not ...

Let's start by getting some wrong ideas out of the way. Parameterized tests are nothing new; they exist in MbUnit, VSTS, xUnit.net, FIT, etc... So what's different with Pex?

• it is not random: when it comes to generating data, the easiest solution is to plug in a random generator. If the state space is big enough (e.g. 2^32 integers), it is highly unlikely that random tests will find the interesting corner cases,

if (i == 123456)
    throw new Exception(); // <-- random won't find this

• it does not require ranges or hints for the data: Pex does not require annotation to specify the range of particular inputs. All the relevant input values are inferred from the code itself (we'll see later how).
• it is not pairwise testing: following the comment above, Pex does not use a pairwise approach.
• it is not data testing: data testing such as FIT or MbUnit RowTest usually consists of rows containing a set of inputs and the expected output. In Pex, you cannot provide the expected output as a 'concrete' value, you need to express it as code (through assertions for example). This is a subtle difference that radically changes the way you write your tests.

[RowTest, Row(0, 1, 1), Row(1, 0, 1)] // data test for the addition
void AddTest(int a, int b, int result)
{
    Assert.AreEqual(result, a + b);
}

[PexMethod] // 0 is the neutral element of the addition operation
void ZeroNeutralTest(int b)
{
    Assert.AreEqual(b, 0 + b);
}

• it is not a static analysis tool: Pex does a dynamic analysis of the code; it analyses the code that *is* running. Pex does this by rewriting the IL before it's jitted and instrumenting it with (many) callbacks to track precisely which IL instruction is being run by the CLR. So yes, Pex analyses the IL but on the fly rather than 'statically'.

Ok, now we've got a better idea of what Pex is not. So how does it work? ....

posted on Friday, November 30, 2007 1:19:05 AM (Pacific Standard Time, UTC-08:00)      Comments [3]
Friday, October 19, 2007

In his Weekly Source Code, Scott Hanselman presents a new CodePlex project, NDepend.Helpers.FileDirectoryPath, from Patrick Smacchia. Nice; better path handling should have been part of the BCL a while ago.

Path stuff is hard

Path normalization and parsing is not an easy task, so when Patrick Smacchia mentioned that his code is "100% unit-tested", I decided to see if Pex could find a little bug over there.

A dumb parameterized unit test

So I wrote the following parameterized unit test, which 'simply' calls the constructor of FilePathRelative. Under the hood, there is some string manipulation done by the library to normalize the path; it should be interesting to see what comes out of this. I also added calls to PexComment.XXX to log the input/output values (Pex will build a table out of this):

[PexMethod]
public void FilePathRelativeCtor(string path)
{
    PexComment.Parameters(path);
    FilePath result = new FilePathRelative(path);
    PexComment.Value("result", result.FileName);
}

Ooops

So Pex starts running and soon enough an assert pops up. Pex had just found a neat little path that broke an assertion in the library:

[Test]
public void FilePathRelativeCtor_String_71019_003302_0_05()
{
    this.FilePathRelativeCtor("/");
}

Opening the reports, I went for the parameter table (remember the PexComment calls), which shows one row for each generated test. In fact, the 5th test that Pex generated was triggering the assert:

Note that "//" also triggers the bug, which seems to indicate that any path ending with "/" will have this behavior.

The path condition

Lastly, I took a quick look at the path condition that Pex solved to discover the bug (see red below). Luckily this one is fairly easy, and one can clearly see path[0] == '/' in there.

What did we learn today?

Handling paths is hard :)

Also, we saw that a dumb parameterized unit test (just calling a ctor) could find bugs. If you use assertions, they will help Pex look for bugs in your code.

posted on Friday, October 19, 2007 1:46:37 AM (Pacific Daylight Time, UTC-07:00)      Comments [3]
Tuesday, October 16, 2007

Update: I will not be at the Seattle Code Camp, too much rescheduling.

I'll be presenting Pex at the Seattle Code Camp in Nov.

Pex – Automated White Box Unit Testing

Parameterized unit testing is becoming a mainstream feature of most unit test frameworks: MbUnit RowTest (and more), VSTS data tests, xUnit.net Theories, etc... Unfortunately, it is still the responsibility of the developer to figure out relevant parameter values to exercise the code. With Pex, this is no longer true. Pex is a unit test framework add-in that can generate relevant parameter values for parameterized unit tests. Pex uses an automated white box analysis (i.e. it monitors the code execution at runtime) to systematically explore every branch in the code. In this talk, Peli will give an overview of the technology behind Pex (with juicy low-level .NET profiling goodness), then quickly jump to exciting live demos.

posted on Tuesday, October 16, 2007 6:36:52 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Wednesday, October 10, 2007

When someone is writing a book that contains code snippets, the question of (automatically) keeping those in sync quickly becomes very important. There are already lots of different solutions to this problem (every author probably has their own); here's yet another one for C# that we've developed to author the Pex documentation.

Goals

A couple of things that we wanted to achieve with this tool:

• snippets are always compilable and run as expected,
• snippets can be full classes, methods or even partial statements
• simple :)

'#region' based solution

This solution uses the #region directive to define a snippet. The region description contains the snippet name, which will be used to dump it into a file. For example, given this piece of C#,

...
#region snippet StackExamplePart3
stack.Push(new object());
#endregion
...

Our parser will extract the code in the region and write it to StackExamplePart3.tex, which gets pulled into our LaTeX scripts.

\begin{verbatim}
stack.Push(new object());
\end{verbatim}

That's it?

Yes, you can author snippets that stay compilable and up to date:

• we can author all the snippets in Visual Studio and we are sure they always compile
• it's very easy to parse the #region's (left as exercise ;))
• #region's are very flexible in terms of what they contain, so we can have snippets containing partial methods, statements, etc...
• the scheme also supports nested regions, which is useful when one explains an example line by line and integrates the entire sample at the end. For example, DeclaringUnitTest is a 'sub'-snippet of UnitTest:

#region snippet UnitTest
#region snippet DeclaringUnitTest
[TestMethod]
void Test(int i)
#endregion
{
}
#endregion

• we can integrate our snippets in unit tests and verify they work as expected
• the tool can be integrated into the build process as a post-build command
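The parsing step really is short. Here is a hedged sketch (our actual tool may differ, and this naive version does not handle the nested regions mentioned above):

```csharp
// A minimal #region-snippet extractor: maps snippet names to their bodies.
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class SnippetParser
{
    public static Dictionary<string, string> Extract(string source)
    {
        var snippets = new Dictionary<string, string>();
        // lazily match up to the first #endregion (so: no nesting support)
        var match = Regex.Match(source,
            @"#region\s+snippet\s+(?<name>\w+)\s*(?<body>.*?)#endregion",
            RegexOptions.Singleline);
        while (match.Success)
        {
            snippets[match.Groups["name"].Value] = match.Groups["body"].Value.Trim();
            match = match.NextMatch();
        }
        return snippets;
    }
}
```

Each extracted body can then be written to its own .tex file wrapped in a verbatim environment.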

posted on Wednesday, October 10, 2007 11:41:32 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Sunday, September 23, 2007

xUnit, the new variation on the 'unit test framework' theme, comes with support for data driven tests: 'Theories' (funny name btw). Pex is a plugin for test frameworks, so we've added support for xUnit as well.

[PexClass] // xUnit does not have fixture attributes
public class MyTests
{
    [Theory, DataViaXXXX] // xUnit theories
    public void Test(int a, ....)
    {}
}

posted on Sunday, September 23, 2007 9:26:32 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Friday, September 07, 2007

In a previous post, we were looking at partial trust and the lack of support for it. In this post, I'll show the key 'fixes' that we made to make Pex 'partial trust' aware.

Simulating Partial Trust

The easiest way to run under partial trust is to run your .NET application from the network. However, in the context of a test framework, this would not work since many required permissions would not be granted (reflection, I/O, etc...). So we need a new AppDomain whose security policy considers the test framework assemblies as fully trusted.

• Get a new AppDomain:

string trust = "LocalIntranet";
AppDomain domain = AppDomain.CreateDomain(trust);

• Load the named permission set:

PermissionSet permission = GetNamedPermissionSet(trust);

• Create the code group structure that associates the partial trust permission with any code:

UnionCodeGroup code = new UnionCodeGroup(
    new AllMembershipCondition(),
    new PolicyStatement(permission, PolicyStatementAttribute.Nothing));

• Give full trust to each test framework assembly:

StrongName strongName = CreateStrongName(typeof(TestFixtureAttribute).Assembly);
PermissionSet fullTrust = new PermissionSet(PermissionState.Unrestricted);
UnionCodeGroup fullTrustCode = new UnionCodeGroup(
    new StrongNameMembershipCondition(strongName.PublicKey, strongName.Name, strongName.Version),
    new PolicyStatement(fullTrust, PolicyStatementAttribute.Nothing));
code.AddChild(fullTrustCode);

• Assign the policy to the AppDomain:

PolicyLevel policy = PolicyLevel.CreateAppDomainLevel();
policy.RootCodeGroup = code;
domain.SetAppDomainPolicy(policy);

This is basically it (the rest of the details are left as an exercise :)).

Let them call you

Make sure to add the AllowPartiallyTrustedCallers attribute to the test framework assembly, otherwise users won't be allowed to call into it...

A twist...

Pex is a bit invasive when it comes to partial trust. Pex rewrites the IL at runtime and turns all method bodies into... unsafe code (that is, unverifiable code). At this point, nothing will run without the SkipVerification permission.

No problemo, just add it to the permission set:

permission.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.SkipVerification));

posted on Friday, September 07, 2007 11:39:22 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Thursday, August 23, 2007

A common requirement for unit test frameworks is the ability to test internal types.

That's easy! use InternalsVisibleToAttribute

With .Net 2.0 and up, this is a fairly easy task thanks to the InternalsVisibleToAttribute: add it to the product assembly to give 'visibility rights' to the test assembly.

// in assembly Foo
internal class Foo { }

// giving visibility rights to the Foo.Tests assembly
[assembly: InternalsVisibleTo("Foo.Tests")]
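One caveat worth noting (a general CLR rule, not specific to Pex): if the product assembly is strong-named, the friend assembly must also be strong-named, and the declaration must include its full public key:

```csharp
// The key is shortened here for illustration; use the output of
// "sn -Tp Foo.Tests.dll" to get the real one.
[assembly: InternalsVisibleTo("Foo.Tests, PublicKey=0024000004800000...")]
```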

On the test assembly side, this works because unit tests are 'closed' methods which do not expose any internal types.

[Test]
public void FooTest()
{
    Foo foo = new Foo(); // we're using the internal type Foo
                         // but it's hidden in the unit test
}

What about parameterized tests? Make them internal as well

If one of the parameters of the test is internal, the test method will have to be internal as well in order to compile:

[PexTest]
internal void FooTest(Foo foo)
{
   ...
}

Not pretty but still gets the job done. Pex will generate public unit test methods that invoke the internal parameterized test method, and we'll be happy:

[Test]
public void FooTest_12345()
{
    this.FooTest(null);
}

This issue never arose with MbUnit's RowTest because it only accepts intrinsic types such as int, long, etc... Those types are obviously public :)

posted on Thursday, August 23, 2007 10:24:08 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Saturday, July 14, 2007

Pex can analyze regular expressions*** and automatically generate strings that match them!

What does it mean? Well, maybe, somewhere deep in your code you are validating some string with a regex (for example a URL). In order to test the validation code, one needs to craft inputs that do not match the regex (easy) and inputs that do match it (harder).
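To make "matching" and "non-matching" inputs concrete, here is a small standalone check against a URL-like pattern (a simplified version of the one used below, not the exact one):

```csharp
using System;
using System.Text.RegularExpressions;

class UrlRegexDemo
{
    static void Main()
    {
        // A simplified URL pattern: a protocol, "://", then a domain.
        string pattern = @"(?<Protocol>\w+)://(?<Domain>[\w@][\w.:@]+)";

        Console.WriteLine(Regex.IsMatch("", pattern));              // False: the trivial non-match
        Console.WriteLine(Regex.IsMatch("foo://foo.com", pattern)); // True: a hand-crafted match
    }
}
```

Crafting the non-match took no thought at all; crafting the match required simulating the automaton in your head. That asymmetry is exactly what Pex automates.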

Let Pex do it:

So what if Pex could be smart enough to understand a regex and craft inputs accordingly? In the example below, it would be very hard for a random generator to produce a string that matches the regex.

[PexTest]
public void Url([PexAssumeIsNotNull]string s)
{
    if (Regex.IsMatch(s, @"(?<Protocol>\w+):\/\/(?<Domain>[\w@][\w.:@]+)\/?[\w\.?=%&=\-@/$,]*"))
        throw new PexCoverThisException(); // random won't find this
}

"" would be a failing match and foo://foo.com a good match. (To generate the correct match, my brain simulated the regex automaton and estimated one possible path.) Interestingly, Pex generates...

Ä€://Ä€Ä

Pretty ugly... but correct! This reminds us that while regexes are used to validate input, what they'll let through is sometimes scary.

Compiled Regex + Pex = Love

The great part about supporting regular expressions is that it comes for free (almost), since Regex can be compiled to IL in .NET. When the BCL generates the regex IL code, it effectively builds the automaton... which can be analyzed by Pex!!! Refresher: Pex works by analyzing the MSIL being executed.

Hey, but not all Regex are compiled!

That's true. Compiling the regex is optional, so Pex needs to do a little bit of 'plumbing' to make sure all regular expressions are compiled. This is simply done by substituting the real .ctor of the Regex class with a customized version that compiles the regex. I'll talk about substitutions in more depth in the future.

*** Of course, the bigger the regex, the harder it is going to be for Pex to craft a successful match.

posted on Saturday, July 14, 2007 2:49:53 PM (Pacific Daylight Time, UTC-07:00)      Comments [1]
Tuesday, July 03, 2007

One of the problems with Pex is that... it's yet another dependency in your test project. Pex has its own attributes, which makes it very difficult to 'strip it out' of the test source. Many teams won't allow assemblies to be checked in, they won't install external components on the build machine, and just forget about touching the source code! So how do you strip Pex?

In "Condition" lies the answer

An elegant solution uses a bit of Reflection, CodeDom and the MSBuild Condition attribute: generate Attribute stubs (shadows) and bind them to the project.
Pex modifies the project file and uses MSBuild conditions to conditionally include files and assembly references in the generated test project.

• A boolean property PexShadows controls the shadowing state: true if shadowed, false or missing otherwise.

    <Project DefaultTargets="Build" ...>
      <PropertyGroup>
        ...
        <PexShadows>true</PexShadows>
      </PropertyGroup>

• A conditional reference to Microsoft.Pex.Framework:

    <Reference Include="Microsoft.Pex.Framework" Condition="$(PexShadows) != 'true'" />


When the property PexShadows evaluates to true, the Microsoft.Pex.Framework assembly is no longer referenced.

• A file containing 'stubs' (shadows) for all the custom attributes in Microsoft.Pex.Framework is generated (automatically of course :)). Pex also dumps the source of several other helper classes. Each generated file is added to the test project conditionally:

    <Compile Include="Properties\PexAttributes.cs" Condition="$(PexShadows) == 'true'">
      <AutoGen>True</AutoGen>
      <DependentUpon>AssemblyInfo.cs</DependentUpon>
    </Compile>

That's it! Flip the value of PexShadows to bind/unbind your test project from Pex.

So what does Visual Studio say?

Visual Studio is totally fooled by the maneuver. It shows the conditional files and references as if nothing happened. Add a menu item in the project context menu to switch it on and off and we are good to go :)

posted on Tuesday, July 03, 2007 10:28:12 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Saturday, June 30, 2007

In Pex, we added the possibility to specify the type under test of a given fixture:

    public class Account { ... }

    [TestFixture, PexClass(typeof(Account))]
    public class AccountTest { ... }

That's nice, but why would it be useful? Beyond the fact that it clearly expresses the 'target' of the fixture, this kind of information can be leveraged by tools like Pex. For example, since we know that Account is the type under test, we can tune Pex to prioritize the exploration of the Account type.

Another interesting side effect is targeted code coverage data. Instead of getting coverage information over the entire assembly, we can directly provide coverage over the type under test: the AccountTest covered xx% of Account.

Still toying around with the concept, one could add a special filtering mode to the command line to execute all tests that target 'Account':

    pex.exe /type-under-test:Account Bank.Tests.dll

posted on Saturday, June 30, 2007 12:23:24 AM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Saturday, May 26, 2007

.NET 2.0 has been out for a while and it seems that 'generics' have not made it into unit test frameworks (that I know of). When I write unit tests for generics, I don't want to have to instantiate them!
For example, if I have a generic interface,

    interface IFoo<T> { ... }

then I'd really like to write this kind of test and let the test framework figure out an interesting instantiation (i.e. a choice of T):

    [Test]
    public void Test<T>(IFoo<T> foo) { ... }

In the example above, System.Object can be trivially used to instantiate IFoo<T>. Of course, things get more interesting when mixing type arguments, method arguments and constraints :)

    interface IFoo<T>
    {
        R Bar<R>(T t) where R : IEnumerable<T>;
    }

In Pex, we've started to look at this problem... stay tuned.

posted on Saturday, May 26, 2007 11:09:00 PM (Pacific Daylight Time, UTC-07:00)      Comments [3]
Thursday, April 19, 2007

With parameterized unit tests, it is not uncommon to generate a large number of exceptions. A basic exception log usually looks like this: a small message and the stack trace. With this kind of output, a lot of work is still left to the user since he has, *for each frame*, to manually open the source file and move to the line referred to by the trace.

Give me the source context!

In the Pex reports, we added a couple lines of code to read the source for each frame and display it in the reports (no rocket science). The cool thing is that you can now get a pretty good idea of what happened without leaving the test report. Multiply that by dozens of exceptions and you've won a loooot of time.

Here are some screenshots to illustrate this: an exception was thrown in some arcane method. The error message is not really useful (as usual). If we expand the source and actually see the code, things become much clearer...

posted on Thursday, April 19, 2007 5:12:40 PM (Pacific Daylight Time, UTC-07:00)      Comments [0]
Tuesday, April 10, 2007

MbUnit supports different flavors of parameterized unit tests: RowTest, CombinatorialTest, etc...
If you are already using those features, it would be very easy for you to 'pexify' them:

    namespace MyTest
    {
        using MbUnit.Framework;
        using Microsoft.Pex.Framework;

        [TestFixture, PexClass]
        public partial class MyTestFixture
        {
            [RowTest]
            [Row("a", "b")]
            [PexTest]
            public void Test(string a, string b)
            { ... }
        }
    }

Isn't this nice? :) Some little notes:

• 'partial' is helpful to emit the generated unit tests in the same class... but not the same file. Pex also supports another mode where partial is not required.
• The Pex attributes do not 'interfere' with the MbUnit ones. Your unit tests will still run exactly the same with MbUnit.

posted on Tuesday, April 10, 2007 9:47:27 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Sunday, April 01, 2007

This is the first post about Pex and how to use it. Since Pex is a fairly large project, I'll probably stretch the content over a large number of entries. Stay tuned...

Pex gives you Parameterized Unit Tests

Data-driven tests are not something new. They exist under different forms such as MbUnit's RowTest or CombinatorialTest, VSTS's DataSource, etc... So what's so special about Pex? The major difference is that Pex finds the parameter inputs for you (and generates a unit test out of it). Whereas the user had to (smartly) guess a set of inputs for the data-driven tests, Pex tries to compute the relevant inputs to those tests automatically (and may also suggest fixes).

Note: The way Pex finds the inputs will be covered in more detail later. In a couple of words, Pex performs a systematic white-box analysis of the program behavior. It tries to generate a minimal test suite with maximum coverage.

Parameterized Unit Tests, what does it feel like?

Parameterized unit tests are methods with parameters. There's nothing magic about that.
Pex provides a set of custom attributes so that you can author them side-by-side with classic unit tests:

    [TestClass, PexClass] // VSTS fixture containing parameterized tests
    public class AccountTest
    {
        // parameterized unit test:
        // account money should never be negative, for any 'money', 'withdraw'
        [PexTest]
        public void TransferFunds(int money, int withdraw) { ... }
    }

When Pex finds interesting data to feed the parameterized unit test, it generates a unit test method that calls the parameterized unit test. This also means that once the unit test has been generated, you do not need Pex anymore to run the repro.

    [TestClass, PexClass] // VSTS fixture containing parameterized tests
    public class AccountTest
    {
        [PexTest]
        public void TransferFunds(int money, int withdraw) { ... }
        ...
        [TestMethod, GeneratedBy("Pex", "1.0.0.0")]
        public void TransferFunds_12345()
        {
            this.TransferFunds(12, 13);
        }
    }

Why do I need parameterized unit tests anyway?

A parameterized unit test generally captures more program behaviors than a single unit test, which is like a micro-scenario. This will become more apparent when we start looking at some examples. If you were already using [RowTest] or [DataSource] as part of your testing, then you will definitely like Pex.

posted on Sunday, April 01, 2007 3:52:37 PM (Pacific Daylight Time, UTC-07:00)      Comments [2]
Monday, March 12, 2007

The Pex screencast was a bit mysterious without sound and comments. I've added a 'storyboard' to help you understand it:

posted on Monday, March 12, 2007 5:31:29 PM (Pacific Standard Time, UTC-08:00)      Comments [1]
Thursday, March 08, 2007

I'm thrilled to present the project I joined last October: 'Pex' (for Program EXploration). Pex is a powerful plugin for unit test frameworks that lets the user write parameterized unit tests**. Pex does the hard work of computing the relevant values for those parameters, and serializing them as classic unit tests. Here's a short screencast where we test and implement a string chunker.
In the screencast, we use a parameterized unit test to express that for *any* string input and *any* chunk length, the concatenation of the chunks should be equal to the original string.

More info on Pex is available at http://research.microsoft.com/pex/.

** It's actually much more than that... but let's keep that for later :)

posted on Thursday, March 08, 2007 9:58:36 AM (Pacific Standard Time, UTC-08:00)      Comments [5]
Saturday, February 24, 2007

posted on Saturday, February 24, 2007 1:09:02 AM (Pacific Standard Time, UTC-08:00)      Comments [2]
Saturday, September 09, 2006

After 2 years in the CLR, I'm moving job (and building) to Microsoft Research. I will be working on Parameterized Unit Testing.

posted on Saturday, September 09, 2006 11:21:39 AM (Pacific Daylight Time, UTC-07:00)      Comments [3]
Monday, July 25, 2005

In this post, I'll show how to build an MsBuild (msdn2) task that automatically generates XSD schemas for your custom tasks. The post is separated in two sections: if you are interested to see how the task works, continue reading. If you don't care and want to get straight down to the beef, scroll down until you see the TaskSchema reference.

#### What is MsBuild? Why do we want to generate schemas?

MsBuild is the new .NET build system. It is based on XML files containing the projects, targets and tasks (see also Ant, NAnt). The framework comes with a set of tasks that allow easy compilation of solutions or projects, along with a couple more actions. Of course, everybody has different needs and you will probably end up writing your own specialized task to solve your problems.

When editing an MsBuild script, it is *very* convenient to take advantage of the "built-in intellisense" of XML by attaching it to the MsBuild schema file. You can do this in VS2005 in two mouse clicks (see below: How to setup intellisense for msbuild). Unfortunately, this is not true for your custom tasks, for which you need to create an XSD schema yourself.
I'll assume you have a basic knowledge of custom msbuild tasks in the following. Note that NAnt has had this feature (nantschema) for a while now.

#### What are we looking for?

Before diving into the coding details, let's see what our "final product" looks like. Let's build a simple task to use as our example during the article: the Sleep task makes msbuild sleep x seconds (note that this implementation is very, very poor):

    using System;
    using System.Threading;
    using Microsoft.Build.Framework;
    using Microsoft.Build.Utilities;

    namespace Foo
    {
        public sealed class Sleep : Task
        {
            private int seconds;

            [Required]
            public int Seconds
            {
                get { return this.seconds; }
                set { this.seconds = value; }
            }

            public override bool Execute()
            {
                this.Log.LogMessage("Sleeping {0} seconds", this.Seconds);
                Thread.Sleep(this.Seconds * 1000);
                return true;
            }
        }
    }

The Sleep task has a single (required) parameter "Seconds". We expect the schema for this task to look as follows:

    <xs:schema xmlns:msb="http://schemas.microsoft.com/developer/msbuild/2003"
               elementFormDefault="qualified"
               targetNamespace="http://schemas.microsoft.com/developer/msbuild/2003"
               xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:include schemaLocation="Microsoft.Build.Commontypes.xsd" />
      <xs:element name="Sleep" substitutionGroup="msb:Task">
        <xs:complexType>
          <xs:complexContent mixed="false">
            <xs:extension base="msb:TaskType">
              <xs:attribute name="Seconds" type="msb:non_empty_string" />
            </xs:extension>
          </xs:complexContent>
        </xs:complexType>
      </xs:element>
    </xs:schema>

There are a couple of things to say about this schema:

1. we include the MsBuild schema so we get all the types that are already defined for msbuild,
2. Sleep is in the substitutionGroup msb:Task,
3. the extension has "msb:TaskType" as base type to inherit the Task properties,
4. since Seconds is an int, we restrict the attribute to msb:non_empty_string. We cannot restrict it to xs:integer, in order to handle cases where the user passes a property, e.g. $(Property),
5. later on, we could add information from the XML documentation into the schema as well.

Now that we know what kind of output we are looking for, let's see what "ingredients" we need.

#### Recipe ingredients

Here’s a little summary of tools and techniques that we will use to generate the schemas:

• System.Reflection: we'll use reflection to enumerate types, find custom attributes, etc...
• System.Xml.Schema: this namespace contains the object model to create schemas (what a surprise!)
• System.Xml: to integrate documentation parts (if available) into the output
• Microsoft.Build.Utilities: this namespace contains the Task base class.

The TaskSchema has the following properties (it actually has more properties but I’ll skip them for simplicity):

[Required]
ITaskItem[] Assemblies { get; set; }

[Output]
ITaskItem[] Schemas { get; }

Assemblies is a list of assemblies containing the tasks to “schematize”. Schemas is an output parameter that will contain the path of each schema, as we will generate one schema per assembly.

This is the high level “factored” Execute method (almost pseudo-code):

CreateAndInitializeSchema(); // add namespace, includes, etc...
foreach (string assemblyName in Assemblies)
{
    // load error handling comes here
    Assembly assembly = LoadAssembly(assemblyName);
    // iterating exported types
    foreach (Type type in assembly.GetExportedTypes())
    {
        // if (type is not a task) continue;
        if (!typeof(ITask).IsAssignableFrom(type)) continue;
        // create a new schema element and name it after the type
        // (a complexType + extension)
        XmlSchemaElement element = CreateElement(type);
        foreach (PropertyInfo property in type.GetProperties())
        {
            // if (property is defined in some base class) continue;
            if (property.DeclaringType != type) continue;
            // create a schema attribute and name it after the property
            XmlSchemaAttribute attribute = CreateAttribute(property);
            // add attribute to current element
            AddAttribute(element, attribute);
        }
    }
}
// saving to disk
SaveSchema(schema);

There are a lot of interesting details to go through to implement this pseudo-code.
I won’t go through all of them. I will rather focus on some parts:

##### Translating enums into XSD

An enumeration can be represented in XSD by a simple type restriction. For example,

[C#]

public enum HelloWorld
{
    Hello,
    World
}

[XSD]

<xs:simpleType name="HelloWorldType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="Hello" />
    <xs:enumeration value="World" />
  </xs:restriction>
</xs:simpleType>
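With the System.Xml.Schema object model mentioned above, producing such a restriction from an enum type could be sketched as follows (the method name is mine, not part of the actual TaskSchema source):

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

static XmlSchemaSimpleType CreateEnumType(Type enumType)
{
    // <xs:simpleType name="HelloWorldType">
    XmlSchemaSimpleType simpleType = new XmlSchemaSimpleType();
    simpleType.Name = enumType.Name + "Type";

    // <xs:restriction base="xs:string">
    XmlSchemaSimpleTypeRestriction restriction = new XmlSchemaSimpleTypeRestriction();
    restriction.BaseTypeName = new XmlQualifiedName("string", "http://www.w3.org/2001/XMLSchema");

    // one <xs:enumeration value="..."/> facet per enum member
    foreach (string name in Enum.GetNames(enumType))
    {
        XmlSchemaEnumerationFacet facet = new XmlSchemaEnumerationFacet();
        facet.Value = name;
        restriction.Facets.Add(facet);
    }

    simpleType.Content = restriction;
    return simpleType;
}
```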

##### Getting custom attributes, the generic way

This is a well-known (and very handy) trick that adds a generic type parameter to a method to "strongly type" it.

T GetAttribute<T>(ICustomAttributeProvider t) where T : Attribute
{
    object[] attributes = t.GetCustomAttributes(typeof(T), true);
    if (attributes != null && attributes.Length > 0)
        return (T)attributes[0];
    else
        return null;
}
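For example, checking whether a task property carries MsBuild's [Required] attribute becomes a one-liner:

```csharp
using System.Reflection;
using Microsoft.Build.Framework;

// 'property' is a PropertyInfo of the task type being analyzed
bool isRequired = GetAttribute<RequiredAttribute>(property) != null;
```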

##### Finding and adding documentation to the schema

Adding documentation to the schema is straightforward: it is a matter of adding an annotation containing a documentation element:

<xs:attribute name="Assemblies" use="required">
  <xs:annotation>
    <xs:documentation>
      Gets or sets the list of paths to analyse.
    </xs:documentation>
  </xs:annotation>
</xs:attribute>

The real trouble is to find what kind of documentation we are going to put there. We cannot rely solely on Reflection for this task because MsBuild custom attributes do not store any "documentation" data. Therefore, the best source of documentation is the XML documentation that is generated by the C# compiler (don't forget to turn it on in Projects -> Properties -> Build). By default, the compiler dumps that file alongside the assembly, so it is easy to find. Here's a sample of how that file looks:

<?xml version="1.0"?>
<doc>
  <assembly>
    <name>Churn.Tasks</name>
  </assembly>
  <members>
    <member name="T:Churn.Tasks.TaskSchema">
      <summary>
      A Task that generates a XSD schema of the tasks in an assembly.
      </summary>
    </member>
    ...

That is sweet. We can access the summary of each member very easily using XPath. For example:

//member[@name='T:taskType.FullName']/summary

This expression returns the summary of the taskType type. Similar constructs can be made for the property summaries.
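A sketch of that lookup with System.Xml (the helper name is mine):

```csharp
using System;
using System.Xml;

static string GetTypeSummary(string xmlDocPath, Type taskType)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(xmlDocPath);
    // e.g. //member[@name='T:Foo.Sleep']/summary
    XmlNode summary = doc.SelectSingleNode(
        "//member[@name='T:" + taskType.FullName + "']/summary");
    return summary == null ? null : summary.InnerText.Trim();
}
```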

##### Features:
• Generates ready-to-use XSD schemas. No editing needed.
• If documentation is available, automatically adds it to the schema,
• Task attributes are strongly typed,
• Generates specific types for enumerations.
##### Attributes:
• Assemblies: Required ITaskItem[] expression. The list of assemblies to analyse.
• OutputPath: Optional string expression. The desired schemas output path.
• Schemas: Output ITaskItem[] expression. For each assembly, the corresponding schema path.
• CreateTaskList: Optional Boolean expression. A value that indicates if TaskList (see below) should be generated as well.

There are a couple more options. Run TaskSchema to have the full schema!

##### Examples:

This little msbuild project applies TaskSchema to itself (it is part of Churn.Tasks.dll):

<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="CreateSchema">

  <UsingTask AssemblyFile="bin\Debug\Churn.Tasks.dll" TaskName="Churn.Tasks.TaskSchema" />

  <ItemGroup>
    <Assemblies Include="bin\Debug\Churn.Tasks.dll" />
  </ItemGroup>

  <Target Name="CreateSchema">
    <TaskSchema
      Assemblies="@(Assemblies)"
      />
  </Target>
</Project>

[Output]

Microsoft (R) Build Engine Version 2.0.50215.44
[Microsoft .NET Framework, Version 2.0.50215.44]
Copyright (C) Microsoft Corporation 2005. All rights reserved.

Build started 5/26/2005 8:19:51 PM.
__________________________________________________
Project "D:\Projects\Churn.NET\Churn.Tasks\taskschema.self.xml" (default targets):

Target CreateSchema:
    Analysing bin\Debug\Churn.Tasks.dll
    Found documentation file at Debug\Churn.Tasks.xml
    Analyzing Churn.Tasks.TaskSchema
    Creating Schema bin\Schemas\Churn.Tasks.Tasks.xsd
    Create Task list bin\Schemas\Churn.Tasks.Tasks

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:00.93

Execute the sample and see for yourself!

posted on Monday, July 25, 2005 10:27:33 PM (Pacific Daylight Time, UTC-07:00)      Comments [5]
Wednesday, July 21, 2004

The API Design Guidelines encourage developers to check all their arguments and thereby avoid throwing a NullReferenceException.  If an argument is null and the contract of the method forbids null arguments an ArgumentNullException should be thrown.

So you agree with Brad (I do) and you always check that arguments are not null before using them. This means a little more code, but it is worth it. It also means a lot more test code because, ideally, you should test that all your methods check all their arguments. That means writing hundreds of boring, repetitive test cases... and you don't want to do that.

At least I don't so I added a new feature to MbUnit that does it for me.

Test for ArgumentNullException, first iteration:

Let's see how it works with an example:

public class ArgumentNullDummyClass
{
public object ClassicMethod(Object nullable, Object notNullable, int valueType)
{
if (notNullable == null)
throw new ArgumentNullException("notNullable");
return String.Format("{0}{1}{2}",nullable,notNullable,valueType);
}
}

As one can see, the nullable parameter can be null, while the notNullable parameter is tested. Now, let's create a fixture that tests this method. We will be using the TestSuiteFixture because we will build a TestSuite:

[TestSuiteFixture]
public class MethodTestSuiteDemo
{
public delegate object MultiArgumentDelegate(Object o,Object b, int i);

[TestSuite]
public ITestSuite AutomaticClassicMethodSuite()
{
ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();

MethodTester suite = new MethodTester(
"ClassicMethod",
new MultiArgumentDelegate(dummy.ClassicMethod),
"hello",
"world",
1
);
return suite.Suite;
}
}

The MethodTester class takes the following arguments: a name, a delegate, and valid parameters for the delegate. By valid, I mean parameters that should not make the delegate invocation fail. The AddAllThrowArgumentNull method looks for nullable parameters and creates a TestCase that invokes the delegate with the corresponding parameter nulled. In the example, this means that ClassicMethod will be called with:

• null, "world", 1
• "hello", null, 1
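For the curious, the core of such a tester can be sketched with plain reflection (this is an illustration of the idea, not MbUnit's actual implementation):

```csharp
using System;
using System.Reflection;

static class NullArgumentChecker
{
    // For each reference-type parameter, invoke the method with that
    // argument nulled and expect an ArgumentNullException.
    public static void CheckNullArguments(object target, MethodInfo method, object[] validArgs)
    {
        ParameterInfo[] parameters = method.GetParameters();
        for (int i = 0; i < parameters.Length; i++)
        {
            if (parameters[i].ParameterType.IsValueType) continue; // cannot be null

            object[] args = (object[])validArgs.Clone();
            args[i] = null;
            try
            {
                method.Invoke(target, args);
                throw new Exception(
                    "Expected ArgumentNullException for parameter " + parameters[i].Name);
            }
            catch (TargetInvocationException e)
            {
                // reflection wraps the callee's exception
                if (!(e.InnerException is ArgumentNullException)) throw;
                // ok: the method correctly rejected the null argument
            }
        }
    }
}
```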

Test for ArgumentNullException, second iteration:

There are things I don't like in the example above:

• you need to create a delegate (tedious),
• you need to create 1 method tester per method (tedious),

Ok, so let's build a ClassTester class that does that for us... The test code now looks as follows:

[TestSuiteFixture]
public class ClassTesterDemo
{
[TestSuite]
public ITestSuite AutomaticClassSuite()
{
ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();
ClassTester suite = new ClassTester("DummyClassTest",dummy);
return suite.Suite;
}
}


That's much better: the delegate is gone, and we could add more methods to be tested in a single call.

Test for ArgumentNullException, third iteration:

There is still one problem with this technique: there is no way to tell that an argument is allowed to be null! In the example, the nullable parameter can be null, so its TestCase will always fail because the method does not throw ArgumentNullException.

The solution to this problem is done in two steps: first, you, the developer, tag the parameters that can be nulled with a NullableAttribute attribute (it could be any of your attributes). In the example, we add a SmartMethod method and the MyNullableAttribute:

[AttributeUsage(AttributeTargets.Parameter,AllowMultiple=false,Inherited=true)]
public class MyNullableAttribute : Attribute
{}

public class ArgumentNullDummyClass
{
public object ClassicMethod(Object nullable, Object notNullable, int valueType)
{...}
public object SmartMethod([MyNullable]Object nullable, Object notNullable, int valueType)
{...}
}

Next, you must tell MbUnit which attribute is used to tag nullable parameters. This is done with the NullableAttributeAttribute at the assembly level:

[assembly: NullableAttribute(typeof(MbUnit.Demo.MyNullableAttribute))]

Ok, now we just need to update our test case to load the SmartMethod:

[TestSuite]
public ITestSuite AutomaticClassSuite()
{
ArgumentNullDummyClass dummy = new ArgumentNullDummyClass();
ClassTester suite = new ClassTester("DummyClassTest",dummy);
return suite.Suite;
}

Test for ArgumentNullException, fourth iteration:

The more I think about this problem, the more I think FxCop should do that for us...