# Saturday, March 12, 2011

pex4fun is now available for Windows Phone 7. Write C# with auto-completion and background compilation on your favorite smartphone! Download it from the Marketplace.

posted on Saturday, March 12, 2011 10:39:27 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Thursday, September 16, 2010

We just released Pex and Moles v0.94 which brings MSBuild support for Moles! 

Moles is now entirely based on MSBuild and no longer requires checking in any generated files. All the previous headaches related to locked files in TFS are now history: Moles assemblies are generated on demand during the build. We strongly recommend reading the upgrade instructions in the release notes to see how you can smoothly upgrade to v0.94.

And while the installer is running, don’t forget to solve a duel or two at http://www.pexforfun.com

Cheers, the Pex Team.

posted on Thursday, September 16, 2010 9:05:30 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Monday, June 28, 2010

Is there anything more to say? Try it out now at http://www.pexforfun.com !


posted on Monday, June 28, 2010 2:35:18 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Monday, June 07, 2010

We’ve just released a new version of Pex and Moles, v0.92. This version brings Rex integration (smarter reasoning about regular expressions), Silverlight support (alpha), and a number of bug fixes and improvements here and there.

Read all about the new stuff on the release notes page. Happy Pexing!

posted on Monday, June 07, 2010 12:13:52 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Sunday, May 16, 2010

Nikolai Tillmann announced it on our page: he just finished integrating Rex into Pex…

posted on Sunday, May 16, 2010 9:55:16 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, May 01, 2010

Although Pex is only available to MSDN subscribers, Moles can be freely downloaded as a standalone tool from the Visual Studio Gallery.


(Of course, both tools are also available under an academic license.)

posted on Saturday, May 01, 2010 4:36:45 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, April 24, 2010

We just uploaded a new release, 0.91, to MSDN, the Visual Studio Gallery and our research web site. Learn more about the changes at http://research.microsoft.com/en-us/projects/pex/releasenotes.aspx#0_91

Go and download Pex!

posted on Saturday, April 24, 2010 6:51:44 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Friday, April 16, 2010

Are you using Pex and/or Moles? Do you want to become a fan on Facebook? Now you can!

Pex and Moles on Facebook

posted on Friday, April 16, 2010 6:06:31 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Tuesday, March 23, 2010

The presentation on Pex and Moles for SharePoint that I gave in Paris is now online…

posted on Tuesday, March 23, 2010 10:02:42 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Monday, February 15, 2010

We just released an update of Pex on Devlabs that adds support for Visual Studio 2010 RC (the release also contains a number of bug fixes). Read more about it on our release notes page.

posted on Monday, February 15, 2010 1:03:30 PM (Pacific Standard Time, UTC-08:00)  #    Comments [7]
# Friday, January 29, 2010

We just released a new version of Pex and Moles that brings a number of bug fixes and various improvements to the behaviors for SharePoint and ASP.NET.


  • Interaction with the source control provider has been significantly improved for Moles.
  • Support for Moles of nested types.
  • Improved logging in the Host Type.
  • Improved Behaved collections.
  • Added Behaved types for mscorlib and System.Web types.

Bug fixes

  • Updated several outdated sections in the documentation.
  • Added missing classes in the SharePoint samples.
  • Fixed a limitation of the profiler that would silently fail to instrument certain methods.

Breaking Changes

  • We have formalized the naming convention for Moles and Stubs and found some holes along the way. Some mole methods might have new names under this version of the compiler.
posted on Friday, January 29, 2010 9:48:59 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, January 15, 2010

We just released Pex 0.21.50115.2. This release brings bug fixes, a big renaming from “Stubs” to “Moles”, and an improved infrastructure to build behaved types (formerly known as beavers).

Bug Fixes

  • The Moles VsHost fails to execute unit tests in different assemblies.
  • Pex deletes the report folder it is currently writing to.
  • Support for named indexers in Stubs
  • Fixed bugs in how Pex reasons about System.Convert.ToBase64 and DateTime
  • Invalid support for protected members in the stubs generation

Breaking changes

  • The Stubs framework was renamed to the Moles framework. We have decided to make the Moles the center of the framework and, as a consequence, renamed ‘Stubs’ to ‘Moles’. (This does not mean that we encourage writing untestable code, as Moles help make it testable. You should still refactor your code to make it testable whenever possible, and only use Moles when that’s the only choice.) The impact is that:
    • Microsoft.Stubs.Framework was renamed to Microsoft.Moles.Framework
    • The moles and stubs get generated in the subnamespace ‘.Moles’ rather than ‘.Stubs’.
    • See below for the list of steps to upgrade your applications.
  • BaseMembers in Moles have been deprecated: this helper is not useful, as the same thing can be achieved in a better way through a constructor. We decided to remove it to reduce code size. The second reason is that BaseMembers would only work for types inside the same assembly, which might seem inconsistent.
  • PexGoal.Reached is replaced by PexAssert.ReachEventually(). The PexGoal class has been integrated into PexAssert through the ReachEventually method which should be used with the [PexAssertReachEventually] attribute.
  • PexChoose simplified: we’ve simplified the PexChoose API; you can now get auxiliary test inputs with a single method call: PexChoose.Value<T>(“foo”).

Migrating from previous version of Pex

Since we’ve renamed Stubs to Moles, any existing .stubx files will not work anymore.

Take a deep breath, and apply the following steps to adapt your projects:

  • change the project reference from Microsoft.Stubs.Framework.dll to Microsoft.Moles.Framework.dll
  • rename all .stubx files to .moles, and
    • rename the top-level <Stubs XML element to <Moles.
    • Change the XSD namespace to http://schemas.microsoft.com/moles/2010/
    • Right click on the .moles file in the Solution Explorer and change the Custom Tool Name to ‘MolesGenerator’.
    • Delete all the nested files under the .moles files
  • Remove references to any compiled .Stubs.dll files in your project
  • In general, remove all .Stubs.dll, .Stubs.xml files from your projects.
  • Rename .Stubs namespace suffixes to .Moles.
  • replace all [HostType(“Pex”)] attributes with [HostType(“Moles”)]
  • in PexAssemblyInfo.cs,
    • rename using Microsoft.Pex.Framework.Stubs to Microsoft.Pex.Framework.Moles
    • rename [assembly: PexChooseAsStubFallbackBehavior] to [assembly: PexChooseAsBehavedCurrentBehavior]
    • rename [assembly: PexChooseAsStubFallbackBehavior] to [assembly: PexChooseAsMoleCurrentBehavior]
  • In general, the ‘Fallback’ prefix has been dropped in the following methods:
    • rename FallbackAsNotImplemented() to BehaveAsNotImplemented()
    • rename class MoleFallbackBehavior to MoleBehaviors
    • rename class StubFallbackBehavior to BehavedBehaviors
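
After these steps, a minimal .moles file should look roughly like the sketch below (the assembly name is illustrative; the schema namespace is the one given above):

```xml
<Moles xmlns="http://schemas.microsoft.com/moles/2010/">
  <!-- The assembly for which moles and stubs are generated -->
  <Assembly Name="MyLibrary" />
</Moles>
```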
posted on Friday, January 15, 2010 8:14:07 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, December 18, 2009

We just released version 0.20 of Pex. This version brings Beavers, an extension of Stubs for writing models, a global event view in Visual Studio, and many bug fixes. Beavers are small programmatic models that build upon Moles…

  • Beavers: With stubs and moles, we provided a framework to write record/replay tests (i.e. mock-based tests) against any .NET type, interface or not. Beavers take you to the next level and allow you to write simple implementations with behavior (hence the ‘B’ in Beaver) that can be used to replace the external system during testing. The benefit of using Beavers is that the resulting test cases are much more robust to code changes, since the tests specify the state of the system rather than a sequence of method calls and outcomes. We will talk a lot about Beavers in the future. Let’s take a look at an example. The following snippet uses Moles to redirect the constructor of a Bank class, then sets up the GetAccountById(int) method to return an instance of IAccount, etc… This is a typical test based on record and replay. The problem with this technique is that it is really fragile with respect to code changes: if the code calls different methods, the test will break.
    The same behavior can be specified using Beavers. Beavers are simple implementations that capture enough behavior of the original code for testing. Using Beavers, one can specify the state of the system. The functional behavior is refactored away into a Beaver class and reused in all tests!
  • Global Event View: when executing multiple explorations, it was cumbersome to go through each exploration to investigate the events (instrumentation, object creation, etc…). We have added a view in the Pex Explorer tree view that centralizes the events for the entire run.
  • Beavers for SharePoint: The samples that come with Pex contains a number of Beavers for the SharePoint object model (SPList, SPListItem, SPField, etc…) that should help you get started with testing SharePoint with Beavers. As an example, you can compare a mole-based test (left) and a beaver-based test (right) for a simple SharePoint service:
  • Bug fixes and Improvements:
    • The HostType crashes non-deterministically.
    • Visual Studio 2010 crashes when loading with a ‘DocumentSite already has a site’ error message.
    • The Stubs code generator uses a lot of memory and may crash Visual Studio. The Stubs generator now runs out of process and does not take down Visual Studio when it runs out of memory.
    • The Stubs code generator detects version changes of the Stubs framework to regenerate the stubs assemblies.
    • The Pex execution produces fewer warnings. We’ve reviewed our warnings and demoted some that did not matter to messages.
    • A Contract.Requires failure that appears multiple times in a stack trace is no longer allowed.
    • The object factory guesser has partial support for internal types and InternalsVisibleTo (signed assemblies are not supported).
    • The ‘add’ and ‘remove’ of events can be moled to deal with events.
    • Digger does not work in Visual Studio 2010 when no test project exists.
  • Breaking Changes: The Beavers introduced a number of refactorings that might impact users of Stubs and Moles. The .stubx files should regenerate themselves automatically; otherwise, right-click on the file and select ‘Run Custom Tool’.
    • Most types of the Stubs framework have been moved to sub namespaces.
    • IStubBehavior was renamed to IBehavior
    • StubFallbackBehavior was renamed to BehaviorFallbackBehavior
    • PexChooseAsStubFallbackBehaviorAttribute was renamed to PexChooseAsBehavedFallbackBehaviorAttribute
    • PexChooseStubBehavior was renamed to PexChooseBehavedBehavior
    • PexExpectedGoalsAttribute and PexGoal.Reached(…) were renamed and refactored into PexAssertReachEventuallyAttribute and PexAssert.ReachEventually(…);
posted on Friday, December 18, 2009 6:14:57 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Thursday, December 17, 2009

I just published a video in French about Code Contracts and Pex with Manuel.


posted on Thursday, December 17, 2009 8:35:26 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Wednesday, December 16, 2009

I’ll be presenting the PDC talk on Code Contracts and Pex on January 20th, 2010 at Celestijnenlaan 200A, Heverlee (Leuven). Things change fast, so there will probably be a couple of changes to the presentation as well. I’ll probably talk a little bit more about the tools to mine Code Contracts, etc…


posted on Wednesday, December 16, 2009 3:22:06 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Tuesday, December 15, 2009

I paid a visit to Jeffrey Van Gogh, who works on the Reactive Extensions framework, where they have been using Pex and writing parameterized unit tests to test it. See the full video on Channel 9.


posted on Tuesday, December 15, 2009 9:36:19 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Sunday, December 13, 2009

I’ll have the pleasure of presenting ‘Stubs, Moles and Pex’ at the first DotNetHub meetup on January 20th, 2010 in LLN. DotNetHub is a recently created French-speaking .NET community. On the program: lots of demos about Pex, Stubs and Moles, and also about a new animal, the Beavers.

It will be my first presentation entirely in French, which is in itself quite an interesting challenge!

ps: No accents on qwerty keyboards, which makes writing French much easier :)

posted on Sunday, December 13, 2009 8:39:18 PM (Pacific Standard Time, UTC-08:00)  #    Comments [4]
# Thursday, December 03, 2009

I’ll be presenting the latest developments on using Moles and Pex to unit test SharePoint 2010 (or 2007) services at SharePoint Connections in Amsterdam, January 18th-19th.

MSC26: Pex - Unit Testing of SharePoint Services that Rocks!
SharePoint services are challenging to unit test because it is not possible to execute a SharePoint service without being connected to a live SharePoint site. For that reason, most of the unit tests written for SharePoint are actually integration tests, as they need a live system to run. In this session, we show how to use Pex, an automated test generation tool for .NET, to test SharePoint services in isolation. From a parameterized unit test, Pex generates a suite of closed unit tests with high code coverage. Pex also contains a stubbing framework, Moles, that allows detouring any .NET method to a user-defined delegate, e.g. replacing any call to the SharePoint Object Model with a user-defined delegate.

posted on Thursday, December 03, 2009 10:45:36 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Thursday, November 12, 2009

We just released Pex 0.19.41110.1, just in time for our TechEd session. This version brings updated support for Visual Studio 2010 Beta 2 and .NET 4.0, and many improvements to Moles.

  • Better Dev10 Beta 2 support! We’ve upgraded our support for Visual Studio 2010 Beta 2. This includes full support for .NET 4.0 Beta 2, the Visual Studio Unit Test HostType and the Visual Studio integration.
    Known Beta 2 issues: 1) When running Pex for the first time, the path to Visual Studio Unit Test is not found; you need to specify the Visual Studio ‘PublicAssemblies’ path (“c:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies”). 2) Don’t run Pex on v2 projects; Pex will crash.
  • Smooth generation of Stubs and Moles: when you create a stubs and moles assembly that is not a project reference, like one of the system assemblies, the stubs and moles code generator will automatically compile it into an assembly and add it to the project. This dramatically increases the performance of projects using Stubs and Moles.
  • Stubs and Moles for VisualBasic.NET: the Stubs and Moles may be used in VisualBasic.NET projects. In that case, they are embedded as a compiled assembly.
  • Moles support for NUnit, xUnit.Net and others: Unit tests involving Moles require the Pex profiler to work. We’ve added a new mode to our command line runner that allows executing your favorite unit test framework runner under Pex. For example, the command “pex /runner=xunit.console.exe tests.dll” will launch the xunit runner for tests.dll under Pex. x64 assemblies are still not supported.
  • Nicer Moles Syntax: we’ve added some little tweaks to the Moles API, such as an implicit conversion operator to the moled type, that simplifies the syntax. Here is a small example that showcases the current syntax: step 1 moles the Bank constructor, step 2 moles the GetAccount method and returns a mole of Account, step 3 moles the getter and setter of the Balance property.
  • Fluent Mole Binding: The Bind method returns the mole itself, which allows it to be used in a fluent fashion. Interface binding is especially useful when you need to mole custom collections: just attach an array! For example, we attach an array of integers to the MyList type that implements IEnumerable<int>.
  • Breaking Changes in Stubs and Moles:
    • The stubs and moles code generator tool is now ‘StubsGenerator’. You need to patch this value in the property pages of your .stubx files.
    • No more AsStub and AsMole: the AsStub and AsMole helpers were confusing and pretty useless so we decided to reduce the amount of generated code by removing them.
    • The naming convention for Stubs delegate fields always mangles the parameter types of the stubbed method, i.e. Foo(int) yields a field called FooInt32. We used to be smart about adding parameters, but we decided to make the scheme stable and symmetric with respect to the Moles.
    • The ‘FallbackBehavior’ property has been renamed to ‘InstanceFallbackBehavior’ in both Stubs and Moles to make it more explicit.
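
The original screenshots for the ‘Nicer Moles Syntax’ example are lost. As a rough sketch, assuming Moles-generated MBank and MAccount types for hypothetical Bank and Account classes (member names follow the Moles naming convention but are not taken from the original post, and may differ from what the generator actually emits), the three steps could look like this:

```csharp
// Sketch only: MBank/MAccount are assumed Moles-generated mole types.
MAccount account = new MAccount
{
    BalanceGet = () => 100,            // step 3: mole the Balance getter...
    BalanceSetInt32 = value => { }     // ...and the setter
};
MBank.Constructor = bank =>            // step 1: mole the Bank constructor
{
    new MBank(bank)
    {
        GetAccountInt32 = id => account // step 2: GetAccount returns a mole
                                        // (implicit conversion MAccount -> Account)
    };
};
```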

As usual, we’ve also fixed a number of bugs and added features that were reported through our MSDN forums.

posted on Thursday, November 12, 2009 11:57:20 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3]
# Sunday, October 25, 2009

I’ll be presenting Stubs and Moles at the Visual Studio User Group on November 3rd. See you there…

Stubs is a lightweight framework for test stubs and detours in .NET that is entirely based on delegates, type safe, refactorable, debuggable and source-code generated.
Stubs also allows replacing any .NET method with a user-defined delegate, including non-virtual/static methods in sealed types. Stubs is fully integrated into Pex, an automated white box test generation tool for .NET.

posted on Sunday, October 25, 2009 2:45:44 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Wednesday, October 07, 2009

We have just released a new version of Pex (v0.17.xxx), which brings new features for Moles and more bug fixes.

Per-Instance Moles

Instance methods may be moled differently per object instance. This is particularly useful when multiple instances of the same type need to behave differently in the same test. For example, let’s consider a simple class Bar that has a ‘Run’ method to check that the values are not the same. The important point here is that we would not be able to deal with such cases if we could not mole the behavior on a per-instance basis.


In the test case of Run, we create two instances of MBar (the mole type of Bar) and assign two different delegates to the ‘Value’ getter. Moles are active as soon as they are attached (unlike in the previous version), so we can just call the ‘Run’ method once the moles are set up. In the code below, we are using the C# object initializer feature, which allows us to set properties of a newly created object within curly braces after the ‘new’ call.
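
The code screenshot from the original post is lost. A sketch of such a test, assuming the Moles-generated MBar type with a ValueGet detour and an implicit conversion to Bar (the test method name is illustrative), might look like:

```csharp
// Sketch; MBar is the assumed Moles-generated mole type for Bar.
[PexMethod]
public void RunTest(int left, int right)
{
    // Two instances of the same moled type, each with its own behavior:
    Bar b1 = new MBar { ValueGet = () => left };   // implicit MBar -> Bar
    Bar b2 = new MBar { ValueGet = () => right };
    b1.Run(b2); // 'Run' checks that the two values are not the same
}
```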


As you may have noticed, we just wrote a parameterized unit test that takes arbitrary ‘left’ and ‘right’ values. We then simply execute Pex to find the relevant values to cover the ‘Run’ method:


Moles for Constructors

In some cases, objects created inside other methods need to be moled to test the actual program logic. With Moles you can replace any constructor with your own delegate, which you can use to attach a mole to the newly created instance. Let’s see this with another simple example where a ‘Run’ method creates and uses an instance of a ‘Foo’ class.


Let’s say we would like to mole the call to the ‘Value’ property to return any other value. To do so, we would need to attach a mole to a future instance of Foo, and then mole the Value property getter. This is exactly what is done in the method below.
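
The screenshot of that method is lost. Assuming the Moles-generated MFoo type, attaching a mole to a future Foo instance could be sketched as follows (MFoo.Constructor and ValueGet follow the Moles conventions; the Target class and method names around them are illustrative):

```csharp
// Sketch; MFoo is the assumed Moles-generated mole type for Foo.
[PexMethod]
public void RunTest(int value)
{
    MFoo.Constructor = foo =>
    {
        // Attach a mole to the instance being constructed
        // and detour its Value getter:
        new MFoo(foo) { ValueGet = () => value };
    };
    new Target().Run(); // 'Run' internally news up a Foo and reads Value
}
```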


We then run Pex to find that two different values are needed to trigger all code paths.


Mole interface binding

All the member implementations of an interface may be moled at once using the new ‘Bind’ method. This smells like duck typing with type safety as the Bind methods are strongly typed. For example, we want to mole a collection type (Foes) to return a custom list of Foo elements (which need to be moled too). The goal is to test the sum method…


In the parameterized unit test, we create an instance of Vector, which implements IEnumerable<int>. Then, we simply bind the ‘values’ array to the mole to make Vector iterate over the array. The call to Bind will mole all methods of MVector that are implemented by the ‘values’ parameter, effectively redirecting all enumeration requests to the ‘values’ array.
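
Without the Moles framework at hand, a self-contained stand-in can illustrate what Bind achieves for the IEnumerable<int> case: every enumeration request on the Vector is redirected to the attached array. (The Vector name follows the example; the constructor-based binding here stands in for MVector.Bind.)

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

// Stand-in for the moled Vector: all IEnumerable<int> members are redirected
// to an attached array, which is what MVector.Bind(values) effectively does.
public class Vector : IEnumerable<int>
{
    private readonly IEnumerable<int> bound;
    public Vector(IEnumerable<int> bound) { this.bound = bound; }

    public IEnumerator<int> GetEnumerator() { return bound.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

class Program
{
    static void Main()
    {
        var values = new[] { 1, -5, 3 };
        var vector = new Vector(values);  // "bind" the array to the vector
        int sum = vector.Sum();           // enumeration hits the array
        Console.WriteLine(sum);           // -1: a negative sum is possible,
                                          // breaking a 'sum >= 0' assertion
    }
}
```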


When Pex runs through the sample, it correctly understands the data flow of the parameters through the moles and finds inputs that break the ‘sum >= 0’ assertion:


Compilation of Stubs assemblies

When one needs stubs or moles for a system assembly, it does not really make sense to re-generate the stubs each time the solution is loaded. To that end, Stubs ships with a command line tool that lets you easily compile a stubs assembly into a .dll:

stubs.exe mscorlib

Once the stubs assembly is compiled, i.e. mscorlib.Stubs.dll is created, you simply have to add a reference to it in your test project to start using it. In future versions of Pex, we will provide a better experience to support this scenario.

Bug Fixes

Breaking Changes

  • The code generation of Moles was significantly changed. This might mean that you will have to recompile your solution, and adapt all existing uses of Moles.
posted on Wednesday, October 07, 2009 3:02:46 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, September 16, 2009

We just released v0.16.40915.5 of Pex. The major highlights of this release are Moles, a lightweight detour framework, and better support for test frameworks.

  • Moles, a lightweight detour framework: Moles is a new extension of the Stubs framework: it lets you replace any .NET method (including static methods) with your own delegate. (Sample outdated.)
    Want to read more about this sample? Here is the full walkthrough.
    The Moles framework is not yet feature complete; Moles does not support constructors and external methods. Some types in mscorlib cannot be moled, as they interact too deeply with the CLR.
  • Pex gets its own HostType: A HostType is a feature of the Visual Studio Unit Test framework that lets specific unit tests be run under specialized hosts such as ASP.NET or Visual Studio itself. In order to create reproducible test cases using Moles, we had to implement a HostType that lets tests run under the Pex profiler. This is very exciting because it also opens the door to many other uses of Pex, such as fault injection, dynamic checking, etc… in future versions. When generating test cases with Pex, all the necessary annotations will be created automatically. To turn on the Pex HostType for hand-written (non-parameterized) unit tests, simply add [HostType(“Pex”)] to your test case.
    This feature only works with Visual Studio Unit Test 2008.
  • Select your test framework: the first time you invoke ‘Pex’ on your code, Pex pops up a dialog to select your test framework of choice. You can select which test framework should be used by default and, more importantly, where it can be found on disk.
    If you do not use any test framework, Pex ships with what we call the “Direct test framework”: in this “framework”, all methods are considered as unit tests without any annotation or dependencies.
    These settings are stored in the registry and Pex should not bug you again. If you want to clear these settings, go to ‘Tools –> Options –> Pex –> General’ and clear the TestFramework and TestFrameworkDirectory fields:
  • Thumbs up and down: We’ve added thumbs up and down buttons in the Pex result view. We are always looking for feedback on Pex, so don’t hesitate to click them when Pex deserves a thumbs up or a thumbs down.
  • Performance: Many other performance improvements under the hood, which should prevent Pex from hogging too much memory in long-running scenarios.
  • Miscellaneous improvements and bug fixes:
    • Support for String.GetHashCode(): we now faithfully encode the hashing function of strings, which means Pex can deal with dictionaries where the key is a string.
    • Fewer “Object Creation” messages that are not actually relevant.
    • In VS 2010, the package used to trigger an exception when the solution loaded. This issue is now fixed.
    • In Visual Studio, when an assembly had to be resolved manually (i.e. that little dialog asking you where an assembly is), Pex now remembers that choice and does not bug you anymore.
    • And many other bugs that were reported through the forums.
posted on Wednesday, September 16, 2009 2:16:33 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, July 15, 2009

We just released a new version of Pex, v0.15.40714.1. This version does not contain any new features but… it brings better memory management and bug fixes for issues that were reported through our MSDN forums. Happy testing!

It contains a couple breaking changes too:

  • Microsoft.Pex.Stubs.dll does not exist anymore and has been integrated into Microsoft.Pex.Framework.dll
  • PexExplorableFromFactoriesAttribute renamed to PexExplorableFromFactoriesFromAssemblyAttribute
posted on Wednesday, July 15, 2009 3:11:19 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, May 28, 2009

We’ve just released Pex 0.13.40528.2 (Nikolai has all the details about the new features). In this release, a number of attributes were added to force Pex to produce friendlier inputs.

The problem with Enums

When you apply Pex to code that uses enumerations, you quickly learn something: the CLR does not constrain enumeration values to be defined. This means that whenever you take an enum as a parameter, you should validate that it is properly defined:

void Test(SomeEnum e) {
    if (!Enum.IsDefined(typeof(SomeEnum), e))
        throw new ArgumentException("…");
}

If you look at the BCL code, it actually does this kind of validation. For good or bad, some of our users have been complaining that Pex should restrict enumerations to defined values, in effect considering the generated failing test case as noise. To restrict generated enumeration values to defined ones, you can add the [PexEnumValuesDefined(typeof(SomeEnum))] attribute to your test/fixture/assembly. Internally, this attribute attaches an invariant to the given enumeration type that tells Z3 the allowed values for that type.

What about multi-dimensional arrays?

Ever woken up one day wanting to iterate arrays starting from -12345? Well, you can with multidimensional arrays… The Array.CreateInstance method has an overload that takes lengths and lower bounds to instantiate multidimensional arrays:

int[,] array = (int[,])Array.CreateInstance(typeof(int),
    new int[] { 10, 10 },          // lengths
    new int[] { -12345, -789 });   // lower bounds
array[-12345, -789] = -1;

Of course, this feels a bit weird because most languages won’t let you create such arrays, but the Array.CreateInstance method is available nonetheless, and this behavior is perfectly valid from the runtime’s point of view. To prevent Pex from generating such bounds, you can add the [PexZeroMdArrayLowerBounds(0)] attribute (one attribute per dimension).

What about Booleans?

Booleans are another interesting citizen of the CLR. At the MSIL level, the Boolean type is really a byte, widened to an integer when on the stack. Compilers try really hard to keep Booleans as 0 or 1 but… it is possible to craft Boolean values anywhere in the byte range using safe MSIL code (not from C#, though, or any BCL method I know of). The implications of this are quite disturbing; for example, consider the following parameterized test:

[PexMethod]
void BooleanMadness(bool a, bool b) {
    if (a && b && a != b)
        throw new Exception("this is madness!");
}

Clearly, in a world where Booleans are constrained to 0 or 1, the branch that throws the exception should be unreachable. If you execute Pex on this test, it will generate four tests, one of which raises the exception:


Notice the actual values (0x02, 0x03) of the Booleans in the pretty printing. If you want Pex to generate Booleans that are restricted to 0 or 1, use the [PexBooleanAsZeroOrOne] attribute. This attribute forces Z3 to pick Boolean values as 0 or 1. By the way, there’s already a Connect bug for this behavior.

posted on Thursday, May 28, 2009 5:06:06 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Monday, May 25, 2009

If you’ve been thinking about presenting Pex to your co-workers or your local .NET community, you can use our slide decks at http://research.microsoft.com/en-us/projects/pex/documentation.aspx . The slide decks are there to help you; don’t hesitate to shuffle them, cut them, or pick whatever you need from them (and of course, tell us about it).

Cheers, Peli.

posted on Monday, May 25, 2009 10:54:47 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, May 02, 2009

This is a cool new feature that builds on Stubs, Pex and Code Contracts (in fact, it is really just a consequence of the interaction of the three technologies): stubs that act as parameterized models while complying with the interface contracts.

  • Code Contracts provide contracts for interfaces. All implementations of that particular interface will be instrumented with the runtime checks by the rewriter at compile time.
  • Stubs is really just a code generator that provides a minimal implementation of interfaces and relies on the C# compiler to build the code. Therefore, if a stub implements an interface with contracts, it will automatically be instrumented by the runtime rewriter as well.
  • Pex choices (i.e. dynamic parameter generation) may be used as the fallback behavior of stubs. In other words, whenever a stubbed method without user-defined behavior is called, Pex gets to pick the return value. Since the stub implementation itself may have been instrumented with contracts, we’ve added special handling so that a postcondition violation coming from a stub is treated as an assumption. This effectively forces Pex to generate values that comply with the interface contracts.

Let’s see how this works with an example. The IFoo interface contains a single ‘GetName’ method. This method has one precondition (i should be positive) and one postcondition (the result is non-null). Since interfaces cannot have method bodies, we use a ‘buddy’ class to express those contracts and bind both classes using the [ContractClass] and [ContractClassFor] attributes:
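
The code screenshot is lost, but the buddy-class pattern described here is standard Code Contracts usage and can be reconstructed as follows (only IFoo and GetName come from the post; the contract class name is illustrative):

```csharp
using System;
using System.Diagnostics.Contracts;

[ContractClass(typeof(FooContracts))]
public interface IFoo
{
    string GetName(int i);
}

// 'Buddy' class carrying the contracts for IFoo.
[ContractClassFor(typeof(IFoo))]
abstract class FooContracts : IFoo
{
    string IFoo.GetName(int i)
    {
        Contract.Requires(i > 0);                            // precondition
        Contract.Ensures(Contract.Result<string>() != null); // postcondition
        return default(string); // dummy body, never executed
    }
}
```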


We then turn on runtime rewriting for both the project under test and the test project (Project properties –> Code Contracts –> Runtime checks). As a result, the stub of IFoo automatically gets instrumented with the contracts. We can clearly see that from the generated code of GetName in Reflector below:


The last step is to write a test for IFoo. In this case, we write a parameterized unit test that takes an IFoo instance and an integer, then simply calls GetName.


Since the contracts of IFoo.GetName specify that the result should not be null, we should not see any assertion violations in this test. Moreover, we should see a test for the precondition treated as an expected exception:


Note that all those behaviors rely on extensions to Pex that the wizard automatically inserted in the PexAssemblyInfo.cs file:



  • [PexAssumeContractEnsuresFailureAtStubSurface] filters post-condition violations in stubs as assumptions,
  • [PexChooseAsStubFallbackBehavior] installs Pex choices as the fallback behavior of stubs,
  • [PexStubsPackage] loads the stubs package (this is a standard procedure for Pex packages),
  • [PexUseStubsFromTestAssembly] tells Pex to consider stub types when trying to test interfaces or abstract classes.
posted on Saturday, May 02, 2009 10:19:16 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Friday, May 01, 2009

We just made a quick release to fix another installer issue related to missing packages. Along the way, we’ve added an Exploration Tree View and Partial Stubs support.

Exploration Tree View

The exploration tree view displays the list of explorations that are queued, running, and finished. It serves as a progress indicator but also as a smooth result explorer. When browsing through the tree, Pex will synchronize the exploration result view and the tree view.

The tree view populates each namespace with the fixture types and exploration methods, and provides visual feedback on the progress of Pex.


When you browse through the exploration and generated test nodes, Pex automatically synchronizes the exploration result display. This makes it really easy to start from a high-level view of the failures and drill into a particular generated test, with stack trace and parameter table.


Partial Stubs

Stubs lets you call the base class implementation of methods as a fallback behavior. This functionality is commonly referred to as Partial Mocks or Partial Stubs and is useful to test abstract classes in isolation. Stubs inheriting from classes have a “CallBase” property that can turn this mode on and off.

Let’s see this with the RhinoMocks example on partial mocks. Given an abstract ProcessorBase class,


we write a test for the Inc method. To do so, we provide a stub implementation of Add that simply increments a counter.
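The code is not shown above, so here is a self-contained sketch of the mechanics. These are hand-written stand-ins, not the actual generated code (the real Stubs delegate fields also receive the stub instance as a first parameter, and field names may differ): Inc has a real implementation that calls the abstract Add, and the CallBase flag lets that real code run while Add is replaced by a lambda.

```csharp
using System;

// Hypothetical stand-in for the RhinoMocks partial-mock example:
// Inc is implemented in the base class and calls the abstract Add.
public abstract class ProcessorBase
{
    public virtual void Inc() { this.Add(1); }
    public abstract void Add(int value);
}

// Hand-written sketch of what a generated partial stub does: every
// overridable member gets a delegate field; when no delegate is attached
// and CallBase is true, the base implementation runs instead.
public class SProcessorBase : ProcessorBase
{
    public bool CallBase;
    public Action IncStub;      // attach to replace Inc
    public Action<int> AddStub; // attach to replace Add

    public override void Inc()
    {
        if (this.IncStub != null) this.IncStub();
        else if (this.CallBase) base.Inc(); // partial stub: run the real code
        else throw new NotImplementedException();
    }

    public override void Add(int value)
    {
        if (this.AddStub != null) this.AddStub(value);
        else throw new NotImplementedException();
    }
}
```

The test then sets CallBase to true, attaches a counting lambda to AddStub, and calls Inc; the counter verifies that the real Inc forwarded to Add.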



  • PexAssume/PexAssert.EnumIsDefined checks that a particular value is defined in an enum.
  • Missing OpenMP files in the installer no longer break Pex.

Poll: should we skip 0.13 and go straight for 0.14? :)

posted on Friday, May 01, 2009 1:28:25 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, April 25, 2009

I recorded a Channel9 video last week with Manuel Fahndrich on the interaction of Code Contracts and Pex. The demo gives a glimpse at the nice interaction between Contracts (design by contract) and Pex (automated white-box testing).

Code Contracts gives you a great way to specify what your code is supposed to do. These contracts can be leveraged for documentation, by static checkers, but also – tada – by Pex! Contracts can also be turned into runtime checks which Pex will try to explore. Pex will try to cover the post-conditions and assertions; in other words, find inputs that violate your contracts. By adding contracts to your code, you give Pex a ‘direction’ to search in (and a test oracle).


Drop me a note if you want more of those movies.

posted on Saturday, April 25, 2009 9:09:57 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, April 22, 2009

Update: Andrew Kazyrevich does the final performance analysis on his blog.
Update II: Stubs v0.12.40430.3 has Partial Stubs support.

Stubs is a lightweight stub framework that is entirely based on delegates. We designed it so that it brings as little overhead as possible and, as a side effect, is very effective with Pex. In this post, we’ll see how Stubs ‘looks’ with respect to other frameworks.


Andrew Kazyrevich started a project, mocking-framework-compare, back in February that compared Moq, Rhino, NMock2 and Isolator. This project compares various ‘mocking scenarios’ across those frameworks. I added Stubs to the mix today (go check it out).

What’s a stub in Stubs anyway?

Before we dig in, let’s recap what stubs look like when you use Stubs. Stubs uses delegates to ‘hook’ behavior to interface members. For every stubbed interface and class, code is generated at compile time: one stub class per interface or class. For each stubbed method, the stub class contains a field whose delegate type matches the stubbed method. As a user, you simply attach delegates (or lambdas) to the delegate fields to assign *any* behavior to each member. If no user delegate is set, Stubs calls into a fallback behavior, i.e. throws or returns the default value.

Let’s take one of the examples of mocking-framework-compare. The little snippet below ‘stubs’ the TouchIron() method (which is implemented explicitly) by attaching a lambda to the TouchIron field.


SIHand is the stub type of the IHand interface; it was generated at compile time by the Stubs framework. Its implementation looks more or less like this:
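The generated code is not reproduced above, so the following self-contained sketch approximates it. The Iron type and BurnException are assumptions based on the scenario; the real generated code carries extra attributes and fallback logic:

```csharp
using System;

// Assumed scenario types from mocking-framework-compare:
public class Iron { public bool IsHot; }
public class BurnException : Exception { }
public interface IHand { void TouchIron(Iron iron); }

// Sketch of the generated stub: one delegate field per stubbed method,
// invoked by the explicit interface implementation.
public class SIHand : IHand
{
    public Action<SIHand, Iron> TouchIron; // attach behavior here

    void IHand.TouchIron(Iron iron)
    {
        var stub = this.TouchIron;
        if (stub != null)
            stub(this, iron); // the user-supplied lambda runs here
        // otherwise the real stub falls back to a default behavior
    }
}
```

Attaching a behavior is then just an assignment: `hand.TouchIron = (me, iron) => { if (iron.IsHot) throw new BurnException(); };`.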


The TouchIron method really shows the main idea behind Stubs: delegates are all we need to move behavior around.

Stubs: relying on the language rather than an API

One of the differences between Stubs and other mock frameworks is that Stubs does not have any API: delegates, lambdas and closures are all features of the programming language, while assertions are part of the test framework. This is quite obvious when we compare the BrainTest using Moq and using Stubs. In this test, we need to ensure that the brain coordinates the hand and the mouth so that when the hand touches a hot iron, the mouth yells.

  • Moq: we set up the hand to throw a burn exception when touched with a hot iron and verify that the mouth yelled.


Moq, like Rhino or Isolator, makes smart use of expression trees and/or Reflection.Emit to define the mock behavior. The expectations and behavior are set through nice, fluent APIs which really make the developer’s life easier.

  • Stubs: same scenario, we attach a lambda that throws if the iron is hot, and use a local to verify that Yell is called (this local is pushed to the heap by the compiler).
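Here is a self-contained sketch of the closure trick (the types are hand-written stand-ins; the real test wires the hand and mouth stubs into the Brain under test):

```csharp
using System;

public interface IMouth { void Yell(); }

// Hand-written stand-in for the stub type Stubs would generate.
public class SIMouth : IMouth
{
    public Action<SIMouth> Yell; // attach behavior here
    void IMouth.Yell()
    {
        var stub = this.Yell;
        if (stub != null) stub(this);
    }
}

public static class BrainTestSketch
{
    // Returns true if the mouth yelled: the 'yelled' local is captured
    // by the lambda (lifted to the heap by the compiler).
    public static bool MouthYells()
    {
        bool yelled = false;
        var mouth = new SIMouth();
        mouth.Yell = me => yelled = true;
        ((IMouth)mouth).Yell(); // in the real test, the Brain triggers this
        return yelled;
    }
}
```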


Stubs does not have any API. It relies on lambdas (to define new behaviors) and closures (to track side effects), which come for free with the C# 3.0 language.

Performance numbers

Does performance matter when it comes to mocking? It’s not really the question we are trying to answer here. We’re just looking at the benchmark results from the mocking-framework-compare project. When looking at the results, you’ll notice that Stubs may be 100x to 1000x faster than other frameworks. This is no surprise since stubs boil down to a virtual method call, while other frameworks do much more work (in fact we expected worse numbers).

Data units of msec resolution = 0.394937 usec

Mocking methods.
Moq      : 100 repeats:  37.471 +- 12%   msec
Rhino    : 100 repeats:  38.030 +- 7%    msec
NMock2   : 100 repeats:  24.035 +- 4%    msec
Stubs    : 100 repeats:   0.115 +- 8%    msec

Mocking events.
Moq      : 100 repeats:  86.913 +- 7%    msec
Rhino    : 100 repeats:  61.142 +- 6%    msec
NMock2   : 100 repeats:  27.378 +- 6%    msec
Stubs    : 100 repeats:   0.071 +- 6%    msec

Mocking properties.
Moq      : 100 repeats:  82.434 +- 6%    msec
Rhino    : 100 repeats:  47.471 +- 5%    msec
NMock2   : 100 repeats:  11.334 +- 10%   msec
Stubs    : 100 repeats:   0.042 +- 15%   msec

Mocking method arguments.
Moq      : 100 repeats: 142.668 +- 4%    msec
Rhino    : 100 repeats:  45.118 +- 5%    msec
NMock2   : 100 repeats:  22.344 +- 7%    msec
Stubs    : 100 repeats:   0.078 +- 4%    msec

Partial mocks.
Moq      : 100 repeats: 117.581 +- 5%    msec
Rhino    : 100 repeats:  58.827 +- 6%    msec
Stubs    : 100 repeats:   0.054 +- 6%    msec

Recursive mocks.
Moq      : 100 repeats:  92.482 +- 4%    msec
Rhino    : 100 repeats:  40.921 +- 3%    msec
Stubs    : 100 repeats:   0.493 +- 18%   msec
Press any key to continue . . .

Stubs limitations

There are many areas where Stubs falls short or simply lacks support. Currently, stubs are only emitted for C#, and partial mocks will be supported in the next version.

Getting started with Stubs

Stubs comes with the Pex installer. If you’re interested in using it, check out the getting started page on the Stubs project page, where you can also download the full Stubs primer.

Cheers, Peli

posted on Wednesday, April 22, 2009 9:58:03 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
# Tuesday, April 21, 2009
Nikolai just announced the latest drop of Pex, 0.11.40421.0. Get all the details on his blog.
posted on Tuesday, April 21, 2009 5:10:17 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [5]
# Friday, April 10, 2009

Download the new version!

Today we’ve released a new version of Pex on DevLabs and on our academic downloads page. The highlights of this release are: NUnit, MbUnit and xUnit.net support out of the box, writing parameterized unit tests in VisualBasic.NET and F#, and better Code Contracts support. As always, we encourage you to send us feedback, bugs, and stories on our forums at http://social.msdn.microsoft.com/Forums/en-US/pex/threads/ .

NUnit, MbUnit and xUnit.net supported out of the box

Pex now supports MSTest, NUnit, MbUnit and xUnit.net out of the box. Pex will automatically detect which framework you are using by inspecting the assembly reference list, and automatically save the generated tests decorated with the correct attributes for that framework.


The default test framework can also be specified through the global options (Tools –> Options –> Pex –> enter the test framework name in TestFramework).



Writing Parameterized Unit Tests in VisualBasic.NET

While the Pex white box analysis engine works at the MSIL level, Pex only emits C# code for now. In previous releases, this limitation made it impossible to use Pex parameterized unit tests from non-C# code. In this release, we have worked around this problem by automatically saving the generated tests in a ‘satellite’ C# project.

Let’s see this with an example. The screenshot below shows a single VisualBasic.NET test project with a Pex parameterized unit test:


We can right-click in the HelloTest.Hello method and select “Run Pex Explorations”:


At this point, Pex will start exploring the test in the background as usual. This is where the new support comes in: When a generated test comes back to Visual Studio, Pex will save it in a separate C# project automatically (after asking you where to drop the new project):


The generated tests are now ready to be run just as any other unit tests!

Writing Parameterized Unit Tests from F#

Similarly to VisualBasic.NET, we’ve made improvements in our infrastructure to enable writing parameterized unit tests in F#. Let’s see this with a familiar example. We have a single F# library that contains xUnit.net unit tests and references Microsoft.Pex.Framework (project Library2 below). In that project, we add a parameterized unit test (hello_test):


We can right-click on the test method name and Pex will start the exploration of that test in the background. Because of the limitations of the F# project system, you absolutely need to right-click on the method name in F# if you want contextual test selection to work. Because the project is already referencing xunit.dll, Pex will also automatically detect that you are using xUnit.net and use that framework. When the first test case comes back to VisualStudio, Pex saves it in a separate C# project:


The tests are saved in the generated test project and ready to be run by your favorite test runner!

PexObserve: Observing values, Asserting values

We’ve completely re-factored the way values can be logged on the table or saved as assertions in the generated tests. The following example shows various ways to log and assert values:


In the Observe method, we use the return value and out parameter output to automatically log and assert those values. Additionally, we add “view input” on the fly to the parameter table through the ValueForViewing method, and we add “check input” to be asserted through the ValueAtEndOfTest method. After running Pex, we get the following results:


As expected, input, ‘view input’, output and result show up in the parameter table.


In the generated test, we see assertions for the return value, out parameters and other values passed through the ValueAtEndOfTest method.

Code Contracts : Reproducible generated tests

When Pex generates a unit test that relied on a runtime contract, Pex also adds a check to the unit test which validates that the contracts have been injected into the code by the contracts rewriter. If the code is not rewritten when re-executing the unit test, it is marked as inconclusive. You will appreciate this behavior when you run your unit tests both in Release and in Debug builds, which usually differ in how contracts get injected.


Code Contracts:  Automatic filtering of the contract violations

When Pex generates a test that violates a Code Contracts pre-condition (i.e. Contract.Requires), there are basically two scenarios: the precondition was on top of the stack and should be considered an expected exception; or it is a nested exception and should be considered a bug. Pex provides a default exception filter that implements this behavior.

Stubs: simplified syntax

We’ve considerably simplified the syntax of stubs by removing the ‘this’ parameter from the stub delegate definition. Let’s illustrate this with a test that stubs the ‘ReadAllText’ method of a fictitious ‘IFileSystem’ interface. 
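The screenshot is missing; the difference is roughly the following (a sketch with a hand-written stand-in stub type — in the new syntax the lambda no longer takes the stub instance as its first parameter):

```csharp
using System;

public interface IFileSystem { string ReadAllText(string fileName); }

// Stand-in for the generated stub with the simplified delegate shape.
public class SFileSystem : IFileSystem
{
    public Func<string, string> ReadAllText; // no 'this' parameter anymore
    string IFileSystem.ReadAllText(string fileName)
    {
        var stub = this.ReadAllText;
        if (stub != null) return stub(fileName);
        throw new NotImplementedException();
    }
}
```

Old syntax: `stub.ReadAllText = (me, file) => "hello world";` — new syntax: `stub.ReadAllText = file => "hello world";`.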

Stubs: generic methods

The Stubs framework now supports stubbing generic methods by providing particular instantiations of that method. In the following example, the generic Bar<T> method is stubbed for the particular Bar<int> instantiation:


Stubs and Pex: Pex will choose the stubs behavior by default

We provide a new custom attribute, PexChooseAsStubFallbackBehaviorAttribute, that hooks Pex choices to the stub fallback behavior. To illustrate what this means, let’s slightly modify the example above by removing the stub of ReadAllText:


If this test were run without the PexChooseAsStubFallbackBehavior attribute, it would throw a StubNotImplementedException. However, with the PexChooseAsStubFallbackBehavior attribute, the fallback behavior calls into PexChoose to ask Pex for a new string. In this example in particular, on each call to ReadAllText, Pex will generate a new string for the result. You can see this string as a new parameter to the parameterized unit test. Therefore, when we run this test under Pex, we see different behaviors, including the “hello world” file:


Note that all the necessary attributes are added at the assembly level by the Pex Wizard.

Miscellaneous bug fixes and improvements

  • [fixed] Dialogs do not render correctly under high DPI
  • When a generic parameterized unit tests does not have any generic argument instantiations, Pex makes a guess for you.
  • When a test parameter is an interface or an abstract class, Pex now searches the known assemblies for implementations and concrete classes. In particular, that means that Pex will often use the generated Stubs implementations for interfaces or abstract classes.
  • Static parameterized unit tests are supported (if static tests are supported by your test framework)
  • Better solving of decimal and floating point constraints. We will report on the details later.

Breaking Changes

  • The PexFactoryClassAttribute is no longer needed and has been removed. Now, Pex will pick up object factory methods marked with the PexFactoryMethodAttribute from any static class in the test project containing the parameterized unit tests. If the generated tests are stored in a separate project, that project is not searched.
  • The PexStore API has been renamed to PexObserve.
  • Pex is compatible with Code Contracts versions strictly newer than v1.1.20309.13. Unfortunately, v1.1.20309.13 is the currently available version of Code Contracts. The Code Contracts team is planning on a release soon.


Happy Pexing!

posted on Friday, April 10, 2009 12:06:31 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [12]
# Thursday, March 19, 2009

The videos of TechDays are up:

While attending TechDays Finland in Helsinki, I had the opportunity (thanks Vesa) to take a swim in the frozen Baltic Sea along with fellow Microsoftie Scott Galloway (I’ll spare you the video of the actual bath).

Great way to dust off the remains of jetlag.


ps: Yes we are standing on the sea.

posted on Thursday, March 19, 2009 5:28:22 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Friday, March 06, 2009

The next version of Moq, 3.0, will work better with Pex:


In the past, Pex would analyze the Moq ‘engine’ code and spend a large amount of time trying to understand the internals of that framework. In this release, Daniel has added special annotations (see the extension writer handbook) that turn off Pex monitoring for certain areas of Moq that don’t matter when analyzing the code under test.

posted on Friday, March 06, 2009 7:15:27 PM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Saturday, February 28, 2009


Earlier this week, we released the Code Contracts library for .NET. Since then, I’ve implemented a lot of contracts for the data structures in QuickGraph. In this post, I’ll talk about my experience with the contracts…


Walking through some contracts.

Let’s take a look at the contracts of AddEdge, a method that adds an edge to a graph. Adding an edge to a graph is certainly a fun adventure when you write contracts for it. Let’s take a look:

public interface IMutableEdgeListGraph<TVertex, TEdge> : ...
    where TEdge : IEdge<TVertex>
{
    /// <summary>
    /// Adds the edge to the graph
    /// </summary>
    /// <param name="edge"></param>
    /// <returns>true if the edge was added, otherwise false.</returns>
    bool AddEdge(TEdge edge);
}

Interface contracts

Since IMutableEdgeListGraph is an interface, we need to store the contracts in a separate class. To do so, we ‘bind’ the interface and the contract class to each other using the ContractClassForAttribute/ContractClassAttribute pair.

    [ContractClass(typeof(IMutableEdgeListGraphContract<,>))]
    public interface IMutableEdgeListGraph<TVertex, TEdge> ...

    [ContractClassFor(typeof(IMutableEdgeListGraph<,>))]
    sealed class IMutableEdgeListGraphContract<TVertex, TEdge>
        : IMutableEdgeListGraph<TVertex, TEdge>
        where TEdge : IEdge<TVertex>

Once both types are bound, you can start implementing the contracts of the interface in the contract class. Note that the contract class must use explicit interface implementations for all methods.

Basic null checks

If you care about null references, the first contract will probably be to ensure that the edge is not null. To make it quick and painless, make sure you use the crn snippet. Note that since the method body of a contract class does not matter, we simply return the default value.

bool IMutableEdgeListGraph<TVertex, TEdge>.AddEdge(TEdge e)
{
    Contract.Requires(e != null);
    return default(bool);
}

More pre-conditions

One of the implicit requirements of AddEdge is that both vertices should already belong to the graph. We want to make this explicit as a pre-condition as well:

bool IMutableEdgeListGraph<TVertex, TEdge>.AddEdge(TEdge e)
{
    IMutableEdgeListGraph<TVertex, TEdge> ithis = this;
    Contract.Requires(e != null);
    Contract.Requires(ithis.ContainsVertex(e.Source));
    Contract.Requires(ithis.ContainsVertex(e.Target));
    return default(bool);
}

There are two things to notice here: (1) we had to cast the “this” pointer to the interface we are writing contracts for, IMutableEdgeListGraph<,>, because the methods in a contract class must be explicit interface implementations and we therefore do not have access to the members of the interface directly through “this”; (2) ContainsVertex had to be annotated with [Pure], as any method called from a contract must be pure:

[Pure]
bool ContainsVertex(TVertex vertex);

What about post-conditions?

One of my favorite features of Code Contracts is the ability to state post-conditions. This is done by calling the Contract.Ensures method. For example, we start by expressing that the edge must belong to the graph when we leave AddEdge:

bool IMutableEdgeListGraph<TVertex, TEdge>.AddEdge(TEdge e)
{
    IMutableEdgeListGraph<TVertex, TEdge> ithis = this;
    Contract.Ensures(ithis.ContainsEdge(e));
    return default(bool);
}

Result and Old

Since the method returns a boolean, we should also state something about the result value. To refer to the result, one uses the Contract.Result<T>() method. In this case, the method returns true if the edge was new, false if it was already in the graph. We can refer to a pre-state value, i.e. the value of ithis.ContainsEdge(e) at the beginning of the method***:

Contract.Ensures(Contract.Result<bool>()
    == Contract.OldValue(!ithis.ContainsEdge(e)));

Lastly, we can also make sure the edge count has been incremented if an edge was actually added (as you can see, an implication operator in C# would come in handy here):

Contract.Ensures(ithis.EdgeCount
    == Contract.OldValue(ithis.EdgeCount) + (Contract.Result<bool>() ? 1 : 0));

*** The old value is evaluated after the preconditions.

Where to go next?

In the next post, I’ll show how to leverage these contracts with Pex.

posted on Saturday, February 28, 2009 8:33:15 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Tuesday, February 03, 2009


Ben presented a talk on Pex a couple months ago at DDD7. The video of the talk is now up on the web. Check it out!


posted on Tuesday, February 03, 2009 8:38:53 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, January 30, 2009

A while ago at PDC, a couple of my colleagues presented CHESS but unfortunately weren’t ready to release it. Well, you don’t have to wait anymore: CHESS is now available for download at DevLabs for Visual Studio 2008 Team Dev or Team Test. CHESS comes under the same pre-release license as Pex or under an academic license.

CHESS = Unit Testing of Concurrent Programs

CHESS is a tool that finds and reproduces “Heisenbugs” in concurrent programs (both Win32 and .NET)… You know, those kinds of bugs that do not reproduce under the debugger or only show up in a production environment; the kind of bugs where you scratch your head for days in front of a process dump before giving up.

I also recently recorded a great ‘getting started’ tutorial for CHESS in Visual Studio Unit Test: you can take a unit test*** and turn it into a CHESS test by adding the [HostType(“Chess”)] attribute!


*** That unit test should involve threads, otherwise CHESS will have no effect.

CHESS and Pex

In order to control the scheduling of .NET programs, CHESS instruments the threading APIs using ExtendedReflection, the code instrumentation framework that was built for Pex. In the future, we would also like to combine both approaches to explore data inputs and thread schedules together.

Well enough said… go and try it out!

posted on Friday, January 30, 2009 12:52:08 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Thursday, January 15, 2009

Phil Haack has started an interesting discussion on the implementation of a named formatter whose format strings are of the form {name} instead of {0}. In fact, he started with 3 implementations and his readers submitted 2 more! Since Phil was kind enough to package all the implementations (and the unit tests) in a solution, I took the liberty of running Pex on them to see if there were any issues lurking. The goal of this post is not really to show whether those implementations are correct, but to give you an idea where Pex could be applicable in your code.

But wait, it’s already Unit Tested!

Indeed, Phil diligently wrote a unit test suite that covers many different use cases. Does that mean you are done with testing? The named format syntax can be tricky since it may involve escaping curly braces… What is the named format syntax anyway? It’s pretty obvious that it should let you write something like “{Name} is {Status}” instead of “{0} is {1}”, but that’s still pretty vague. In particular, what are the syntactic rules for escaping curly braces (what’s the meaning of {{}{}{{}}}, etc.), and is there even a grammar describing named formats?

As is often the case, there is no clear and definitive answer – at least I could not find one. In this post, I’ll show 2 techniques, out of the many patterns for Pex which we documented, that can be used here.

Technique 1: Explore for runtime errors (pattern 2.12 – parameterized stub)

The most basic way of running Pex is to simply apply Pex to a method without adding any assertions. At this point, you are looking for NullReferenceException, IndexOutOfRangeException or other violations of lower-level APIs. Although this kind of testing won’t tell you whether your code is correct, it may give you back instances where it is wrong; for example, by passing any format string and object into the format method in a parameterized unit test. The code below is a parameterized unit test for which Pex can generate relevant inputs:

[PexMethod]
public void HenriFormat(string format, object o)
{
    format.HenriFormat(o);
}

Note that it took a couple runs of Pex to figure out the right set of assemblies to instrument. Hopefully, this process is mostly automated and you simply have to apply the changes that Pex suggests.

The screenshot below shows the inputs generated by Pex. Intuitively, I would think that FormatException is expected and part of properly rejecting invalid inputs. However, there are ArgumentOutOfRangeExceptions triggered inside the DataBinder code that should probably be intercepted earlier: if this implementation uses the DataBinder code, it should make sure that it only passes acceptable values. Whether those issues are worth fixing is a tricky question: maybe the programmer intended to let this behavior happen.


Technique 2: Compare different implementations (pattern 2.7 – same observable behavior)

A second approach is to compare the output of the different implementations. Since all the implementations implement the same (underspecified) named format specification, their behavior should match with respect to any input. This property makes it really easy to write powerful parameterized unit tests. For example, the following test asserts that the Haack and Hansel implementations have the same observable behavior, i.e. return the same string or throw the same exception type (PexAssert.AreBehaviorsEqual takes care of asserting this):

[PexMethod]
public void HaackVsHansel(string format, object o)
{
    PexAssert.AreBehaviorsEqual(
        () => format.HaackFormat(o),
        () => format.HanselFormat(o));
}

Again, this kind of testing will not tell you whether the code is correct, but it will give you instances where the implementations behave differently, which means one of the two (or even both) has a bug. This is a great technique to test a new implementation against an old (fully tested) implementation which can be used as an oracle. We run the test and Pex comes back with a list of test cases. For example, the format string “\0\0{{{{“ leads to different outputs for the two implementations. From the different outputs, “\0\0{{“ vs “\0\0{{{{“, it seems that curly braces are escaped differently. If I wanted to dig deeper, I could also simply debug the generated test case.


Comparing All Implementations

Now that we’ve seen that Phil and Scott disagree, let’s apply this to the other formatters. I quickly set up a T4 template to generate the code for the parameterized unit tests between each pair of formatters. Note that order matters for Pex: calling A then B might lead to a different test suite than calling B then A, just because Pex explores the code in a different order.

<#@ template language="C#" #>
<#
    // formatter implementations compared (names from the post)
    string[] formatters = new string[] { "Haack", "Hansel", "Henri" /*, ... */ };
#>
using System;
using Microsoft.Pex.Framework;
using Microsoft.Pex.Framework.Validation;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using UnitTests;
using StringLib;

namespace UnitTests
{
    [PexClass(MaxConstraintSolverTime = 2, MaxRunsWithoutNewTests = 200)]
    public partial class StringFormatterTestsTest
    {
<#  foreach(string left in formatters) {
        foreach(string right in formatters) {
            if (left == right) continue;
#>
        [PexMethod]
        public void <#= left #>Vs<#= right #>(string format, object o)
        {
            PexAssert.AreBehaviorsEqual(
                () => format.<#= left #>Format(o),
                () => format.<#= right #>Format(o));
        }
<#      }
    } #>
    }
}

The answer that Pex returns is interesting and scary: none of the implementations’ behaviors match. This is not entirely surprising since they all use radically different approaches to solve this problem: regex vs. string APIs, etc.

Try it for yourself.

The full modified source is available for download here. You will need to install Pex from http://research.microsoft.com/pex/downloads.aspx to re-execute the parameterized unit tests.

posted on Thursday, January 15, 2009 4:38:25 PM (Pacific Standard Time, UTC-08:00)  #    Comments [1]
# Wednesday, January 14, 2009

Today, we released the 0.9.40105.0 version of Pex. The major highlights of this release are: a pre-release license for Visual Studio 2008, better test framework integration, and partial support for Visual Basic.NET/F# (read the full release notes).

CHESS, another project from RiSE, also released its first public download today. CHESS finds and reproduces heisenbugs in concurrent programs. CHESS uses the Pex code instrumentation technology to monitor and control the .NET threading APIs, and one day we hope to combine Pex and CHESS…

Enjoy and don’t forget to give us your feedback through the MSDN forums.

posted on Wednesday, January 14, 2009 12:21:29 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Saturday, January 10, 2009

Stubs is a framework for generating stubs (that we announced earlier with Pex). The framework generates stubs (not mocks) that are

  • lightweight, i.e. relying on delegates only,
  • strongly typed, i.e. no magic strings – refactoring friendly,
  • source code generated, i.e. no dynamic code or expression trees,
  • great debugging experience, i.e. break and step through stubs,
  • minimalistic API (there is almost none),
  • friendly for Pex!

More details about Stubs are also available in the stubs reference document.

Quick Start

Let’s start from a simple interface IFileSystem for which we want to write stubs:

public interface IFileSystem
{
    void WriteAllText(string fileName, string content);
    string ReadAllText(string fileName);
}

Stubs relies solely on delegates to attach behaviors, and leverages the lambda syntax from C# 3.0 to accomplish pretty much anything:

// SFileSystem was generated by Stubs and stubs IFileSystem
var stub = new SFileSystem();

// always returns "..."
stub.ReadAllText = (me, file) => "...";

// expectations: checks file == "foo.txt"
stub.ReadAllText = (me, file) =>
    {
        Assert.AreEqual("foo.txt", file);
        return "...";
    };

// storing side effects in closures: 'written' saves the content
string written = null;
stub.WriteAllText = (me, file, content) => written = content;

// hey, we can do whatever we want!
stub.ReadAllText = (me, file) =>
    {
        if (file == null) throw new ArgumentNullException();
        return "...";
    };

// downcast to the interface to use it
IFileSystem fs = stub;

Anything is possible… as long as C# (or your favorite language) allows it.

Anatomy of a Stub

Each stubbed method has an associated delegate field that can be set freely (e.g. WriteAllText and ReadAllText). If this delegate field is set, it will be used when the method is called; otherwise a default action occurs. Let’s see this with a simplified stub of the IFileSystem interface, SFileSystem, which shows how IFileSystem.WriteAllText is implemented:

class SFileSystem
    : StubBase<SFileSystem>
    , IFileSystem
{
    public Action<SFileSystem, string, string> WriteAllText; // attach here

    void IFileSystem.WriteAllText(string fileName, string content)
    {
        var stub = this.WriteAllText;
        if (stub != null)
            stub(this, fileName, content); // your code executed here
    }
}

The actual generated code may look more complicated because it contains custom attributes for debugging, comments and globally qualified types to avoid name clashing:


Code generation: How does it work?

Stubs is a single file generator that pre-generates stubs to source code. Stubs also monitors the build and regenerates the source code when a change occurs.


The stub generation is configured through an XML file (a .stubx file) that specifies which stubs to generate and how. The generated code is saved in a 'Designer' file, similarly to other code generation tools (typed datasets, etc.).


Great debugging experience

A cool side effect of simply using delegates: you can step through and debug your stubs! This is usually not the case with mock frameworks that rely on dynamically generated code. Below, you can see that one can set breakpoints in the body of a stub and debug as usual.


Where do I get it?

The Stubs framework comes with Pex, but you can use it in any unit testing activity. It provides a simple, lightweight alternative for defining stubs for testing. Pex can be downloaded from http://research.microsoft.com/pex .

posted on Saturday, January 10, 2009 7:41:32 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Tuesday, December 30, 2008

I recently implemented a disjoint-set data structure in QuickGraph; it is the main building block of Kruskal's minimum spanning tree algorithm. This is a fun data structure to look at, as its purpose is quite different from the 'main' BCL collections. A disjoint-set partitions elements into sets, and defines two main operations for that purpose: Find, which finds the set an element belongs to (and can be used to check whether two elements are in the same set), and Union, which merges two sets.
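To make the two operations concrete, here is a compact Python sketch of a forest-based disjoint-set with union by rank and path compression (the class and member names are mine, not QuickGraph's):

```python
class DisjointSet:
    """Forest-based disjoint-set with union by rank and path compression."""
    def __init__(self):
        self.parent = {}
        self.rank = {}
        self.set_count = 0

    def make_set(self, x):
        self.parent[x] = x
        self.rank[x] = 0
        self.set_count += 1

    def find(self, x):
        # locate the root, then compress the path behind us
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # returns True only when two distinct sets were actually merged
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        self.set_count -= 1
        return True

ds = DisjointSet()
for i in range(4):
    ds.make_set(i)
assert ds.union(0, 1) and ds.set_count == 3
assert not ds.union(0, 1)        # already in the same set
assert ds.find(0) == ds.find(1)
```

Note how union reports whether a merge happened and keeps a running set_count; those are exactly the behaviors the contracts below pin down.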

The source of the disjoint-set is in the QuickGraph source, if you are curious about the details.

Testing the disjoint-set

There is not much left to TDD when it comes to writing a data structure. Such data structures are usually described in detail in an article, and you 'just' have to follow the author's instructions to implement them. Nevertheless, unit testing is really critical to ensure that the implementation is correct – but there is a risk of having to re-implement the algorithm to test it.

For the disjoint-set, I used two tools developed at RiSE: Code Contracts and… Pex. When implementing a data structure from the literature, I usually start by 'dumping' the code and the contracts. As much as possible, any invariant or property that the author describes should be translated into contracts. This gives Pex more opportunities to find bugs in my code.

Contracts First

For example, the contracts for the Union method look as follows:

private bool Union(Element left, Element right)
{
    Contract.Requires(left != null);
    Contract.Requires(right != null);
    Contract.Ensures(Contract.Result<bool>()
        ? Contract.OldValue(this.SetCount) - 1 == this.SetCount
        : Contract.OldValue(this.SetCount) == this.SetCount);
    Contract.Ensures(this.FindNoCompression(left) == this.FindNoCompression(right));
    // ... implementation ...
}

The first two Requires clauses do the usual null checks. The first Ensures clause checks that if Union returns true, a merge has been done and the number of sets has decreased by one. The last Ensures checks that left and right belong to the same set at the end of the union.

Note that so far, I did not have to provide any kind of implementation details. The other methods in the implementation receive the same treatment.

A Parameterized Unit Test

I wrote a single parameterized unit test while writing/debugging the implementation of the forest. It could probably have been refactored into many smaller tests, but for the sake of laziness, I used a single one.

The parameterized unit test implements a common scenario: add elements to the disjoint-set, then apply a bunch of union operations. Along the way, we can rely on test assertions and code contracts to check the correctness of our implementation.

Let’s start with the test method signature:

[PexMethod]
public void Unions(int elementCount, [PexAssumeNotNull]KeyValuePair<int, int>[] unions) {

The test takes a number of elements to add and a sequence of unions to apply. The input data, as is, needs to be refined to be useful. To that end, we add assumptions, in the form of calls to PexAssume, to tell Pex how it should 'shape' the input data. In this case, we want to ensure that elementCount is positive and relatively small, and that the values in unions are within the [0..elementCount) range.

    PexAssume.IsTrue(0 < elementCount);
    PexAssume.TrueForAll(
        unions,
        u => 0 <= u.Key && u.Key < elementCount &&
             0 <= u.Value && u.Value < elementCount);

Now that we have some data, we can start writing the first part of the scenario: filling the disjoint-set. To do so, we simply add the integers from [0..elementCount). Along the way, we check that Contains, ElementCount and SetCount all behave as expected:

    var target = new ForestDisjointSet<int>();
    // fill up with 0..elementCount - 1
    for (int i = 0; i < elementCount; i++)
    {
        target.MakeSet(i);
        Assert.IsTrue(target.Contains(i));
        Assert.AreEqual(i + 1, target.ElementCount);
        Assert.AreEqual(i + 1, target.SetCount);
    }

The second part gets more interesting. Each element of the unions array describes a union between two elements:

    // apply Union to the pairs in unions
    for (int i = 0; i < unions.Length; i++)
    {
        var left = unions[i].Key;
        var right = unions[i].Value;

        var setCount = target.SetCount;
        bool unioned = target.Union(left, right);

There are two things we want to assert here: first, that left and right now belong to the same set; second, that SetCount has been updated correctly: if unioned is true, it should have decreased by one.

        // should be in the same set now
        Assert.IsTrue(target.AreInSameSet(left, right));
        // if unioned, the count decreased by 1
        PexAssert.ImpliesIsTrue(unioned, () => setCount - 1 == target.SetCount);
    }

From this parameterized unit test, I could work on the implementation, refining again and again until all tests passed (I did not write any other test code).

Happy Ending

The resulting test suite generated by Pex is summarized in the screenshot below: the number of elements does not really matter; what is interesting is the sequence of unions performed. This test suite achieves 100% coverage of the methods under test :).


In fact, the Union method involved some tricky branches to cover, due to some optimization occurring in the disjoint-set. Pex managed to generate unit tests for each of the branches.


Thanks to the contracts, the test assertions, and the high code coverage of the generated test suite, I now have good confidence that my code is properly tested. The End!

(Next time, we will talk about testing minimum spanning tree implementations…)

posted on Tuesday, December 30, 2008 11:41:45 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Tuesday, November 25, 2008

We just published a tutorial on the code digger on CodeProject, enjoy…


posted on Tuesday, November 25, 2008 8:13:29 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, November 14, 2008

We will give a session on Pex at the Seattle Code Camp – Saturday 11/15 at 2.45pm. We will show the new Code Digger and how Pex can help you…

posted on Friday, November 14, 2008 1:35:49 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Wednesday, November 05, 2008

Check out the session on Code Contracts and Pex on Channel 9. You will learn about the cool new API to express pre-conditions, post-conditions and invariants in your favorite language – i.e. design by contract (DbC) for .NET – the new Code Digger experience in Pex, and most importantly how DbC and Pex play well together.



posted on Wednesday, November 05, 2008 11:10:22 PM (Pacific Standard Time, UTC-08:00)  #    Comments [4]
# Tuesday, October 21, 2008

Wonder what we’ve been up to for the last few months… We’ve been building a very cool development experience on top of Pex that we call Code Digging. Check out Nikolai’s blog post on what it means to you!


posted on Tuesday, October 21, 2008 9:49:02 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]

Have you ever written code that directly uses the .NET File APIs? We probably all have, although we knew it would make the code less testable and dependent on the file system state. As bad as it sounds, it really requires a lot of discipline and work to avoid this: one would need to create an abstraction layer over the file system, which is not a small task (think long/tedious).

// in how many ways can this break?
public static void CleanDirectory(string path)
{
    if (Directory.Exists(path))
        Directory.Delete(path, true);
    Directory.CreateDirectory(path);
}


Fortunately, there is always someone else who got motivated at some point. Ade Miller dug up an abstraction of the file system, the IFileSystem interface, that Brad Wilson had written for the CodePlex client project. Very nice, since it provides a solid foundation for cleanly abstracting away the file system, and thus increases the testability of our code.

// a little better, testable code at least
public static void CleanDirectory(IFileSystem fs, string path)
{
    if (fs.DirectoryExists(path))
        fs.DeleteDirectory(path, false);
    fs.CreateDirectory(path);
}


So with this interface we can write code that we’ll be able to test in isolation from the physical file system. That’s great, but there is still a lot of work on the shoulders of the developer: he will have to write intricate scenarios involving mocks to simulate the different possible configurations of the file system. No matter which mock framework (Moq, Rhino Mocks, Isolator, …) he uses, (1) it’s going to be painful, and (2) he’ll miss cases. It’s probably easy to write a single “happy path”, but especially with the file system there are quite a few realistic “unhappy paths”.

This test case uses Moq to create the scenario where a directory already exists. Although Moq has a very slick API for setting expectations, it is still a lot of work to write this basic scenario. (And what exactly is the meaning of “Expect”, the delegate or expression inside, “Returns” and “Callback”?)

public void DeletesAndCreateNewDirectory()
{
    var fs = new Mock<IFileSystem>();
    string path = "foo";

    fs.Expect(f => f.DirectoryExists(path)).Returns(true);
    fs.Expect(f => f.DeleteDirectory(path, false)).Callback(() => Console.WriteLine("deleted"));
    fs.Expect(f => f.CreateDirectory(path)).Callback(() => Console.WriteLine("created"));

    DirectoryExtensions.CleanDirectory(fs.Object, path);
}


We had our intern, Soonho Kong, work on a Parameterized Model of the File System, built on top of the IFileSystem interface (yes that same interface Brad Wilson published on CodePlex). We say that the model is parameterized because it uses the Pex choices API to create arbitrary initial File System states; Pex “chooses” each such state (actually, Pex carefully computes the state using a constraint solver) to trigger different code paths in the code. You can think of each choice as a new parameter to the test. Or to put this with an example: if your code checks that the file “foo.txt” exists, then the parameterized model would choose a file system state that would contain a “foo.txt” file (or not, in another state, to cover both branches of the program).

So what does it mean for you? Well, the way you write tests that involve the file system changes radically. You simply need to pass the file system model to your implementation. The model is an under-approximation of the real file system (which means that we didn’t model every single nastiness that can occur when the moon is full), but it definitely captures more practically relevant corner cases than we (humans) usually think about. Let’s see this in the following test:

public void CleanDirectory()
{
    var fs = new PFileSystem();
    string path = @"\foo";
    DirectoryExtensions.CleanDirectory(fs, path);

    // assert: the directory exists and is empty
    Assert.AreEqual(0, fs.GetFiles(path).Length);
}

When we run Pex, we get 7 generated tests. In fact, Pex finds an interesting bug that occurs when a file with the name of the directory to clean already exists. In the Pex Exploration Results window, you can see a ‘dir’-like output of the file system model associated with a particular test case (the fs.Dir() method call outputs that text to the console which Pex captures).


This bug is the kind of corner-case that makes testing the file system so fun/hard. Thanks to the parameterized model (and Soonho), we got it for free. Note also that the assertion in our test is pretty powerful since it must be true for any configuration of the file system (it almost smells like a functional specification to me):

// assert: the directory exists and is empty
Assert.AreEqual(0, fs.GetFiles(path).Length);
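The corner case is easy to replay against a hand-rolled in-memory model. A Python sketch (this toy FakeFileSystem and its method names are illustrative inventions, not Soonho's PFileSystem):

```python
class FakeFileSystem:
    """Toy in-memory model: a path is either a directory or a file."""
    def __init__(self, dirs=(), files=()):
        self.dirs, self.files = set(dirs), set(files)

    def directory_exists(self, p): return p in self.dirs
    def delete_directory(self, p): self.dirs.discard(p)
    def create_directory(self, p):
        if p in self.files:
            raise OSError(f"a file named {p!r} already exists")
        self.dirs.add(p)

def clean_directory(fs, path):
    if fs.directory_exists(path):
        fs.delete_directory(path)
    fs.create_directory(path)

clean_directory(FakeFileSystem(), r"\foo")  # happy path: fine

# the state Pex chose: a *file* named \foo already exists
try:
    clean_directory(FakeFileSystem(files={r"\foo"}), r"\foo")
    assert False, "expected a failure"
except OSError:
    pass  # CleanDirectory never considered this configuration
```

The point of the parameterized model is that Pex enumerates such initial states for you; here we had to guess the bad one by hand.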

Happy modeling!

The full source of PFileSystem will be available in the next version of Pex (0.8).

posted on Tuesday, October 21, 2008 12:15:09 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]
# Friday, October 17, 2008

In this post, you’ll find the XSL stylesheets to render MSTest 2008 reports in CruiseControl.NET. CC.NET comes with stylesheets for MSTest 2005, but it seems there were some changes in the XML output, or at least they did not work for me. Please refer to the CC.NET documentation on how to integrate them into your build.

The templates are minimalistic since a deeper investigation can be done by importing the .trx file into Visual Studio.

The summary view:


And the details view:


Downloads – "AS IS" with no warranties, and confers no rights – :

posted on Friday, October 17, 2008 10:28:51 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, October 02, 2008

We are very excited to announce that Pex has a session at PDC 2008. We will be talking about code contracts and Pex, and how they play nicely together. Book it now in your conference agenda!!! (Look for ‘Research’ or ‘Pex’ in the session list.)


See you there and don’t forget to swing by our booth.

posted on Thursday, October 02, 2008 10:42:09 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Monday, September 22, 2008

How do you write good parameterized unit tests?
Where do they work best?
Are there Test Patterns? Anti-Patterns?

These are the kinds of questions we have received many times from Pex users. We just released Pex 0.7, which contains a list of patterns and anti-patterns for parameterized unit testing (this is still a draft, but we feel that we already have a number of good patterns that would be helpful for anyone giving Pex a shot):

Note that most of the patterns in this document are not Pex-specific and apply to parameterized unit tests in general, including MbUnit RowTest/CombinatorialTest/DataTest, NUnit RowTest, MSTest data tests, etc.

The amazing ‘quadruple A’ pattern

The ‘triple A’ pattern is a common way of writing a unit test: Arrange, Act, Assert. Even more ‘A’crobatic, we propose the ‘quadruple A’, where we add one more ‘A’ for Assume:


Pex is an automated white box testing tool from Microsoft Research.
More information at http://research.microsoft.com/pex.

posted on Monday, September 22, 2008 3:41:15 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, September 17, 2008

In the previous post, we implemented the insertion method of a binary heap using Test Driven Development (TDD) and parameterized unit tests (I'll leave the full implementation of the insertion method as an exercise).

In this post, we will take a closer look at the development flow that we used and show how it relates to traditional TDD. For many people, combining TDD and automated test generation makes no sense. I believe this is no longer true, and that is what this post is about.

Test Driven Development Flow

TDD has a well-defined flow where developers

  1. write a unit test,
  2. run the test and watch it fail,
  3. fix the code,
  4. run the test and watch the test pass. Start again (I'll skip refactoring in this discussion)

During this flow, practitioners also refer to the green state, when all tests are passing, and the red state, when some tests are failing. The picture below depicts this little state machine.


A key aspect of this approach is that the design of the API is inferred by writing the 'scenarios', i.e. tests. Therefore, unit tests are a critical building block of the TDD flow.

Note also that xUnit-like test frameworks (pick your favorite framework here) provide the automation tools so that the execution of the test and the investigation of a failure is painless for the programmer.

Test Driven Development Flow With Parameterized Unit Tests

Parameterized unit tests and Pex change the TDD flow while retaining its essence: building the design from tests. Here are the steps, we'll discuss them in detail later on:

  1. Write a parameterized unit test,
  2. Run Pex and watch some generated unit tests fail,
  3. Fix the code,
  4. Run Pex again,
    1. Some previously generated unit tests now pass, and at least one new failing unit test gets generated (go to 3)
    2. All generated tests are passing, start again (go to 1)

The key difference is the shortcut from step 4 (generating unit tests) to 3 (fix the code), without passing through step 1 (write a new test). This is illustrated by the yellow feedback loop in the diagram below:


To the risk of repeating myself, let me emphasize some important points here:

  • you still need to write unit tests, it's just that they can have parameters: Pex generates unit tests from parameterized unit tests, by instantiating them. The person who writes the parameterized unit test is you, not the tool!
  • it is still about design: using parameterized unit tests is as much about design as closed (i.e. parameterless) unit tests. In fact, one can argue that parameterized unit tests are much closer to a specification than closed unit tests.
  • it is test first: in case it was not obvious by now :)

The shortcut from 4 to 3

As mentioned above, the main difference in the flow is the jump from running the tool (step 4) to fixing the code (step 3), without writing new tests (step 1). This happens because a parameterized unit test captures equivalence classes rather than a single scenario like closed unit tests. As a result,

  • you spend more time fixing/implementing the code: the nice thing about the shortcut is that you can spend more time writing the code, rather than writing tests.
  • you can leverage automated white-box testing tools: Pex tries to get the maximum coverage out of your parameterized unit test (remember that getting coverage also means covering the throwing branches in Assert), using automated white-box code analysis. Now that you have all those manycore CPUs on your motherboard, you can finally make good use of them :)

To Pex and not to Pex

An important aspect of parameterized unit tests (and tools like Pex) is that you do not have to (completely) drop your existing habits: in many cases, it is easier to write closed unit tests! In fact, you can always start from a closed unit test and refactor it later. We do not expect users to write parameterized unit tests exclusively (another nice read here), but when you do write them, we expect you'll get much more bang for your buck.

In future posts, we'll discuss different ways to write parameterized unit tests: from refactoring existing unit tests to using test patterns.

To be continued...

Next time, we'll go back to the heap and look at implementing the 'remove minimum' method...

(Pex is an automated structural testing tool from Microsoft Research. More information at http://research.microsoft.com/pex.)

posted on Wednesday, September 17, 2008 5:15:26 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [6]
# Monday, September 08, 2008

The other day I stumbled on a first draft of a new book on algorithms (Data Structures and Algorithms). After taking a peek at the draft, I found some (hidden) motivation to finally write a decent binary heap for QuickGraph. A heap is a critical data structure for Dijkstra's shortest path or Prim's minimum spanning tree algorithms, since it is used to build efficient priority queues.

In this post, we'll start building a binary heap using Test-Driven Development (write the tests first, etc...) and parameterized unit tests.


A heap is a tree in which each parent node has a value smaller than or equal to those of its child nodes. The binary heap is a heap implemented with a binary tree, and to make things more interesting (and fast), the tree is usually mapped to an array using indexing magic:

  • parent node index: (index - 1) / 2
  • left child node index: 2 * index + 1
  • right child node index: 2 * index + 2

Indexing magic like this is typically the kind of thing that introduces bugs.
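A quick sanity check of that magic, sketched in Python: the parent formula must invert both child formulas for every index.

```python
# parent/child index arithmetic for an array-backed binary heap
def parent(i): return (i - 1) // 2
def left(i):   return 2 * i + 1
def right(i):  return 2 * i + 2

for i in range(1000):
    # the parent formula inverts both child formulas
    assert parent(left(i)) == i
    assert parent(right(i)) == i
    # children are distinct and sit strictly after their parent
    assert left(i) != right(i) and left(i) > i and right(i) > i
```

(The `//` floor division matches C#'s truncating `/` here because all indices are non-negative.)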

Let's write that first test

We start by writing a test that simply fills any binary heap with a number of entries. A possible test for this looks as follows:

[PexMethod]
public void Insert<TPriority, TValue>(
    [PexAssumeUnderTest]BinaryHeap<TPriority, TValue> target,
    [PexAssumeNotNull]KeyValuePair<TPriority, TValue>[] kvs)
{
    var count = target.Count;
    foreach (var kv in kvs)
    {
        target.Add(kv.Key, kv.Value);
        AssertInvariant<TPriority, TValue>(target);
    }
    Assert.IsTrue(count + kvs.Length == target.Count);
}

There are a number of unusual annotations in this test. Let's review all of them:

  • The test is generic: Pex supports generic unit tests. This is really convenient when testing a generic type.
  • PexAssumeUnderTest, PexAssumeNotNull: these basically tell Pex that we don't care about the cases where target or kvs is null.
  • We've added a Add(TPriority priority, TValue value) method and Count property to the BinaryHeap

There are also two assertions in the test. Inlined in the loop, we check the invariant of the heap (AssertInvariant); we'll fill in this method as we go. At the end of the test, we check the Count property.

BinaryHeap version #0

We start with a simple (broken) implementation that does nothing:

public class BinaryHeap<TPriority, TValue> {
    public void Add(TPriority priority, TValue value) { }
    public int Count { get { return 0; } }
}

Now that the code compiles, we run Pex which quickly finds that a non-empty array breaks the count assertion.


BinaryHeap version #1

We have a failing test so let's fix the code by storing the items in a list:

public class BinaryHeap<TPriority, TValue> {
    List<KeyValuePair<TPriority, TValue>> items = new List<KeyValuePair<TPriority, TValue>>();

    public void Add(TPriority priority, TValue value) {
        this.items.Add(new KeyValuePair<TPriority, TValue>(priority, value));
    }

    public int Count {
        get { return this.items.Count; }
    }
}

We run Pex again and all the previously failing tests are now passing :)


There's a new object creation event that happened (bold events should be looked at). Remember that the test takes a binary heap as argument, well we probably need to tell Pex how to instantiate that class. In fact, this is exactly what happens when I click on the object creation button:


Pex tells me that it guessed how to create the binary heap instance and gives me the opportunity to save and edit the factory. The factory looks like this and gets included automatically in the test project:

public partial class BinaryHeapFactory {
    [PexFactoryMethod(typeof(BinaryHeap<int, int>))]
    public static BinaryHeap<int, int> Create() {
        BinaryHeap<int, int> binaryHeap = new BinaryHeap<int, int>();
        return binaryHeap;
    }
}

All our tests are passing, so we can write the next test... but wait!

We have an invariant!

The nice thing about data structures is that they have fairly well defined invariants. These are very useful for testing!

In the case of the heap, we know that a parent node's priority should always be less than or equal to the priorities of both its left and right children. Therefore, we can add a method to BinaryHeap that walks the array and checks this property for each node:

public void ObjectInvariant() {
    for (int index = 0; index < this.items.Count; ++index) {
        var left = 2 * index + 1;
        Debug.Assert(left >= this.items.Count || this.Less(index, left));
        var right = 2 * index + 2;
        Debug.Assert(right >= this.items.Count || this.Less(index, right));
    }
}

private bool Less(int i, int j) { return false; }

Remember that AssertInvariant method? Let's call ObjectInvariant there and run Pex again.

void AssertInvariant<TPriority, TValue>(BinaryHeap<TPriority, TValue> target) {
    target.ObjectInvariant();
}

Pex immediately finds an issue:


This assertion failure is due to our overly simplified implementation of Less, which always returns false.

Fixing tests, and finding new failing tests

We have failing tests, so it's time to fix the code again. Let's start by fixing the Less method using a comparer:

readonly Comparison<TPriority> comparison = Comparer<TPriority>.Default.Compare;

private bool Less(int i, int j) {
    return this.comparison(this.items[i].Key, this.items[j].Key) >= 0;
}

We run Pex and it comes back with the following tests:


Two interesting things happened here:

  • the previous failing test (with [0,0], [0,0]) was fixed by fixing Less
  • Pex found a new issue where the input array involves a small key (3584), then a larger key (4098). A correct heap implementation would have kept the smaller key at the first position.

The coolest part is that we did not have to write a single additional line of code to get to this point: Pex updated the previous tests and generated the new failure for us.

This is a new kind of flow that occurs when using Pex in TDD: a code update fixed some issues but created new ones. We are still moving towards our goal, but we did not have to pay the price of writing a new unit test.

In fact, to fulfill the invariant and make this test pass, we will have to write a correct implementation of the Add method... without writing a single additional line of test code :)
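As a preview of where this is going, here is a Python sketch of what a correct insertion (sift-up) looks like, together with the invariant check (an illustration of the standard algorithm, not the eventual QuickGraph code):

```python
class BinaryHeap:
    """Array-backed min-heap; add() restores the invariant by sifting up."""
    def __init__(self):
        self.items = []  # list of (priority, value) pairs

    def add(self, priority, value):
        self.items.append((priority, value))
        i = len(self.items) - 1
        # swap with the parent until the parent is no larger
        while i > 0:
            p = (i - 1) // 2
            if self.items[p][0] <= self.items[i][0]:
                break
            self.items[p], self.items[i] = self.items[i], self.items[p]
            i = p

    def assert_invariant(self):
        n = len(self.items)
        for i in range(n):
            for child in (2 * i + 1, 2 * i + 2):
                assert child >= n or self.items[i][0] <= self.items[child][0]

heap = BinaryHeap()
for k in (3584, 4098, 0, 7, 0):   # includes the small-then-large pattern
    heap.add(k, None)
    heap.assert_invariant()        # holds after every insertion
assert heap.items[0][0] == 0       # smallest key ends up at the root
```

With this kind of Add, the generated failure above (small key, then larger key) goes green again.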

to be continued...

posted on Monday, September 08, 2008 10:39:35 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
# Tuesday, August 26, 2008

Update: renamed project to YUnit to avoid clashing with other frameworks.

I've been playing with custom test types for Team Test lately, and the result of this experiment is YUnit: a microscopic test framework that lets you write tests anywhere, since it only uses the Conditional attribute. That's right, any public static parameterless method tagged with [Conditional("TEST")] becomes a test :)

If you've always dreamed of implementing your own custom test type, this sample could be helpful to you. Remember that this is only a sample and comes with no guarantees of support.

Sources and installer available at: http://code.msdn.microsoft.com/yunit.


posted on Tuesday, August 26, 2008 8:26:36 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, August 06, 2008

Read Nikolai's announcement on the latest drop of Pex.

posted on Wednesday, August 06, 2008 11:06:59 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, July 31, 2008

Alexander Nowak has started a blog post chronicle on Pex and already has 6 episodes to it!

  • Pex - test Case 5 (regular expressions)
  • Pex - test case 4 (strings and parameter validation)
  • Pex - Test case 3 (enums and business rules validation)
  • Pex - test case 2 
  • Pex - test case 1
  • Starting with Pex (Program Exploration)

The posts give a nice point of view of Pex from a user's perspective, and compare it against classic testing techniques such as equivalence classes.

posted on Thursday, July 31, 2008 7:25:49 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Tuesday, July 29, 2008

Linear programming problems are usually solved using the simplex algorithm. While it is easy to encode a constraint system of linear equalities and inequalities as a parameterized unit test for Pex, there is currently no way to tell Pex that we want test inputs that are “minimal” according to a custom objective function. However, Pex can still generate *surprising* feasible solutions.

Let's start with a simple set of linear inequalities that define our problem.

public int Test(int x, int y)
{
    // PexAssume is used to add 'constraints' on the input;
    // in this case, we simply encode the inequalities in a boolean formula
    PexAssume.IsTrue(
        x + y < 10 &  // using bitwise & to avoid introducing branches
        5 * x + 2 * y > 20 &
        -x + 2 * y > 0 &
        x > 0 & y > 0);
    // the profit is returned so that it is automatically logged by Pex
    return x + 4 * y;
}

After running Pex, we get one feasible solution. As expected, it is not optimal, since we don't apply the simplex algorithm.


Enter overflows

Remember that .NET arithmetic operations silently overflow unless you execute them in a checked context? Let's push our luck and try to force an overflow by changing the x > 0 constraint to x > 1000:

public int Test(int x, int y)
{
    PexAssume.IsTrue(
        x + y < 10 &
        5 * x + 2 * y > 20 &
        -x + 2 * y > 0 &
        x > 1000 & y > 0);
    return x + 4 * y;
}

Z3, the constraint solver that Pex uses to compute new test inputs, uses bitvector arithmetic and finds a surprising solution that fulfills all the inequalities (our profit has just gone through the roof :)).


Z3 is truly an astonishing tool!
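You can replay the effect without Z3 by evaluating the constraints in wrap-around 32-bit arithmetic. A Python sketch (the concrete pair below is one feasible wrap-around solution I checked by hand, not necessarily the one Z3 returns):

```python
def i32(v):
    """Interpret an integer as a wrapped-around signed 32-bit value."""
    v &= 0xFFFFFFFF
    return v - 0x100000000 if v >= 0x80000000 else v

def feasible(x, y):
    # the inequalities from the test, evaluated as unchecked .NET int math
    return (i32(x + y) < 10 and
            i32(5 * x + 2 * y) > 20 and
            i32(-x + 2 * y) > 0 and
            x > 1000 and y > 0)

x, y = 1600000000, 900000000
assert feasible(x, y)    # "feasible" once the additions overflow...
assert x + y >= 10       # ...but clearly infeasible in exact arithmetic
```

Because wrap-around is just arithmetic modulo 2^32, it does not matter whether you wrap after each operation (as the CLR does) or once at the end, which is why the single i32 call per expression is enough.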

Checked context

In order to avoid overflow, one should use a checked context. Let's update the parameterized unit test:

public int Test(int x, int y)
{
    checked
    {
        PexAssume.IsTrue(
            x + y < 10 &
            5 * x + 2 * y > 20 &
            -x + 2 * y > 0 &
            x > 0 & y > 0);
        return x + 4 * y;
    }
}

In that case, Pex generates 2 test cases: one that passes, and one that triggers an OverflowException (an implicit branch).


Stay tuned for more surprising discoveries using Pex.

posted on Tuesday, July 29, 2008 6:35:31 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Monday, June 23, 2008

We've added a little filter to Pex that flags potential testability issues in your source code (at least in the context of unit testing). The idea is simple: if your code logic branches over APIs that depend on the environment (file system, UI, network, time, etc.), you're not doing 'pure' unit testing anymore, and it could be flagged as a testability issue.

Why do we do this anyway?

The problem is that Pex is very sensitive to those issues: it cannot control the environment. Let's take a look at a simple example where we write a custom stream class with a testability issue in one of the constructors:


The constructor checks that the file exists. In order to pass that point in the program, you would actually need to create or find an actual file in the file system. You're talking to the file system; this is already integration testing.

Parameterized drama

Let's be optimistic and write a parameterized unit test that reads a file:


Pex comes back with a single test, passing null as fileName (i.e. default(string)). Since Pex does not know how the file system works, it cannot create a valid file name, and that's where the story ends.


Help is on the way

To help you diagnose these issues, we've added specialized logging flags. Pex shows hints in the issue notification area:


Clicking on that event will give you additional important data: which API was called and the stack trace at the time of the call :)


posted on Monday, June 23, 2008 11:08:33 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Wednesday, June 04, 2008

Ever wonder what happens inside the ResourceReader (where do those resources come from anyway!)? Check out Nikolai's post on using Pex to test this (complicated) class...

posted on Wednesday, June 04, 2008 7:17:06 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, May 24, 2008

If you're interested in reading more about Pex, here are some online documents that should keep you satisfied:

The tutorial contains hands-on labs and exercises to understand how Pex generates new test cases. Highly recommended if you're interested in Pex.

Happy reading!

posted on Saturday, May 24, 2008 9:00:01 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, May 22, 2008

    Nikolai broke the news; we've just released Pex 0.5... get it while it's hot!

    We're eager to hear some feedback, so don't hesitate to tell us what you think about it.

    posted on Thursday, May 22, 2008 9:30:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [11]
    # Thursday, May 08, 2008
    posted on Thursday, May 08, 2008 8:40:27 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
    # Wednesday, February 27, 2008

    This is one problem for which we don't have an elegant solution yet:

    what is the best way to craft a name for a generated test?

    Let's see an example; given the following parameterized unit test,

    [PexMethod] void Test(int i) {
        if (i == 123) throw new ArgumentException();
    }
    Pex would generate 2 tests: i = 0, i = 123. So it seems doable to infer test names such as

    [TestMethod] void Test0() { this.Test(0); }
    [TestMethod, ExpectedException(typeof(ArgumentException))]
    void Test123ThrowsArgumentException() { this.Test(123); }

    So what's so difficult about it? Well, most PUTs aren't that simple, and as the size of the generated parameters increases, the method names grow as well (strings getting bigger). Here's a list of potential problems:

    • the method name should stay relatively short (fewer than 80 chars),
    • the parameters might be objects or class instances, which do not render well in a string format,
    • generated strings might be huge and contain weird unicode characters,
    • the tests might involve mock choices, which again do not render well as strings,
    • the more parameters, the more cryptic things become.

    Our approach: Timestamps

    In light of all those problems, we've taken the shortcut route in Pex by simply using a combination of the parameterized unit test method signature and the timestamp at which the test was generated:

    [TestMethod] void TestInt32_20080224_124301_35() { ... }

    This is ugly, how can I change that?

    We've added an extensibility point to support custom test naming schemes. After registering your 'namer', you will get the opportunity to craft a test name following your favorite coding standard. You will have the generated test, output, exception, etc... at your disposal to make an intelligent choice there.
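    The extensibility point itself is Pex-specific, but the gist of a namer can be sketched independently of any framework; everything below, names included, is illustrative only:

```csharp
using System;
using System.Linq;
using System.Text;

public static class TestNamer
{
    // builds a test name from the PUT name, the argument values and an
    // optional expected exception, truncating when the rendered
    // arguments make the name too long
    public static string BuildName(string method, object[] args, Type exception)
    {
        StringBuilder name = new StringBuilder(method);
        foreach (object arg in args)
        {
            string rendered = arg == null ? "Null" : arg.ToString();
            // keep only identifier-friendly characters
            rendered = new string(rendered.Where(char.IsLetterOrDigit).ToArray());
            name.Append('_').Append(rendered);
        }
        if (exception != null)
            name.Append("Throws").Append(exception.Name);
        // method names should stay relatively short
        return name.Length <= 80 ? name.ToString() : name.ToString(0, 80);
    }
}
```

This already runs into the problems listed above: huge strings get chopped, and two different inputs can collide after sanitization, which is part of why Pex fell back on timestamps.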

    Of course, you're also welcome to drop a comment on this post to suggest a better scheme.

    posted on Wednesday, February 27, 2008 12:18:46 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
    # Saturday, February 23, 2008

    So how do you write negative test cases with Pex? Here's a nice solution I was working on; I'm wondering what you'll think of it.

    Traditional ExpectedExceptions style

    Let's test the constructor of a type Foo that takes a reference argument. We expect it to throw an ArgumentNullException when null is passed. Therefore, we could write

    [ExpectedException(typeof(ArgumentNullException))]
    void Test() {
         new Foo(null);
    }

    or using xUnit style assertions,

    void Test() {
         Assert.Throws<ArgumentNullException>(delegate { new Foo(null); });
    }

    Pex ExpectedExceptions style

    Let's refactor our first test and push 'null' as a test parameter. We add an assertion after the constructor to make sure a null parameter never succeeds.

    void Test(object input) {
        new Foo(input);
        Assert.IsNotNull(input); // we should never get here with a null input
    }

    The interesting part is that this test will not only test for the null value but also for passing values. Moreover, there might be more checks over the input which might trigger other exceptions. In that case, additional asserts could be added -- or even better, centralized in an invariant method.
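    For illustration, here is a minimal sketch of what such a Foo might look like — the class is hypothetical; only its null-checking precondition matters:

```csharp
using System;

// hypothetical type under test: its constructor rejects null,
// so reaching the statement after 'new Foo(input)' with a null
// input would mean the precondition is broken
public class Foo
{
    private readonly object value;

    public Foo(object value)
    {
        if (value == null)
            throw new ArgumentNullException("value"); // precondition
        this.value = value;
    }
}
```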

    So, what do you think?

    posted on Saturday, February 23, 2008 1:40:50 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
    # Wednesday, February 13, 2008

    Maybe yes, maybe no, I guess it depends on the reviews... Have you reviewed it?


    posted on Wednesday, February 13, 2008 2:53:49 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]

    This is a general recommendation if you're planning to use a tool like Pex in the future: make sure that preconditions (i.e. parameter validation) fail in a different fashion than other assertions.

    Here's a snippet that shows the problem:

    // don't do this
    void Clone(ICloneable o) {
         Debug.Assert(o != null); // pre-condition
         object clone = o.Clone();
         Debug.Assert(clone != null); // assertion
    }

    Why is this bad?

    A tool like Pex will explore your code and try to trigger every Debug.Assert it finds on its way. When the assertion is a precondition, the failure is likely expected and one would like to emit a negative test case (i.e. 'expected exception').

    The problem in the snippet above is that both failures will yield the same assertion exception, and it will be very difficult to *automatically* triage the failure as expected or not.

    How do I fix this?

    Make sure different classes of assertions can be differentiated automatically, through different exception types, tags in the message, etc...
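    A sketch of the fix, using distinct exception types for preconditions and internal assertions (the helper class and names are mine, not the post's):

```csharp
using System;

public static class CloneHelper
{
    // preconditions fail with ArgumentNullException -> a tool can
    // automatically emit a negative ('expected exception') test
    public static object Clone(ICloneable o)
    {
        if (o == null)
            throw new ArgumentNullException("o"); // precondition

        object clone = o.Clone();

        // the internal assertion fails with a different exception
        // type, so it is never confused with parameter validation
        if (clone == null)
            throw new InvalidOperationException("assertion failed: clone is null");
        return clone;
    }
}
```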

    posted on Wednesday, February 13, 2008 9:51:00 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3]
    # Saturday, December 22, 2007

    Scott swung by MSR a couple of weeks ago to talk about Pex.



    posted on Saturday, December 22, 2007 4:02:30 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
    # Thursday, December 06, 2007

    Update: this talk has been cancelled.

    I'll be giving a talk about Pex in Diegem on January 3.


    posted on Thursday, December 06, 2007 8:33:33 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
    # Wednesday, December 05, 2007

    In the previous post, we went through the exploration testing process to exercise a simple method, CheckPositive. In this post, we'll try the same exploration testing, but will let Pex do it.

    // method under test
    1 void CheckPositive(int i, bool @throw) {
    2     if (i < 0) {      
    3          Console.WriteLine("not ok");
    4          if (@throw)
    5             throw new ArgumentException();
    6     }
    7     else
    8         Console.WriteLine("ok");
    9 }
    // hand-crafted unit tests
    [TestMethod] void Zero() {
         CheckPositive(0, false);
    }
    [TestMethod] void MinusOne() {
         CheckPositive(-1, false);
    }
    [TestMethod] void MinusOneAndThrow() {
         CheckPositive(-1, true);
    }

    Exploration testing with Pex

    To let Pex explore CheckPositive, we write a little test wrapper around that method:

    [TestClass, PexClass]
    public partial class ExplorationTesting {
        [PexMethod]
        public void Test(int i, bool @throw) {
            CheckPositive(i, @throw);
        }
    }
    We also instrumented the original method with additional methods to track down the path conditions that Pex computes along the execution traces. Pex generates 3 pairs of values, which are equivalent to the tests we created manually:

    • 0, false
    • int.MinValue, false
    • int.MinValue, true (throws)
    posted on Wednesday, December 05, 2007 8:54:15 AM (Pacific Standard Time, UTC-08:00)  #    Comments [1]
    # Tuesday, December 04, 2007

    In the previous post, we clarified what Pex was not doing. So what does it do?

    Pex performs some kind of automated exploration testing. I'll dive deeper into the details of this, but let's start with an example that gives the high level idea of the methodology.

    Exploration Testing

    Let's start by doing some exploration testing on a simple method that checks that an integer is positive, CheckPositive:

    1 void CheckPositive(int i, bool @throw) {
    2     if (i < 0) {      
    3          Console.WriteLine("not ok");
    4          if (@throw)
    5             throw new ArgumentException();
    6     }
    7     else
    8         Console.WriteLine("ok");
    9 }

    One way to 'explore' this method would be to throw different values at CheckPositive and use the debugger to see what's happening.

    Iteration 1: pick the default

    Let's create a unit test that does exactly that and step into the debugger. Since we don't really know anything about CheckPositive yet, we simply pick 0 for i (actually default(int)).

    void Zero() {
         CheckPositive(0, false);
    }

    When we reach the statement "Console.WriteLine..." on line 8, we can figure out that we took this branch because the condition "i < 0" on line 2 evaluated to false. Let's remember this and continue on.
              line 2, i < 0 == false, uncovered branch

    The execution continues and the test finishes successfully.

    Iteration 2: flip the last condition

    In the previous run, we've remembered that some code was not covered on line 3. We also know that this code path was not covered because the condition "i < 0" evaluated to false. At this point, we usually intuitively figure out a value of 'i' in our brain, to make this condition true. In this case, we need to solve "find i such that i < 0". Let's pick -1.

    void MinusOne() {
         CheckPositive(-1, false);
    }

    We run the test under the debugger. As expected, on line 2 the condition evaluates to true, and the program takes the other branch than in the previous test.
    The program continues and reaches line 4, where another if statement branches. Since the condition "@throw" evaluates to false, we do not take the branch that throws, and we remember the condition:
              line 4, @throw == false, uncovered branch

    The program continues to run and finishes.

    Iteration 3: path condition + flipped condition

    We still have an uncovered branch in the method, 'guarded' by the condition at line 4. To be able to cover this code, we need 2 things:
          1) reach line 4: i < 0
          2) make the condition in line 4 evaluate to true: @throw == true

    So to cover the last statement in the method, we need to find parameter values such that
                      i < 0 && @throw == true

    Let's pick -1, and true.

    void MinusOneAndThrow() {
         CheckPositive(-1, true);
    }

    The test executes and now throws an exception as we wanted. At this point, we've fully covered the behavior of CheckPositive.

    What about Pex?

    Pex uses the same 'exploration' methodology as above. Pex executes a parameterized unit test over and over and tries to cover each branch of the program. As it executes more code, it learns about new branches to cover, etc... To find the input, Pex uses a constraint solver, Z3.
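    Pex's actual machinery (runtime IL instrumentation plus the Z3 solver) can't be reproduced in a few lines, but the coverage-driven loop can be illustrated: instrument the branches by hand, then keep the candidate inputs that uncover new branches. This sketch illustrates the methodology only, not the constraint solving — the candidate list is hand-picked here, whereas Pex derives it by solving path conditions:

```csharp
using System;
using System.Collections.Generic;

public static class Exploration
{
    // CheckPositive from the post, instrumented by hand: every branch
    // records itself into 'covered' (Pex does this via IL rewriting)
    public static void CheckPositive(int i, bool @throw, ISet<string> covered)
    {
        if (i < 0)
        {
            covered.Add("i < 0");
            if (@throw)
            {
                covered.Add("i < 0 && @throw");
                throw new ArgumentException();
            }
            covered.Add("i < 0 && !@throw");
        }
        else
        {
            covered.Add("i >= 0");
        }
    }

    // keeps only the inputs that increased branch coverage
    public static List<(int, bool)> Explore(IEnumerable<(int, bool)> candidates)
    {
        var covered = new HashSet<string>();
        var kept = new List<(int, bool)>();
        foreach (var (i, t) in candidates)
        {
            int before = covered.Count;
            try { CheckPositive(i, t, covered); }
            catch (ArgumentException) { /* expected: a negative test case */ }
            if (covered.Count > before)
                kept.Add((i, t));
        }
        return kept;
    }
}
```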

    posted on Tuesday, December 04, 2007 11:17:38 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]

    Nikolai Tillmann published a paper on DySy, a tool that can infer likely invariants by monitoring the code that is running.

    DySy was built on top of the infrastructure that Pex uses to generate test cases. In fact, it is one of our sample applications :)

    posted on Tuesday, December 04, 2007 8:53:17 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
    # Friday, November 30, 2007

    I realized that I had not talked much about how Pex computes the values for the test parameters... the most important part of the tool!

    It's not ...

    Let's start by getting the wrong ideas out of the way. Parameterized tests are nothing new, they exist in MbUnit, VSTS, XUnit.Net, FIT, etc... so what's different with Pex?

    • it is not random: when it comes to generating data, the easiest solution is to plug in a random generator. If the state space is big enough (e.g. integers, 2^32), it is highly unlikely that random tests will find the interesting corner cases,

    if (i == 123456)
         throw new Exception(); // <---------- random won't find this

    • it does not require ranges or hints for the data: Pex does not require annotation to specify the range of particular inputs. All the relevant input values are inferred from the code itself (we'll see later how).
    • it is not pairwise testing: following the comment above, Pex does not use a pairwise approach.
    • it is not data testing: data testing such as FIT or MbUnit RowTest usually consists of rows containing a set of inputs and the expected output. In Pex, you cannot provide the expected output as a 'concrete' value, you need to express it as code (through assertions for example). This is a subtle difference that radically changes the way you write your tests.

    [RowTest, Row(0, 1, 1), Row(1, 0, 1)] // data test for the addition
    void AddTest(int a, int b, int result)
    {   Assert.AreEqual(result, a + b);  }

    [PexMethod] // 0 is the neutral of the addition operation
    void ZeroNeutralTest(int b)
    {   Assert.AreEqual(b, 0 + b);  }

    • it is not a static analysis tool: Pex does a dynamic analysis of the code; it analyses the code that *is* running. Pex does this by rewriting the IL before it's jitted and instrumenting it with (many) callbacks to track precisely which IL instruction is being run by the CLR. So yes, Pex analyses the IL but on the fly rather than 'statically'.

    Ok, now we've got a better idea of what Pex is not. So how does it work? ....

    posted on Friday, November 30, 2007 1:19:05 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3]
    # Friday, October 19, 2007

    In his Weekly Source Code, Scott Hanselman presents a new CodePlex project, NDepend.Helpers.FileDirectoryPath from Patrick Smacchia. Nice, better path handling should have been part of the BCL a while ago.

    Path stuff is hard

    Path normalization and parsing is not an easy task, so when Patrick Smacchia mentions that his code is "100% unit-tested", I decided to see whether Pex could find a little bug over there.

    A dumb parameterized unit test

    So I wrote the following parameterized unit test, which 'simply' calls the constructor of FilePathRelative. Under the hood, there is some string manipulation done by the library to normalize the path; it should be interesting to see what comes out of this. I also added calls to PexComment.XXX to log the input/output values (Pex will build a table out of this):

        public void FilePathRelativeCtor(string path) {
            FilePath result = new FilePathRelative(path);
            PexComment.Value("result", result.FileName);
        }


    So Pex starts running and soon enough an assert pops up. Pex had just found a neat little path that broke an assertion in the library:

    public void FilePathRelativeCtor_String_71019_003302_0_05() { ... }

    Opening up the reports, I went for the parameter table (remember the PexComment calls) that shows one row for each generated test. In fact, the 5th test that Pex generated was triggering the assert:

    Note that "//" also triggers the bug, which seems to indicate that any path ending with "/" will have this behavior.

    The path condition

    Lastly, I took a quick look at the path condition that Pex solved to discover the bug (see red below). Luckily this one is fairly easy, and one can clearly see path[0] == '/' in there.

     What did we learn today?

    Handling paths is hard :)

    Also, we saw that a dumb parameterized unit test (just calling a ctor) could find a bug. If you use assertions, they will help Pex look for bugs in your code.

    posted on Friday, October 19, 2007 1:46:37 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
    # Tuesday, October 16, 2007

    Update: I will not be at the Seattle Code Camp, too much rescheduling.

    I'll be presenting Pex at the Seattle Code Camp in Nov.

    Pex – Automated White Box Unit Testing

    Parameterized unit testing is becoming a mainstream feature of most unit test frameworks; MbUnit RowTest (and more), VSTS data tests, xUnit.net Theories, etc... Unfortunately, it is still the responsibility of the developer to figure out relevant parameter values to exercise the code. With Pex, this is no longer true. Pex is a unit test framework addin that can generate relevant parameter values for parameterized unit tests. Pex uses an automated white box analysis (i.e. it monitors the code execution at runtime) to systematically explore every branch in the code. In this talk, Peli will give an overview of the technology behind Pex (with juicy low-level .NET profiling goodness), then quickly jump to exciting live demos.

    posted on Tuesday, October 16, 2007 6:36:52 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Saturday, September 29, 2007

    In Pex, we have our own version of all (and more) collection classes from the BCL. This duplication may sound stupid but there's a very good reason behind it: since Pex instruments code (i.e. rewrites IL), the BCL collections might be instrumented as well... leading to poorer performance (instrumented code is slower for many reasons). Therefore, to avoid this situation, we ended up using our own implementation, which never gets instrumented.

    While using our collections, we added 2 features that we like a lot: readonly interfaces and small collections.

    Readonly interfaces

    Readonly interfaces are handy to 'safely' expose collections, once you start using them you get hooked:

    • ICountable<T>, a readonly enumerable with a Count:
    interface ICountable<T> : IEnumerable<T> {
        int Count { get; }
    }
    • ICopyable<T>, a countable that can be copied around:
    interface ICopyable<T> : ICountable<T> {
        void CopyTo(T[] array, int index);
    }
    • IIndexable<T>, an indexed collection:
    interface IIndexable<T> : ICopyable<T> {
        T this[int index] { get; }
    }

    We also have a couple more interfaces for dictionary style collections but you get the idea.

    Small Collections

    Small collections are optimized collections with 0, 1 or 2 elements. Depending on your scenario, these can improve the performance of your application while lowering the pressure on the GC (create fewer objects -> less stuff to garbage collect). If you create a lot of objects which each have collections, ask yourself: are you lazily allocating the collections? What should be the initial size of the collection? Do you expect more than one element on average?
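    A minimal sketch of the idea (not Pex's actual implementation): a list that stores its first two elements inline and only allocates a backing array once a third element arrives:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// stores 0, 1 or 2 elements without allocating a backing array
public class SmallList<T> : IEnumerable<T>
{
    private T first;
    private T second;
    private T[] rest;   // allocated lazily, only past 2 elements
    private int count;

    public int Count { get { return this.count; } }

    public void Add(T item)
    {
        switch (this.count)
        {
            case 0: this.first = item; break;
            case 1: this.second = item; break;
            default:
                if (this.rest == null)
                    this.rest = new T[4];
                else if (this.count - 2 == this.rest.Length)
                    Array.Resize(ref this.rest, this.rest.Length * 2);
                this.rest[this.count - 2] = item;
                break;
        }
        this.count++;
    }

    public IEnumerator<T> GetEnumerator()
    {
        if (this.count > 0) yield return this.first;
        if (this.count > 1) yield return this.second;
        for (int i = 2; i < this.count; i++)
            yield return this.rest[i - 2];
    }

    IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); }
}
```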

    posted on Saturday, September 29, 2007 11:33:15 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Sunday, September 23, 2007

    xUnit, the new variation on the 'unit test framework' theme, comes with support for data-driven tests: 'Theories' (funny name, btw). Pex is a plugin for test frameworks, so we've added support for xUnit as well.

    [PexClass] // xUnit does not have fixture attributes
    public class MyTests {
        [Theory, DataViaXXXX] // xUnit theories
        [PexTest] // let pex help you
        public void Test(int a, ....) { ... }
    }

    posted on Sunday, September 23, 2007 9:26:32 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Friday, September 07, 2007

    In a previous post, we were looking at partial trust and the lack of support for it. In this post, I'll show the key 'fixes' that we did to make Pex 'partial trust' aware.

    Simulating Partial Trust

    The easiest way to run under partial trust is to run your .net application from the network. However, in the context of a test framework, this would not work since many required permissions would not be granted (reflection, i/o, etc...). So we need a new AppDomain whose security policy considers the test framework assemblies as fully trusted.

    • Get a new AppDomain:
    string trust = "LocalIntranet";
    AppDomain domain = AppDomain.CreateDomain(trust);
    • Load the named permission set:
    PermissionSet permission = GetNamedPermissionSet(trust);
    • Create the code group structure that associates the partial trust permission with any code:
    UnionCodeGroup code = new UnionCodeGroup(
        new AllMembershipCondition(),
        new PolicyStatement(permission, PolicyStatementAttribute.Nothing));
    • Give full trust to each test framework assembly:
    StrongName strongName = CreateStrongName(typeof(TestFixtureAttribute).Assembly);
    PermissionSet fullTrust = new PermissionSet(PermissionState.Unrestricted);
    UnionCodeGroup fullTrustCode = new UnionCodeGroup(
        new StrongNameMembershipCondition(strongName.PublicKey, strongName.Name, strongName.Version),
        new PolicyStatement(fullTrust, PolicyStatementAttribute.Nothing));
    code.AddChild(fullTrustCode);
    • Assign the policy to the AppDomain:
    PolicyLevel policy = PolicyLevel.CreateAppDomainLevel();
    policy.RootCodeGroup = code;
    domain.SetAppDomainPolicy(policy);

    This is basically it (the rest of the details are left as an exercise :)).

    Let them call you

    Make sure to add the AllowPartiallyTrustedCallers attribute to the test framework assembly; otherwise users won't be allowed to call into it...

    A twist...

    Pex is a bit invasive when it comes to partial trust. Pex rewrites the IL at runtime and turns all method bodies into... unsafe code (that is, unverifiable code). At this point, nothing will run without the SkipVerification permission.

    No problemo, just add it to the permission set:

        new SecurityPermission(SecurityPermissionFlag.SkipVerification)


    posted on Friday, September 07, 2007 11:39:22 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Thursday, August 23, 2007

    A common requirement for unit test framework is the ability to test internal types.

    That's easy! Use InternalsVisibleToAttribute

    With .Net 2.0 and up, this is a fairly easy task thanks to the InternalsVisibleToAttribute: add it to the product assembly to give 'visibility rights' to the test assembly.

    // in assembly Foo
    internal class Foo {}

    // giving visibility rights to the Foo.Tests assembly
    [assembly: InternalsVisibleTo("Foo.Tests")]
    On the test assembly side, this works because unit tests are 'closed' methods which do not expose any internal types.

    public void FooTest() {
        Foo foo = new Foo(); // we're using the internal type Foo
                             // but it's hidden in the unit test
    }

    What about parameterized tests? Make them internal as well

    If one of the parameters of the test is internal, the test method will have to be internal as well in order to compile:

    internal void FooTest(Foo foo) { ... }

    Not pretty but still gets the job done. Pex will generate public unit test methods that invoke the internal parameterized test method, and we'll be happy:

    public void FooTest_12345() { ... }

    What about MbUnit RowTest?

    This issue was never faced by MbUnit RowTest because it only accepts intrinsic types such as int, long, etc... Those types are obviously public :)

    posted on Thursday, August 23, 2007 10:24:08 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Saturday, July 14, 2007

    Pex can analyze regular expressions*** and generate strings that match them automatically!

    What does it mean? Well, maybe, somewhere deep in your code you are validating some string with a regex (for example a url). In order to test the validation code, one needs to craft inputs that do not match (easy) and that do match (harder) the regex.

    Let Pex do it:

    So what if Pex could be smart enough to understand a regex and craft inputs accordingly? In the example below, it would be very hard for a random generator to generate a string that matches the regex.

    public void Url([PexAssumeIsNotNull]string s) {
        if (Regex.IsMatch(s, @"(?<Protocol>\w+):\/\/(?<Domain>[\w@][\w.:@]+)\/?[\w\.?=%&=\-@/$,]*"))
            throw new PexCoverThisException(); // random won't find this
    }

    "" would be a failing match and foo://foo.com a good match. (To generate the correct match, my brain simulated the regex automaton and estimated one possible path). Interestingly, Pex generates....


    Pretty ugly... but correct! This reminds us that while regex are used to validate input, what they'll let through is sometimes scary.
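    The Pex-generated string didn't survive the copy to this page, but we can at least check the two hand-crafted candidates against the pattern ourselves (the helper class is mine; the pattern is the one from the test above):

```csharp
using System;
using System.Text.RegularExpressions;

public static class UrlRegex
{
    // the url pattern from the test above, as a verbatim string
    public const string Pattern =
        @"(?<Protocol>\w+):\/\/(?<Domain>[\w@][\w.:@]+)\/?[\w\.?=%&=\-@/$,]*";

    public static bool IsUrl(string s)
    {
        return Regex.IsMatch(s, Pattern);
    }
}
```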

    Compiled Regex + Pex = Love

    The great part about supporting regular expressions is that it comes for free (almost), since Regex can be compiled to IL in .Net. When the BCL generates the regex IL code, it effectively builds the automaton... which can be analyzed by Pex!!!

    Refresher: Pex works by analyzing the MSIL being executed.

    Hey but not all Regex are compiled!

    That's true. Compiling the regex is optional, so Pex needs to do a little bit of 'plumbing' to make sure all regular expressions are compiled. This is simply done by substituting the real .ctor of the Regex class with a customized version that compiles the regex. I'll talk about substitutions in more depth in the future.

    *** Of course, the bigger the regex is, the harder it is going to be for Pex to craft a successful match.

    posted on Saturday, July 14, 2007 2:49:53 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
    # Thursday, July 05, 2007

    Pex has a couple of samples that get bundled in our installer. We keep them in our main solution so that we get the benefits of refactoring (i.e. checking that they still compile :)). This approach has one little problem: the sample projects reference the Pex projects... which are not shipped! If packaged as-is, they'll never compile on the user's machine.

    ProjectReference -> Reference

    We solved this problem by implementing an MSBuild task that 'patches' .csproj project files by replacing ProjectReference nodes with Reference:

    <ProjectReference Include="..\..\..\Pex\Framework\Microsoft.Pex.Framework.csproj">


    <Reference Include="Microsoft.Pex.Framework">...</Reference>

    There's a bit of coding involved (resolving the assembly name, selecting the appropriate projects, etc...) but that's basically it :)
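    The actual MSBuild task isn't shown in the post; a sketch of the core transformation with LINQ to XML (assembly-name resolution crudely approximated from the project file name, class and method names mine):

```csharp
using System.Linq;
using System.Xml.Linq;

public static class ProjectPatcher
{
    // replaces every ProjectReference with a plain Reference whose
    // name is derived from the referenced project's file name
    public static XDocument Patch(XDocument project)
    {
        XNamespace ns = project.Root.Name.Namespace;
        foreach (XElement projectRef in
                 project.Descendants(ns + "ProjectReference").ToList())
        {
            string include = (string)projectRef.Attribute("Include");
            // crude assembly name: project file name minus the extension
            string fileName = include.Split('/', '\\').Last();
            string assembly = fileName.EndsWith(".csproj")
                ? fileName.Substring(0, fileName.Length - ".csproj".Length)
                : fileName;
            projectRef.ReplaceWith(
                new XElement(ns + "Reference",
                    new XAttribute("Include", assembly)));
        }
        return project;
    }
}
```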

    posted on Thursday, July 05, 2007 9:02:38 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Tuesday, July 03, 2007

    One of the problems with Pex is that.... it's yet another dependency in your test project. Pex has its own attributes, which makes it very difficult to 'strip it out' of the test source. Many teams won't allow checking in assemblies, they won't install external components on the build machine, and just forget about touching the source code!

    So how do you strip Pex?

    In "Condition" lies the answer

    An elegant solution uses a bit of Reflection, CodeDom and the MSBuild Condition attribute: generate Attribute stubs (shadows) and bind them to the project.

    Pex modifies the project file and uses MSBuild conditions to conditionally include files and assembly references in the generated test project.

    • A boolean property PexShadows controls the shadowing state: true if shadowed, false or missing otherwise.
    <Project DefaultTargets="Build" ...>
    • a conditional reference to Microsoft.Pex.Framework:
            Condition="$(PexShadows) != 'true'" />

    when the property PexShadows evaluates to true, the Microsoft.Pex.Framework assembly is not referenced anymore.

    • a file containing 'stubs' (shadows) for all the custom attributes in Microsoft.Pex.Framework is generated (automatically, of course :)). Pex also dumps the source of several other helper classes. Each generated file is added to the test project conditionally:
        <Compile Include="Properties\PexAttributes.cs" Condition="$(PexShadows) == 'true'">

    That's it! Flip the value of PexShadows to bind/unbind your test project from Pex.
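    Putting the fragments together, the relevant parts of a shadowed test project might look like this (a reconstruction from the fragments above, not a verbatim Pex-generated file):

```xml
<Project DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- flip this to bind/unbind the project from Pex -->
    <PexShadows>true</PexShadows>
  </PropertyGroup>
  <ItemGroup>
    <!-- real Pex assembly, referenced only when not shadowed -->
    <Reference Include="Microsoft.Pex.Framework"
               Condition="$(PexShadows) != 'true'" />
  </ItemGroup>
  <ItemGroup>
    <!-- generated attribute stubs, compiled only when shadowed -->
    <Compile Include="Properties\PexAttributes.cs"
             Condition="$(PexShadows) == 'true'" />
  </ItemGroup>
</Project>
```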

    So what does Visual Studio say?

    Visual Studio is totally fooled by the maneuver. It shows the conditional files and references as if nothing happened. Add a menu item in the project context menu to switch it on and off and we are good to go :)

    posted on Tuesday, July 03, 2007 10:28:12 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Saturday, June 30, 2007

    In Pex, we added the possibility to specify the type under test of a given fixture:

    public class Account {...}

    [TestFixture, PexClass(typeof(Account))]
    public class AccountTest {...}  

    That's nice, but why would it be useful... Beyond the fact that it clearly expresses the 'target' of the fixture, this kind of information can be leveraged by tools like Pex.

    For example, since we know that Account is the type under test, we can tune Pex to prioritize the exploration of the Account type.

    Another interesting side effect is the targeted code coverage data. Instead of getting coverage information over the entire assembly, we can directly provide coverage over the type under test: the AccountTest covered xx% of Account.

    Still toying around with the concept: one can add a special filtering mode to the command line to execute all tests that target 'Account':

    pex.exe /type-under-test:Account Bank.Tests.dll
    posted on Saturday, June 30, 2007 12:23:24 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Saturday, May 26, 2007

    .Net 2.0 has been out for a while and it seems that 'generics' have not made it into unit test frameworks (that I know of). When I write unit tests for generics, I don't want to have to instantiate them!

    For example, if I have a generic interface,

    interface IFoo<T> {...}

    then I'd really like to write this kind of test and let the test framework figure out an interesting instantiation (i.e. choice of T):

    public void Test<T>(IFoo<T> foo) { ... }

    In the example above, System.Object can be trivially used to instantiate IFoo<T>. Of course, things get more interesting when mixing type arguments, method arguments and constraints :)

    interface IFoo<T> {
        R Bar<R>(T t)
            where R : IEnumerable<T>;
    }

    In Pex, we've started to look at this problem... stay tuned.
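    To make the trivial instantiation step concrete, here is the choice a framework could make for the unconstrained case — the interface member and all names below are my own sketch, not the post's code:

```csharp
using System;

// hypothetical generic interface with a single member for illustration
public interface IFoo<T> { T Value { get; } }

// trivial closed instantiation: T = object
public class ObjectFoo : IFoo<object>
{
    public object Value { get { return "instantiated with T = object"; } }
}

public static class Instantiator
{
    // a framework could pick System.Object for an unconstrained T
    // and run the generic test against that instantiation
    public static string RunTest()
    {
        return Test<object>(new ObjectFoo());
    }

    static string Test<T>(IFoo<T> foo)
    {
        return foo.Value.ToString();
    }
}
```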

    posted on Saturday, May 26, 2007 11:09:00 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [3]
    # Thursday, April 19, 2007

    With parameterized unit tests, it is not uncommon to generate a large number of exceptions. A basic exception log usually looks like this: a small message and the stack trace.

    With this kind of output, a lot of work is still left to the user, since he has, *for each frame*, to manually open the source file and move to the line referred to by the trace.

    Give me the source context!

    In the Pex reports, we added a couple of lines of code to read the source for each frame and display it in the reports (no rocket science). The cool thing is that you can now get a pretty good idea of what happened without leaving the test report. Multiply that by dozens of exceptions and you've saved a lot of time.

    Here are some screenshots to illustrate this: an exception was thrown in some arcane method. The error message is not really useful (as usual).

    If we expand the source and actually see the code, things become much clearer...

    posted on Thursday, April 19, 2007 5:12:40 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
    # Tuesday, April 10, 2007

    MbUnit supports different flavors of parameterized unit tests: RowTest, CombinatorialTest, etc... If you are already using those features, it would be very easy for you to 'pexify' them:

    namespace MyTest {
         using MbUnit.Framework;
         using Microsoft.Pex.Framework;

         [TestFixture, PexClass]
         public partial class MyTestFixture {
             [Row("a", "b")]
             public void Test(string a, string b) { ... }
         }
    }

    Isn't this nice? :)

    Some little notes:

    • 'partial' is helpful to emit the generated unit tests in the same class... but not the same file. Pex also supports another mode where partial is not required.
    • the Pex attributes do not 'interfere' with the MbUnit ones. Your unit tests will still run exactly the same with MbUnit.
    posted on Tuesday, April 10, 2007 9:47:27 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
    # Sunday, April 01, 2007

    This is the first post about Pex and how to use it. Since Pex is a fairly large project, I'll probably stretch the content over a large number of entries. Stay tuned...

    Pex gives you Parameterized Unit Tests

    Data-driven tests are not something new. They exist under different forms such as MbUnit's RowTest or CombinatorialTest, VSTS's DataSource, etc... So what's so special about Pex? The major difference is that Pex finds the parameter inputs for you (and generates a unit test out of it).

    Whereas the user had to (smartly) guess a set of inputs for data-driven tests, Pex tries to compute the relevant inputs to those tests automatically (and may also suggest fixes).

    Note: The way Pex finds the inputs will be covered in more detail later. In a few words, Pex performs a systematic white-box analysis of the program behavior. It tries to generate a minimal test suite with maximum coverage.
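    To see why white-box analysis beats guessing, consider a toy example (the method and the magic value are made up): only one specific input reaches the failing branch. A tool that reads the branch condition can solve for that input directly, while randomly or hand-picked inputs almost never hit it.

```csharp
using System;

class WhiteBoxDemo
{
    // A branch that blind guessing almost never reaches:
    // a single specific input triggers the bug.
    public static int Classify(int x)
    {
        if (x * 2 == 246912)      // requires x == 123456 exactly
            throw new InvalidOperationException("rare bug");
        return x < 0 ? -1 : 1;
    }

    static void Main()
    {
        // A white-box tool would solve 'x * 2 == 246912' and propose
        // x = 123456; here we just exercise the three branches by hand.
        Console.WriteLine(Classify(-5));   // negative branch
        Console.WriteLine(Classify(7));    // positive branch
        try { Classify(123456); }
        catch (InvalidOperationException) { Console.WriteLine("bug found"); }
    }
}
```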

    Parameterized Unit Tests, what does it feel like?

    Parameterized unit tests are methods with parameters. There's nothing magic about that. Pex provides a set of custom attributes so that you can author them side-by-side with classic unit tests:

    [TestClass, PexClass] // VSTS fixture containing parameterized tests
    public class AccountTest
    {
        // parameterized unit test:
        // account money should never be negative, for any 'money', 'withdraw'
        public void TransferFunds(int money, int withdraw) {...}
    }

    When Pex finds interesting data to feed the parameterized unit test, it generates a unit test method that calls the parameterized unit test. This also means that once the unit test has been generated, you do not need Pex anymore to run the repro.

    [TestClass, PexClass] // VSTS fixture containing parameterized tests
    public class AccountTest
    {
        public void TransferFunds(int money, int withdraw) {...}

        [TestMethod, GeneratedBy("Pex", "")]
        public void TransferFunds_12345() { this.TransferFunds(12, 13); }
    }


    Why do I need parameterized unit tests anyway?

    A parameterized unit test generally captures more program behaviors than a single unit test, which is like a micro-scenario. This will become more apparent when we start looking at some examples.

    If you were already using [RowTest] or [DataSource] as part of your testing, then you will definitely like Pex.

    posted on Sunday, April 01, 2007 3:52:37 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
    # Monday, March 12, 2007

    The Pex screencast was a bit mysterious without sound and comments. I've added a 'storyboard' to help you understand it:



    posted on Monday, March 12, 2007 5:31:29 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [1]
    # Thursday, March 08, 2007


    I'm thrilled to present the project I joined last October: 'Pex' (for Program EXploration). Pex is a powerful plugin for unit test frameworks that lets the user write parameterized unit tests**. Pex does the hard work of computing the relevant values for those parameters, and serializing them as classic unit tests.

    Here's a short screencast where we test and implement a string chunker. In the screencast, we use a parameterized unit test to express that for *any* string input and *any* chunk length, the concatenation of the chunks should be equal to the original string.
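    The chunker property can be sketched in plain C# (this Chunk implementation and all the names are my own stand-in, not the code from the screencast). Under Pex, TestChunk would be the parameterized unit test whose 's' and 'size' inputs Pex computes:

```csharp
using System;
using System.Collections.Generic;

class StringChunkerDemo
{
    // Stand-in chunker: splits 's' into pieces of at most 'size' characters.
    public static List<string> Chunk(string s, int size)
    {
        if (size <= 0) throw new ArgumentOutOfRangeException(nameof(size));
        var chunks = new List<string>();
        for (int i = 0; i < s.Length; i += size)
            chunks.Add(s.Substring(i, Math.Min(size, s.Length - i)));
        return chunks;
    }

    // The property the parameterized unit test expresses:
    // for *any* input string and *any* chunk length, concatenating
    // the chunks gives back the original string.
    public static void TestChunk(string s, int size)
    {
        if (string.Concat(Chunk(s, size)) != s)
            throw new Exception("chunk property violated");
    }

    static void Main()
    {
        TestChunk("hello world", 3);
        TestChunk("", 1);     // boundary: empty input
        TestChunk("abc", 10); // boundary: size larger than the string
        Console.WriteLine("ok");
    }
}
```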


    More info on Pex is available at http://research.microsoft.com/pex/.

    ** It's actually much more than that... but let's keep that for later :)

    posted on Thursday, March 08, 2007 9:58:36 AM (Pacific Standard Time, UTC-08:00)  #    Comments [5]