# Wednesday, June 04, 2008

Ever wonder what happens inside the ResourceReader (where do those resources come from, anyway?)... Check out Nikolai's post on using Pex to test this (complicated) class...

posted on Wednesday, June 04, 2008 7:17:06 PM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Saturday, May 24, 2008

If you're interested in reading more about Pex, here are some online documents that should satisfy your curiosity:

The tutorial contains hands-on labs and exercises for understanding how Pex generates new test cases. Highly recommended if you're interested in Pex.

Happy reading!

posted on Saturday, May 24, 2008 9:00:01 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [0]
# Thursday, May 22, 2008

Nikolai broke the news; we've just released Pex 0.5... get it while it's hot!

We're eager to hear some feedback, so don't hesitate to tell us what you think about it.

posted on Thursday, May 22, 2008 9:30:00 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [11]
# Thursday, May 08, 2008
posted on Thursday, May 08, 2008 8:40:27 AM (Pacific Daylight Time, UTC-07:00)  #    Comments [2]
# Wednesday, February 27, 2008

This is one problem for which we don't have an elegant solution yet:

what is the best way to craft a name for a generated test?

Let's see an example; given the following parameterized unit test,

[PexMethod] void Test(int i) {
    if (i == 123) throw new ArgumentException();
}

Pex would generate two tests, one with i = 0 and one with i = 123, so it seems doable to infer test names such as

[TestMethod] void Test0() { this.Test(0); }
[TestMethod, ExpectedException(typeof(ArgumentException))]
void Test123ThrowsArgumentException() { this.Test(123); }

So what's so difficult about it? Well, most PUTs aren't that simple, and as the size of the generated parameters increases (strings getting bigger), the method names grow as well. Here's a list of potential problems:

  • the method name should stay relatively short (less than 80 characters),
  • the parameters might be objects or class instances, which do not render well as strings,
  • generated strings might be huge and contain weird Unicode characters,
  • the tests might involve mock choices, which again do not render well as strings,
  • the more parameters there are, the more cryptic things become.

Our approach: Timestamps

In light of all those problems, we've taken the shortcut route in Pex: we simply use a combination of the parameterized unit test method's signature and the timestamp at which the test was generated:

[TestMethod] void TestInt32_20080224_124301_35() { ... }
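
For illustration, here's a minimal standalone sketch of how such a name can be assembled from the method name, the parameter type names, and a pre-formatted timestamp (in Java for brevity; the `buildTestName` helper is hypothetical and not part of Pex):

```java
// Hypothetical sketch: assembling a timestamp-based test name
// in the style of TestInt32_20080224_124301_35.
public class TestNamer {
    // method: the parameterized test's name; paramTypes: its parameter
    // type names; timestamp: pre-formatted as yyyyMMdd_HHmmss_ff.
    static String buildTestName(String method, String[] paramTypes, String timestamp) {
        StringBuilder sb = new StringBuilder(method);
        for (String t : paramTypes) sb.append(t);
        return sb.append('_').append(timestamp).toString();
    }

    public static void main(String[] args) {
        // prints TestInt32_20080224_124301_35
        System.out.println(buildTestName("Test", new String[] { "Int32" }, "20080224_124301_35"));
    }
}
```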

This is ugly, how can I change that?

We've added an extensibility point to support custom test naming schemes. After registering your 'namer', you get the opportunity to craft a test name following your favorite coding standard. You have the generated test, its output, the exception, etc. at your disposal to make an intelligent choice there.

Of course, you're also welcome to drop a comment on this post to suggest a better scheme.

posted on Wednesday, February 27, 2008 12:18:46 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Saturday, February 23, 2008

So how do you write negative test cases with Pex? Here's a nice solution I was working on; I'm wondering what you think of it.

Traditional ExpectedException style

Let's test the constructor of a type Foo that takes a reference argument. We expect it to throw an ArgumentNullException when null is passed. Therefore, we could write

[ExpectedException(typeof(ArgumentNullException))]
void Test() {
     new Foo(null);
}

or using xUnit style assertions,

void Test() {
     Assert.Throws<ArgumentNullException>(delegate { new Foo(null); });
}

Pex ExpectedException style

Let's refactor our first test and push 'null' as a test parameter. We add an assertion after the constructor to make sure a null parameter never succeeds.

void Test(object input) {
    new Foo(input);
    Assert.IsNotNull(input); // we should never get here
}

The interesting part is that this test will not only exercise the null value but also passing values. Moreover, there might be more checks on the input that trigger other exceptions; in that case, additional asserts could be added -- or, even better, centralized in an invariant method.
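
To make this concrete, here's a minimal standalone sketch of the refactored test (in Java for brevity; Foo is a hypothetical type whose constructor validates its argument). The constructor rejects null before the assertion can ever be reached, so the null input surfaces as an expected precondition failure rather than an assertion failure:

```java
// Hypothetical Foo whose constructor validates its reference argument.
class Foo {
    Foo(Object o) {
        if (o == null) throw new IllegalArgumentException("o"); // precondition
    }
}

public class FooTest {
    // Parameterized test in the style of the post: construct, then assert
    // that a null input could never have reached this point.
    static void test(Object input) {
        new Foo(input);
        if (input == null) throw new AssertionError("we should never get here");
    }

    public static void main(String[] args) {
        test("some value"); // passes: the constructor accepts non-null input
        try {
            test(null); // the negative case...
        } catch (IllegalArgumentException expected) {
            // ...fails inside the constructor, before the assertion
            System.out.println("expected: " + expected.getMessage());
        }
    }
}
```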

So, what do you think?

posted on Saturday, February 23, 2008 1:40:50 AM (Pacific Standard Time, UTC-08:00)  #    Comments [2]
# Saturday, February 16, 2008

Looks like I've just made it in for the Seattle ALT.NET 'un'-conference :)

posted on Saturday, February 16, 2008 3:02:01 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Friday, February 15, 2008

Some may wonder if there's any relation between my nickname 'Peli' and 'Pex': there's none! Pex just stands for "Program Exploration".

In fact, Pex was started by my colleague Nikolai Tillmann long before I joined the project. Nikolai started the MUTT project (read this), which is the ancestor of Pex. He's the mastermind behind the IL rewriter, the symbolic engine, and, well, a large part of Pex :) Hopefully, I'll convince him to start a blog :).

posted on Friday, February 15, 2008 10:43:55 AM (Pacific Standard Time, UTC-08:00)  #    Comments [0]
# Wednesday, February 13, 2008

Maybe yes, maybe no, I guess it depends on the reviews... Have you reviewed it?

http://submissions.agile2008.org/node/2766

posted on Wednesday, February 13, 2008 2:53:49 PM (Pacific Standard Time, UTC-08:00)  #    Comments [0]

This is a general recommendation if you're planning to use a tool like Pex in the future: make sure that preconditions (i.e. parameter validation) fail in a different fashion than other assertions.

Here's a snippet that shows the problem:

// don't do this
void Clone(ICloneable o) {
     Debug.Assert(o != null); // pre-condition
     ...
     object clone = o.Clone();
     Debug.Assert(clone != null); // assertion
}

Why is this bad?

A tool like Pex will explore your code and try to trigger every Debug.Assert it finds on its way. When the assertion is a precondition, the failure is expected, and one would like to emit a negative test case (i.e. an 'expected exception' test).

The problem with the snippet above is that both failures yield the same assertion exception, and it will be very difficult to *automatically* triage the failure as expected or not.

How do I fix this?

Make sure different classes of assertions can be differentiated automatically: through different exception types, tags in the message, etc.
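
As an illustration of that advice, here's a minimal standalone sketch (in Java for brevity; `Copyable`, `safeCopy`, and `isExpectedFailure` are hypothetical names, not Pex API): give precondition violations their own exception type, and triage becomes a simple type check.

```java
// Hypothetical sketch: preconditions fail with IllegalArgumentException,
// internal assertions with AssertionError, so the two classes of failure
// can be told apart automatically.
interface Copyable { Object copy(); }

public class Triage {
    static Object safeCopy(Copyable o) {
        if (o == null) throw new IllegalArgumentException("o"); // precondition
        Object copy = o.copy();
        if (copy == null) throw new AssertionError("copy() returned null"); // internal assertion
        return copy;
    }

    // A precondition failure marks a negative ("expected exception") test;
    // any other failure is a genuine bug to report.
    static boolean isExpectedFailure(Throwable t) {
        return t instanceof IllegalArgumentException;
    }

    public static void main(String[] args) {
        try {
            safeCopy(null);
        } catch (Throwable t) {
            System.out.println(isExpectedFailure(t)); // prints true
        }
    }
}
```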

posted on Wednesday, February 13, 2008 9:51:00 AM (Pacific Standard Time, UTC-08:00)  #    Comments [3]