The Register Resolve Release pattern

Wednesday, 29 September 2010 11:50:02 UTC

The subject of Dependency Injection (DI) in general, and DI Containers specifically, suffers from horrible terminology that seems to confuse a lot of people. Newcomers to DI often think about DI Containers as a sort of Abstract Factory on steroids. It's not. Nicholas Blumhardt already realized and described this phenomenon a couple of years ago.

The core of the matter is that as developers, we are extremely accustomed to thinking about components and services in terms of queries instead of commands. However, the Hollywood Principle insists that we should embrace a tell, don't ask philosophy. We can apply this principle to DI Containers as well: Don't call the container; it'll call you.

This leads us to what Krzysztof Koźmic calls the three calls pattern. Basically it states that we should only do three things with a DI Container:

  1. Bootstrap the container
  2. Resolve root components
  3. Dispose the container

This is very sound advice and independently of Krzysztof I've been doing something similar for years - so perhaps the pattern label is actually in order here.

However, I think that the pattern deserves a more catchy name, so in the spirit of the Arrange Act Assert (AAA) pattern for unit testing I propose that we name it the Register Resolve Release (RRR) pattern. The names originate with Castle Windsor terminology, where we:

  1. Register components with the container
  2. Resolve root components
  3. Release components from the container

Other containers also support the RRR pattern, but if we were to pick their terminology, it would rather be the Configure GetInstance Dispose (CGD) pattern (or something similar), and that's just not as catchy.

We can rewrite a previous example with Castle Windsor and annotate it with comments to call out where the three container phases occur:

private static void Main(string[] args)
{
    var container = new WindsorContainer();
    container.Kernel.Resolver.AddSubResolver(
        new CollectionResolver(container.Kernel));
 
    // Register
    container.Register(
        Component.For<IParser>()
            .ImplementedBy<WineInformationParser>(),
        Component.For<IParser>()
            .ImplementedBy<HelpParser>(),
        Component.For<IParseService>()
            .ImplementedBy<CoalescingParserSelector>(),
        Component.For<IWineRepository>()
            .ImplementedBy<SqlWineRepository>(),
        Component.For<IMessageWriter>()
            .ImplementedBy<ConsoleMessageWriter>());
 
    // Resolve
    var ps = container.Resolve<IParseService>();
    ps.Parse(args).CreateCommand().Execute();
 
    // Release
    container.Release(ps);
    container.Dispose();
}

Notice that in most cases, explicitly invoking the Release method isn't necessary, but I included it here to make the pattern stand out.

So there it is: the Register Resolve Release pattern.


Comments

Harry Dev
I agree completely and think the Register Resolve Release (RRR) moniker is very good. You should think about creating a wikipedia or c2 entry for it to promote it ;)
2010-09-29 12:21 UTC
Nice coinage, gets a vote from me.
2010-10-03 21:28 UTC
Arnis L.
This realization took me a few months. It truly is quite hard for newcomers.
2010-10-03 22:42 UTC
You could use a using block, too.
2010-10-07 17:03 UTC
Yes, although the general pattern is a bit more subtle than this. In the example given, the call to the Release method is largely redundant. If we assume that this is the case, a using block disposes the container as well.

However, a using block invokes Dispose, but not Release. Releasing an object graph is conceptually very different from disposing the container. However, in the degenerate case shown here, there's not a lot of difference, but in a server scenario where we use the container to resolve an object graph per request, we resolve and release many object graphs all the time. In such scenarios we only dispose the container when the application itself recycles, and even then, we may never be given notice that this happens.
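To sketch the difference (hypothetical request-handling code; the class and method names are only examples, not from the original post), the server scenario looks something like this:

```csharp
public class RequestHandler
{
    // The container is created, configured and kept for the
    // lifetime of the application.
    private readonly IWindsorContainer container;

    public RequestHandler(IWindsorContainer container)
    {
        this.container = container;
    }

    public void Handle(string[] args)
    {
        // Resolve an object graph per request...
        var ps = this.container.Resolve<IParseService>();
        try
        {
            ps.Parse(args).CreateCommand().Execute();
        }
        finally
        {
            // ...and release that graph. The container itself lives on.
            this.container.Release(ps);
        }
    }
}
```

Only when the application itself recycles would anyone call container.Dispose().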
2010-10-07 21:09 UTC
Great name, I like it. RRR is easy to remember and makes sense to me perfectly. Thanks.
2012-06-09 06:45 UTC

Instrumentation with Decorators and Interceptors

Monday, 20 September 2010 19:18:21 UTC

One of my readers recently asked me an interesting question. It relates to my book's chapter about Interception (chapter 9) and Decorators and how they can be used for instrumentation-like purposes.

In an earlier blog post we saw how we can use Decorators to implement Cross-Cutting Concerns, but the question relates to how a set of Decorators can be used to log additional information about code execution, such as the time before and after a method is called, the name of the method and so on.

A Decorator can excellently address such a concern as well, as we will see here. Let us first define an IRegistrar interface and create an implementation like this:
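The interface declaration itself isn't reproduced here; judging from the implementation that follows, a minimal version would look like this (a sketch - the original may declare more members):

```csharp
// Minimal IRegistrar declaration, inferred from the ConsoleRegistrar
// implementation shown below.
public interface IRegistrar
{
    void Register(Guid id, string text);
}
```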

public class ConsoleRegistrar : IRegistrar
{
    public void Register(Guid id, string text)
    {
        var now = DateTimeOffset.Now;
        Console.WriteLine("{0}\t{1:s}.{2}\t{3}",
            id, now, now.Millisecond, text);
    }
}

Although this implementation ‘logs' to the Console, I'm sure you can imagine other implementations. The point is that given this interface, we can add all sorts of ambient information such as the thread ID, the name of the current principal, the current culture and whatnot, while the text string variable still gives us an option to log more information. If we want a more detailed API, we can just make it more detailed - after all, the IRegistrar interface is just an example.

We now know how to register events, but are seemingly no nearer to instrumenting an application. How do we do that? Let us see how we can instrument the OrderProcessor class that I have described several times in past posts.

At the place I left off, the OrderProcessor class uses Constructor Injection all the way down. Although I would normally prefer using a DI Container to auto-wire it, here's a manual composition using Poor Man's DI just to remind you of the general structure of the class and its dependencies:

var sut = new OrderProcessor(
    new OrderValidator(), 
    new OrderShipper(),
    new OrderCollector(
        new AccountsReceivable(),
        new RateExchange(),
        new UserContext()));

All the dependencies injected into the OrderProcessor instance implement interfaces on which OrderProcessor relies. This means that we can decorate each concrete dependency with an implementation that instruments it.

Here's an example that instruments the IOrderProcessor interface itself:

public class InstrumentedOrderProcessor : IOrderProcessor
{
    private readonly IOrderProcessor orderProcessor;
    private readonly IRegistrar registrar;
 
    public InstrumentedOrderProcessor(
        IOrderProcessor processor,
        IRegistrar registrar)
    {
        if (processor == null)
        {
            throw new ArgumentNullException("processor");
        }
        if (registrar == null)
        {
            throw new ArgumentNullException("registrar");
        }
 
        this.orderProcessor = processor;
        this.registrar = registrar;
    }
 
    #region IOrderProcessor Members
 
    public SuccessResult Process(Order order)
    {
        var correlationId = Guid.NewGuid();
        this.registrar.Register(correlationId,
            string.Format("Process begins ({0})",
                this.orderProcessor.GetType().Name));
 
        var result = this.orderProcessor.Process(order);
 
        this.registrar.Register(correlationId,
            string.Format("Process ends   ({0})", 
            this.orderProcessor.GetType().Name));
 
        return result;
    }
 
    #endregion
}

That looks like quite a mouthful, but it's really quite simple - the cyclomatic complexity of the Process method is as low as it can be: 1. We really just register the Process method call before and after invoking the decorated IOrderProcessor.

Without changing anything else than the composition itself, we can now instrument the IOrderProcessor interface:

var registrar = new ConsoleRegistrar();
var sut = new InstrumentedOrderProcessor(
    new OrderProcessor(
        new OrderValidator(),
        new OrderShipper(),
        new OrderCollector(
            new AccountsReceivable(),
            new RateExchange(),
            new UserContext())),
    registrar);

However, imagine implementing an InstrumentedXyz for every IXyz and composing the application with them. It's possible, but it's going to get old really fast - not to mention that it massively violates the DRY principle.

Fortunately we can solve this issue with any DI Container that supports dynamic interception. Castle Windsor does, so let's see how that could work.

Instead of implementing the same code ‘template' over and over again to instrument an interface, we can do it once and for all with an interceptor. Imagine that we delete the InstrumentedOrderProcessor; instead, we create this:

public class InstrumentingInterceptor : IInterceptor
{
    private readonly IRegistrar registrar;
 
    public InstrumentingInterceptor(IRegistrar registrar)
    {
        if (registrar == null)
        {
            throw new ArgumentNullException("registrar");
        }
 
        this.registrar = registrar;
    }
 
    #region IInterceptor Members
 
    public void Intercept(IInvocation invocation)
    {
        var correlationId = Guid.NewGuid();
        this.registrar.Register(correlationId, 
            string.Format("{0} begins ({1})", 
                invocation.Method.Name,
                invocation.TargetType.Name));
 
        invocation.Proceed();
 
        this.registrar.Register(correlationId,
            string.Format("{0} ends   ({1})", 
                invocation.Method.Name, 
                invocation.TargetType.Name));
    }
 
    #endregion
}

If you compare this to the Process method of InstrumentedOrderProcessor (that we don't need anymore), you should be able to see that they are very similar. In this version, we just use the invocation argument to retrieve information about the decorated method.

We can now add InstrumentingInterceptor to a WindsorContainer and enable it for all appropriate components. When we do that and invoke the Process method on the resolved IOrderProcessor, we get a result like this:

bbb9724e-0fad-4b06-9bb0-b8c1c460cded    2010-09-20T21:01:16.744    Process begins (OrderProcessor)
43349d42-a463-463b-8ddf-e569e3170c97    2010-09-20T21:01:16.745    Validate begins (TrueOrderValidator)
43349d42-a463-463b-8ddf-e569e3170c97    2010-09-20T21:01:16.745    Validate ends   (TrueOrderValidator)
44fdccc8-f12d-4057-ae03-791225686504    2010-09-20T21:01:16.746    Collect begins (OrderCollector)
8bbb1a0c-6134-4652-a4af-cd8c0c7184a0    2010-09-20T21:01:16.746    GetCurrentUser begins (UserContext)
8bbb1a0c-6134-4652-a4af-cd8c0c7184a0    2010-09-20T21:01:16.747    GetCurrentUser ends   (UserContext)
d54359ff-8c32-487f-8728-b19ff0bf4942    2010-09-20T21:01:16.747    GetCurrentUser begins (UserContext)
d54359ff-8c32-487f-8728-b19ff0bf4942    2010-09-20T21:01:16.747    GetCurrentUser ends   (UserContext)
c54c4506-23a8-4553-ba9a-066fc64252d2    2010-09-20T21:01:16.748    GetSelectedCurrency begins (UserContext)
c54c4506-23a8-4553-ba9a-066fc64252d2    2010-09-20T21:01:16.748    GetSelectedCurrency ends   (UserContext)
b3dba76b-6b4e-44fa-aca5-52b2d8509db3    2010-09-20T21:01:16.750    Convert begins (RateExchange)
b3dba76b-6b4e-44fa-aca5-52b2d8509db3    2010-09-20T21:01:16.751    Convert ends   (RateExchange)
e07765bd-fe07-4486-96f1-f74d77241343    2010-09-20T21:01:16.751    Collect begins (AccountsReceivable)
e07765bd-fe07-4486-96f1-f74d77241343    2010-09-20T21:01:16.752    Collect ends   (AccountsReceivable)
44fdccc8-f12d-4057-ae03-791225686504    2010-09-20T21:01:16.752    Collect ends   (OrderCollector)
231055d3-4ebb-425d-8d69-fb9c85d9a860    2010-09-20T21:01:16.752    Ship begins (OrderShipper)
231055d3-4ebb-425d-8d69-fb9c85d9a860    2010-09-20T21:01:16.753    Ship ends   (OrderShipper)
bbb9724e-0fad-4b06-9bb0-b8c1c460cded    2010-09-20T21:01:16.753    Process ends   (OrderProcessor)
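The registration that produces a trace like the one above isn't shown in this post; with Castle Windsor it might look along these lines (a sketch - exact fluent API details vary between Windsor versions, and only two components are spelled out):

```csharp
// Register the interceptor and attach it to each component that
// should be instrumented.
var container = new WindsorContainer();
container.Register(
    Component.For<IRegistrar>().ImplementedBy<ConsoleRegistrar>(),
    Component.For<InstrumentingInterceptor>(),
    Component.For<IOrderProcessor>()
        .ImplementedBy<OrderProcessor>()
        .Interceptors<InstrumentingInterceptor>(),
    Component.For<IOrderShipper>()
        .ImplementedBy<OrderShipper>()
        .Interceptors<InstrumentingInterceptor>());
// ...and similarly for the remaining components.

var op = container.Resolve<IOrderProcessor>();
```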

Notice how we can easily see where and when method calls begin and end, using the descriptive text as well as the correlation id. I will leave it as an exercise for the reader to come up with an API that provides better parsing options etc.

As a final note it's worth pointing out that this way of instrumenting an application (or part of it) can be done following the Open/Closed Principle. I never changed the original implementation of any of the components.


My future is Azure

Monday, 13 September 2010 20:08:07 UTC

As some of my readers may already know, my (previous) employer Safewhere went out of business in August, so I started looking for something new to do. I believe that I have now found it.

October 1st I'll start in my new role as Technical Lead at Commentor, where it will be my responsibility to establish us as the leading Windows Azure center of expertise in Denmark (and beyond?). That's quite a big mouthful for me, but also something that I'm very excited about.

This means that besides working on real Windows Azure projects with Danish customers, I also anticipate doing a lot of blogging, speaking and writing about Azure in the future.

What does this mean for my many other endeavors, like my book, AutoFixture or just blogging and speaking about TDD and DI in general? These things are something that I've always been doing mainly in my spare time, and I intend to keep doing that. Perhaps there are even better opportunities for synergy in my new line of work, but only time will tell.

I'm really thrilled to be given the opportunity to expand in a slightly different direction. It'll be hard, but I'm sure it's also going to be a blast!


Comments

good for Azure! :)
2010-09-13 21:49 UTC
René Løhde
Hi Mark,

Congrats - great news for both Commentor, Azure and you.
I hope to do a ton of shared Azure projects with you in the near future.

-René
2010-09-14 08:57 UTC
Thanks. You know where to find me :)
2010-09-14 09:23 UTC
From what I know of you from your blog and your excellent book, Commentor is lucky to have you. I am almost finished with your book Dependency Injection, and I am VERY impressed with it. I have put much of the knowledge to use on my current projects, which have made them much more testable.
2010-09-17 01:51 UTC
Good luck with that Mark, I'm sure you need it with Azure :)

Regards from a true believer in "What can Azure do for you"
2010-09-18 19:33 UTC
Thomas
Mark,

Congrats on a new job. It will be interesting to read about Azure. Glad to see that you won't get rid of TDD and DI ;)

Thomas
2010-10-05 20:08 UTC

Don't call the container; it'll call you

Monday, 30 August 2010 20:06:58 UTC

There still seems to be some confusion about what is Dependency Injection (DI) and what is a DI Container, so in this post I will try to sort it out as explicitly as possible.

DI is a set of principles and patterns that enable loose coupling.

That's it; nothing else. Remember that old quote from p. 18 of Design Patterns?

Program to an interface; not an implementation.

This is the concern that DI addresses. The most useful DI pattern is Constructor Injection where we inject dependencies into consumers via their constructors. No container is required to do this.

The easiest way to build a DI-friendly application is to just use Constructor Injection all the way. Conversely, an application does not automatically become loosely coupled when we use a DI Container. Every time application code queries a container we have an instance of the Service Locator anti-pattern. The corollary leads to this variation of the Hollywood Principle:

Don't call the container; it'll call you.

A DI Container is a fantastic tool. It's like a (motorized) mixer: you can whip cream by hand, but it's easier with a mixer. On the other hand, without the cream the mixer is nothing. The same is true for a DI Container: to really be valuable, your code must employ Constructor Injection so that the container can auto-wire dependencies.

A well-designed application adheres to the Hollywood Principle for DI Containers: it doesn't call the container. On the other hand, we can use the container to compose the application - or we can do it the hard way; this is called Poor Man's DI. Here's an example that uses Poor Man's DI to compose a complete application graph in a console application:

private static void Main(string[] args)
{
    var msgWriter = new ConsoleMessageWriter();
    new CoalescingParserSelector(
        new IParser[]
        {
            new HelpParser(msgWriter),
            new WineInformationParser(
                new SqlWineRepository(),
                msgWriter)
        })
        .Parse(args)
        .CreateCommand()
        .Execute();
}

Notice how the nested structure of all the dependencies gives you an almost visual idea about the graph. What we have here is Constructor Injection all the way down.

CoalescingParserSelector's constructor takes an IEnumerable<IParser> as input. Both HelpParser and WineInformationParser require an IMessageWriter, and WineInformationParser also requires an IWineRepository. We even pull in types from different assemblies, because SqlWineRepository is defined in the SQL Server-based data access assembly.

Another thing to notice is that the msgWriter variable is shared between two consumers. This is what a DI Container normally addresses with its ability to manage component lifetime. Although there's not a DI Container in sight, we could certainly benefit from one. Let's try to wire up the same graph using Unity (just for kicks):

private static void Main(string[] args)
{
    var container = new UnityContainer();
    container.RegisterType<IParser, WineInformationParser>("parser.info");
    container.RegisterType<IParser, HelpParser>("parser.help");
    container.RegisterType<IEnumerable<IParser>, IParser[]>();
 
    container.RegisterType<IParseService, CoalescingParserSelector>();
 
    container.RegisterType<IWineRepository, SqlWineRepository>();
    container.RegisterType<IMessageWriter, ConsoleMessageWriter>(
        new ContainerControlledLifetimeManager());
 
    container.Resolve<IParseService>()
        .Parse(args)
        .CreateCommand()
        .Execute();
    container.Dispose();
}

We are using Constructor Injection throughout, and most DI Containers (even Unity, but not MEF) natively understand that pattern. Consequently, we can mostly just map interfaces to concrete types and the container will figure out the rest for us.

Notice that I'm using the Configure-Resolve-Release pattern described by Krzysztof Koźmic. First I configure the container, then I resolve the entire object graph, and lastly I dispose the container.

The main part of the application's execution time will be spent within the Execute method, which is where all the real application code runs.

In this example I wire up a console application, but it just as well might be any other type of application. In a web application we just do a resolve per web request instead.

But wait! Does that mean that we have to resolve the entire object graph of the application, even if we have dependencies that cannot be resolved at run-time? No, but that does not mean that you should pull from the container. Pull from an Abstract Factory instead.
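Such an Abstract Factory can be sketched like this (ICommandFactory, CommandRunner and their members are hypothetical names for illustration, not from the original example):

```csharp
// The consumer depends on an Abstract Factory - not on the container.
public interface ICommandFactory
{
    ICommand Create(string commandName);
}

public class CommandRunner
{
    private readonly ICommandFactory factory;

    public CommandRunner(ICommandFactory factory)
    {
        this.factory = factory;
    }

    public void Run(string commandName)
    {
        // The command is created at run-time, yet no application
        // code ever queries a container.
        var command = this.factory.Create(commandName);
        command.Execute();
    }
}
```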

Another question that is likely to arise is: what if I have dependencies that I rarely use? Must I wire these prematurely, even if they are expensive? No, you don't have to do that either.
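One way to defer an expensive dependency is a lazy-loading Decorator; here's a sketch using .NET 4's Lazy<T> (the IOrderShipper member signature is assumed for illustration):

```csharp
public class LazyOrderShipper : IOrderShipper
{
    private readonly Lazy<IOrderShipper> shipper;

    public LazyOrderShipper(Lazy<IOrderShipper> shipper)
    {
        this.shipper = shipper;
    }

    public void Ship(Order order)
    {
        // The expensive OrderShipper is only created on first use;
        // if Ship is never called, it's never created at all.
        this.shipper.Value.Ship(order);
    }
}
```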

In conclusion: there is never any reason to query the container. Use a container to compose your object graph, but don't rely on it by querying from it. Constructor Injection all the way enables most containers to auto-wire your application, and an Abstract Factory can be a dependency too.


Comments

MEF also supports constructor injection natively. See [ImportingConstructor] in the MEF documentation section "Declaring Imports".
2010-09-01 16:29 UTC
Yes, MEF supports it, but I chose my words pretty carefully. Any container that needs a special attribute to handle Constructor Injection can hardly be said to understand the pattern. I can't point MEF to a class that uses Constructor Injection and expect it to natively recognize it as such. MEF understands the [ImportingConstructor] attribute, but that's a different thing altogether.
2010-09-01 17:34 UTC
I have to agree with Mark.

Having special constructors all over the code makes MEF kind of lame in comparison to .NET's Ninject or Java's CDI.
I am sure it works just fine, but it seems a bit too "hands on" when there are a few good options that require less customization on host containers.
2012-03-02 16:53 UTC

Changing the behavior of AutoFixture auto-mocking with Moq

Wednesday, 25 August 2010 19:56:38 UTC

One of my Twitter followers who appears to be using AutoFixture recently asked me this:

So with the AutoMoqCustomization I feel like I should get Mocks with concrete types too (except the sut) - why am I wrong?

AutoFixture's extension for auto-mocking with Moq was never meant as a general modification of behavior. Customizations extend the behavior of AutoFixture; they don't change it. There's a subtle difference. In any case, the auto-mocking customization was always meant as a fallback mechanism that would create Mocks for interfaces and abstract types because AutoFixture doesn't know how to deal with those.

Apparently @ZeroBugBounce also wants concrete classes to be issued as Mock instances, which is not quite the same thing; AutoFixture already has a strategy for that (it's called ConstructorInvoker).

Nevertheless I decided to spike a little on this to see if I could get it working. It turns out I needed to open some of the auto-mocking classes a bit for extensibility (always a good thing), so the following doesn't work with AutoFixture 2.0 beta 1, but will probably work with the RTW. Also please note that I'm reporting on a spike; I haven't thoroughly tested all edge cases.

That said, the first thing we need to do is to remove AutoFixture's default ConstructorInvoker that invokes the constructor of concrete classes. This is possible with this constructor overload:

public Fixture(DefaultRelays engineParts)

This takes as input a DefaultRelays instance, which is more or less just an IEnumerable<ISpecimenBuilder> (the basic building block of AutoFixture). We need to replace that with a filter that removes the ConstructorInvoker. Here's a derived class that does that:

public class FilteringRelays : DefaultEngineParts
{
    private readonly Func<ISpecimenBuilder, bool> spec;
 
    public FilteringRelays(Func<ISpecimenBuilder, bool> specification)
    {
        if (specification == null)
        {
            throw new ArgumentNullException("specification");
        }
 
        this.spec = specification;
    }
 
    public override IEnumerator<ISpecimenBuilder> GetEnumerator()
    {
        var enumerator = base.GetEnumerator();
        while (enumerator.MoveNext())
        {
            if (this.spec(enumerator.Current))
            {
                yield return enumerator.Current;
            }
        }
    }
}

DefaultEngineParts already derive from DefaultRelays, so this enables us to use the overloaded constructor to remove the ConstructorInvoker by using these filtered relays:

Func<ISpecimenBuilder, bool> concreteFilter
    = sb => !(sb is ConstructorInvoker);
var relays = new FilteringRelays(concreteFilter);

The second thing we need to do is to tell the AutoMoqCustomization that it should Mock all types, not just interfaces and abstract classes. With the new (not in beta 1) overload of the constructor, we can now supply a Specification that determines which types should be mocked:

Func<Type, bool> mockSpec = t => true;

We can now create the Fixture like this to get auto-mocking of all types:

var fixture = new Fixture(relays).Customize(
    new AutoMoqCustomization(new MockRelay(mockSpec)));

With this Fixture instance, we can now create concrete classes that are mocked. Here's the full test that proves it:

[Fact]
public void CreateAnonymousMockOfConcreteType()
{
    // Fixture setup
    Func<ISpecimenBuilder, bool> concreteFilter
        = sb => !(sb is ConstructorInvoker);
    var relays = new FilteringRelays(concreteFilter);
 
    Func<Type, bool> mockSpec = t => true;
 
    var fixture = new Fixture(relays).Customize(
        new AutoMoqCustomization(new MockRelay(mockSpec)));
    // Exercise system
    var foo = fixture.CreateAnonymous<Foo>();
    foo.DoIt();
    // Verify outcome
    var fooTD = Mock.Get(foo);
    fooTD.Verify(f => f.DoIt());
    // Teardown
}

Foo is this concrete class:

public class Foo
{
    public virtual void DoIt()
    {
    }
}

Finally, a word of caution: this is a spike. It's not fully tested and is bound to fail in certain cases: at least one case is when the type to be created is sealed. Since Moq can't create a Mock of a sealed type, the above code will fail in that case. We can address this issue with some more sophisticated filters and Specifications, but I will leave that up to the interested reader (or a later blog post).

All in all I think this provides an excellent glimpse of the degree of extensibility that is built into AutoFixture 2.0's kernel.


AutoFixture as an auto-mocking container

Thursday, 19 August 2010 19:25:50 UTC

The new internal architecture of AutoFixture 2.0 enables some interesting features. One of these is that it becomes easy to extend AutoFixture to become an auto-mocking container.

Since I personally use Moq, the AutoFixture 2.0 .zip file includes a new assembly called Ploeh.AutoFixture.AutoMoq that includes an auto-mocking extension that uses Moq for Test Doubles.

Please note that AutoFixture in itself has no dependency on Moq. If you don't want to use Moq, you can just ignore the Ploeh.AutoFixture.AutoMoq assembly.

Auto-mocking with AutoFixture does not have to use Moq. Although it only ships with Moq support, it is possible to write an auto-mocking extension for a different dynamic mock library.

To use it, you must first add a reference to Ploeh.AutoFixture.AutoMoq. You can now create your Fixture instance like this:

var fixture = new Fixture()
    .Customize(new AutoMoqCustomization());

What this does is add a fallback mechanism to the fixture. If a request for a type falls through the normal engine without being handled, the auto-mocking extension checks whether it is a request for an interface or an abstract class. If so, it relays the request to a request for a Mock of the same type.

A different part of the extension handles requests for Mocks, which ensures that the Mock will be created and returned.

Splitting up auto-mocking into a relay and a creational strategy for Mock objects proper also means that we can directly request a Mock if we would like that. Even better, we can use the built-in Freeze support to freeze a Mock, and it will automatically freeze the auto-mocked instance as well (because the relay will ask for a Mock that turns out to be frozen).

Returning to the original frozen pizza example, we can now rewrite it like this:

[Fact]
public void AddWillPipeMapCorrectly()
{
    // Fixture setup
    var fixture = new Fixture()
        .Customize(new AutoMoqCustomization());
 
    var basket = fixture.Freeze<Basket>();
    var mapMock = fixture.Freeze<Mock<IPizzaMap>>();
 
    var pizza = fixture.CreateAnonymous<PizzaPresenter>();
 
    var sut = fixture.CreateAnonymous<BasketPresenter>();
    // Exercise system
    sut.Add(pizza);
    // Verify outcome
    mapMock.Verify(m => m.Pipe(pizza, basket.Add));
    // Teardown
}

Notice that we can simply freeze Mock<IPizzaMap>, which automatically freezes the IPizzaMap instance as well. When we later create the SUT by requesting an anonymous BasketPresenter, IPizzaMap is already frozen in the fixture, so the correct instance is injected into the SUT.

This is similar to the behavior of the custom FreezeMoq extension method I previously described, but now this feature is baked in.


Comments

How would you compare and contrast AutoFixture with UnityAutoMockContainer? I started using UnityAutoMockContainer and have been very pleased with it so far. The main contribution it makes is that it reduces the friction of TDD by mocking all dependencies so I do not have to. Then I can extract them using syntax like this

Mock<IFoo> mockFoo = _container.GetMock<IFoo>();


It appears that both AutoFixture and UnityAutoMocker can be used in a similar fashion. Though it appears that AutoFixture has a broader feature set (the ability to create anonymous value types.) Whereas, one can create anonymous reference types by simply asking the UnityAutoMockContainer to resolve the interface or concrete type.

I have only been using UnityAutoMockContainer for a week or two. So when I stumbled upon your most excellent book Dependency Injection in .Net, I discovered AutoFixture and was considering whether to use it instead. I welcome your perspective. By the way, that book on DI you are writing is great! Nice work.

2010-09-09 10:34 UTC
Hi David

Thanks for writing.

As a disclaimer I should add that I have no experience with UnityAutoMockContainer, but that I'm quite familiar with Unity itself, as well as the general concept of an auto-mocking container, which (IIRC) is something that predates Unity. At the very least there are many auto-mocking containers available, and you can often easily extend an existing DI Container to add auto-mocking capabilities (see e.g. this simplified example).

This is more or less also the case with AutoFixture. It started out as something completely different (namely as a Test Data Builder) and then evolved. At a time it became natural to also add auto-mocking capabilities to it.

You could say that the same is the case with Unity: it's a DI Container and was never envisioned as an auto-mocking container. However, since Unity is extensible, it is possible to add auto-mocking to it as well (just as the Autofac example above).

This means that UnityAutoMockContainer and AutoFixture approach the concept of auto-mocking from two different directions. I can't really compare their auto-mocking capabilities as I don't know UnityAutoMockContainer well enough, but I can offer this on a more general level:

AutoFixture shares a lot of similarities with DI Containers (Unity included). It supports auto-wiring and it can be configured to create instances in lots of interesting ways. However, since the focus is different, it does some things better and some things not as well as a DI Container.

AutoFixture has more appropriate handling of primitive types. Normally a DI Container is not going to serve you a sequence of unique strings or numbers, whereas AutoFixture does. This was really the original idea that started it all.

Most DI Containers are not going to try to populate writable properties, but AutoFixture does.

AutoFixture also has a more granular concept of what constitutes a request. For all DI Containers I know, a request to resolve something is always based on a Type. AutoFixture, on the other hand, enables us to make arbitrary requests, which means that we can treat a request for a ParameterInfo differently than a request for a PropertyInfo (or a Type) even if they share the same Type. This sometimes comes in handy.

On the other hand AutoFixture is weaker when it comes to lifetime management. A Fixture is never expected to exist for more than a single test case, so it makes no sense to model any other lifestyle than Transient and Singleton. AutoFixture can do that, but nothing more. It has no Seam for implementing custom lifestyles and it does not offer Per Thread or Per HttpRequest lifestyles. It doesn't have to, because it's not a DI Container.

In short I prefer AutoFixture for TDD because it's a more focused tool than any DI Container. On the other hand it means that there's one more new API to learn, although that's not an issue for me personally :)
2010-09-09 11:04 UTC
Hi Mark,

I just started to play with AutoFixture and its auto-mocking support. As far as I can see, the auto-mocking extensions return mocks without functionality, especially, without any data in the properties.
Example:
var anonymousParent = fixture.CreateAnonymous<ComplexParent>();
This example from the cheat sheet would return a filled fixture:
ComplexParent:
-Child: ComplexChild
--Name: string: "namef70b67ff-05d3-4498-95c9-de74e1aa0c3c"
--Number: int: 1


When using the AutoMoqCustomization with an interface IComplexParent with one property Child: ComplexChild, the result is strange:
1) Child is of type ComplexChildProxySomeGuid, i.e. it is mocked, although it is a concrete class
2) Child.Number is 0
3) Child.Name is null


I think this is inconsistent. Is it a bug or intentional? If it is intentional, could you please explain?

Thanks,

Daniel
2011-09-08 09:57 UTC
Yes, this is mostly intentional - mostly because even if AutoFixture would attempt to assign data to properties, you wouldn't get any result out of it. As an illustration, try using Moq without AutoFixture to create a new instance of IComplexParent and assign a property to it. It's not going to remember the property value!

This is, in my opinion, the correct design choice made by the Moq designers: an interface specifies only the shape of members - not behavior.

However, with Moq, you can turn on 'normal' property behavior by invoking the SetupAllProperties on the Mock instance. The AutoMoqCustomization doesn't do that, but it's certainly possible to use the underlying types used to implement it to compose a different AutoMoqCustomization that also invokes SetupAllProperties.
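The difference between the two property behaviors can be illustrated with Moq alone, using the types from this discussion (a small sketch; the interface and class definitions are inferred from the thread above):

```csharp
using Moq;

public interface IComplexParent
{
    ComplexChild Child { get; set; }
}

public class ComplexChild
{
    public string Name { get; set; }
    public int Number { get; set; }
}

public class PropertyBehaviorExample
{
    public static void Run()
    {
        // Without SetupAllProperties, a mock forgets property assignments:
        var forgetful = new Mock<IComplexParent>();
        forgetful.Object.Child = new ComplexChild();
        // forgetful.Object.Child is still null here.

        // With SetupAllProperties, properties gain 'normal' behavior
        // and remember assigned values:
        var stub = new Mock<IComplexParent>();
        stub.SetupAllProperties();
        stub.Object.Child = new ComplexChild();
        // stub.Object.Child now returns the assigned instance.
    }
}
```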

In any case, this is only half of the solution, because it would only enable the Mock instance to get 'normal' property behavior. The other half would then be to make the AutoMoqCustomization automatically fill those properties. This is possible by wrapping the ISpecimenBuilder instance responsible for creating the actual interface instance in a PostProcessor. The following discussions may shed a little more light on these matters: Default Greedy constructor behavior breaks filling and Filling sub properties.

In short, I don't consider it a bug, but I do respect that other people may want different base behavior than I find appropriate. It's definitely possible to tweak AutoFixture to do what you want it to do, but perhaps a bit more involved than it ought to be. AutoFixture 3.0 should streamline the customization API somewhat, I hope.

As a side note, I consider properties in interfaces a design smell, so that's probably also why I've never had any issues with the current behavior.
2011-09-08 10:51 UTC
Thanks for your response. As I am using NSubstitute, I created an auto-mocking customization for it. I used the RhinoMocks customization as a template and changed it to support NSubstitute. I also implemented it the way I expected AutoMoq to behave, i.e. with filled properties.
Please see the fork I created: https://hg01.codeplex.com/forks/dhilgarth/autonsubstitute.
If you are interested, I can send a pull request.
2011-09-08 13:28 UTC
I think it is better to have AutoFixture behave consistent. When I request an anonymous instance, I expect its properties to be filled. I don't care if it's a concrete class I get or a mock because I asked for an anonymous instance of an interface. For me as a user it's not intuitive that they behave differently.
That's why I implemented it that way in AutoNSubstitute.


BTW, AutoMoq is in itself inconsistent: As I wrote in my original question, the property Child is not null but it contains a mock of ComplexChild. I understand your reasons for saying the properties should be null if a mock is returned, but then it should apply consistently to all properties.

Maybe it is a good idea to let the user of the customization choose what behavior he wants?
2011-09-08 14:19 UTC
Yes, I agree that consistency is desirable :)

As I mentioned, it's not something I've ever given a great deal of thought because I almost never define properties on interfaces, but I agree that it might be more consistent, so I've created a work item for it - you could go and vote for it if you'd like :)

Regarding your observation about the Child property, that's once more the behavior we get from Moq... The current implementation of AutoMoq basically just hands all mocking concerns off to Moq and doesn't deal with them any further.

Regarding your fork for NSubstitute I only had a brief look at it, but please do send a pull request - then we'll take it from there :)

Thank you for your interest.
2011-09-08 14:37 UTC

AutoFixture 2.0 beta 1

Monday, 09 August 2010 11:32:06 UTC

It gives me great pleasure to announce that AutoFixture 2.0 beta 1 is now available for download. Compared to version 1.1 AutoFixture 2.0 is implemented using a new and much more open and extensible engine that enables many interesting scenarios.

Despite the new engine and the vastly increased potential, the focus on version 2.0 has been to ensure that the current, known feature set was preserved.

What's new?

While AutoFixture 2.0 introduces new features, this release is first and foremost an upgrade to a completely new internal architecture, so the number of new features is limited. Nevertheless, the following new features are available in version 2.0:

There are still open feature requests for AutoFixture, so now that the new engine is in place I can again focus on implementing some of the features that were too difficult to address with the old engine.

Breaking changes

Version 2.0 introduces some breaking changes, although the fundamental API remains recognizable.

  • A lot of the original methods on Fixture are now extension methods on IFixture (a new interface). The methods include CreateAnonymous<T>, CreateMany<T> and many others, so you will need to include a using directive for Ploeh.AutoFixture to compile the unit test code.
  • CreateMany<T> still returns IEnumerable<T>, but the return value is much more consistently deferred. This means that in almost all cases you should instantly stabilize the sequence by converting it to a List<T>, an array or similar.
  • Several methods are now deprecated in favor of new methods with better names.

A few methods were also removed, but only those that were already deprecated in version 1.1.
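The deferred CreateMany<T> behavior can be sketched like this (Person is a hypothetical class; the point is only the call pattern):

```csharp
using System.Collections.Generic;
using System.Linq;
using Ploeh.AutoFixture;

var fixture = new Fixture();

// The returned sequence is deferred; enumerating it twice may
// produce two different sets of instances.
IEnumerable<Person> deferred = fixture.CreateMany<Person>();

// Stabilize the sequence immediately if it will be enumerated
// more than once:
List<Person> stable = fixture.CreateMany<Person>().ToList();
```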

Roadmap

The general availability of beta 1 of AutoFixture 2.0 marks the beginning of a trial period. If no new issues are reported within the next few weeks, a final version 2.0 will be released. If too many issues are reported, a new beta version may be necessary.

Please report any issues you find.

After the release of the final version 2.0 my plan is currently to focus on the Idioms project, although there are many other new features that might warrant my attention.

Blog posts about the new features will also follow soon.


Comments

Arnis L.
One thing i somehow can't understand - when exactly there's a need for such a framework as AutoFixture?

P.s. Just don't read it wrong - I didn't say it's useless. Problem is with me. :)
2010-08-13 19:00 UTC
That's a reasonable question. The short answer is provided by AutoFixture's tag line: Write maintainable unit tests, faster.

For a more elaborate answer, you may want to read this, this and this. Also: stay tuned for more content.
2010-08-15 19:32 UTC
Arnis L.
Using it for awhile. Really nice tool. Less to write and think about. :)
2010-08-29 17:57 UTC

StructureMap PerRequest vs. Unique lifetimes

Tuesday, 20 July 2010 20:42:53 UTC

StructureMap offers several different lifetimes, among them two known as PerRequest and Unique. Recently I found myself wondering what the difference between the two was, but a little help from Jeremy Miller put me on the right track.

In short, Unique is equivalent to what Castle Windsor calls Transient: every time an instance of a type is needed, a new instance is created. Even if we need the same service multiple times in the same resolved graph, multiple instances are created.

PerRequest, on the other hand, is a bit special. Each type can be viewed as a Singleton within a single call to GetInstance, but as Transient across different invocations of GetInstance. In other words, the same instance will be shared within a resolved object graph, but if we resolve the same root type once more, we will get a new shared instance - a Singleton local to that graph.

Here are some unit tests I wrote to verify this behavior (recall that PerRequest is StructureMap's default lifestyle):

[Fact]
public void ResolveServicesWithSameUniqueDependency()
{
    var container = new Container();
    container.Configure(x =>
    {
        var unique = new UniquePerRequestLifecycle();
        x.For<IIngredient>().LifecycleIs(unique)
            .Use<Shrimp>();
        x.For<OliveOil>().LifecycleIs(unique);
        x.For<EggYolk>().LifecycleIs(unique);
        x.For<Vinegar>().LifecycleIs(unique);
        x.For<IIngredient>().LifecycleIs(unique)
            .Use<Vinaigrette>();
        x.For<IIngredient>().LifecycleIs(unique)
            .Use<Mayonnaise>();
        x.For<Course>().LifecycleIs(unique);
    });
 
    var c1 = container.GetInstance<Course>();
    var c2 = container.GetInstance<Course>();
 
    Assert.NotSame(
        c1.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c1.Ingredients.OfType<Mayonnaise>().Single().Oil);
    Assert.NotSame(
        c2.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c2.Ingredients.OfType<Mayonnaise>().Single().Oil);
    Assert.NotSame(
        c1.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c2.Ingredients.OfType<Vinaigrette>().Single().Oil);
}
 
[Fact]
public void ResolveServicesWithSamePerRequestDependency()
{
    var container = new Container();
    container.Configure(x =>
    {
        x.For<IIngredient>().Use<Shrimp>();
        x.For<OliveOil>();
        x.For<EggYolk>();
        x.For<Vinegar>();
        x.For<IIngredient>().Use<Vinaigrette>();
        x.For<IIngredient>().Use<Mayonnaise>();
    });
 
    var c1 = container.GetInstance<Course>();
    var c2 = container.GetInstance<Course>();
 
    Assert.Same(
        c1.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c1.Ingredients.OfType<Mayonnaise>().Single().Oil);
    Assert.Same(
        c2.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c2.Ingredients.OfType<Mayonnaise>().Single().Oil);
    Assert.NotSame(
        c1.Ingredients.OfType<Vinaigrette>().Single().Oil,
        c2.Ingredients.OfType<Vinaigrette>().Single().Oil);
}

Notice that in both cases, the OliveOil instances are different across two independently resolved graphs (c1 and c2).

However, within each graph, the same OliveOil instance is shared in the PerRequest configuration, whereas they are different in the Unique configuration.
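For reference, the tests above rely on a small object graph that the post doesn't show. A plausible sketch of those types (constructor signatures inferred from the assertions; these definitions are assumptions, not the original code) might look like this:

```csharp
using System.Collections.Generic;

public interface IIngredient { }

public class Shrimp : IIngredient { }

public class OliveOil { }
public class EggYolk { }
public class Vinegar { }

// Both sauces depend on OliveOil, which is what makes the
// instance-sharing behavior observable in the tests.
public class Vinaigrette : IIngredient
{
    public Vinaigrette(OliveOil oil, Vinegar vinegar)
    {
        this.Oil = oil;
    }

    public OliveOil Oil { get; private set; }
}

public class Mayonnaise : IIngredient
{
    public Mayonnaise(OliveOil oil, EggYolk yolk)
    {
        this.Oil = oil;
    }

    public OliveOil Oil { get; private set; }
}

public class Course
{
    public Course(IEnumerable<IIngredient> ingredients)
    {
        this.Ingredients = ingredients;
    }

    public IEnumerable<IIngredient> Ingredients { get; private set; }
}
```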


Comments

Freddy Hansen
Thank you, I had missed the subtle difference here and your post saved me :-)
2012-10-23 18:58 UTC

Domain Objects and IDataErrorInfo

Monday, 12 July 2010 12:58:16 UTC

Occasionally I get a question about whether it is reasonable or advisable to let domain objects implement IDataErrorInfo. In summary, my answer is that it's not so much a question about whether it's a leaky abstraction or not, but rather whether it makes sense at all. To me, it doesn't.

Let us first consider the essence of the concept underlying IDataErrorInfo: It provides information about the validity of an object. More specifically, it provides error information when an object is in an invalid state.

This is really the crux of the matter. Domain Objects should be designed so that they cannot be put into invalid states. They should guarantee their invariants.

Let us return to the good old DanishPhoneNumber example. Instead of accepting or representing a Danish phone number as a string or integer, we model it as a Value Object that encapsulates the appropriate domain logic.

More specifically, the class' constructor guarantees that you can't create an invalid instance:

private readonly int number;
 
public DanishPhoneNumber(int number)
{
    if ((number < 112) ||
        (number > 99999999))
    {
        throw new ArgumentOutOfRangeException("number");
    }
    this.number = number;
}

Notice that the Guard Clause guarantees that you can't create an instance with an invalid number, and the readonly keyword guarantees that you can't change the value afterwards. Immutable types make it easier to protect a type's invariants, but it is also possible with mutable types - you just need to place proper Guards in public setters and other mutators, as well as in the constructor.
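For a mutable type, the same approach might look like this hypothetical variation (not from the original post), where the constructor and the setter share one Guard Clause:

```csharp
using System;

public class MutableDanishPhoneNumber
{
    private int number;

    public MutableDanishPhoneNumber(int number)
    {
        // Reuse the guarded setter so the invariant is
        // protected in exactly one place.
        this.Number = number;
    }

    public int Number
    {
        get { return this.number; }
        set
        {
            if ((value < 112) || (value > 99999999))
            {
                throw new ArgumentOutOfRangeException("value");
            }
            this.number = value;
        }
    }
}
```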

In any case, whenever a Domain Object guarantees its invariants according to the correct domain logic it makes no sense for it to implement IDataErrorInfo; if it did, the implementation would be trivial, because there would never be an error to report.

Does this mean that IDataErrorInfo is a redundant interface? Not at all, but it is important to realize that it's an Application Boundary concern instead of a Domain concern. At Application Boundaries, data entry errors will happen, and we must be able to cope with them appropriately; we don't want the application to crash by passing unvalidated data to DanishPhoneNumber's constructor.

Does this mean that we should duplicate domain logic at the Application Boundary? That should not be necessary. At first, we can apply a simple refactoring to the DanishPhoneNumber constructor:

public DanishPhoneNumber(int number)
{
    if (!DanishPhoneNumber.IsValid(number))
    {
        throw new ArgumentOutOfRangeException("number");
    }
    this.number = number;
}
 
public static bool IsValid(int number)
{
    return (112 <= number)
        && (number <= 99999999);
}

We now have a public IsValid method we can use to implement IDataErrorInfo at the Application Boundary. A next step might be to add a TryParse method.
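Such a TryParse method might look like this (a sketch following the .NET TryParse idiom; the original post doesn't show the implementation):

```csharp
public static bool TryParse(int candidate, out DanishPhoneNumber phoneNumber)
{
    phoneNumber = null;
    if (!DanishPhoneNumber.IsValid(candidate))
    {
        return false;
    }

    // Safe to construct: the Guard Clause cannot throw here.
    phoneNumber = new DanishPhoneNumber(candidate);
    return true;
}
```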

IDataErrorInfo implementations are often related to input forms in user interfaces. Instead of crashing the application or closing the form, we want to provide appropriate error messages to the user. We can use the Domain Object to provide validation logic, but the concern is completely different: we want the form to stay open until valid data has been entered. Not until all data is valid do we allow the creation of a Domain Object from that data.
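As a sketch of this separation, a form's backing class at the Application Boundary could implement IDataErrorInfo on top of the Domain Object's IsValid method (PhoneNumberEntryViewModel is a hypothetical name, not from the original post):

```csharp
using System.ComponentModel;

public class PhoneNumberEntryViewModel : IDataErrorInfo
{
    // Raw, possibly invalid user input is allowed here - that's
    // consistent state for an input gatherer.
    public int Number { get; set; }

    public string Error
    {
        get { return this["Number"]; }
    }

    public string this[string columnName]
    {
        get
        {
            // Delegate the actual domain rule to the Domain Object.
            if (columnName == "Number" &&
                !DanishPhoneNumber.IsValid(this.Number))
            {
                return "Please enter a valid Danish phone number.";
            }
            return string.Empty;
        }
    }
}
```

Only once the view model reports no errors would the application construct an actual DanishPhoneNumber from the input.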

In short, if you feel tempted to add IDataErrorInfo to a Domain Class, consider whether you aren't about to violate the Single Responsibility Principle. In my opinion, this is the case, and you would be better off reconsidering the design.


Comments

onof
I agree.

Too often I see domain objects implementing a lot of validation code. I think most validation logic should be kept out of domain objects.
2010-07-12 15:05 UTC
Arnis L
People are struggling with understanding what they are validating and where to put their validation, but I kind of think that the nature of validity itself is often forgotten, unexplored, or misunderstood.

DanishPhoneNumber value can't be less than 112. In reality we are modeling - such a phone number just does not exist. So it makes sense to disallow existence of such an object and throw an error immediately.

But there might be cases when domain contains temporary domain object invalidity from specific viewpoint/s.

Consider good old cargo shipment domain from Blue book. Shipment object is invalid and shouldn't be shipped if there's no cargo to ship and yet such a shipment can exist because it's planned out gradually. In these kind of situations - it might make sense to use IDataErrorInfo interface.
2010-07-13 13:44 UTC
I would prefer to disagree :)

We must keep in mind that we are not modeling the real world, but rather the business logic that addresses the real world. In your example, that would be represented by a proper domain object that models that a shipment is still in the planning stage. Let's call this object PlannedShipment.

According to the domain model, PlannedShipment has its own invariants that it must protect, and the point still remains: PlannedShipment itself cannot be in an invalid state. However, PlannedShipment can't be shipped because it has not yet been promoted to a 'proper' Shipment. Such an API is safer because it makes it impossible to introduce errors of the kind where the code attempts to ship an invalid Shipment.
2010-07-13 14:58 UTC
I think it is a very interesting thought to make domain objects immutable. However, I'm very curious about the practical implications of this. For instance: are all your domain objects immutable? Do you create them using constructors with many arguments (because some domain objects tend to have many properties)? It gets really awkward when constructors have many (say, more than 5) arguments. How do you deal with this? Which O/RM tool(s) are you using with this immutability, and how are you achieving it? Some O/RM tools will probably be a bad pick for trying to implement this. How are you dealing with updating existing entities? Creating a new entity with the same id seems rather awkward and doesn't seem to communicate its intent very well IMO. I'd love to see some examples.

Thanks
2010-07-13 20:08 UTC
I never said that all Domain Objects should be immutable - I'm using the terminology from Domain-Driven Design that distinguishes between Entities and Value Objects.

A Value Object benefits very much from being immutable, so I always design them that way, but that doesn't mean that I make Entities immutable as well. I usually don't, although I'm sometimes toying with that idea.

In any case, if you have more than 4 or 5 fields in a class (no matter if you fill them through a constructor or via property setters), you most likely have a new class somewhere in there waiting to be set free. Clean Code makes a pretty good case of this. Once again, too many primitives in an Entity is a smell that the Single Responsibility Principle is violated.
2010-07-14 09:06 UTC
It's very uncommon that i disagree with you, but...

With your phone number example in mind, the validation should imho never be encapsulated in the domain object, but belong in a separate validator. When you put validation inside the constructor you will eventually break the Open/Closed Principle. Of course we have to validate for null input if it would break the functionality, but I would never integrate range checks etc. into the class itself.
2010-07-15 20:28 UTC
You'll have to define the range check somewhere. If you put it in an external class, I could repeat your argument there: "your DanishPhoneNumberRangeValidator isn't open for extensibility." Value Objects are intrinsically rather atomic, and not particularly composable, in scope.

However, consumers of those Value Objects need not be. While I didn't show it, DanishPhoneNumber could implement IPhoneNumber and all clients consume the interface. That would make DanishPhoneNumber a leaf of a composition while still keeping the architecture open for extensibility.

The point is to define each type so that their states are always consistent. Note that for input gatherers, invalid data is considered consistent in the scope of input gathering. That's where IDataErrorInfo makes sense :)
2010-07-15 21:13 UTC
Arnis L
In short - I just wanted to mention nature of validity itself.

Second thing I wanted to emphasize is about Your sentence of making sense - we should focus on technology we are using not only to be able to express ourselves, but to be aware (!) of how we are doing it and be able to adjust that.

Patterns, OOP, .NET, POCO and whatnot are tools only. IDataErrorInfo is a tool too. Therefore - if it feels natural to use it to express our domain model (while it's suboptimal cause of arguments You mentioned), there is nothing wrong with using it per se. An agreement that our domain model objects (in contrast to reality) can be invalid if it simplifies things greatly (think ActiveRecord) is a tool too.
2010-07-20 08:51 UTC
I think we can often construct examples where the opposite of our current stance makes sense. Still, I like all rules like the above because they should first and foremost make us stop and think about what we are doing. Once we've done that, we can forge ahead knowing that we made a conscious decision - no matter what we chose.

To me, internal consistency and the SRP is so important that I would feel more comfortable having IDataErrorInfo outside of domain objects, but there are no absolutes :)
2010-07-20 09:35 UTC

Introducing AutoFixture Likeness

Tuesday, 29 June 2010 06:39:30 UTC

The last time I presented a sample of an AutoFixture-based unit test, I purposely glossed over the state-based verification that asserted that the resulting state of the basket variable was that the appropriate Pizza was added:

Assert.IsTrue(basket.Pizze.Any(p =>
    p.Name == pizza.Name), "Basket has added pizza.");

The main issue with this assertion is that the implied equality expression is rather weak: we consider a PizzaPresenter instance to be equal to a Pizza instance if their Name properties match.

What if they have other properties (like Size) that don't match? If this is the case, the test would be a false negative. A match would be found in the Pizze collection, but the instances would not truly represent the same pizza.

How do we resolve this conundrum without introducing equality pollution? AutoFixture offers one option in the form of the generic Likeness<TSource, TDestination> class. This class offers convention-based test-specific equality mapping from TSource to TDestination by overriding the Equals method.

One of the ways we can use it is by a convenience extension method. This unit test is a refactoring of the test from the previous post, but now using Likeness:

[TestMethod]
public void AddWillAddToBasket_Likeness()
{
    // Fixture setup
    var fixture = new Fixture();
    fixture.Register<IPizzaMap, PizzaMap>();
 
    var basket = fixture.Freeze<Basket>();
 
    var pizza = fixture.CreateAnonymous<PizzaPresenter>();
    var expectedPizza = 
        pizza.AsSource().OfLikeness<Pizza>();
 
    var sut = fixture.CreateAnonymous<BasketPresenter>();
    // Exercise system
    sut.Add(pizza);
    // Verify outcome
    Assert.IsTrue(basket.Pizze.Any(expectedPizza.Equals));
    // Teardown
}

Notice how the Likeness instance is created with the AsSource() extension method. The pizza instance (of type PizzaPresenter) is the source of the Likeness, whereas the Pizza domain model type is the destination. The expectedPizza instance is of type Likeness<PizzaPresenter, Pizza>.

The Likeness class overrides Equals with a convention-based comparison: if two properties have the same name and type, they are equal if their values are equal. All public properties on the destination must have equal properties on the source.

This allows me to specify the Equals method as the predicate for the Any method in the assertion:

Assert.IsTrue(basket.Pizze.Any(expectedPizza.Equals));

When the Any method evaluates the Pizze collection, it executes the Equals method on Likeness, resulting in a convention-based comparison of all public properties and fields on the two instances.

It's possible to customize the comparison to override the behavior for certain properties, but I will leave that to later posts. This post only scratches the surface of what Likeness can do.

To use Likeness, you must add a reference to the Ploeh.SemanticComparison assembly. You can create a new instance using the public constructor, but to use the AsSource extension method, you will need to add a using directive:

using Ploeh.SemanticComparison.Fluent;

Comments

DavidS
Hi Mark,

In your example, you are only comparing one property and I know that you can test many properties as well.

Now it is my understanding that given many properties if any property doesn't match, then you'll get a test failure. My question is how to output a message pinpointing which property is causing the test to fail.

On another note, maybe you could ask Adam Ralph how he's integrated the comment section on his blog, which I believe is using the same platform as you are. http://adamralph.com/2013/01/09/blog-post-excerpts-a-new-solution/

2013-04-18 12:40 UTC

David, if you want to get more detailed feedback on which properties don't match, you can use expected.ShouldEqual(actual);

2013-04-18 21:33 UTC
