ploeh blog danish software design
Convention-based Customizations with AutoFixture
As previous posts have described, AutoFixture creates Anonymous Variables based on the notion of being able to always hit within a well-behaved Equivalence Class for a given type. This works well a lot of the time because AutoFixture has some sensible defaults: numbers are positive integers and strings are GUIDs.
This last part only works as long as strings are nothing but opaque blobs to the consuming class. This is, however, not an unreasonable assumption. Consider classes that implement Entities such as Person or Address. Strings will often take the form of FirstName, LastName, Street, etc. In all such cases, the value of the string usually doesn't matter.
However, there will always be cases where the value of a string has a special meaning of its own. It will often be best to let AutoFixture guide us towards a better API design, but this is not always possible. Sometimes there are rules that constrain the formatting of a string.
As an example, consider a Money class with this constructor:
public Money(decimal amount, string currencyCode)
{
if (currencyCode == null)
{
throw new ArgumentNullException("...");
}
if (!CurrencyCodes.IsValid(currencyCode))
{
throw new ArgumentException("...");
}
this.amount = amount;
this.currencyCode = currencyCode;
}
Notice that the constructor only allows properly formatted currency codes (such as “DKK”, “USD”, “AUD”, and so on) through, while other strings will throw an exception. AutoFixture's default behavior of creating GUIDs as strings is problematic, as the Money constructor will throw on a GUID.
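To see the problem in action, here's a minimal sketch (assuming AutoFixture 2.0's API as used elsewhere in this post) of what happens when we ask for an anonymous Money instance with no customization:

```csharp
var fixture = new Fixture();
// This call throws, because the anonymous string created for
// currencyCode is a GUID, which CurrencyCodes.IsValid rejects.
var money = fixture.CreateAnonymous<Money>();
```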
We could attempt to fix this by changing the way AutoFixture generates strings in general, but that may not be the best solution as it may interfere with other string values. It is, however, easy to do:
fixture.Inject("DKK");
This simply injects the “DKK” string into the fixture, causing all subsequent strings to have the same value. However, a hypothetical Pizza class with Name and Description properties in addition to a Price property of type Money will now look like this:
{
"Name": "DKK",
"Price": {
"Amount": 1.0,
"CurrencyCode": "DKK"
},
"Description": "DKK"
}
What we really want is to customize only the currency code. This is where the extremely customizable architecture of AutoFixture can help us. As the documentation explains, lots of different requests will flow through the kernel's Chain of Responsibility to create a Money instance. To populate the two parameters of the Money constructor, two ParameterInfo requests will be issued - one for each parameter. We can take advantage of this to create a custom ISpecimenBuilder that only addresses string parameters with the name currencyCode.
public class CurrencyCodeSpecimenBuilder :
ISpecimenBuilder
{
public object Create(object request,
ISpecimenContext context)
{
var pi = request as ParameterInfo;
if (pi == null)
{
return new NoSpecimen(request);
}
if (pi.ParameterType != typeof(string)
|| pi.Name != "currencyCode")
{
return new NoSpecimen(request);
}
return "DKK";
}
}
It simply examines the request to determine whether this is something that it should address at all. Only if the request is a ParameterInfo representing a string parameter named currencyCode do we deal with it. In any other case we return NoSpecimen, which simply tells AutoFixture that it should ask another ISpecimenBuilder instead.
Here we just return the hard-coded string “DKK”, but we could easily have expanded the example to use a more varied generation algorithm. I will leave that, as well as how to generalize this in other ways, as exercises to the reader.
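As one possible sketch of such a variation (the list of codes and the use of Random are my assumptions, not part of the original example), the builder could pick pseudo-randomly from a small set of known currency codes instead of always returning “DKK”:

```csharp
public class CurrencyCodeSpecimenBuilder : ISpecimenBuilder
{
    // A sample set of valid ISO 4217 codes; extend as needed.
    private static readonly string[] codes =
        { "DKK", "USD", "EUR", "AUD" };
    private readonly Random random = new Random();

    public object Create(object request, ISpecimenContext context)
    {
        var pi = request as ParameterInfo;
        if (pi == null
            || pi.ParameterType != typeof(string)
            || pi.Name != "currencyCode")
        {
            return new NoSpecimen(request);
        }
        // Pick a different valid code on each invocation.
        return codes[this.random.Next(codes.Length)];
    }
}
```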
With CurrencyCodeSpecimenBuilder available, we can add it to the Fixture like this:
fixture.Customizations.Add(
new CurrencyCodeSpecimenBuilder());
With this customization added, a Pizza instance now looks like this:
{
"Name": "namec6b7a923-ea78-4817-9e24-a6863a597645",
"Price": {
"Amount": 1.0,
"CurrencyCode": "DKK"
},
"Description": "Description63ef17d7-876d-46d8-af73-1ed91f83e699"
}
Notice how only the currency code is affected while all other string values are created by the default algorithm.
In a nutshell, a custom ISpecimenBuilder can be used to implement all sorts of custom conventions for AutoFixture. The one shown here applies the string “DKK” to all string parameters named currencyCode. This means that the convention isn't necessarily constrained to the Money constructor, but applies to all ParameterInfo instances that fit the specification.
Windows Azure Migration Sanity Check
Recently I attended a workshop where we attempted to migrate an existing web application to Windows Azure. It was a worthwhile workshop that left me with a few key takeaways related to migrating applications to Windows Azure.
The first and most important point is so self-evident that I seriously considered not publishing it. However, apparently it wasn't self-evident to all workshop participants, so someone else might also benefit from this general advice:
Before migrating to Windows Azure, make sure that the application scales to at least two normal servers.
It's as simple as that - still, lots of web developers never consider this aspect of application architecture.
Why is this important in relation to Azure? The Windows Azure SLA only applies if you deploy two or more instances, which makes sense since the hosting servers occasionally need to be patched etc.
Unless you don't care about the SLA, your application must be able to ‘scale' to at least two servers. If it can't, fix this issue first, before attempting to migrate to Windows Azure. You can test this locally by simply installing your application on two different servers and putting them behind a load balancer (you can use virtual machines if you don't have the hardware). Only if it works consistently in this configuration should you consider deploying to Azure.
Here are the most common issues that may prevent the application from ‘scaling':
- Keeping state in memory. If you must use session state, use one of the out-of-process session state store providers.
- Using the file system for persistent storage. The file system is local to each server.
Making sure that the application ‘scales' to at least two servers is such a simple sanity check that it should go without saying, but apparently it doesn't.
Please note that I put ‘scaling' in quotes here. An application that runs on only two servers has yet to prove that it's truly scalable, but that's another story.
Also note that this sanity check in no way guarantees that the application will run on Azure. However, if the check fails, it most likely will not.
Comments
I would also add some cross-cutting concerns, like caching: you may consider moving from a local cache to a distributed one to avoid some unwanted behaviour.
AutoFixture 2.0 Released
It gives me great pleasure to announce the release of AutoFixture 2.0. This release fixes a few minor issues since beta 1, but apart from that there are no significant changes.
This is a major release compared to AutoFixture 1.1, featuring a completely new kernel, as well as new features and extensibility points. The beta 1 announcement sums up the changes. Get it here.
AutoData Theories with AutoFixture
AutoFixture 2.0 comes with a new extension for xUnit.net data theories. For those of us using xUnit.net, it can help make our unit tests more succinct and declarative.
AutoFixture's support for xUnit.net is implemented in a separate assembly. AutoFixture itself has no dependency on xUnit.net, and if you use another unit testing framework, you can just ignore the existence of the Ploeh.AutoFixture.Xunit assembly.
Let's go back and revisit the previous test we wrote using AutoFixture and its auto-mocking extension:
[Fact]
public void AddWillPipeMapCorrectly()
{
    // Fixture setup
    var fixture = new Fixture()
        .Customize(new AutoMoqCustomization());
    var basket = fixture.Freeze<Basket>();
    var mapMock = fixture.Freeze<Mock<IPizzaMap>>();
    var pizza = fixture.CreateAnonymous<PizzaPresenter>();
    var sut = fixture.CreateAnonymous<BasketPresenter>();
    // Exercise system
    sut.Add(pizza);
    // Verify outcome
    mapMock.Verify(m => m.Pipe(pizza, basket.Add));
    // Teardown
}
Notice how all of the Fixture Setup phase is only used to create various objects that will be used in the test. First we create the fixture object, and then we use it to create four other objects. That turns out to be a pretty common idiom when using AutoFixture, so it's worthwhile to reduce the clutter if possible.
With xUnit.net's excellent extensibility features, we can. AutoFixture 2.0 now includes the AutoDataAttribute in a separate assembly. AutoDataAttribute derives from xUnit.net's DataAttribute (just like InlineDataAttribute), and while we can use it as is, it becomes really powerful if we combine it with auto-mocking like this:
public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute()
        : base(new Fixture()
            .Customize(new AutoMoqCustomization()))
    {
    }
}
This is a custom attribute that combines AutoFixture's two optional extensions for auto-mocking and xUnit.net support. With the AutoMoqDataAttribute in place, we can now rewrite the above test like this:
[Theory, AutoMoqData]
public void AddWillPipeMapCorrectly([Frozen]Basket basket,
    [Frozen]Mock<IPizzaMap> mapMock,
    PizzaPresenter pizza,
    BasketPresenter sut)
{
    // Fixture setup
    // Exercise system
    sut.Add(pizza);
    // Verify outcome
    mapMock.Verify(m => m.Pipe(pizza, basket.Add));
    // Teardown
}
The AutoDataAttribute simply uses a Fixture object to create the objects declared in the unit test's parameter list. Notice that the test is no longer a [Fact], but rather a [Theory].
The only slightly tricky thing to notice is that when we declare a parameter object, it will automatically map to a call to the CreateAnonymous method for the parameter type. However, when we need to invoke the Freeze method, we can add the [Frozen] attribute in front of the parameter.
The best part about data theories is that they don't prevent us from writing normal unit tests in the same Test Class, and this carries over to the [AutoData] attribute. We can use it when it's possible, but for those more complex tests where we need to interact with a Fixture instance, we can still write a normal [Fact].
Comments
System.InvalidOperationException : No data found for XunitSample.TestClass.TestMethod2
System.InvalidOperationException : No data found for XunitSample.TestClass.TestMethod1
Did I miss something in terms of setup? Sample code here - https://gist.github.com/1920943
Forgive me if I missed this point in your blog, but it seems that the parameter order is critical when injecting objects marked with the [Frozen] attribute into your test, if the SUT is being injected with that frozen object.
I was refactoring my tests to make use of AutoData like you've shown here, and was adding my [Frozen]Mock<IMySutDependency> parameter willy-nilly to the end of the test parameters, *after* the SUT parameter. I was freezing it so that I could verify the same mocked dependency that was injected into the SUT as part of my test, or so I thought...
void MySutTest(MySut sut, [Frozen]Mock<IMySutDependency> frozen)
After a brief episode of confusion and self-loathing, I moved the frozen mocked dependency in front of the SUT in the test parameter list. Skies parted, angels sang, test passed again.
void MySutTest([Frozen]Mock<IMySutDependency> frozen, MySut sut)
Jeff, thank you for your comment. As you have discovered, the order of test parameters matters when you apply those 'hint' attributes, like [Frozen]. This is by design, because it enables you to generate one or more unique values of a data type before you freeze it. This could sometimes be important, although it's not something I do a lot. Keep in mind that the [Frozen] attribute wasn't designed exclusively with mocks in mind; it doesn't know anything about mocks - it just freezes a Type.
In general, all of those 'hint' attributes apply an ICustomization, and they apply each ICustomization in the order of the arguments to which they are applied. The order of AutoFixture Customizations matters.
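A sketch of the ordering behavior described above (a hypothetical test, assuming the AutoData attribute from the post): a parameter declared before [Frozen] gets its own unique value, while parameters after it share the frozen one:

```csharp
[Theory, AutoData]
public void FreezeOrderMatters(
    string before,          // unique anonymous value
    [Frozen]string frozen,  // freezes the string type from here on
    string after)           // receives the frozen value
{
    Assert.NotEqual(before, frozen);
    Assert.Equal(frozen, after);
}
```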
The Register Resolve Release pattern
The subject of Dependency Injection (DI) in general, and DI Containers specifically, suffers from horrible terminology that seems to confuse a lot of people. Newcomers to DI often think about DI Containers as a sort of Abstract Factory on steroids. It's not. Nicholas Blumhardt already realized and described this phenomenon a couple of years ago.
The core of the matter is that as developers, we are extremely accustomed to thinking about components and services in terms of queries instead of commands. However, the Hollywood Principle insists that we should embrace a tell, don't ask philosophy. We can apply this principle to DI Containers as well: Don't call the container; it'll call you.
This leads us to what Krzysztof Koźmic calls the three calls pattern. Basically it states that we should only do three things with a DI Container:
- Bootstrap the container
- Resolve root components
- Dispose the container
This is very sound advice and independently of Krzysztof I've been doing something similar for years - so perhaps the pattern label is actually in order here.
However, I think that the pattern deserves a more catchy name, so in the spirit of the Arrange Act Assert (AAA) pattern for unit testing I propose that we name it the Register Resolve Release (RRR) pattern. The names originate with Castle Windsor terminology, where we:
- Register components with the container
- Resolve root components
- Release components from the container
Other containers also support the RRR pattern, but if we were to pick their terminology, it would rather be the Configure GetInstance Dispose (CGD) pattern (or something similar), and that's just not as catchy.
We can rewrite a previous example with Castle Windsor and annotate it with comments to call out where the three container phases occur:
private static void Main(string[] args)
{
    var container = new WindsorContainer();
    container.Kernel.Resolver.AddSubResolver(
        new CollectionResolver(container.Kernel));

    // Register
    container.Register(
        Component.For<IParser>()
            .ImplementedBy<WineInformationParser>(),
        Component.For<IParser>()
            .ImplementedBy<HelpParser>(),
        Component.For<IParseService>()
            .ImplementedBy<CoalescingParserSelector>(),
        Component.For<IWineRepository>()
            .ImplementedBy<SqlWineRepository>(),
        Component.For<IMessageWriter>()
            .ImplementedBy<ConsoleMessageWriter>());

    // Resolve
    var ps = container.Resolve<IParseService>();
    ps.Parse(args).CreateCommand().Execute();

    // Release
    container.Release(ps);
    container.Dispose();
}
Notice that in most cases, explicitly invoking the Release method isn't necessary, but I included it here to make the pattern stand out.
So there it is: the Register Resolve Release pattern.
Comments
However, a using block invokes Dispose, but not Release. Releasing an object graph is conceptually very different from disposing the container. However, in the degenerate case shown here, there's not a lot of difference, but in a server scenario where we use the container to resolve an object graph per request, we resolve and release many object graphs all the time. In such scenarios we only dispose the container when the application itself recycles, and even then, we may never be given notice that this happens.
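The server scenario mentioned here can be sketched like this (IRequestHandler, GetNextRequest, and the loop condition are hypothetical, only there to illustrate the different lifetimes):

```csharp
// One container for the application's lifetime; one
// Resolve/Release pair per request.
var container = new WindsorContainer();
// ... Register components here ...

while (applicationIsRunning)
{
    var handler = container.Resolve<IRequestHandler>();
    try
    {
        handler.Handle(GetNextRequest());
    }
    finally
    {
        // Releases the resolved object graph - not the container.
        container.Release(handler);
    }
}

// Only when the application itself recycles:
container.Dispose();
```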
Instrumentation with Decorators and Interceptors
One of my readers recently asked me an interesting question. It relates to my book's chapter about Interception (chapter 9) and Decorators and how they can be used for instrumentation-like purposes.
In an earlier blog post we saw how we can use Decorators to implement Cross-Cutting Concerns, but the question relates to how a set of Decorators can be used to log additional information about code execution, such as the time before and after a method is called, the name of the method and so on.
A Decorator can excellently address such a concern as well, as we will see here. Let us first define an IRegistrar interface and create an implementation like this:
public class ConsoleRegistrar : IRegistrar
{
    public void Register(Guid id, string text)
    {
        var now = DateTimeOffset.Now;
        Console.WriteLine("{0}\t{1:s}.{2}\t{3}",
            id, now, now.Millisecond, text);
    }
}
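The IRegistrar interface itself isn't shown above, but inferred from the implementation it would look like this:

```csharp
public interface IRegistrar
{
    // Registers an event identified by a correlation id,
    // with free-form text describing it.
    void Register(Guid id, string text);
}
```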
Although this implementation ‘logs' to the Console, I'm sure you can imagine other implementations. The point is that given this interface, we can add all sorts of ambient information such as the thread ID, the name of the current principal, the current culture and whatnot, while the text string variable still gives us an option to log more information. If we want a more detailed API, we can just make it more detailed - after all, the IRegistrar interface is just an example.
We now know how to register events, but are seemingly no nearer to instrumenting an application. How do we do that? Let us see how we can instrument the OrderProcessor class that I have described several times in past posts.
At the place I left off, the OrderProcessor class uses Constructor Injection all the way down. Although I would normally prefer using a DI Container to auto-wire it, here's a manual composition using Pure DI just to remind you of the general structure of the class and its dependencies:
var sut = new OrderProcessor(
    new OrderValidator(),
    new OrderShipper(),
    new OrderCollector(
        new AccountsReceivable(),
        new RateExchange(),
        new UserContext()));
All the dependencies injected into the OrderProcessor instance implement interfaces on which OrderProcessor relies. This means that we can decorate each concrete dependency with an implementation that instruments it.
Here's an example that instruments the IOrderProcessor interface itself:
public class InstrumentedOrderProcessor : IOrderProcessor
{
    private readonly IOrderProcessor orderProcessor;
    private readonly IRegistrar registrar;

    public InstrumentedOrderProcessor(
        IOrderProcessor processor,
        IRegistrar registrar)
    {
        if (processor == null)
        {
            throw new ArgumentNullException("processor");
        }
        if (registrar == null)
        {
            throw new ArgumentNullException("registrar");
        }

        this.orderProcessor = processor;
        this.registrar = registrar;
    }

    #region IOrderProcessor Members

    public SuccessResult Process(Order order)
    {
        var correlationId = Guid.NewGuid();
        this.registrar.Register(correlationId,
            string.Format("Process begins ({0})",
                this.orderProcessor.GetType().Name));

        var result = this.orderProcessor.Process(order);

        this.registrar.Register(correlationId,
            string.Format("Process ends ({0})",
                this.orderProcessor.GetType().Name));
        return result;
    }

    #endregion
}
That looks like quite a mouthful, but it's really quite simple - the cyclomatic complexity of the Process method is as low as it can be: 1. We really just register the Process method call before and after invoking the decorated IOrderProcessor.
Without changing anything else than the composition itself, we can now instrument the IOrderProcessor interface:
var registrar = new ConsoleRegistrar();
var sut = new InstrumentedOrderProcessor(
    new OrderProcessor(
        new OrderValidator(),
        new OrderShipper(),
        new OrderCollector(
            new AccountsReceivable(),
            new RateExchange(),
            new UserContext())),
    registrar);
However, imagine implementing an InstrumentedXyz for every IXyz and composing the application with them. It's possible, but it's going to get old really fast - not to mention that it massively violates the DRY principle.
Fortunately we can solve this issue with any DI Container that supports dynamic interception. Castle Windsor does, so let's see how that could work.
Instead of implementing the same code ‘template' over and over again to instrument an interface, we can do it once and for all with an interceptor. Imagine that we delete the InstrumentedOrderProcessor; instead, we create this:
public class InstrumentingInterceptor : IInterceptor
{
    private readonly IRegistrar registrar;

    public InstrumentingInterceptor(IRegistrar registrar)
    {
        if (registrar == null)
        {
            throw new ArgumentNullException("registrar");
        }

        this.registrar = registrar;
    }

    #region IInterceptor Members

    public void Intercept(IInvocation invocation)
    {
        var correlationId = Guid.NewGuid();
        this.registrar.Register(correlationId,
            string.Format("{0} begins ({1})",
                invocation.Method.Name,
                invocation.TargetType.Name));

        invocation.Proceed();

        this.registrar.Register(correlationId,
            string.Format("{0} ends ({1})",
                invocation.Method.Name,
                invocation.TargetType.Name));
    }

    #endregion
}
If you compare this to the Process method of InstrumentedOrderProcessor (that we don't need anymore), you should be able to see that they are very similar. In this version, we just use the invocation argument to retrieve information about the decorated method.
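With Castle Windsor, enabling the interceptor for a component might look like this sketch (I'm only showing a single component here, and the exact fluent registration calls should be treated as assumptions):

```csharp
container.Register(Component
    .For<IRegistrar>()
    .ImplementedBy<ConsoleRegistrar>());
container.Register(Component
    .For<InstrumentingInterceptor>());
// Attach the interceptor to each component that should be
// instrumented; repeat for IOrderValidator, IOrderCollector, etc.
container.Register(Component
    .For<IOrderProcessor>()
    .ImplementedBy<OrderProcessor>()
    .Interceptors<InstrumentingInterceptor>());
```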
We can now add InstrumentingInterceptor to a WindsorContainer and enable it for all appropriate components. When we do that and invoke the Process method on the resolved IOrderProcessor, we get a result like this:
bbb9724e-0fad-4b06-9bb0-b8c1c460cded 2010-09-20T21:01:16.744 Process begins (OrderProcessor)
43349d42-a463-463b-8ddf-e569e3170c97 2010-09-20T21:01:16.745 Validate begins (TrueOrderValidator)
43349d42-a463-463b-8ddf-e569e3170c97 2010-09-20T21:01:16.745 Validate ends (TrueOrderValidator)
44fdccc8-f12d-4057-ae03-791225686504 2010-09-20T21:01:16.746 Collect begins (OrderCollector)
8bbb1a0c-6134-4652-a4af-cd8c0c7184a0 2010-09-20T21:01:16.746 GetCurrentUser begins (UserContext)
8bbb1a0c-6134-4652-a4af-cd8c0c7184a0 2010-09-20T21:01:16.747 GetCurrentUser ends (UserContext)
d54359ff-8c32-487f-8728-b19ff0bf4942 2010-09-20T21:01:16.747 GetCurrentUser begins (UserContext)
d54359ff-8c32-487f-8728-b19ff0bf4942 2010-09-20T21:01:16.747 GetCurrentUser ends (UserContext)
c54c4506-23a8-4553-ba9a-066fc64252d2 2010-09-20T21:01:16.748 GetSelectedCurrency begins (UserContext)
c54c4506-23a8-4553-ba9a-066fc64252d2 2010-09-20T21:01:16.748 GetSelectedCurrency ends (UserContext)
b3dba76b-6b4e-44fa-aca5-52b2d8509db3 2010-09-20T21:01:16.750 Convert begins (RateExchange)
b3dba76b-6b4e-44fa-aca5-52b2d8509db3 2010-09-20T21:01:16.751 Convert ends (RateExchange)
e07765bd-fe07-4486-96f1-f74d77241343 2010-09-20T21:01:16.751 Collect begins (AccountsReceivable)
e07765bd-fe07-4486-96f1-f74d77241343 2010-09-20T21:01:16.752 Collect ends (AccountsReceivable)
44fdccc8-f12d-4057-ae03-791225686504 2010-09-20T21:01:16.752 Collect ends (OrderCollector)
231055d3-4ebb-425d-8d69-fb9c85d9a860 2010-09-20T21:01:16.752 Ship begins (OrderShipper)
231055d3-4ebb-425d-8d69-fb9c85d9a860 2010-09-20T21:01:16.753 Ship ends (OrderShipper)
bbb9724e-0fad-4b06-9bb0-b8c1c460cded 2010-09-20T21:01:16.753 Process ends (OrderProcessor)
Notice how we can easily see where and when method calls begin and end using the descriptive text as well as the correlation id. I will leave it as an exercise for the reader to come up with an API that provides better parsing options etc.
As a final note it's worth pointing out that this way of instrumenting an application (or part of it) can be done following the Open/Closed Principle. I never changed the original implementation of any of the components.
My future is Azure
As some of my readers may already know, my (previous) employer Safewhere went out of business in August, so I started looking for something new to do. I believe that I have now found it.
October 1st I'll start in my new role as Technical Lead at Commentor, where it will be my responsibility to establish us as the leading Windows Azure center of expertise in Denmark (and beyond?). That's quite a big mouthful for me, but also something that I'm very excited about.
This means that besides working on real Windows Azure projects with Danish customers, I also anticipate doing a lot of blogging, speaking and writing about Azure in the future.
What does this mean for my many other endeavors, like my book, AutoFixture or just blogging and speaking about TDD and DI in general? These things are something that I've always been doing mainly in my spare time, and I intend to keep doing that. Perhaps there are even better opportunities for synergy in my new line of work, but only time will tell.
I'm really thrilled to be given the opportunity to expand in a slightly different direction. It'll be hard, but I'm sure it's also going to be a blast!
Comments
Congrats - great news for Commentor, Azure and you.
I hope to do a ton of shared Azure projects with you in the near future.
-René
Regards from a true believer in "What can Azure do for you"
Congrats on the new job. It will be interesting to read about Azure. Glad to see that you won't get rid of TDD and DI ;)
Thomas
Don't call the container; it'll call you
There still seems to be some confusion about what is Dependency Injection (DI) and what is a DI Container, so in this post I will try to sort it out as explicitly as possible.
DI is a set of principles and patterns that enable loose coupling.
That's it; nothing else. Remember that old quote from p. 18 of Design Patterns?
Program to an interface; not an implementation.
This is the concern that DI addresses. The most useful DI pattern is Constructor Injection where we inject dependencies into consumers via their constructors. No container is required to do this.
The easiest way to build a DI-friendly application is to just use Constructor Injection all the way. Conversely, an application does not automatically become loosely coupled when we use a DI Container. Every time application code queries a container we have an instance of the Service Locator anti-pattern. The corollary leads to this variation of the Hollywood Principle:
Don't call the container; it'll call you.
A DI Container is a fantastic tool. It's like a (motorized) mixer: you can whip cream by hand, but it's easier with a mixer. On the other hand, without the cream the mixer is nothing. The same is true for a DI Container: to really be valuable, your code must employ Constructor Injection so that the container can auto-wire dependencies.
A well-designed application adheres to the Hollywood Principle for DI Containers: it doesn't call the container. On the other hand, we can use the container to compose the application - or we can do it the hard way; this is called Poor Man's DI. Here's an example that uses Poor Man's DI to compose a complete application graph in a console application:
private static void Main(string[] args)
{
    var msgWriter = new ConsoleMessageWriter();
    new CoalescingParserSelector(
        new IParser[]
        {
            new HelpParser(msgWriter),
            new WineInformationParser(
                new SqlWineRepository(),
                msgWriter)
        })
        .Parse(args)
        .CreateCommand()
        .Execute();
}
Notice how the nested structure of all the dependencies gives you an almost visual idea about the graph. What we have here is Constructor Injection all the way in.
CoalescingParserSelector's constructor takes an IEnumerable<IParser> as input. Both HelpParser and WineInformationParser require an IMessageWriter, and WineInformationParser also needs an IWineRepository. We even pull in types from different assemblies, because SqlWineRepository is defined in the SQL Server-based data access assembly.
Another thing to notice is that the msgWriter variable is shared among two consumers. This is what a DI Container normally addresses with its ability to manage component lifetime. Although there's not a DI Container in sight, we could certainly benefit from one. Let's try to wire up the same graph using Unity (just for kicks):
private static void Main(string[] args)
{
    var container = new UnityContainer();
    container.RegisterType<IParser, WineInformationParser>("parser.info");
    container.RegisterType<IParser, HelpParser>("parser.help");
    container.RegisterType<IEnumerable<IParser>, IParser[]>();
    container.RegisterType<IParseService, CoalescingParserSelector>();
    container.RegisterType<IWineRepository, SqlWineRepository>();
    container.RegisterType<IMessageWriter, ConsoleMessageWriter>(
        new ContainerControlledLifetimeManager());

    container.Resolve<IParseService>()
        .Parse(args)
        .CreateCommand()
        .Execute();

    container.Dispose();
}
We are using Constructor Injection throughout, and most DI Containers (even Unity, but not MEF) natively understand that pattern. Consequently, we can mostly just map interfaces to concrete types and the container will figure out the rest for us.
Notice that I'm using the Configure-Resolve-Release pattern described by Krzysztof Koźmic. First I configure the container, then I resolve the entire object graph, and lastly I dispose the container.
The main part of the application's execution time will be spent within the Execute method, which is where all the real application code runs.
In this example I wire up a console application, but it just as well might be any other type of application. In a web application we just do a resolve per web request instead.
But wait! Does that mean that we have to resolve the entire object graph of the application, even if we have dependencies that cannot be resolved at run-time? No, but that does not mean that you should pull from the container. Pull from an Abstract Factory instead.
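A sketch of what that might look like (all the type names here are hypothetical, chosen only for illustration): the consumer depends on an Abstract Factory, and the container only needs to supply the factory at composition time:

```csharp
// The consumer never sees the container; it asks the factory
// for the dependency it couldn't receive at composition time.
public interface ICommandFactory
{
    ICommand Create(string commandName);
}

public class CommandRunner
{
    private readonly ICommandFactory factory;

    public CommandRunner(ICommandFactory factory)
    {
        if (factory == null)
        {
            throw new ArgumentNullException("factory");
        }

        this.factory = factory;
    }

    public void Run(string commandName)
    {
        var command = this.factory.Create(commandName);
        command.Execute();
    }
}
```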
Another question that is likely to arise is: what if I have dependencies that I rarely use? Must I wire these prematurely, even if they are expensive? No, you don't have to do that either.
In conclusion: there is never any reason to query the container. Use a container to compose your object graph, but don't rely on it by querying from it. Constructor Injection all the way enables most containers to auto-wire your application, and an Abstract Factory can be a dependency too.
Comments
Having special constructors all over the code makes MEF kind of lame in comparison to .NET's Ninject or Java's CDI.
I am sure it works just fine, but it seems a bit too "hands-on" when there are a few good options that require less customization on host containers.
Changing the behavior of AutoFixture auto-mocking with Moq
One of my Twitter followers who appears to be using AutoFixture recently asked me this:
So with the AutoMoqCustomization I feel like I should get Mocks with concrete types too (except the sut) - why am I wrong?
AutoFixture's extension for auto-mocking with Moq was never meant as a general modification of behavior. Customizations extend the behavior of AutoFixture; they don't change it. There's a subtle difference. In any case, the auto-mocking customization was always meant as a fallback mechanism that would create Mocks for interfaces and abstract types because AutoFixture doesn't know how to deal with those.
Apparently @ZeroBugBounce also wants concrete classes to be issued as Mock instances, which is not quite the same thing; AutoFixture already has a strategy for that (it's called ConstructorInvoker).
Nevertheless I decided to spike a little on this to see if I could get it working. It turns out I needed to open some of the auto-mocking classes a bit for extensibility (always a good thing), so the following doesn't work with AutoFixture 2.0 beta 1, but will probably work with the RTW. Also please note that I'm reporting on a spike; I haven't thoroughly tested all edge cases.
That said, the first thing we need to do is to remove AutoFixture's default ConstructorInvoker that invokes the constructor of concrete classes. This is possible with this constructor overload:
public Fixture(DefaultRelays engineParts)
This takes as input a DefaultRelays instance, which is more or less just an IEnumerable<ISpecimenBuilder> (the basic building block of AutoFixture). We need to replace that with a filter that removes the ConstructorInvoker. Here's a derived class that does that:
public class FilteringRelays : DefaultEngineParts
{
    private readonly Func<ISpecimenBuilder, bool> spec;

    public FilteringRelays(Func<ISpecimenBuilder, bool> specification)
    {
        if (specification == null)
        {
            throw new ArgumentNullException("specification");
        }

        this.spec = specification;
    }

    public override IEnumerator<ISpecimenBuilder> GetEnumerator()
    {
        var enumerator = base.GetEnumerator();
        while (enumerator.MoveNext())
        {
            if (this.spec(enumerator.Current))
            {
                yield return enumerator.Current;
            }
        }
    }
}
DefaultEngineParts already derives from DefaultRelays, so this enables us to use the overloaded constructor to remove the ConstructorInvoker by using these filtered relays:
Func&lt;ISpecimenBuilder, bool&gt; concreteFilter = sb =&gt; !(sb is ConstructorInvoker);
var relays = new FilteringRelays(concreteFilter);
The second thing we need to do is to tell the AutoMoqCustomization that it should Mock all types, not just interfaces and abstract classes. With the new (not in beta 1) overload of the constructor, we can now supply a Specification that determines which types should be mocked:
Func<Type, bool> mockSpec = t => true;
We can now create the Fixture like this to get auto-mocking of all types:
var fixture = new Fixture(relays).Customize(
    new AutoMoqCustomization(new MockRelay(mockSpec)));
With this Fixture instance, we can now create concrete classes that are mocked. Here's the full test that proves it:
[Fact]
public void CreateAnonymousMockOfConcreteType()
{
    // Fixture setup
    Func&lt;ISpecimenBuilder, bool&gt; concreteFilter
        = sb =&gt; !(sb is ConstructorInvoker);
    var relays = new FilteringRelays(concreteFilter);

    Func&lt;Type, bool&gt; mockSpec = t =&gt; true;

    var fixture = new Fixture(relays).Customize(
        new AutoMoqCustomization(new MockRelay(mockSpec)));
    // Exercise system
    var foo = fixture.CreateAnonymous&lt;Foo&gt;();
    foo.DoIt();
    // Verify outcome
    var fooTD = Mock.Get(foo);
    fooTD.Verify(f =&gt; f.DoIt());
    // Teardown
}
Foo is this concrete class:
public class Foo
{
    public virtual void DoIt()
    {
    }
}
Finally, a word of caution: this is a spike. It's not fully tested and is bound to fail in certain cases; at least one such case is when the type to be created is sealed. Since Moq can't create a Mock of a sealed type, the above code will fail in that case. We could address this issue with some more sophisticated filters and Specifications, but I will leave that up to the interested reader (or a later blog post).
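As a rough sketch of such a more sophisticated Specification (hypothetical and untested, in the spirit of the spike above), the mock Specification could simply decline sealed types so that Moq is never asked to proxy something it can't:

```csharp
// Hypothetical sketch: only ask Moq to mock types it can actually proxy.
// Sealed types fall through to whatever other builders remain in the pipeline.
Func<Type, bool> mockSpec = t => !t.IsSealed;

var fixture = new Fixture(relays).Customize(
    new AutoMoqCustomization(new MockRelay(mockSpec)));
```

Note that this only solves half the problem: with the ConstructorInvoker filtered out, sealed types would still need some other builder to create them.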
All in all I think this provides an excellent glimpse of the degree of extensibility that is built into AutoFixture 2.0's kernel.
AutoFixture as an auto-mocking container
The new internal architecture of AutoFixture 2.0 enables some interesting features. One of these is that it becomes easy to extend AutoFixture to become an auto-mocking container.
Since I personally use Moq, the AutoFixture 2.0 .zip file includes a new assembly called Ploeh.AutoFixture.AutoMoq that includes an auto-mocking extension that uses Moq for Test Doubles.
Please note that AutoFixture in itself has no dependency on Moq. If you don't want to use Moq, you can just ignore the Ploeh.AutoFixture.AutoMoq assembly.
Auto-mocking with AutoFixture does not have to use Moq. Although it only ships with Moq support, it is possible to write an auto-mocking extension for a different dynamic mock library.
To use it, you must first add a reference to Ploeh.AutoFixture.AutoMoq. You can now create your Fixture instance like this:
var fixture = new Fixture()
    .Customize(new AutoMoqCustomization());
This adds a fallback mechanism to the fixture. If a type falls through the normal engine without being handled, the auto-mocking extension will check whether it is a request for an interface or abstract class. If so, it will relay the request to a request for a Mock of the same type.
A different part of the extension handles requests for Mocks, which ensures that the Mock will be created and returned.
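For example (using the IPizzaMap interface from the example below), both of these requests are served by the extension; the second is relayed to the first behind the scenes:

```csharp
var fixture = new Fixture()
    .Customize(new AutoMoqCustomization());

// Request the Mock directly...
var mock = fixture.CreateAnonymous<Mock<IPizzaMap>>();

// ...or request the interface; the relay turns this into
// a request for Mock<IPizzaMap> and returns its .Object.
var map = fixture.CreateAnonymous<IPizzaMap>();
```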
Splitting up auto-mocking into a relay and a creational strategy for Mock objects proper also means that we can directly request a Mock if we would like that. Even better, we can use the built-in Freeze support to freeze a Mock, and it will automatically freeze the auto-mocked instance as well (because the relay will ask for a Mock that turns out to be frozen).
Returning to the original frozen pizza example, we can now rewrite it like this:
[Fact]
public void AddWillPipeMapCorrectly()
{
    // Fixture setup
    var fixture = new Fixture()
        .Customize(new AutoMoqCustomization());

    var basket = fixture.Freeze&lt;Basket&gt;();
    var mapMock = fixture.Freeze&lt;Mock&lt;IPizzaMap&gt;&gt;();

    var pizza = fixture.CreateAnonymous&lt;PizzaPresenter&gt;();

    var sut = fixture.CreateAnonymous&lt;BasketPresenter&gt;();
    // Exercise system
    sut.Add(pizza);
    // Verify outcome
    mapMock.Verify(m =&gt; m.Pipe(pizza, basket.Add));
    // Teardown
}
Notice that we can simply freeze Mock&lt;IPizzaMap&gt;, which automatically freezes the IPizzaMap instance as well. When we later create the SUT by requesting an anonymous BasketPresenter, IPizzaMap is already frozen in the fixture, so the correct instance will be injected into the SUT.
This is similar to the behavior of the custom FreezeMoq extension method I previously described, but now this feature is baked in.
Comments
Mock<IFoo> mockFoo = _container.GetMock<IFoo>();
It appears that both AutoFixture and UnityAutoMocker can be used in a similar fashion, though AutoFixture seems to have a broader feature set (the ability to create anonymous value types), whereas one can create anonymous reference types by simply asking the UnityAutoMockContainer to resolve the interface or concrete type.
I have only been using UnityAutoMockContainer for a week or two. So when I stumbled upon your most excellent book Dependency Injection in .NET, I discovered AutoFixture and was considering whether to use it instead. I welcome your perspective. By the way, that book on DI you are writing is great! Nice work.
Thanks for writing.
As a disclaimer I should add that I have no experience with UnityAutoMockContainer, but that I'm quite familiar with Unity itself, as well as the general concept of an auto-mocking container, which (IIRC) is something that predates Unity. At the very least there are many auto-mocking containers available, and you can often easily extend an existing DI Container to add auto-mocking capabilities (see e.g. this simplified example).
This is more or less also the case with AutoFixture. It started out as something completely different (namely as a Test Data Builder) and then evolved. At one point it became natural to also add auto-mocking capabilities to it.
You could say that the same is the case with Unity: it's a DI Container and was never envisioned as an auto-mocking container. However, since Unity is extensible, it is possible to add auto-mocking to it as well (just as the Autofac example above).
This means that UnityAutoMockContainer and AutoFixture approach the concept of auto-mocking from two different directions. I can't really compare their auto-mocking capabilities as I don't know UnityAutoMockContainer well enough, but I can offer this on a more general level:
AutoFixture shares a lot of similarities with DI Containers (Unity included). It supports auto-wiring and it can be configured to create instances in lots of interesting ways. However, since the focus is different, it does some things better and some things not as well as a DI Container.
AutoFixture has more appropriate handling of primitive types. Normally a DI Container is not going to serve you a sequence of unique strings or numbers, whereas AutoFixture does. This was really the original idea that started it all.
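To illustrate (a small sketch using the AutoFixture 2.0 API from the examples above), repeated requests for primitives yield a sequence of unique values rather than a single registered instance:

```csharp
var fixture = new Fixture();

// Each request yields a new, unique value:
var s1 = fixture.CreateAnonymous<string>(); // a GUID-based string
var s2 = fixture.CreateAnonymous<string>(); // different from s1
var n1 = fixture.CreateAnonymous<int>();    // a small positive integer
var n2 = fixture.CreateAnonymous<int>();    // different from n1
```

A typical DI Container, asked twice for a string, would either fail or return the same registered instance both times.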
Most DI Containers are not going to try to populate writable properties, but AutoFixture does.
AutoFixture also has a more granular concept of what constitutes a request. For all DI Containers I know of, a request to resolve something is always based on a Type. AutoFixture, on the other hand, enables us to make arbitrary requests, which means that we can treat a request for a ParameterInfo differently than a request for a PropertyInfo (or a Type) even if they share the same Type. This sometimes comes in handy.
On the other hand AutoFixture is weaker when it comes to lifetime management. A Fixture is never expected to exist for more than a single test case, so it makes no sense to model any other lifestyle than Transient and Singleton. AutoFixture can do that, but nothing more. It has no Seam for implementing custom lifestyles and it does not offer Per Thread or Per HttpRequest lifestyles. It doesn't have to, because it's not a DI Container.
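The two lifestyles AutoFixture does model can be sketched like this (hypothetical usage, following the Freeze examples earlier in the post):

```csharp
var fixture = new Fixture();

// Transient (the default): each request yields a new instance.
var a = fixture.CreateAnonymous<Version>();
var b = fixture.CreateAnonymous<Version>(); // a different instance than a

// 'Singleton' within the fixture: Freeze pins one instance,
// and subsequent requests return that same instance.
var frozen = fixture.Freeze<Version>();
var c = fixture.CreateAnonymous<Version>(); // same instance as frozen
```

Since a Fixture lives no longer than a single test case, these two lifestyles are all it needs.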
In short I prefer AutoFixture for TDD because it's a more focused tool than any DI Container. On the other hand it means that there's one more new API to learn, although that's not an issue for me personally :)
I just started to play with AutoFixture and its auto-mocking support. As far as I can see, the auto-mocking extensions return mocks without functionality, especially without any data in the properties.
Example:
var anonymousParent = fixture.CreateAnonymous<ComplexParent>();
This example from the cheat sheet would return a filled fixture:
ComplexParent:
-Child: ComplexChild
--Name: string: "namef70b67ff-05d3-4498-95c9-de74e1aa0c3c"
--Number: int: 1
When using the AutoMoqCustomization with an interface IComplexParent with one property Child: ComplexChild, the result is strange:
1) Child is of type ComplexChildProxySomeGuid, i.e. it is mocked, although it is a concrete class
2) Child.Number is 0
3) Child.Name is null
I think this is inconsistent. Is it a bug or intentional? If it is intentional, could you please explain?
Thanks,
Daniel
This is, in my opinion, the correct design choice made by the Moq designers: an interface specifies only the shape of members - not behavior.
However, with Moq, you can turn on 'normal' property behavior by invoking the SetupAllProperties on the Mock instance. The AutoMoqCustomization doesn't do that, but it's certainly possible to use the underlying types used to implement it to compose a different AutoMoqCustomization that also invokes SetupAllProperties.
In any case, this is only half of the solution, because it would only enable the Mock instance to get 'normal' property behavior. The other half would then be to make the AutoMoqCustomization automatically fill those properties. This is possible by wrapping the ISpecimenBuilder instance responsible for creating the actual interface instance in a PostProcessor. The following discussions may shed a little more light on these matters: Default Greedy constructor behavior breaks filling and Filling sub properties.
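As a rough sketch of the first half (plain Moq, outside of AutoFixture; SetupAllProperties is standard Moq API, while IComplexParent is the interface from Daniel's question):

```csharp
var mock = new Mock<IComplexParent>();

// Give the mock 'normal' property behavior:
// assigned values are remembered and returned by the getter.
mock.SetupAllProperties();

mock.Object.Child = new ComplexChild();
// mock.Object.Child now returns the assigned instance instead of null
```

Composing an AutoMoqCustomization that performs this call (and then fills the properties) is the part that requires wrapping the relevant ISpecimenBuilder in a PostProcessor, as described above.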
In short, I don't consider it a bug, but I do respect that other people may want different base behavior than I find appropriate. It's definitely possible to tweak AutoFixture to do what you want it to do, but perhaps a bit more involved than it ought to be. AutoFixture 3.0 should streamline the customization API somewhat, I hope.
As a side note, I consider properties in interfaces a design smell, so that's probably also why I've never had any issues with the current behavior.
Please see the fork I created: https://hg01.codeplex.com/forks/dhilgarth/autonsubstitute.
If you are interested, I can send a pull request.
That's why I implemented it that way in AutoNSubstitute.
BTW, AutoMoq is in itself inconsistent: As I wrote in my original question, the property Child is not null but it contains a mock of ComplexChild. I understand your reasons for saying the properties should be null if a mock is returned, but then it should apply consistently to all properties.
Maybe it is a good idea to let the user of the customization choose what behavior he wants?
As I mentioned, it's not something I've ever given a great deal of thought because I almost never define properties on interfaces, but I agree that it might be more consistent, so I've created a work item for it - you could go and vote for it if you'd like :)
Regarding your observation about the Child property, that's once more the behavior we get from Moq... The current implementation of AutoMoq basically just hands all mocking concerns off to Moq and doesn't deal with them any further.
Regarding your fork for NSubstitute I only had a brief look at it, but please do send a pull request - then we'll take it from there :)
Thank you for your interest.
Comments
Thanks
The reason why the CurrencyCodeSpecimenBuilder is looking for a ParameterInfo instance is because the thing it's looking for is exactly the constructor parameter to the Money class.
If you instead want to match on a property, PropertyInfo is indeed the correct request to look for.
(and FieldInfo is used if you want to match on a public field...)
A request can be anything, but will often by a Type, ParameterInfo, or PropertyInfo.
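As a sketch, a builder that matches on a property instead of a constructor parameter might look like this (hypothetical class and property names, modeled on the CurrencyCodeSpecimenBuilder discussed above; the exact context interface and NoSpecimen signature depend on the AutoFixture version):

```csharp
public class CurrencyCodePropertyBuilder : ISpecimenBuilder
{
    public object Create(object request, ISpecimenContext context)
    {
        // Match a request for a property rather than a constructor parameter.
        var pi = request as PropertyInfo;
        if (pi == null ||
            pi.PropertyType != typeof(string) ||
            pi.Name != "CurrencyCode")
        {
            // Not a request we handle; let other builders deal with it.
            return new NoSpecimen();
        }
        return "DKK";
    }
}
```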
You can use the context argument passed to the Create method to resolve other values; you only need to watch out for infinite recursion: you can't ask for an unconditional string if the Specimen Builder you're writing handles unconditional strings.

I've written several custom ISpecimenBuilder implementations similar to your example above, and I always wince when testing for the ParameterType.Name value (i.e., if(pi.Name == "myParamName"){...}). It seems like it makes a test that uses such an implementation very brittle - no longer would I have the freedom to change the name of the parameter to suit my aesthetics, without relying on a refactoring tool (cough, cough, Resharper, cough, cough) to hopefully pick up on the string value in my test suite and prompt me to change it there as well.
This makes me think that I shouldn't need to be doing this, and that a design refactoring of my SUT would be a better option. Care to comment on this observation? Is there a common scenario/design trap that illustrates a better way? Or am I already in dangerous design territory based on my need to create an ISpecimenBuilder in the first place?
Jeff, thank you for writing. Your question warranted a new blog post; it may not answer all of your concerns, but hopefully some of them. Read it and let me know if you still have questions.