Tuesday, 08 November 2011 15:29:05 UTC

Now that my book about Dependency Injection is out, it's only fitting that I also invert my own dependencies by striking out as an independent consultant/advisor. In the future I'm hoping to combine my writing and speaking efforts, as well as my open source interests, with helping out clients write better code.

If you'd like to get help with Dependency Injection or Test-Driven Development, SOLID, API design, application architecture or one of the other topics I regularly cover here on my blog, I'm available as a consultant worldwide.

When it comes to Windows Azure, I'll be renewing my alliance with my penultimate employer Commentor, so you can also hire me as part of a larger deal with Commentor.

In case you are wondering what happened to my employment with AppHarbor, I resigned from my position there because I couldn't make it work with all the other things I'd also like to do. I still think AppHarbor is a very interesting project, and I wish my former employers the best of luck with their endeavor.

This has been a message from the blog's sponsor (myself). Soon, regular content will resume.


Well shux, I was waiting on pins and needles for some magic unicorn stuff from ya! I hear ya though, gotta have that liberty. :) I'm often in the same situation.

Best of luck to you, I'll be reading the blog as always.

BTW - Got the physical book finally, even though I'm no newb of IoC and such, I'd have loved a solid read when I was learning about the options back in the day. ;)

2011-11-08 16:11 UTC
Best of luck.

As with all other endeavours you set your mind to, you will for sure also excel as a free agent.
2011-11-08 16:58 UTC
Flemming Laugesen #
Congratulations, my friend - looking forward to taking advantage of your expertise :)

2011-11-08 19:45 UTC
Congratulations on your decision, and the very best of luck, I'm sure you'll have heaps of success.
2011-11-08 21:52 UTC
I wish you the best with your new adventure. I cannot thank you enough for all I learned from your book on Dependency Injection.
One of your Fans in USA,
2011-11-09 03:08 UTC
Congrats! Best of luck.
2011-11-09 09:00 UTC

SOLID concrete

Tuesday, 25 October 2011 15:01:15 UTC

Greg Young gave a talk at GOTO Aarhus 2011 titled Developers have a mental disorder, which was (semi-)humorously meant, but still addressed some very real concerns about the cognitive biases of software developers as a group. While I have no intention to provide a complete résumé of the talk, Greg said one thing that made me think a bit (more) about SOLID code. To paraphrase, it went something like this:

Developers have a tendency to attempt to solve specific problems with general solutions. This leads to coupling and complexity. Instead of being general, code should be specific.

This sounds correct at first glance, but once again I think that SOLID code offers a solution. Due to the Single Responsibility Principle each SOLID concrete (pardon the pun) class will tend to very specifically address a very narrow problem.

Such a class may implement one (or more) general-purpose interface(s), but the concrete type is specific.

The difference between the generality of an interface and the specificity of a concrete type becomes more and more apparent the better a code base applies the Reused Abstractions Principle. This is best done by defining an API in terms of Role Interfaces, which makes it possible to define a few core abstractions that apply very broadly, while implementations are very specific.

As an example, consider AutoFixture's ISpecimenBuilder interface. This is a very central interface in AutoFixture (in fact, I don't even know just how many implementations it has, and I'm currently too lazy to count them). As an API, it has proven to be very generally useful, but each concrete implementation is still very specific, like the CurrentDateTimeGenerator shown here:

public class CurrentDateTimeGenerator : ISpecimenBuilder
{
    public object Create(object request,
        ISpecimenContext context)
    {
        if (request != typeof(DateTime))
        {
            return new NoSpecimen(request);
        }

        return DateTime.Now;
    }
}
This is, literally, the entire implementation of the class. I hope we can agree that it's very specific.

In my opinion, SOLID is a set of principles that can help us keep an API general while each implementation is very specific.

In SOLID code all concrete types are specific.


nelson #
I don't agree with the "Reused Abstractions Principle" article at all. Programming to interfaces provides many benefits even in cases where "one interface, multiple implementations" doesn't apply.

For one, ctor injection for dependencies makes them explicit and increases readability of a particular class (you should be able to get a general idea of what a class does by looking at what dependencies it has in its ctor). However, if you're taking in more than a handful of dependencies, that is an indication that your class needs refactoring. Yes, you could accept dependencies in the form of concrete classes, but in such cases you are voiding the other benefits of using interfaces.

As far as API writing goes, using interfaces with implementations that are internal is a way to guide a person through what is important in your API and what isn't. Offering up a bunch of instantiable classes in an API adds to the mental overhead of learning your code - whereas only marking the "entry point" classes as public will guide people to what is important.

Further, as far as API writing goes, there are many instances where Class A may have a concrete dependency on Class B, but you wish to hide the methods that Class A uses to talk to Class B. In this case, you may create an interface (Interface B) with all of the public methods that you wish to expose on Class B and have Class B implement them, then add your "private" methods as simple, non-virtual, methods on Class B itself. Class A will have a property of type Interface B, which simply returns a private field of type Class B. Class A can now invoke specific methods on Class B that aren't accessible through the public API using its private reference to the concrete Class B.

Finally, there are many instances where you want to expose only parts of a class to another class. Let's say that you have an event publisher. You would normally only want to expose the methods that have to do with publishing to other code, yet that same class may include facilities that allow you to register handler objects with it. Using interfaces, you can limit what other classes can and can't do when they accept your objects as dependencies.

These are instances of what things you can do with interfaces that make them a useful construct on their own - but in addition to all of that, you get the ability to swap out implementations without changing code. I know that often times implementations are never swapped out in production (rather, the concrete classes themselves are changed), which is why I mention this last, but in the rare cases where it has to be done, interfaces make this scenario possible.

My ultimate point is that interfaces don't always equal generality or abstraction. They are simply tools that we can use to make code explicit and readable, and allow us to have greater control over method/property accessibility.
2011-10-25 18:15 UTC
The RAP fills the same type of role as unit testing/TDD: theoretically, it's possible to write testable code without writing a single unit test against it, but how do you know?

It's the same thing with the RAP: how can you know that it's possible to exchange one implementation of an interface with another if you've never tried it? Keep in mind that Test Doubles don't count because you can always create a bogus implementation of any interface. You could have an interface that's one big leaky abstraction: even though it's an interface, you can never meaningfully replace that single implementation with any other meaningful production implementation.

Also: using an interface alone doesn't guarantee the Liskov Substitution Principle. However, by following the RAP, you get a strong indication that you can, indeed, replace one implementation with another without changing the correctness of the code.
2011-10-25 18:56 UTC
nelson #
That was my point, though. You can use interfaces as a tool to solve problems that have nothing directly to do with substituting implementations. I think people see this as the only use case for the language construct, which is sad. These people then turn around and claim that you shouldn't be using interfaces at all, except for cases in which substituting implementations is the goal. I think this attitude disregards many other proper uses for the construct; the most important, I think, is being able to hide implementation details in the public API.

If an interface does not satisfy RAP, it does not absolutely mean that interface is invalid. Take the Customer and CustomerImpl types specified in the linked article. Perhaps the Customer interface simply exposes a public, readonly, interface for querying information about a customer. The CustomerImpl class, instantiated and acted upon behind the scenes in the domain services, may specify specific details such as OR/mapping or other behavior that isn't intended to be accessible to client code. Although a slightly contrived example (I would prefer the query model to be event sourced, not mapped to a domain model mapped to an RDBMS), I think this use is valid and should not immediately be thrown out because it does not satisfy RAP.
2011-10-25 20:15 UTC
On his bio it says that Greg Young writes for Experts Exchange. Maybe he's the one with the mental disorder :P
2011-10-26 04:16 UTC
Nelson, I think we agree :) To me, the RAP is not an absolute rule, but just another guideline/metric. If none of my interfaces have multiple implementations, I start to worry about the quality of my abstractions, but I don't find it problematic if some of my interfaces have only a single implementation.

Your discussion about interfaces fit well with the Interface Segregation Principle and the concept of Role Interfaces, and I've also previously described how interfaces act as access modifiers.
2011-10-26 08:37 UTC

Checking for exactly one item in a sequence using C# and F#

Tuesday, 11 October 2011 14:36:03 UTC

Here's a programming issue that comes up from time to time. A method takes a sequence of items as input, like this:

public void Route(IEnumerable<string> args)

While the signature of the method may be given, the implementation may be concerned with finding out whether there is exactly one element in the sequence. (I'd argue that this would be a violation of the Liskov Substitution Principle, but that's another discussion.) As a corollary, we might also be interested in the cases on each side of that single element: no elements and multiple elements.

Let's assume that we're required to raise the appropriate event for each of these three cases.

Naïve approach in C# #

A naïve implementation would be something like this:

public void Route(IEnumerable<string> args)
{
    var countCategory = args.Count();
    switch (countCategory)
    {
        case 0:
            // raise the 'no elements' event
            break;
        case 1:
            // raise the 'single element' event,
            // extracting the element with args.Single()
            break;
        default:
            // raise the 'multiple elements' event
            break;
    }
}
However, the problem with that is that IEnumerable<string> carries no guarantee that the sequence will ever end. In fact, there's a whole category of implementations that keep iterating forever - these are called Generators. If you pass a Generator to the above implementation, it will never return because the Count method will block forever.
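To make that concrete, here's a sketch of such a Generator, written as an iterator block (the function name is mine, for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

// A perfectly valid IEnumerable<string> that never terminates;
// calling Count() on it would loop forever.
static IEnumerable<string> EndlessArgs()
{
    while (true)
        yield return "arg";
}

// Lazy operators that read only a bounded prefix are still safe:
var firstTwo = EndlessArgs().Take(2).ToList();
```

Because iterator blocks are lazily evaluated, nothing bad happens until some caller attempts to exhaust the sequence.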

Robust implementation in C# #

A better solution comes from the realization that we're only interested in knowing about which of the three categories the input matches: No elements, a single element, or multiple elements. The last case is covered if we find at least two elements. In other words, we don't have to read more than at most two elements to figure out the category. Here's a more robust solution:

public void Route(IEnumerable<string> args)
{
    var countCategory = args.Take(2).Count();
    switch (countCategory)
    {
        case 0:
            // raise the 'no elements' event
            break;
        case 1:
            // raise the 'single element' event,
            // extracting the element with args.Single()
            break;
        default:
            // raise the 'multiple elements' event
            break;
    }
}

Notice the inclusion of the Take(2) method call, which is the only difference from the first attempt. This will give us at most two elements that we can then count with the Count method.

While this is better, it still annoys me that a secondary LINQ call (to the Single method) is necessary to extract that single element. Not that it's particularly inefficient, but it still feels like I'm repeating myself here.

(We could also have converted the Take(2) iterator into an array, which would have enabled us to query its Length property, as well as index into it to get the single value, but it basically amounts to the same work.)
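For illustration, here's a sketch of that array-based variant (the surrounding event-raising code is elided):

```csharp
using System.Linq;

// Sample input with exactly one element.
var args = new[] { "only" };

// Materialize at most two elements; Length then categorizes the
// input, and indexing extracts the single element without a
// second pass over the sequence.
var atMostTwo = args.Take(2).ToArray();

string single = null;
if (atMostTwo.Length == 1)
    single = atMostTwo[0];
```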

Implementation in F# #

In F# we can implement the same functionality in a much more compact manner, taking advantage of pattern matching against native F# lists:

member this.Route args =
    let atMostTwo = args |> Seq.truncate 2 |> Seq.toList
    match atMostTwo with
    | [] -> onNoArgument.Trigger(Unit.Default)
    | [arg] -> onSingleArgument.Trigger(arg)
    | _ -> onMultipleArguments.Trigger(args)

The first thing that happens here is that the input is piped through a couple of functions. The Seq.truncate function does the same thing as LINQ's Take method, and Seq.toList subsequently converts that sequence of at most two elements into a native F# list.

The beautiful thing about native F# lists is that they support pattern matching, so instead of first figuring out in which category the input belongs, and then subsequently extract the data in the single element case, we can match and forward the element in a single statement.

Why is this important? I don't know… it's just satisfying on an aesthetic level :)


a. #
string item = null;
var count = 0;

foreach (var current in args)
{
    item = current;
    count++;
    if (count == 2)
        break; // more than one element - no need to read further
}

if (count == 2)
{
    // multiple elements
}
else if (count == 1)
{
    // exactly one element - item holds it
}
2011-10-11 14:42 UTC

Weakly-typed versus Strongly-typed Message Channels

Friday, 23 September 2011 09:08:53 UTC

Soon after I posted my previous blog post on message dispatching without Service Location I received an email from Jeff Saunders with some great observations. Jeff has been so kind to allow me to quote his email here on the blog, so here it is:

“I enjoyed your latest blog post about message dispatching. I have to ask, though: why do we want weakly-typed messages? Why can't we just inject an appropriate IConsumer<T> into our services - they know which messages they're going to send or receive.

“A really good example of this is ISubject<T> from Rx. It implements both IObserver<T> (a message consumer) and IObservable<T> (a message producer) and the default implementation Subject<T> routes messages directly from its IObserver side to its IObservable side.

“We can use this with DI quite nicely - I have written an example in .NET Pad.

“The good thing about this is that we now have access to all of the standard LINQ query operators and the new ones added in Rx, so we can use a select query to map messages between layers, for instance.

“This way we get all the benefits of a weakly-typed IChannel interface, with the added advantages of strong typing for our messages and composability using Rx.

“One potential benefit of weak typing that could be raised is that we can have just a single implementation for IChannel, instead of an ISubject<T> for each message type. I don't think this is really a benefit, though, as we may want different propagation behaviour for each message type - there are other implementations of ISubject<T> that call consumers asynchronously, and we could pass any IObservable<T> or IObserver<T> into a service for testing purposes.”

These are great observations and I think that Rx holds much promise in this space. Basically you can say that in CQRS-style architectures we're already pushing events (and commands) around, so why not build upon what the framework offers?
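To illustrate Jeff's point, here's a minimal stand-in for Rx's Subject<T>, using only the BCL's IObserver<T> and IObservable<T> interfaces (the class names below are mine, for illustration; real Rx provides Subject<T> out of the box, with proper error and completion handling):

```csharp
using System;
using System.Collections.Generic;

// A naïve stand-in for Rx's Subject<T>: messages pushed in through
// the IObserver<T> side are routed directly to all subscribers on
// the IObservable<T> side.
public class NaiveSubject<T> : IObserver<T>, IObservable<T>
{
    private readonly List<IObserver<T>> observers =
        new List<IObserver<T>>();

    public IDisposable Subscribe(IObserver<T> observer)
    {
        this.observers.Add(observer);
        return new Unsubscriber(() => this.observers.Remove(observer));
    }

    public void OnNext(T value)
    {
        foreach (var observer in this.observers)
            observer.OnNext(value);
    }

    public void OnError(Exception error) { }
    public void OnCompleted() { }

    private class Unsubscriber : IDisposable
    {
        private readonly Action dispose;
        public Unsubscriber(Action dispose) { this.dispose = dispose; }
        public void Dispose() { this.dispose(); }
    }
}

// A trivial observer that forwards messages to a delegate.
public class DelegateObserver<T> : IObserver<T>
{
    private readonly Action<T> onNext;
    public DelegateObserver(Action<T> onNext) { this.onNext = onNext; }
    public void OnNext(T value) { this.onNext(value); }
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}
```

A service could then take an IObserver<RegisterUserCommand> dependency to send, while the Composition Root subscribes the appropriate consumer on the other side.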

Even if you find the IObserver<T> interface a bit too clunky with its OnNext, OnError and OnCompleted methods compared to the strongly typed IConsumer<T> interface, the question still remains: why do we want weakly-typed messages?

We don't, necessarily. My previous post wasn't meant as a particular endorsement of a weakly typed messaging channel. It was more an observation that I've seen many variations of this IChannel interface:

public interface IChannel
{
    void Send<T>(T message);
}

The most important thing I wanted to point out was that while the generic type argument may create the illusion that this is a strongly typed method, this is all it is: an illusion. IChannel isn't strongly typed because you can invoke the Send method with any type of message - and the code will still compile. This is no different than the mechanical distinction between a Service Locator and an Abstract Factory.
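A minimal demonstration of that illusion (the Send local function below merely mimics the shape of IChannel.Send<T>):

```csharp
using System;
using System.Collections.Generic;

var sent = new List<object>();

// Same shape as IChannel.Send<T>: with type inference, the
// compiler happily accepts any message type whatsoever.
void Send<T>(T message) => sent.Add(message);

Send("a string");
Send(42);
Send(new Uri("http://example.com")); // all compile: no type safety
```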

Thus, when defining a channel interface I normally prefer to make this explicit and instead model it like this:

public interface IChannel
{
    void Send(object message);
}

This achieves exactly the same and is more honest.

Still, this doesn't really answer Jeff's question: is this preferable to one or more strongly typed IConsumer<T> dependencies?

Any high-level application entry point that relies on a weakly typed IChannel can get by with a single IChannel dependency. This is flexible, but (just like with Service Locator), it might hide that the client may have (or (d)evolve) too many responsibilities.

If, instead, the client would rely on strongly typed dependencies it becomes much easier to see if/when it violates the Single Responsibility Principle.
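As a sketch of that contrast (the client class and its dependencies below are hypothetical, not from any particular code base):

```csharp
using System.Collections.Generic;

public interface IConsumer<T>
{
    void Consume(T message);
}

public class RegisterUserCommand { }
public class ChangeUserNameCommand { }

// With a single weakly typed IChannel dependency, the constructor
// reveals nothing about the message traffic. With strongly typed
// dependencies, each message type the client sends shows up in the
// constructor, so a growing parameter list becomes a visible
// Single Responsibility Principle warning sign.
public class UserManagementClient
{
    private readonly IConsumer<RegisterUserCommand> registerUser;
    private readonly IConsumer<ChangeUserNameCommand> changeUserName;

    public UserManagementClient(
        IConsumer<RegisterUserCommand> registerUser,
        IConsumer<ChangeUserNameCommand> changeUserName)
    {
        this.registerUser = registerUser;
        this.changeUserName = changeUserName;
    }

    public void RegisterUser()
    {
        this.registerUser.Consume(new RegisterUserCommand());
    }
}

// Recording stub, for demonstration purposes only.
public class RecordingConsumer<T> : IConsumer<T>
{
    public readonly List<T> Received = new List<T>();
    public void Consume(T message) { this.Received.Add(message); }
}
```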

In conclusion, I'd tend to prefer strongly typed Datatype Channels instead of a single weakly typed channel, but one shouldn't underestimate the flexibility of a general-purpose channel either.


Jeff #
Thanks for the response, Mark! We are in full agreement.
2011-09-23 09:20 UTC

Message Dispatching without Service Location

Monday, 19 September 2011 14:44:47 UTC

Once upon a time I wrote a blog post about why Service Locator is an anti-pattern, and ever since then, I occasionally receive rebuffs from people who agree with me in principle, but think that, still: in various special cases (the argument goes), Service Locator does have its uses.

Most of these arguments actually stem from mistaking the mechanics for the role of a Service Locator. Still, once in a while a compelling argument seems to come my way. One of the most insistent arguments concerns message dispatching - a pattern which is currently gaining in prominence due to the increasing popularity of CQRS, Domain Events and kindred architectural styles.

In this article I'll first provide a quick sketch of the scenario, followed by a typical implementation based on a 'Service Locator', and then conclude by demonstrating why a Service Locator isn't necessary.

Scenario: Message Dispatching #

Appropriate use of message dispatching internally in an application can significantly help decouple the code and make roles explicit. A common implementation utilizes a messaging interface like this one:

public interface IChannel
{
    void Send<T>(T message);
}

Personally, I find that the generic typing of the Send method is entirely redundant (not to mention heavily reminiscent of the shape of a Service Locator), but it's very common and not particularly important right now (but more about that later).

An application might use the IChannel interface like this:

var registerUser = new RegisterUserCommand(
    "Jane Doe" /* ... */);
channel.Send(registerUser);
// ...
var changeUserName = new ChangeUserNameCommand(
    "Jane Ploeh");
channel.Send(changeUserName);
// ...
var resetPassword = new ResetPasswordCommand(/* ... */);
channel.Send(resetPassword);

Obviously, in this example, the channel variable is an injected instance of the IChannel interface.

On the receiving end, these messages must be dispatched to appropriate consumers, which must all implement this interface:

public interface IConsumer<T>
{
    void Consume(T message);
}

Thus, each of the command messages in the example have a corresponding consumer:

public class RegisterUserConsumer : IConsumer<RegisterUserCommand>
public class ChangeUserNameConsumer : IConsumer<ChangeUserNameCommand>
public class ResetPasswordConsumer : IConsumer<ResetPasswordCommand>

This certainly is a very powerful pattern, so it's often used as an argument to prove that Service Locator is, after all, not an anti-pattern.

Message Dispatching using a DI Container #

In order to implement IChannel it's necessary to match messages to their appropriate consumers. One easy way to do this is by employing a DI Container. Here's an example that uses Autofac to implement IChannel, but any other container would do as well:

private class AutofacChannel : IChannel
{
    private readonly IComponentContext container;

    public AutofacChannel(IComponentContext container)
    {
        if (container == null)
            throw new ArgumentNullException("container");

        this.container = container;
    }

    public void Send<T>(T message)
    {
        var consumer = this.container.Resolve<IConsumer<T>>();
        consumer.Consume(message);
    }
}

This class is an Adapter from Autofac's IComponentContext interface to the IChannel interface. At this point I can always see the “Q.E.D.” around the corner: “look! Service Locator isn't an anti-pattern after all! I'd like to see you implement IChannel without a Service Locator.”

While I'll do the latter in just a moment, I'd like to dwell on the DI Container-based implementation for a moment.

  • Is it simple? Yes.
  • Is it flexible? Yes, although it has shortcomings.
  • Would I use it like this? Perhaps. It depends :)
  • Is it the only way to implement IChannel? No - see the next section.
  • Does it use a Service Locator? No.

While AutofacChannel uses Autofac (a DI Container) to implement the functionality, it's not (necessarily) a Service Locator in action. This was the point I already tried to get across in my previous post on the subject: just because its mechanics look like a Service Locator doesn't mean that it is one. In my implementation, the AutofacChannel class is a piece of pure infrastructure code. I even made it a private nested class in my Composition Root to underscore the point. The container is still not available to the application code, so it is never used in the Service Locator role.

One of the shortcomings of the above implementation is that it provides no fallback mechanism. What happens if the container can't resolve a matching consumer? Perhaps there isn't a consumer for the message. That's entirely possible because there are no safeguards in place to ensure that there's a consumer for every possible message.

The shape of the Send method enables the client to send any conceivable message type, and the code still compiles even if no consumer exists. That may look like a problem, but is actually an important insight into implementing an alternative IChannel class.

Message Dispatching using weakly typed matching #

Consider the IChannel.Send method once again:

void Send<T>(T message);

Despite its generic signature it's important to realize that this is, in fact, a weakly typed method (at least when used with type inference, as in the above example). Just as with a bona fide Service Locator, it's possible for a developer to define a new class (Foo) and send it - and the code still compiles:

channel.Send(new Foo());

However, at run-time, this will fail because there's no matching consumer. Despite the generic signature of the Send method, it contains no type safety. This insight can be used to implement IChannel without a DI Container.

Before I go on I should point out that I don't consider the following solution intrinsically superior to using a DI Container. However, readers of my book will know that I consider it a very illuminating exercise to try to implement everything with Poor Man's DI once in a while.

Using Poor Man's DI often helps unearth some important design elements of DI because it helps to think about solutions in terms of patterns and principles instead of in terms of technology.

However, once I have arrived at an appropriate conclusion while considering Poor Man's DI, I still tend to prefer mapping it back to an implementation that involves a DI Container.

Thus, the purpose of this section is first and foremost to outline how message dispatching can be implemented without relying on a Service Locator.

While this alternative implementation isn't allowed to change any of the existing API, it's a pure implementation detail to encapsulate the insight about the weakly typed nature of IChannel into a similarly weakly typed consumer interface:

private interface IConsumer
{
    void Consume(object message);
}

Notice that this is a private nested interface of my Poor Man's DI Composition Root - it's a pure implementation detail. However, given this private interface, it's now possible to implement IChannel like this:

private class PoorMansChannel : IChannel
{
    private readonly IEnumerable<IConsumer> consumers;

    public PoorMansChannel(params IConsumer[] consumers)
    {
        this.consumers = consumers;
    }

    public void Send<T>(T message)
    {
        foreach (var c in this.consumers)
            c.Consume(message);
    }
}
Notice that this is another private nested type that belongs to the Composition Root. It loops through all injected consumers, so it's up to each consumer to decide whether or not to do anything about the message.

A final private nested class bridges the generically typed world with the weakly typed world:

private class Consumer<T> : IConsumer
{
    private readonly IConsumer<T> consumer;

    public Consumer(IConsumer<T> consumer)
    {
        this.consumer = consumer;
    }

    public void Consume(object message)
    {
        if (message is T)
            this.consumer.Consume((T)message);
    }
}

This generic class is another Adapter - this time adapting the generic IConsumer<T> interface to the weakly typed (private) IConsumer interface. Notice that it only delegates the message to the adapted consumer if the type of the message matches the consumer.

Each implementer of IConsumer<T> can be wrapped in the (private) Consumer<T> class and injected into the PoorMansChannel class:

var channel = new PoorMansChannel(
    new Consumer<ChangeUserNameCommand>(
        new ChangeUserNameConsumer(store)),
    new Consumer<RegisterUserCommand>(
        new RegisterUserConsumer(store)),
    new Consumer<ResetPasswordCommand>(
        new ResetPasswordConsumer(store)));

So there you have it: type-based message dispatching without a DI Container in sight. It would be easy to use convention-based configuration to scan an assembly, register all IConsumer<T> implementations, wrap them in Consumer<T> instances, and use the resulting list to compose a PoorMansChannel instance - but I will leave that as an exercise for the reader (or a later blog post).

My claim still stands #

In conclusion, I find that I can still defend my original claim: Service Locator is an anti-pattern.

That claim, by the way, is falsifiable, so I do appreciate that people take it seriously enough to attempt to disprove it. However, so far I've yet to be presented with a scenario where I couldn't come up with a better solution that didn't involve a Service Locator.

Keep in mind that a Service Locator is defined by the role it plays - not the shape of the API.


SO User #
I'm not clear on how you would send a command (or inject IChannel) from another class if AutofacChannel is private to your Composition Root.
2011-09-19 19:37 UTC
AutofacChannel (and PoorMansChannel) is a private class that implements a public interface (IChannel). Since IChannel is a public interface, it can be consumed by any class that needs it.
2011-09-19 19:46 UTC
Bob walsh #
Hi Mark,

I'm helping a .NET vendor improve their blog by finding respected developers who will contribute guest posts. Each post will include your byline, URL, book link (with your Amazon affiliate link) and a small honorarium. It can either be a new post or one of your popular older posts.
Being an author myself, I know that getting in front of new audiences boosts sales, generates consulting opportunities and in this case, a little cash. Would you be interested? If so, let me know and I'll set you up.


Bob Walsh
2011-09-19 20:03 UTC
Thanks for contacting me about this. Yes, I'd like to discuss this further, but I think we should take this via e-mail so as to not tire all my other readers :) You can email me at mark(guess which sign goes here)
2011-09-19 20:10 UTC
Phil Sandler #
Hey Mark,

Looking forward to getting your book later this month.

I think it comes down to the definition of Service Locator. I'm not sure that the AutofacChannel example is much different from a common example I come up against, which is a ViewModel factory that more or less wraps a call to the container, and then gets injected as an IViewModelFactory into classes that need to create VMs. I don't feel that this is "wrong," as I only allow this kind of thing in cases where more explicit injection is significantly more painful. However, I do still think of it as Service Location, and it does violate the Three Calls Pattern. As long as I limit the number of places this is allowed and everyone is aware of them, I see little risk in doing it this way. Some might argue it's a slippery slope . . .
2011-09-19 20:39 UTC
Phil Sandler #
Off-topic question: in the Domain Events post you linked (Udi's), he uses a static Dispatcher for events, which gets called directly from the AR. In the comments, you talk about favoring having the Dispatcher injected into the AR.

Whether it is a static dependency or an injected instance, it seems unnatural to me to call a service directly from a domain object. I think I've seen you say the same thing, but I'm not sure in what context.

Anyway, was wondering if you had any additional thoughts on the subject. I've been struggling with it for a while, and have settled (temporarily) on firing events outside of the domain object (i.e. call the domain.Method, then fire the event from the command handler).
2011-09-19 20:48 UTC

Thanks for writing.

It's my experience that in MVVM one definitely needs some kind of factory to create ViewModels. However, there's no reason to define it as a Service Locator. A normal (generic) Abstract Factory is preferable, and achieves the same thing.

Regarding the question about whether or not to raise Domain Events from within Entities: it bothered me for a while until I realized that when you move towards CQRS and Event Sourcing, Commands and Events become first-class citizens and this problem tends to go away, because you can keep the logic about which commands raise which events decoupled from the Entities. In this architectural style, Entities are immutable, so nothing ever changes within them.

In CQRS we have consumers that consume Commands and Events, and typically we have a single consumer which is responsible for receiving a Command and (upon validation) converting it to an Event. Such a consumer is a Service which can easily hold other Services, such as a channel upon which Events can be published.
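As a sketch of such a consumer (all type names below are hypothetical, for illustration only): it receives a Command, validates it, and publishes a corresponding Event on an injected channel.

```csharp
using System;
using System.Collections.Generic;

public interface IChannel
{
    void Send<T>(T message);
}

public interface IConsumer<T>
{
    void Consume(T message);
}

public class RegisterUserCommand { public string Name; }
public class UserRegisteredEvent { public string Name; }

// The knowledge of which Command raises which Event lives in this
// Service, decoupled from the Entities themselves.
public class RegisterUserConsumer : IConsumer<RegisterUserCommand>
{
    private readonly IChannel events;

    public RegisterUserConsumer(IChannel events)
    {
        this.events = events;
    }

    public void Consume(RegisterUserCommand command)
    {
        if (string.IsNullOrEmpty(command.Name))
            throw new ArgumentException("Invalid command.", "command");

        this.events.Send(new UserRegisteredEvent { Name = command.Name });
    }
}

// Recording channel, for demonstration purposes only.
public class RecordingChannel : IChannel
{
    public readonly List<object> Sent = new List<object>();
    public void Send<T>(T message) { this.Sent.Add(message); }
}
```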
2011-09-20 07:48 UTC
Martin #
Hi Mark, just an off-topic comment: code samples in your posts are almost unreadable in Google Reader. Somehow there are lots of empty lines and the indenting is wrong. Don't know if there is anything you can do about it?
2011-09-20 14:41 UTC

I had that problem previously, but I thought I fixed it months ago. From where I'm sitting, the code looks OK both in Google Reader on the Web, Google Reader on Android as well as FeedDemon. Can you share exactly where and how it looks unreadable?
2011-09-21 06:23 UTC
Martin #
Hi Mark, I don't know if my previous post got lost or not. So again, here's a picture of what code snippets look like when I read them via Google Reader (in FF7, IE9, Chrome15):

2011-09-25 15:06 UTC
Martin, I agree that the screen shot looks strange, but across multiple machines and browsers I've been unable to reproduce it, so it's quite difficult for me to troubleshoot. Do you have any idea what might cause this problem?
2011-09-26 10:48 UTC
Martin #
Update: I found the reason for the problem: it seems I was subscribed to the Atom feed (/SyndicationService.asmx/GetAtom). After unsubscribing and resubscribing to the Rss feed (/SyndicationService.asmx/GetRss), the code snippets look OK.

2011-09-26 13:15 UTC
Good :) Thanks for the update.
2011-09-26 14:20 UTC
Hi Mark, Daniel again - sorry about not responding to your latest mails on the AutoNSubstitute fork - it's not dead on my side, I am just buried in work currently... I hope I can continue working on it in the next weeks.
Why I am posting:
I am currently designing the architecture of a new application and I want to design my domain models persistence ignorant, but still use them directly in the NHibernate mapping to benefit from lazy loading and to avoid near-identical entity objects. One part of a PI domain model is that the models might rely on services to do their work, which get injected using constructor injection. Now, NHibernate needs a default constructor by default, but that can be changed. In the middle of this post there is a class implementation called ReflectionOptimizer that is responsible for creating the entities. It uses an injected container to receive an instance of the requested entity type, or falls back to the default implementation of NHibernate if that type is unknown to the container.
Do you think this is using the container in a service locator role?
I think not, because a Poor Man's DI implementation of this class would get a list of factories, one for each supported entity and all of this is pure infrastructure.
The biggest benefit of changing the implementation in a way that it receives factories is that it fails fast: I am constructing all factories along with their dependencies in the composition root.

What is your view on this matter?
2011-10-09 10:14 UTC
Daniel, I think you reached the correct conclusion already. As you say, you could always create a Poor Man's implementation of that factory.

It'd be particularly clean if you could inject an Abstract Factory into your NHibernate infrastructure and keep the container itself isolated to the Composition Root. In any case I agree that this sounds like pure infrastructure code, so it doesn't sound like Service Location.

However, I'd think twice about injecting Services into Entities - see also this post from Miško Hevery.
2011-10-11 16:12 UTC
Simple #
My questions are about Service Bus - I haven't found any other place in your blog to ask them =)

If a Message Bus is used, what about layers? Should it be some kind of "superlayer", visible to all other layers?

In which situations do you think a Message Bus should be involved?

Is it better to implement one ourselves, or are there some good products (C#, non-commercial licence)?
2012-05-04 08:29 UTC
The way we tend to think about messaging-based applications today (e.g. with CQRS, or Udi Dahan-style SOA), messages are only related to the domain models. Thus, any messaging gateway (like an IBus interface or similar) is only required by the domain model.

It's not too hard to implement a message bus on top of a queue system, but it might be worth taking a look at NServiceBus or Rebus.
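For illustration only, the core idea can be sketched as a naive in-process bus (this is not a substitute for NServiceBus or Rebus, and all names here are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// A naive, single-threaded, in-process message bus sketch. Real service
// buses add queues, serialization, retries, and transactions on top of
// this basic publish/subscribe idea.
public class InProcessBus
{
    private readonly Dictionary<Type, List<Action<object>>> subscribers =
        new Dictionary<Type, List<Action<object>>>();

    public void Subscribe<T>(Action<T> handler)
    {
        List<Action<object>> handlers;
        if (!this.subscribers.TryGetValue(typeof(T), out handlers))
        {
            handlers = new List<Action<object>>();
            this.subscribers[typeof(T)] = handlers;
        }
        handlers.Add(m => handler((T)m));
    }

    public void Publish<T>(T message)
    {
        List<Action<object>> handlers;
        if (this.subscribers.TryGetValue(typeof(T), out handlers))
        {
            foreach (var handler in handlers)
            {
                handler(message);
            }
        }
    }
}
```

The domain model would only see a small IBus-like abstraction over Publish, keeping the actual queueing technology isolated behind the Composition Root.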
2012-05-04 08:46 UTC
Hi Mark, sorry about a very late question on this 2-year-old post.
I have implemented this pattern before, and everything works well when the return type is void.
So, for dispatching messages, this is a really useful and flexible pattern.

However, I was looking into implementing the same thing for a query dispatcher. The structure is similar, with the difference that your messages are queries and that the consumer actually returns a result.
I do have a working implementation, but I cannot get type inference to work on my query dispatcher. That means that every time I call the query dispatcher I need to specify both the return type and the query type.
This may seem a bit abstract, but you can check out this question on Stack Overflow: type inference with interfaces and generic constraints.
I'm aware that the way I'm doing it there is not possible with C#, but I was wondering if you'd see a pattern that would allow me to do that.
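To sketch the shape I'm working with (simplified, names hypothetical):

```csharp
// Simplified sketch of the query/handler shape under discussion.
public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

public interface IQueryDispatcher
{
    // C# can't infer TResult when it only appears in a generic
    // constraint, so the caller must write out both type arguments,
    // e.g. Dispatch<GetAnswer, int>(query).
    TResult Dispatch<TQuery, TResult>(TQuery query)
        where TQuery : IQuery<TResult>;
}

// A trivial example query and handler.
public class GetAnswer : IQuery<int> { }

public class GetAnswerHandler : IQueryHandler<GetAnswer, int>
{
    public int Handle(GetAnswer query)
    {
        return 42;
    }
}
```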
What is your view on this matter?
Many thanks!
2014-02-23 04:41 UTC

Kenneth, thank you for writing. Your question reminded me of my series about Role Hints, particularly Metadata, Role Interface, and Partial Type Name Role Hints. The examples in those posts aren't generically typed, but I wonder if you might still find those patterns useful?

2014-02-23 08:11 UTC

Hi Mark,
I am also using this pattern as an in-process mediator for commands, queries and events. My mediator (channel in your case) now uses a dependency resolver, which is a pure wrapper around a DI container and only contains resolve methods.

I am now trying to refactor the dependency resolver away by creating separate factories for the command, query and event handlers (in your example these are the consumers). My current code, and also yours and dozens of other implementations on the net, don't deal with releasing the handlers. My question is: should this be a responsibility of the mediator (channel)? I think so, because the mediator is the only place that knows about the existence of the handler. The problem I have with my answer is that the release of the handler is always called after a dispatch (send), even though the handler could be used again for sending another command of the same type during the same request (an HTTP request, for example). This implies that the handler's lifetime is per HTTP request.

Maybe I am thinking in the wrong direction, but I would like to hear your opinion about the problem of releasing handlers.
Many thanks in advance,
Martijn Burgers

2014-07-12 08:50 UTC

Martijn, thank you for writing. The golden rule for decommissioning is that the object responsible for composing the object graph should also be responsible for releasing it. That's what some DI Containers (Castle Windsor, Autofac, MEF) do - typically using a Release method. The reason for that is that only the Composer knows if there are any nodes in the object graph that should be disposed of, so only the Composer can properly decommission an object graph.

You can also implement that Resolve/Release pattern using Pure DI. If you're writing a framework, you may need to define an Abstract Factory with an associated Release method, but otherwise, you can just release the graph when you're done with it.
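A sketch of such an Abstract Factory with an associated Release method, implemented with Pure DI (the names here are not from any particular framework):

```csharp
using System;

// Sketch of a Resolve/Release pair implemented with Pure DI. The
// composer both creates and releases the graph, so only it needs to
// know whether anything in the graph is disposable.
public interface IHandler
{
    void Handle(object message);
}

public interface IHandlerFactory
{
    IHandler Create();
    void Release(IHandler handler);
}

public class FooHandler : IHandler, IDisposable
{
    public bool IsDisposed { get; private set; }

    public void Handle(object message) { }

    public void Dispose()
    {
        this.IsDisposed = true;
    }
}

public class FooHandlerFactory : IHandlerFactory
{
    public IHandler Create()
    {
        return new FooHandler();
    }

    public void Release(IHandler handler)
    {
        var disposable = handler as IDisposable;
        if (disposable != null)
            disposable.Dispose();
    }
}
```

The mediator only ever sees Create and Release; whether Release actually disposes of anything is the factory's (the composer's) knowledge alone.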

In the present article, you're correct that I haven't addressed the lifetime management aspect. The Composer here is the last code snippet that composes the PoorMansChannel object. As shown here, the entire object graph has the same lifetime as the PoorMansChannel object, but I don't describe whether or not it's a Singleton, Per Graph, Transient, or something else. However, the code that creates the PoorMansChannel object should also be the code that releases it, if that's required.

In my book's chapter 8, I cover decommissioning in much more detail, although I don't cover this particular example. I hope this answer helps; otherwise, please write again.

2014-07-12 14:35 UTC

AutoFixture goes Continuous Delivery with Semantic Versioning

Tuesday, 06 September 2011 20:34:42 UTC

For the last couple of months I've been working on setting up AutoFixture for Continuous Delivery (thanks to the nice people at for supplying the CI service) and I think I've finally succeeded. I've just pushed some code from my local Mercurial repository, and 5 minutes later the release was live on both the CodePlex site and the NuGet Gallery.

The plan for AutoFixture going forward is to maintain Continuous Delivery and switch the versioning scheme from ad hoc to Semantic Versioning. This means that you'll obviously see releases much more often, and version numbers will be incremented much more often. The versioning since the previous release incidentally ended at 2.2.44, but since the versioning scheme has now changed, you can expect to see 2.3, 2.4 etc. in rapid succession.

While I've been mostly focused on setting up Continuous Delivery, Nikos Baxevanis and Enrico Campidoglio have been busy writing new features:

Apart from these excellent contributions, other new features are

  • Added StableFiniteSequenceCustomization
  • Added [FavorArrays], [FavorEnumerables] and [FavorLists] attributes to extension
  • Added a Generator<T> class
  • Added a completely new project/package called Idioms, which contains convention-based tests (more about this later)
  • Probably some other things I've forgotten about…

While you can expect version numbers to increase more rapidly and releases to occur more frequently, I'm also beginning to think about AutoFixture 3.0. This release will streamline some of the API in the root namespace, which, I'll admit, was always a bit haphazard. For those people who care, I have no plans to touch the API in the Ploeh.AutoFixture.Kernel namespace. AutoFixture 3.0 will mainly target the API contained in the Ploeh.AutoFixture namespace itself.

Some of the changes I have in mind will hopefully make the default experience with AutoFixture more pleasant - I'm unofficially thinking about AutoFixture 3.0 as the 'pit of success' release. It will also enable some of the various outstanding feature requests.

Feedback is, as usual, appreciated.

Service Locator: roles vs. mechanics

Thursday, 25 August 2011 18:55:12 UTC

It's time to take a step back from the whole debate about whether or not Service Locator is an anti-pattern. It remains my strong belief that it is, while others disagree. Although everyone is welcome to think differently from me, I've noticed that some of the arguments being put forth in defense of Service Locator seem very convincing. However, I believe that in those cases we are no longer talking about Service Locator, but about something that merely looks an awful lot like it.

Some APIs are easy to confuse with a 'real' Service Locator. It probably doesn't help that last year I published an article on how to tell the difference between a Service Locator and an Abstract Factory. In this article I may have focused too much on the mechanics of Service Locator, but as Derick Bailey was so kind to point out, this hides the role the API might play.

To repeat that earlier post, a Service Locator looks like this:

public interface IServiceLocator
{
    T Create<T>(object context);
}

All Service Locators I've seen so far look like that, or some variation thereof, but that doesn't mean that the relationship is transitive. Just because an API looks like that doesn't automatically mean that it's a Service Locator.

If it was, all DI containers would be Service Locators. As an example, here's Castle Windsor's Resolve method:

public T Resolve<T>()

Even AutoFixture has an API like that:

MyClass sut = fixture.CreateAnonymous<MyClass>();

It has never been my intention to denounce every single DI container available, as well as my own open source framework. Service Locator is ultimately not identified by the mechanics of its API, but by the role it plays.

A DI container encapsulated in a Composition Root is not a Service Locator - it's an infrastructure component.

It becomes a Service Locator if used incorrectly: when application code (as opposed to infrastructure code) actively queries a service in order to be provided with required dependencies, then it has become a Service Locator.

Service Locators are spread thinly and pervasively throughout a code base - that is just as much a defining characteristic.
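To illustrate the difference in roles (rather than mechanics), contrast these two sketches; the types and the static Locator are hypothetical, shown only for illustration:

```csharp
using System;
using System.Collections.Generic;

public interface IOrderRepository
{
    void Save(int orderId);
}

public class InMemoryOrderRepository : IOrderRepository
{
    public readonly List<int> SavedIds = new List<int>();

    public void Save(int orderId)
    {
        this.SavedIds.Add(orderId);
    }
}

// A hypothetical static locator, shown only to illustrate the anti-pattern.
public static class Locator
{
    private static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>();

    public static void Register<T>(T service)
    {
        services[typeof(T)] = service;
    }

    public static T Resolve<T>()
    {
        return (T)services[typeof(T)];
    }
}

// Anti-pattern: application code queries the locator, so the
// dependency is hidden from callers.
public class LocatingOrderService
{
    public void Ship(int orderId)
    {
        var repository = Locator.Resolve<IOrderRepository>();
        repository.Save(orderId);
    }
}

// Preferred: the dependency is declared in the constructor; only the
// Composition Root at the application's entry point wires it up.
public class OrderService
{
    private readonly IOrderRepository repository;

    public OrderService(IOrderRepository repository)
    {
        if (repository == null)
            throw new ArgumentNullException("repository");
        this.repository = repository;
    }

    public void Ship(int orderId)
    {
        this.repository.Save(orderId);
    }
}
```

Both variants ultimately call the same Resolve-shaped mechanics somewhere; the difference is that in the second, only infrastructure code at the boundary ever touches the container.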


Jan #
Sorry to come back to this pattern/anti-pattern discussion, but if I get this right, a Service Locator per se is not an anti-pattern if used cannily and wisely.
The anti-pattern comes from a DI container used as a Service Locator?
2011-08-26 08:53 UTC
Service Locator is, in my opinion, always an anti-pattern. However, a class with a Service Locator-like signature does not in itself constitute a Service Locator - it has to be used like a Service Locator to be one.

A DI Container is not, in itself, a Service Locator, but it can be used like one. If you do that, it's an anti-pattern, but that doesn't mean that any use of a container constitutes an anti-pattern. When a container is used according to the Register Resolve Release pattern from within a Composition Root it's all good.
2011-08-26 09:22 UTC
wcoenen #
I am confused by your use of the word "transitive" here (and also in your book on page 3). Don't you mean "symmetric" instead of "transitive"?
2011-08-26 15:34 UTC
Yes, well, nobody's perfect :/ Thanks for pointing that out. My mistake. 'Symmetric' is the correct term - I'll see if I can manage to include that correction in the book. It's close, but I think it'll be possible to correct it.
2011-08-28 19:41 UTC
Hi, Mark,

I pop by your blog intermittently: it's good stuff.

For some reason, I always had a particular idea of the type of person you are; but for some other reason, I thought that this stemmed from something other than your (excellent) writings.

I've just realised what it is.

That photo of you, up there on the top right, your, "Contact," photo.

It looks like you have an ear-ring dangling from your left ear.
2011-09-03 16:38 UTC
Tuan Nguyen #
Hi Mark,

I have one concern about Composition Root. For WPF applications, you said that the Composition Root is at OnStartup. So, if I want to compose the main window and all other windows (with their view models) at once and in one place only (I mean at OnStartup), how could I do that? Thanks in advance!
2011-12-04 18:28 UTC
Composing an object graph in WPF is no different than in any other application. In WPF, using the MVVM pattern, you shouldn't compose Windows but rather ViewModels. In the case where the ViewModels rely on run-time values, you can use an Abstract Factory.
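A sketch of such an Abstract Factory, where the ViewModel depends on a run-time value (all names are hypothetical):

```csharp
// The ViewModel requires a run-time value (e.g. a selected customer id),
// so the Composition Root composes a factory instead of the ViewModel
// itself; consumers create ViewModels through the factory when needed.
public class CustomerViewModel
{
    public CustomerViewModel(int customerId)
    {
        this.CustomerId = customerId;
    }

    public int CustomerId { get; private set; }
}

public interface ICustomerViewModelFactory
{
    CustomerViewModel Create(int customerId);
}

public class CustomerViewModelFactory : ICustomerViewModelFactory
{
    public CustomerViewModel Create(int customerId)
    {
        return new CustomerViewModel(customerId);
    }
}
```

The factory itself has no run-time dependencies here, so the Composition Root can compose it eagerly at OnStartup, while actual ViewModels are created on demand.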
2011-12-05 10:46 UTC
Tuan Nguyen #
Thank you for your answer. I still have another question about your book. I am reading Chapter 14 - Part 4. You said that Unity doesn't have built-in support for auto-registration, so you suggested using reflection and the Register method. I tried to apply your solution to my example with Unity 2.3, but it didn't work. Can you see where I went wrong? Following is the source code:

namespace SimpleCSharpApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // resolve object using Unity
            IUnityContainer container = new UnityContainer();
            foreach (var t in typeof(Program).Assembly.GetExportedTypes())
            {
                if (typeof(IMessageWriter).IsAssignableFrom(t))
                {
                    container.RegisterType(typeof(IMessageWriter), t, t.FullName);
                }
            }
        }
    }

    public class Salutation
    {
        private readonly IMessageWriter writer;

        public Salutation(IMessageWriter writer)
        {
            this.writer = writer;
        }

        public void Exclaim()
        {
            writer.Write("Hello DI!");
        }
    }

    public interface IMessageWriter
    {
        void Write(string message);
    }

    public class ConsoleMessageWriter : IMessageWriter
    {
        public void Write(string message)
        {
            Console.WriteLine(message);
        }
    }
}
2011-12-07 18:06 UTC
Does the code download for the book work for you?
2011-12-07 18:21 UTC

Joining AppHarbor

Monday, 01 August 2011 14:03:10 UTC

I'm pleased to announce that I'll be joining AppHarbor as a developer. With my long-standing interest in TDD and OOD as well as my more recent interests in open-source .NET software, distributed source control systems, Continuous Delivery etc. AppHarbor seems like a perfect match for me.

Although AppHarbor is very attractive to me, this has been a difficult decision as Commentor has been a great employer. However, despite great customers I just don't feel like consulting at the moment. Since Safewhere went out of business I've been writing much less code than I'd liked, so when presented with an opportunity to join such a congenial outfit as AppHarbor I had few doubts.

I'll still be working out of Copenhagen, Denmark, and I also expect to keep up my usual community engagement at home as well as abroad.


Nikolaj Winnes #
BRAVO! Congratulations!
2011-08-01 14:07 UTC
That's a great reinforcement for AppHarbor :) Good luck at the new place!
2011-08-01 15:26 UTC
Sounds good; have fun!
2011-08-01 17:59 UTC
Fantastic news! Congratulations!
2011-08-02 06:43 UTC
Awesome!! Congratulations, I'm stoked for you & the AppHarbor Team. Those guys rock, and you'll be adding even more oomph to a great team.

Looking forward to the results of you guys working together! :)
2011-08-13 23:38 UTC

Composition Root

Thursday, 28 July 2011 15:22:04 UTC

In my book I describe the Composition Root pattern in chapter 3. This post serves as a summary description of the pattern.

The Constructor Injection pattern is easy to understand until a follow-up question comes up:

Where should we compose object graphs?

It's easy to understand that each class should require its dependencies through its constructor, but this pushes the responsibility of composing the classes with their dependencies to a third party. Where should that be?

It seems to me that most people are eager to compose as early as possible, but the correct answer is:

As close as possible to the application's entry point.

This place is called the Composition Root of the application and defined like this:

A Composition Root is a (preferably) unique location in an application where modules are composed together.

This means that all the application code relies solely on Constructor Injection (or other injection patterns), but is never composed. Only at the entry point of the application is the entire object graph finally composed.

The appropriate entry point depends on the framework:

  • In console applications it's the Main method
  • In ASP.NET MVC applications it's global.asax and a custom IControllerFactory
  • In WPF applications it's the Application.OnStartup method
  • In WCF it's a custom ServiceHostFactory
  • etc.

(you can read more about framework-specific Composition Roots in chapter 7 of my book.)
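In a console application, for example, the entire graph can be composed in the Main method with Pure DI. The sketch below reuses the book's Hello DI example:

```csharp
using System;

// Application classes, composed only at the entry point.
public interface IMessageWriter
{
    void Write(string message);
}

public class ConsoleMessageWriter : IMessageWriter
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

public class Salutation
{
    private readonly IMessageWriter writer;

    public Salutation(IMessageWriter writer)
    {
        this.writer = writer;
    }

    public void Exclaim()
    {
        this.writer.Write("Hello DI!");
    }
}

public class Program
{
    public static void Main(string[] args)
    {
        // The Composition Root: the only place where classes
        // are wired together into an object graph.
        IMessageWriter writer = new ConsoleMessageWriter();
        var salutation = new Salutation(writer);
        salutation.Exclaim();
    }
}
```

Neither Salutation nor ConsoleMessageWriter knows anything about how it gets composed; only Main does.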

The Composition Root is an application infrastructure component.

Only applications should have Composition Roots. Libraries and frameworks shouldn't.

The Composition Root can be implemented with Pure DI (previously known as Poor Man's DI), but is also the (only) appropriate place to use a DI Container.

A DI Container should only be referenced from the Composition Root. All other modules should have no reference to the container.

Using a DI Container is often a good choice. In that case it should be applied using the Register Resolve Release pattern entirely from within the Composition Root.

Read more in Dependency Injection Principles, Practices, and Patterns.


David Martin #
Reading the pre-release and it's very good. As for the single composition root: what about complex apps with several deep features? These features could be considered by some to be mini-apps in and of themselves. Would it be appropriate to consider each feature to have its own composition root? I'm thinking of Unity's child containers here. Hopefully the question makes sense; as I use your book as ammo at work I come across this particular question.
2011-08-02 03:24 UTC
Each application/process requires only a single Composition Root. It doesn't matter how complex the application is.

The earlier you compose, the more you limit your options, and there's simply no reason to do that.

You may or may not find this article helpful - otherwise, please write again :)
2011-08-02 07:56 UTC
Erkan Durmaz #
Hi Mark,

My question is quite parallel to David’s. How about we have a complex application that loads its modules dynamically on-demand, and these modules can consist of multiple assemblies.

For example, a client application that has a page-based navigation. Different modules can be deployed and undeployed all the time. At design-time we don’t know all the types we wanna compose and we don’t have direct reference to them.

Should we introduce some infrastructure code, like David suggested, to let the modules register their own types (e.g. services) to a child container, then we apply “resolve” on the loaded module and dispose the child container when we are done with the module?

Looking forward to the final release of your book BTW :-)

2011-09-29 11:57 UTC
The problem with letting each module register their own types to any sort of container is that it tightly couples the modules (and any application in which you might want to use them) to a specific container. Even worse, if you ever need to consume a service provided by one module from another module, you'd have to couple these two to each other. Pretty soon, you'll end up with a tightly coupled mess.

One option is to use whichever DI Container you'd like from the Composition Root, and use its XML capabilities to configure the modules into each particular application that needs them. That's always possible, but adds some overhead, and is rarely required.

A better option is to simply drop all desired modules in an add-in folder and then use convention-based rules to scan each assembly. The most difficult part of that exercise is that you have to think explicitly about cardinality, but that's always an issue with add-in architectures. The MEF chapter of my book discusses cardinality a bit.
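As a sketch, such convention-based scanning could look like this; the IModule contract and the scanner are hypothetical names, and in a real Composition Root the assemblies would come from the add-in folder:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// A hypothetical add-in contract.
public interface IModule
{
    void Initialize();
}

public class SalesModule : IModule
{
    public void Initialize() { }
}

public static class ConventionScanner
{
    // Returns all concrete, public types implementing the contract in
    // the given assemblies, so the Composition Root can register them
    // by convention rather than by explicit reference.
    public static IEnumerable<Type> FindImplementations(
        Type contract, params Assembly[] assemblies)
    {
        return assemblies
            .SelectMany(a => a.GetExportedTypes())
            .Where(t => contract.IsAssignableFrom(t)
                && !t.IsAbstract
                && !t.IsInterface);
    }
}

// In the Composition Root, the assemblies could come from an add-in
// folder, e.g.:
//   Directory.GetFiles(folder, "*.dll").Select(Assembly.LoadFrom)
```

Cardinality is then the remaining design decision: does the application expect exactly one implementation of each contract, or any number of them?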
2011-10-03 06:18 UTC
I left your book at home - never mind, it's here on my desk. I was going to ask what your composition root for WebForms looked like, but here it is on pg. 227.

I think I'm headed in this direction:
The UI depends on the presentation layer and a pure interfaces/DTOs layer (with any serialization attributes/methods needed), so the structure/schema of the 'model' lives in that domain-schema layer. The domain model depends on the domain-schema layer, but is purely behaviors (methods/logic). In this way there's a sharing of schema/DTOs between the service layer and the presentation layer, with the clean separation of all business logic - and only business logic - living in the domain layer. So outside of the composition root, neither the UI, the presentation layer, nor the resource access layers (public API -> internal adapter -> individual resource) can touch the domain methods or types. This requires that we define an interface for the domain model classes, I think? Cross-cutting concerns would live in their own assembly.

Any thoughts?
2011-10-21 16:09 UTC
Why would your UI depend on DTOs? A Data Transfer Object plays the role of a boundary data carrier, just like View Model might do - they fit architecturally at the same place of application architecture: at the boundary. If you base your inner architecture upon DTOs you'll be headed in the direction of an Anemic Domain Model, so that doesn't sound a particularly good idea to me...
2011-10-23 09:05 UTC
I'm only up to reading mid-way through your chapter on doing it right (and jumped to the webforms composition root after reading this blog post), but here is more of my reasoning.

The catch comes when you look at the behavior, and you realize that there is hardly any behavior on these objects, making them little more than bags of getters and setters. Indeed often these models come with design rules that say that you are not to put any domain logic in the domain objects. (Anemic Domain Model - Martin Fowler)

It becomes a 'real' domain model when it contains all (or most) of the behaviour that makes up the business domain (note I'm emphasising business logic, not UI or other orthogonal concerns). (Anemic Domain Models - Stack Overflow)

I don't feel that this is headed into an Anemic domain model as the domain model assembly would contain all behaviors specific to the domain.

Designs that share very little state or, even better, have no state at all tend to be less prone to hard-to-analyze bugs and easier to repurpose when requirements change. (Blog article - makes sense, but I could be missing multiple boats here.)

The domain would have zero state, and would depend on interfaces not DTOs. This would make test-ability and analysis simple.

Motivation for DTOs in the interfaces layer: presentation and persistence could all share the DTOs rather than having them duplicated in either place.
Barring that perceived gain, the shared layer across all would just be interfaces.

The UI could depend on interfaces only, but the same assembly would contain DTOs that the presentation layer and any persistence could share without having to be duplicated/rewritten. So the domain is completely persistence and presentation ignorant.
2011-10-25 00:17 UTC
It may just be that we use the terminology different, but according to Patterns of Enterprise Application Architecture "a Data Transfer Object is one of those objects our mothers told us never to write. It's often little more than a bunch of fields and the getters and setters for them."

The purpose of a DTO is "to transfer multiple items of data between two processes in a single method call." That is (I take it) not what you are attempting to do here.

Whether or not you implement the business logic in the same assembly as your DTOs has nothing to with avoiding an Anemic Domain Model. The behavior must be defined by the object that holds the data.
2011-10-25 06:46 UTC
Jordan Morris #
I appreciate your exhortations to good architecture, but I am struggling to apply them in my situation.

I am writing a WPF system tray application which has a primary constraint to keep lean, especially in terms of performance and memory usage. This constraint, along with the relatively low complexity of the application, means that I cannot justify the overhead of MVVM. In the absence of a prescribed architecture for building the object graph, I could set one up 'manually' in Application.OnStartup. Presumably it would mean instantiating all the Window objects (with dependencies), which are utilised from time to time by the user.

However, I have a problem with these Window instances sitting in memory, doing nothing, 90% of the time. It seems much more sensible to me to instantiate these Window objects in the event handlers where the user asks for them. Yet I want to maintain inversion of control, so how can I avoid accessing the DI container at multiple points in my app?
2013-01-12 10:09 UTC
Jordan, would the Virtual Proxy pattern be of help here?

You can use one of the solutions outlined in my recent series on Role Hints to avoid referencing a container outside of the Composition Root.
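A rough sketch of such a Virtual Proxy around an expensive window (all type names here are hypothetical):

```csharp
using System;

// The expensive resource hides behind an interface...
public interface IWindow
{
    void Show();
}

public class HeavyWindow : IWindow
{
    public HeavyWindow()
    {
        // Imagine expensive GUI and data initialization here.
    }

    public void Show() { }
}

// ...and the Virtual Proxy defers creating it until first use, so the
// proxy can sit in the object graph without consuming much memory.
public class LazyWindowProxy : IWindow
{
    private readonly Lazy<IWindow> window;

    public LazyWindowProxy()
    {
        this.window = new Lazy<IWindow>(() => new HeavyWindow());
    }

    public bool IsCreated
    {
        get { return this.window.IsValueCreated; }
    }

    public void Show()
    {
        this.window.Value.Show();
    }
}
```

Consumers depend on IWindow and never know whether they hold the real window or the proxy, so inversion of control is preserved without touching a container outside the Composition Root.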
2013-01-12 21:17 UTC
Jordan Morris #
Hi Mark

I wanted to post the following comment on your linked page, here
but I am encountering a page error on submission.

The above article suggests a great framework for delayed construction of resources, however I have a different reason for wanting this than loading an assembly.

In my scenario I have occasionally-needed Window objects with heavy GUIs, presenting lots of data. The GUI and data only need to be in memory while the Window is displayed. The lazy loading approach seems to apply, except it leaves me with a couple of questions.

1) When the window is closed again, I want to release the resources. How can I unload and reload it again?

2) What would this line look like when using an IoC like Ninject?

> new Lazy<ISolo<IMarker>>(() => new C1(

An alternative I am considering is putting classes defining events in the object graph, instead of the windows themselves. These event classes could be constructed with instances of dependencies which will be used in the event methods to construct Windows when needed. The dependencies I want to inject to maintain inversion of control are light-weight objects like persistence services which will exist as singletons in memory anyway (due to the way most IoCs work). Do you see any problem with this?

Many thanks.
2013-01-13 23:07 UTC
I'll refer you to section 6.2 in my book regarding lifetime management and the closing of resources. However, if things aren't disposable, you can also just let them go out of scope again and let the garbage collector take care of cleaning up.

I can't help you with Ninject.

2013-01-18 10:03 UTC
Allan Friis Hansen #
I would like to touch and expand upon the subject David Martin raised regarding complex applications with deep features.
You make the point that only the composition root should register dependencies of various layers and components, and that thereby only the project containing the composition root would have a reference to the DI container being used.

In a couple of projects I've worked on we have multiple applications, and thereby multiple composition roots - websites, service endpoints, console applications etc. - giving us around 10 application endpoints that need their own composition roots. But at the same time we work with a component-based design which nicely encapsulates different aspects of the system.

If I were to only do my registration code within the composition roots, then applications using the same components would have to register exactly the same dependencies per component, leading to severe code duplication.
At this point we've gone with a solution where every component has its own registration class which the composition roots then register, so basically our composition roots only compose another level of composition roots. I hope it makes sense!

What I'm explaining here directly violates the principles you make in this post, which I already sensed it would. But at the same time I don't know how else to manage this situation effectively. Do you have any ideas for how to handle this type of situation?
2014-08-02 22:28 UTC

Allan, thank you for writing. In the end, you'll need to do what makes sense to you, and based on what you write, it may make sense to do what you do. Still, my first reaction is that I would probably tend to worry less about duplication than you seem to do. However, I don't know your particular project: neither how big your Composition Roots are, or how often they change. In other words, it's easy enough for me to say that a bit of duplication is okay, but that may not fit your reality at all.

Another point is that, in my experience at least, even if you have different applications (web sites, services, console applications, etc.) as part of the same overall system, and they share code, the way they should be composed tend to diverge the longer they live. This is reminiscent of Mathias Verraes' point that duplication may be coincidental, but that the duplicated code may deviate from each other in the future. This could also happen for such applications; e.g. at one point, you may wish to instrument some of your web site's dependencies, but not your batch job.

My best advice is to build smaller systems, in order to keep complexity down, and then build more of them, rather than building big systems. Again, that advice may not be useful in your case...

Another option, since you mention component-based design, is to move towards a convention-based approach, so that you don't have to maintain those Composition Roots at all: just drop in the binaries, and let your DI Container take care of the rest. Take a look at the Managed Extensibility Framework (MEF) for inspiration. Still, while MEF exposes some nice ideas, and is a good source of inspiration, I would personally chose to do something like this with a proper DI Container that also supports run-time Interception and programmable Pointcuts.

In the end, I think I understand your problem, but my overall reaction is that you seem to have a problem you shouldn't have, so my priority would be:

  1. Remove the problem altogether.
  2. Live with it.
  3. If that's not possible, solve it with technology.

2014-08-08 12:02 UTC

It seems that I have a certain inclination for reopening "old" posts. Still, they may be considered evergreens! :) Now back on topic.

When it comes to well known frameworks such as ASP.NET MVC, Web API, or WCF, it is quite clear where and how to set up our composition root. But what if we do not have a clear "entry point" to our code? Imagine that you are writing an SDK, laying down some classes that will be used by other developers. Now, you have a class that exposes the following constructor.

public MyClass(IWhatEver whatEver)

Consider also that whoever is going to use this class has no idea about IWhatEver, nor should they. To make the usage of MyClass as simple as possible, I should be able to instantiate MyClass via a parameterless constructor. I had the idea of making the constructor that is used for DI internal, with the only publicly available one being the parameterless constructor, and then somehow fetching the instances of my dependencies in the parameterless constructor. Now imagine that I have several classes like MyClass, and that I do have a common set of "services" that are injected into them. Now, my questions are:
  1. Can I still have a single composition root?
  2. How do I trigger the composition?
  3. Does recreating the composition root for each "high level" class have a significant performance impact (considering a dependency tree of, let's say, 1000 objects)?
  4. Is there a better way (pattern) to solve a problem like this?
Some of these questions may be slightly off topic; still, the main question is how to deal with the composition root in cases like this. Hoping you will bring some clarity, I thank you in advance.
2014-09-18 07:17 UTC

Mario, thank you for writing. If you're writing an SDK, you are writing either a library or a framework, so the Composition Root pattern isn't appropriate (as stated above). Instead, consider the guidelines for writing DI friendly libraries or DI friendly frameworks.

So, in order to answer your specific questions:

1: Conceptually, there should only be a single Composition Root, but it belongs to the application, so as a library developer, the answer is no: 'you' can't have any Composition Root.

2: A library shouldn't trigger any composition, but as I explain in my guidelines for writing DI friendly libraries, you can provide Facades to make the learning curve gentle. For frameworks, you may need to provide appropriate Abstract Factories.

3: The performance impact of Composition shouldn't be a major concern. However, the real problem of attempting to apply the Composition Root pattern where it doesn't apply is that it increases coupling and takes away options that the client developer may want to utilize; e.g. how can a client developer instrument a sub-graph with Decorators if that sub-graph isn't available?
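To illustrate the point about Decorators, here's a hypothetical sketch; the `IParser` interface and all names are invented for this example. A client developer can only wrap a dependency like this if the constructor that accepts the dependency is public:

```csharp
// Hypothetical sketch: a client developer instruments a dependency with
// a timing Decorator. All names are invented for illustration.
using System;

public interface IParser
{
    int Parse(string text);
}

public class TimingParser : IParser
{
    private readonly IParser inner;

    public TimingParser(IParser inner)
    {
        if (inner == null)
            throw new ArgumentNullException("inner");
        this.inner = inner;
    }

    public int Parse(string text)
    {
        var watch = System.Diagnostics.Stopwatch.StartNew();
        try
        {
            // Delegate to the decorated instance; the Decorator only
            // adds instrumentation around the call.
            return this.inner.Parse(text);
        }
        finally
        {
            watch.Stop();
            Console.WriteLine("Parse took " + watch.Elapsed);
        }
    }
}
```

If the sub-graph is composed behind internal constructors, there's no seam where the client developer could insert `TimingParser`; keeping the constructors public preserves that option.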

4: There are better ways to design DI friendly libraries and DI friendly frameworks.

2014-09-18 08:12 UTC

Thank you for your quick and valuable reply.

I read your posts and gave them some thought. Still, I am not convinced that what is described in those posts tackles my problem. Even more, I'm realizing that it is less and less relevant to the topic of the composition root. However, let me try to describe the situation.

I plan to write an SDK (a library that will allow developers to interact with my application and use some of its functionality). Consider that I'm exposing a class called FancyCalculator. FancyCalculator needs to get some information from the settings repository. Now, I have a dependency on an ISettingsRepository implementation, and it is injected via a constructor into my FancyCalculator. Nevertheless, whoever is going to use the FancyCalculator doesn't know, and shouldn't know, anything about ISettingsRepository and its dependency tree. He is expected only to make an instance of FancyCalculator and call a method on it. I do have a single implementation of ISettingsRepository in the form of the SettingsRepository class. In this case, how do I get to create an instance of SettingsRepository once my FancyCalculator is created?

A Factory? Service locator? Something else?

The Composition Root in applications like MVC, Web API, etc. is a very nice and clean approach, but what about situations where we do not have a clean, single entry point to the application?

Thank you again!

2014-09-18 12:57 UTC

Mario, thank you for writing again. As far as I understand, you basically have this scenario:

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;
}

Then you say: "I have a dependency on an ISettingsRepository implementation, and it is injected via a constructor into my FancyCalculator". Okay, that means this:

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;

    public FancyCalculator(ISettingsRepository settingsRepository)
    {
        if (settingsRepository == null)
            throw new ArgumentNullException("settingsRepository");
        this.settingsRepository = settingsRepository;
    }
}

But then you say: "whoever is going to use the FancyCalculator doesn't know, and shouldn't know, anything about ISettingsRepository and its dependency tree."

That sounds to me like mutually exclusive constraints. Why are you injecting ISettingsRepository into FancyCalculator if you don't want to enable the user to supply any implementation of ISettingsRepository? If you don't want to allow that, the solution is easy:

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;

    public FancyCalculator()
    {
        this.settingsRepository = new DefaultSettingsRepository();
    }
}

If you want the best of both worlds, the solution is the Facade pattern I already described in my DI Friendly library article.
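A sketch of that 'best of both worlds' Facade, using Constructor Chaining: the parameterless constructor supplies a default and delegates to the public greedy constructor. `DefaultSettingsRepository` stands in for the library's default implementation, and `ISettingsRepository` is shown here as an empty marker interface only to keep the sketch self-contained:

```csharp
using System;

// Minimal stand-ins so the sketch compiles on its own.
public interface ISettingsRepository { }
public class DefaultSettingsRepository : ISettingsRepository { }

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;

    // Facade: the parameterless constructor chains to the greedy one,
    // supplying a sensible default for the common case.
    public FancyCalculator() : this(new DefaultSettingsRepository()) { }

    // Still public, so advanced users keep the option of supplying
    // their own implementation (e.g. a Decorator around the default).
    public FancyCalculator(ISettingsRepository settingsRepository)
    {
        if (settingsRepository == null)
            throw new ArgumentNullException("settingsRepository");
        this.settingsRepository = settingsRepository;
    }
}
```

Newcomers write `new FancyCalculator()` and never hear about ISettingsRepository; power users still have the seam.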

2014-09-19 19:56 UTC

Hi Mark, you got the example right. The reasons for my choices are the following. I do not want the user of my library to know about the dependencies, for a couple of reasons. First of all, it needs to be as simple as possible to use. Second, the dependencies are, let's say, "internal", so that the code is loosely coupled and testable. The user should not even know about them or get bothered at any point. Still, I do not think it's wise to create instances of the dependencies in the parameterless constructor. Why? Well, I am concerned about maintainability. I would like to somehow request the default instance of my dependency in the parameterless constructor and get it, so that I have a single point in my SDK where the default dependency is specified, and also so that I do not need to handle the dependency tree. The fact is that I can't figure out what this something should be. I re-read chapter 5 of your book this weekend to see if I could come up with some valid ideas. What I came up with is that I should use the parameterless constructor of each of my classes to handle its own default dependencies. In this way, resolving the tree should be fine, and it is easy to maintain. Also, to prevent the user from seeing the constructors that require the dependencies to be injected (and getting confused), I thought of declaring these constructors as internal. These are my words translated into code (easier to understand).

class Program
{
    static void Main(string[] args)
    {
        // Using the SDK by others
        FancyCalculator calculator = new FancyCalculator();
    }
}

public interface ISettingsRepository { }
public interface ILogging { }
public class Logging : ILogging { }

public class SettingsRepository : ISettingsRepository
{
    private readonly ILogging logging;

    public SettingsRepository()
    {
        this.logging = new Logging();
    }

    internal SettingsRepository(ILogging logging)
    {
        if (logging == null)
            throw new ArgumentNullException("logging");
        this.logging = logging;
    }
}

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;

    public FancyCalculator()
    {
        this.settingsRepository = new SettingsRepository();
    }

    internal FancyCalculator(ISettingsRepository settingsRepository)
    {
        if (settingsRepository == null)
            throw new ArgumentNullException("settingsRepository");
        this.settingsRepository = settingsRepository;
    }

    public int DoSomeThingFancy() { return 1; }
}

What do you think about a solution like this? What are the bad sides of this approach? Is there any pattern that encapsulates a practice like this?

2014-09-22 10:07 UTC

Mario, you don't want the user of your library to know about the dependencies, in order to make it easy to use the SDK. That's fine: this goal is easily achieved with one of those Facade patterns I've already described. With either Constructor Chaining, or use of the Fluent Builder pattern, you can make it as easy to get started with FancyCalculator as possible: just invoke its parameterless constructor.
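For comparison, here's a minimal Fluent Builder sketch; it reuses the names from Mario's example, but the builder itself is invented for illustration. The key difference from the internal-constructor design is that every constructor stays public:

```csharp
using System;

// Minimal stand-ins so the sketch compiles on its own.
public interface ISettingsRepository { }
public class SettingsRepository : ISettingsRepository { }

public class FancyCalculator
{
    private readonly ISettingsRepository settingsRepository;

    public FancyCalculator(ISettingsRepository settingsRepository)
    {
        if (settingsRepository == null)
            throw new ArgumentNullException("settingsRepository");
        this.settingsRepository = settingsRepository;
    }

    public int DoSomeThingFancy() { return 1; }
}

// Fluent Builder Facade: an easy default, with every seam left open.
public class FancyCalculatorBuilder
{
    // The single place where the default dependency is specified.
    private ISettingsRepository settingsRepository =
        new SettingsRepository();

    public FancyCalculatorBuilder WithSettingsRepository(
        ISettingsRepository settingsRepository)
    {
        if (settingsRepository == null)
            throw new ArgumentNullException("settingsRepository");
        this.settingsRepository = settingsRepository;
        return this;
    }

    public FancyCalculator Build()
    {
        return new FancyCalculator(this.settingsRepository);
    }
}
```

`new FancyCalculatorBuilder().Build()` gives newcomers the simple experience Mario wants, while `WithSettingsRepository` keeps the extensibility seam open; note this only works because the greedy constructor remains public.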

What then, is your motivation for wanting to make the other constructors internal? What do you gain from doing that?

Such code isn't loosely coupled, because only classes internal to the library can use those members. Thus, such a design violates the Open/Closed Principle, because these classes aren't open for extension.

2014-09-13 6:48 UTC

Dear Mark,

I am pretty sure that Mario uses internal code in order to be able to write unit tests. Probably the interfaces of the dependencies which he is hiding should also be internal. I think it is a good approach because, thanks to it, he can safely refactor those interfaces, because they are not a public API, and he can also have unit tests.

Yes - "Such code isn't loosely coupled, because only classes internal to the library can use those members"; however, I think that a stable API is very often very important for libraries.

Could you give any comments on that?

2014-09-25 20:10 UTC

Robert, thank you for writing. Your guess sounds reasonable. Even assuming that this is the case, I find it important to preface my answer with the caution that since I don't know the entire context, my answer is, at best, based on mainstream scenarios I can think of. There may be contexts where this approach is, indeed, the best solution, but in my experience, this tends not to be the case.

To me, the desire to keep an API stable leads to APIs that are so locked down that they are close to useless unless you happen to be so incredibly lucky that you're right on the path of a 'supported use case'. Over the years, I've worked with many object models in the .NET Base Class Library, where I've wanted it to do something a little out of the ordinary, only to find myself in a cul-de-sac of internal interfaces, sealed classes, or internal virtual methods. Many other people have had the same problems with .NET, which, I believe, has been a contributing factor causing so many of the brightest programmers to leave the platform for other, more open platforms and languages (Ruby, Clojure, JavaScript, Erlang, etc.).

The .NET platform is getting much better in this regard, but it's still not nearly as good as it could be. Additionally, .NET is littered with failed technologies that turned out to be completely useless after all (Windows Workflow Foundation, Commerce Server, (early versions of) Entity Framework, etc.).

.NET has suffered from a combination of fear of breaking backwards compatibility, combined with Big Design Up-Front (BDUF). The fact that (after all) it's still quite useful is a testament to the people who originally designed it; these people were (or are) some of the most skilled people in the industry. Unless you're Anders Hejlsberg, Krzysztof Cwalina, Brad Abrams, or on a similar level, you can't expect to produce a useful and stable API if you take the conservative BDUF approach.

Instead, what you can do is to use TDD to explore the design of your API. This doesn't guarantee that you'll end up with a stable API, but it does help to make the API as useful as possible.

How do you ensure stability, then? One option is to realise that you can use interfaces as access modifiers. Thus, you can publish a library that mostly contains interfaces and a few high-level classes, and then add the public implementations of those interfaces in other libraries, and clearly document that these types are not guaranteed to remain stable. This option may be useful in some contexts, but if you're really exposing an API to the wild, you probably need a more robust strategy.
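As a sketch of 'interfaces as access modifiers' (all type names are invented for this example): the stable, published library consists mostly of interfaces, while the public implementations live in a separate library that is documented as subject to change.

```csharp
// Hypothetical sketch. In the stable, published library: only the
// interface is guaranteed to remain unchanged across versions.
public interface ISettingsRepository
{
    string ReadSetting(string key);
}

// In a separate 'implementations' library: public, so clients can use,
// decorate, or replace it, but documented as not guaranteed stable.
public class SqlSettingsRepository : ISettingsRepository
{
    public string ReadSetting(string key)
    {
        // Placeholder body for the sketch.
        return "value of " + key;
    }
}
```

Clients who program against `ISettingsRepository` are insulated from changes to `SqlSettingsRepository`; the stability contract is carried by the interface assembly, not by locking the implementations down with internal.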

The best strategy I've been able to identify so far is to realise that you can't design a stable API up front. You will make mistakes, and you will need to deal with these design mistakes. One really effective strategy is to apply the Strangler pattern: leave the design mistakes in, but add new, better APIs as you learn; SOLID is append-only. In my Encapsulation and SOLID Pluralsight course, I discuss this particular problem and approach in the Append-Only section of the Liskov Substitution Principle module.

2014-09-28 13:40 UTC

Good point Mark! I really like your point of view, in general I agree with you.

I will try to 'defend' the internal dependencies, because it may help us explore all the possible reasonable usages of them.

1. Personally, I would hide my dependencies with internal access when I am quite certain that my design is bad, but I do not have time to fix it yet, though I will fix it for sure in the near future.

2. Moreover, if I had unit tests without access to internals (so not the case which I was describing in my previous comment), then I should be able to refactor the 'internals' without even touching the unit tests. This is why I see internal dependencies as a refactoring technique, especially when working with legacy code.

These were some cases for a Library/Framework. What about internal access in non-modular application development that consists of several projects (DLLs)? Personally, I try to keep everything internal by default and only make public the interfaces, entities, messages, etc. that are used between the projects. Then I compose them in a dedicated project (which I name Bootstrapper) which is the composition root and has access to all internal types... If your book covers this topic - please just give a reference. I have not read your book so far, but it is in the queue :)

2014-09-28 18:12 UTC

Please note that I'm not saying that internal or private classes are always bad; all I'm saying is that unit testing internal classes (presumably using the [InternalsVisibleTo] attribute) is, in my opinion, not a good idea.

You can always come up with some edge cases against a blanket statement like never unit test internal classes, but in the general case, I believe I've already outlined why I think it's a really poor idea to do so.

Ultimately, I've never understood the need for the [InternalsVisibleTo] attribute. When you're applying it, you're basically making internal types public to select consumers, with all the coupling that implies. Why not make the types truly public then?

As far as I can tell, the main motivation for not making types public is when the creators don't trust their code. If this is the case, making things internal isn't a solution, it's a symptom. Address the problem instead of trying to hide it. Make the types trustworthy by applying encapsulation.
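As a minimal illustration of that last point (the `Quantity` type is invented for this example), encapsulation means a type protects its own invariants at construction time, so it can safely be made public:

```csharp
using System;

// Hypothetical example of a trustworthy, fully encapsulated type: the
// invariant (a non-negative value) is enforced in the constructor, so
// every Quantity instance that exists is known to be valid.
public class Quantity
{
    public int Value { get; private set; }

    public Quantity(int value)
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException("value");
        this.Value = value;
    }
}
```

Because an invalid `Quantity` can never be constructed, publishing the type requires no trust in its consumers; there's nothing they can do to put it into an invalid state.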

2014-09-29 11:59 UTC

An example from my last days at work: I marked a class as internal that wraps some native library (a driver for some hardware) using [DllImport]. I want clients of my library to use my classes - not the DLL wrapper - which is why I hide it using internal. However, I needed [InternalsVisibleTo] so that I could write integration tests to see if the wrapper really works.

Why make it public when nobody outside is using it? YAGNI. I only expose something when I know it's needed by the clients. Then, for those things that I have already exposed, I need to maintain backward compatibility. The less I have exposed, the more easily and safely I can refactor my design. And it is not always about bad design. New requirements frequently come in parallel with refactoring the design. This is why I like Martin Fowler's idea of a Published Interface, which is also mentioned in his Access Modifier article.

Additionally, I have always had a feeling that making everything public can badly influence the software architecture. The more encapsulated the packages are, the better. And for me, internal access is also a means of encapsulation at the packaging level. Three days ago Simon Brown had a presentation named "Software Architecture vs Code" at the DevDay conference. When he said something like "do not make public classes by default", people were applauding!

My rules of thumb for setting class access modifiers are:

  • private - when it is a helper for the class in which it is nested
  • internal - when it is used only in the package
  • public - when other packages need to use it
  • oh, and how I would love to have this Published Interface!
So for each package I can end up with some public interfaces and classes (like messages and entities) and many internal interfaces and classes which are only seen by the unit tests and a bootstrapping package (if I have one!). It really helps when you have a project that has about 300k lines of code :)

2014-09-29 16:40 UTC

Thanks to Mark Seemann for his very inspiring article about the Composition Root design pattern, /2011/07/28/CompositionRoot/

I wrote my own JavaScript Dependency Injection Framework called Di-Ninja with these principles in mind.

As far as I know, it is the only one in JavaScript that implements the Composition Root design pattern, and its documentation could be another good example to demonstrate how it works.

It works for both Node.js and the browser (with Webpack).

2018-01-04 08:22 UTC

Thanks for this insightful post!

I was just wondering: could we say, then, that using a Service Locator exclusively at the Composition Root is not something bad, and is actually perfectly fine?

I'm asking this since the Service Locator 'anti-pattern' is widespread; however, at the Composition Root its usage is not a problem at all, right?

Thanks for the amazing blog! ;)

2022-01-24 9:25 UTC

Lisber, thank you for writing. Does this answer your question?

2022-01-24 11:49 UTC


The sum up at the end of that article is brilliant!

It becomes a Service Locator if used incorrectly: when application code (as opposed to infrastructure code) actively queries a service in order to be provided with required dependencies, then it has become a Service Locator.

Thank you for clarifying that.

2022-01-24 13:40 UTC

SOLID Code isn't

Tuesday, 07 June 2011 13:46:07 UTC

Recently I had an interesting conversation with a developer at my current client, about how the SOLID principles would impact their code base. The client wants to write SOLID code - who doesn't? It's a beautiful acronym that fully demonstrates the power of catchy terminology.

However, when you start to outline what it actually means people become uneasy. At the point where the discussion became interesting, I had already sketched my view on encapsulation. However, the client's current code base is designed around validation at the perimeter. Most of the classes in the Domain Model are actually internal and implicitly trust input.

We were actually discussing Test-Driven Development, and I had already told them that they should only test against the public API of their code base. The discussion went something like this (I'm hoping I'm not making my ‘opponent' sound dumb, because the real developer I talked to was anything but):

Client: "That would mean that each and every class we expose must validate input!"

Me: "Yes…?"

Client: "That would be a lot of extra work."

Me: "Would it? Why is that?"

Client: "The input that we deal with consists of complex data structures, and we must validate that all values are present and correct."

Me: "Assume that input is SOLID as well. This would mean that each input instance can be assumed to be in a valid state because that would be its own responsibility. Given that, what would validation really mean?"

Client: "I'm not sure I understand what you mean…"

Me: "Assuming that the input instance is a self-validating reference type, what could possibly go wrong?"

Client: "The instance might be null…"

Me: "Yes. Anything else?"

Client: "Not that I can think of…"

Me: "Me neither. This means that while you must add more code to implement proper encapsulation, it's really trivial code. It's just some Guard Clauses."
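A Guard Clause, in this sense, is nothing more than a null check at the boundary of each public member; here's a sketch with invented type names:

```csharp
using System;

// All names are invented for illustration. When the input is assumed to
// be a self-validating reference type, the Guard Clause is the only
// validation the receiving class needs.
public class Order { }

public class ShippingCostCalculator
{
    private readonly Order order;

    public ShippingCostCalculator(Order order)
    {
        // Guard Clause: reject the one thing that can go wrong.
        if (order == null)
            throw new ArgumentNullException("order");
        this.order = order;
    }
}
```

The pattern is trivial and mechanical, which is the point of the dialogue: proper encapsulation adds code, but not complicated code.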

Client: "But isn't it still gold plating?"

Me: "Not really, because we are designing for change in the general sense. We know that we can't predict specific change, but I can guarantee you that change requests will occur. Instead of trying to predict specific changes and design variability in those specific places, we simply put interfaces around everything because the cost of doing so is really low. This means that when change does happen, we already have Seams in the right places."

Client: "How does SOLID help with that?"

Me: "A result of the Single Responsibility Principle is that each self-encapsulated class becomes really small, and there will be a lot of them."

Client: "Lots of classes… I'm not sure I'm comfortable with that. Doesn't it make it much harder to find what you need?"

Me: "I don't think so. Each class is very small, so although you have many of them, understanding what each one does is easy. In my experience this is a lot easier than trying to figure out what a big class with thousands of lines of code does. When you have few big classes, your object model might look something like this:"

Coarse-grained objects

"There's a few objects and they kind of fit together to form the overall picture. However, if you need to change something, you'll need to substantially change the shape of each of those objects. That's a lot of work, and this is why such an object design isn't particularly adaptable to change.

"With SOLID, on the other hand, you have lots of small-grained objects which you can easily re-arrange to match new requirements:"

Fine-grained objects

And that's when it hit me: SOLID code isn't really solid at all. I'm not a material scientist, but to me a solid indicates a rigid structure. In essence a structure where the particles are tightly locked to each other and can't easily move about.

However, when thinking about SOLID code, it actually helps to think about it more like a liquid (although perhaps a rather viscous one). Each class has much more room to maneuver because it is small and fits together with other classes in many different ways. It's clear that when you push an analogy too far, it breaks apart.

Still, a closing anecdote is appropriate...

My (then) three-year old son one day handed me a handful of Duplo bricks and asked me to build him a dragon. If you've ever tried to build anything out of Duplo you'll know that the ‘resolution' of the bricks is rather coarse-grained. Given that ‘a handful' for a three-year old isn't a lot of bricks, this was quite a challenge. Fortunately, I had an appreciative audience with quite a bit of imagination, so I was able to put the few bricks together in a way that satisfied my son.

Still, building a dragon of comparable size out of Lego bricks is much easier because the bricks have a much finer ‘resolution'. SOLID code is more comparable to Lego than Duplo.


It's easier to use Lego bricks if your dinosaur is small, but if it's life sized - the cost and time using Lego instead of Duplo would be enormous. The same could be said about getting too granular with classes (IShippingOptionProviderFactory, anyone?) on a large project. Speaking of taking analogies too far... isn't building anything (real) out of Lego bricks a bad idea? I mean, I certainly wouldn't want to drive a Lego car or walk across a Lego bridge.
2011-06-07 14:41 UTC
Brilliant article :)
2011-06-07 14:56 UTC
Looks like your thinking is in line with the DDJ editorial:

Functional units of small size are key to evolvable code. This makes for a fine grained "code sand" which can be brought into ever changing shapes.

The question however is: What is to become smaller? Classes or methods?

I'd say it's classes that need to be limited in size, maybe 50 to 100 LOC.
Classes are to be kept small because they are the "blueprints" of the smallest stateful runtime units of code which can be recombined: objects.

However this leads to two problems:

1. It's difficult to think of a decomposition of the very common noun-classes into smaller classes. How to break up a PaymentService class into many smaller classes?

2. Small classes would lead to an explosion of dependencies between all of these classes. They would be hard to manage (even with dependency injection).

That's two reasons why most developers will probably resist decreasing the size of their classes. They'll just keep methods small - but the effect of this will be limited. There's no limit to the size of a class; a 1,000 LOC class can consist of 100 methods, each 10 lines long.

But the two problems go away if SOLID is applied in combination with a different view on object orientation.

We just need to switch from nouns to verbs when thinking about domain logic classes (not data classes).

If classes become functions (or maybe better, behaviors) then there is no decomposition problem anymore. Behaviors can be as small as needed, maybe just 1 LOC.

Also, behaviors conceptually don't have any dependencies on each other (but maybe on an environment); sitting is not dependent on running, even though running might "be done" after sitting.
2011-06-09 10:53 UTC
Darin #
@Alex Papadimoulis

"I certainly wouldn't want to drive a Lego car or walk across a Lego bridge"


Great quote! I've always liked the principles behind solid, but have always been a little turned off by some of the zealotry associated with it. Like most everything in life it's great "in moderation".
2011-06-09 12:15 UTC
Hi Mark.

Thanks for your article. You are describing a philosophy which has been almost standard practice in Smalltalk since the 80's. As you know, albeit desired, it is however not always possible to make small classes.

Recommended reading (also for OO in general): "Smalltalk, Objects and Design" by Chamond Liu, ISBN 1-8847777-27-9, Manning Publications Co, 1996.

Kind Regards

..btw I am sure that there must be a photo of yours whereon you look a bit happier?

2011-06-09 12:40 UTC
Bill Berger #
Nice blog. Good visual analogy.

It's my experience that there is lots of discussion around the concept of abstraction and too little discussion around the use and practice of abstraction.

I interview many software development candidates, and I always explore what I consider foundational concepts - one is abstraction. I find most developers / programmers / software engineers (/ whatever), cannot adequately communicate HOW they use abstraction to enable their code - not on an architectural level, not on a modular level, not on a class level, and not even on a simple functional level. This is a real indication of the state of software design and it is alarming to me.

It's my belief that designing abstraction in software engineering is the most critical tool in software construction, but it is the least discussed.
2011-06-09 12:49 UTC
@Bill: I agree that abstraction is missing from many discussions (and from languages). Pars pro toto: in UML class diagrams abstraction (in the form of composition) cannot be distinguished from simple usage.

So since there is no real expression of abstraction (except for inheritance) in code, it's neglected or left to personal style. Sad.
2011-06-09 13:09 UTC
Mark, thanks for this post. I am telling you that I had almost the exact same conversation with client late last year. At some point he says "but it's SOOO many classes.." Today reading your post I just had to laugh. Good times. :)
2011-06-09 14:25 UTC
Dave Boyle #
"…put interfaces around everything because the cost of doing so is really low."

I’m not convinced that the cost is indeed "really low", for two reasons.

1) Interfaces, before TDD became popular, imparted information to the maintainer of a type because they were used to indicate, for instance, that a given type was to be treated like a collection that could be enumerated or a type that could be compared with other types for the purposes of, say, sorting. When added reflexively as a means of inserting a level of indirection, the interface no longer imparts any information.

2) When navigating around a large code base where nearly all types are accessed via a level of indirection (courtesy of ubiquitous interfaces) my IDE – Visual Studio 2010 – struggles to answer common questions like "what code will be executed when SomeType.ReadHeader() is called?". Hitting F12 doesn’t take me to the definition of the method but rather takes me to the definition of the interface which, because of the point above, is of no value. ReSharper can sometimes find the method with a more advanced search but not always. The upshot of this is that code making heavy use of interfaces becomes much harder to statically analyse.
2011-06-09 14:50 UTC
Bill Berger #
@Ralf: "So since there is no real expression of abstraction (except for inheritance) in code"

Agreed. That point is manifest in @Dave's subsequent post. So, how would the lack of direct language support for abstraction - other than inheritance - be solved? And how would code tracing / debugging interfaces and their implementations be handled?

My gut tells me this is one reason (of several) that scripting-like languages have gained popularity in the recent past, and that we are being pointed in a more real-time, interpreted language direction due to the necessary dynamic nature of these constructs. Hm... {thinking}

Love a blog that makes me think for more than a few seconds.
2011-06-09 16:52 UTC
P #
As a client, looking at the second picture with lots of small parts, I see a more complicated picture, and therefore more maintenance, more support, and more money to spend on it. As a consultant for that client, I see more dollar signs in my bank account...
2011-06-09 22:14 UTC
@Bill: You ask

"So, how would the lack of direct language support for abstraction - other than inheritance - be solved?"

and I don't know if we should hope for languages to offer support soon. We have to live with what we have: Java, C#, C++, Ruby, etc. (Scripting languages to me don't offer better support for abstraction.)

Instead we need to figure out how to use whatever features these languages offer in a way that expresses abstraction - and I don't mean inheritance ;-)

And that leads to a separation of mental model from technical features. We need to think in a certain way about our solutions - and then translate our mental models to language reality.

Object orientation tried to unify mental model and code reality. But as all those maintenance nightmare projects show, this has not really worked out as nicely as expected. The combination of OO method (OOAD) and OO programming has not lived up to our expectations.

Since we're pretty much stuck with OO languages, we need to replace the other part of the pair, the method, I'd say.
2011-06-10 08:44 UTC
Kenneth Siewers Møller #
You really need a +1 :)
Very interesting article...
2011-08-24 08:31 UTC
