Implementing an Abstract Factory

Thursday, 15 March 2012 21:01:13 UTC

Abstract Factory is a tremendously useful pattern when used with Dependency Injection (DI). While I've repeatedly described how it can be used to solve various problems in DI, apparently I've never described how to implement one. As a comment to an older blog post of mine, Thomas Jaskula asks how I'd implement the IOrderShipperFactory.

To stay consistent with the old order shipper scenario, this blog post outlines three alternative ways to implement the IOrderShipperFactory interface.

To make it a bit more challenging, the implementation should create instances of the OrderShipper2 class, which itself has a dependency:

public class OrderShipper2 : IOrderShipper
{
    private readonly IChannel channel;
 
    public OrderShipper2(IChannel channel)
    {
        if (channel == null)
            throw new ArgumentNullException("channel");
 
        this.channel = channel;
    }
 
    public void Ship(Order order)
    {
        // Ship the order and
        // raise a domain event over this.channel
    }
}

In order to be able to create an instance of OrderShipper2, any factory implementation must be able to supply an IChannel instance.
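Neither interface is shown in this post. Judging from the code in this article, they're presumably declared along these lines (a minimal sketch, not copied from the original order shipper article):

public interface IOrderShipper
{
    void Ship(Order order);
}

public interface IOrderShipperFactory
{
    IOrderShipper Create();
}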

Manually Coded Factory #

The first option is to manually wire up the OrderShipper2 instance within the factory:

public class ManualOrderShipperFactory :
    IOrderShipperFactory
{
    private readonly IChannel channel;
 
    public ManualOrderShipperFactory(IChannel channel)
    {
        if (channel == null)
            throw new ArgumentNullException("channel");
 
        this.channel = channel;
    }
 
    public IOrderShipper Create()
    {
        return new OrderShipper2(this.channel);
    }
}

This has the advantage that it's easy to understand. It can be unit tested and implemented in the same library that also contains OrderShipper2 itself. This means that any client of that library is supplied with a ready-to-use implementation.

The disadvantage of this approach is that if/when the constructor of OrderShipper2 changes, the ManualOrderShipperFactory class must also be corrected. Pragmatically, this may not be a big deal, but one could successfully argue that this violates the Open/Closed Principle.

Container-based Factory #

Another option is to make the implementation a thin Adapter over a DI Container - in this example Castle Windsor:

public class ContainerFactory : IOrderShipperFactory
{
    private IWindsorContainer container;
 
    public ContainerFactory(IWindsorContainer container)
    {
        if (container == null)
            throw new ArgumentNullException("container");
 
        this.container = container;
    }
 
    public IOrderShipper Create()
    {
        return this.container.Resolve<IOrderShipper>();
    }
}

But wait! Isn't this an application of the Service Locator anti-pattern? Not if this class is part of the Composition Root.

If this implementation was placed in the same library as OrderShipper2 itself, it would mean that the library would have a hard dependency on the container. In such a case, it would certainly be a Service Locator.

However, when a Composition Root already references a container, it makes sense to place the ContainerFactory class there. This changes its role to the pure infrastructure component it really ought to be. This seems more SOLID, but the disadvantage is that there's no longer a ready-to-use implementation packaged together with the LazyOrderShipper2 class. All new clients must supply their own implementation.
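To illustrate, a Composition Root using Castle Windsor might wire everything up along these lines. This is only a sketch: the MessageQueueChannel class is a made-up IChannel implementation, and the assumption is that LazyOrderShipper2 takes an IOrderShipperFactory through its constructor:

var container = new WindsorContainer();
container.Register(Component
    .For<IChannel>()
    .ImplementedBy<MessageQueueChannel>());
container.Register(Component
    .For<IOrderShipper>()
    .ImplementedBy<OrderShipper2>());

// The factory itself is composed manually in the Composition Root
var factory = new ContainerFactory(container);
var orderShipper = new LazyOrderShipper2(factory);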

Dynamic Proxy #

The third option is to basically reduce the principle behind the container-based factory to its core. Why bother writing even a thin Adapter if one can be automatically generated?

With Castle Windsor, the Typed Factory Facility makes this possible:

container.AddFacility<TypedFactoryFacility>();
container.Register(Component
    .For<IOrderShipperFactory>()
    .AsFactory());
var factory =
    container.Resolve<IOrderShipperFactory>();
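Note that for the generated factory to be able to create OrderShipper2 instances, the container must still know about IOrderShipper and IChannel. A fuller registration might look like this (the MessageQueueChannel class is, again, an assumed IChannel implementation):

container.AddFacility<TypedFactoryFacility>();
container.Register(Component
    .For<IChannel>()
    .ImplementedBy<MessageQueueChannel>());
container.Register(Component
    .For<IOrderShipper>()
    .ImplementedBy<OrderShipper2>());
container.Register(Component
    .For<IOrderShipperFactory>()
    .AsFactory());
var factory =
    container.Resolve<IOrderShipperFactory>();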

There is no longer any code which implements IOrderShipperFactory. Instead, a class conceptually similar to the ContainerFactory class above is dynamically generated and emitted at runtime.

While the code never materializes, conceptually, such a dynamically emitted implementation is still part of the Composition Root.

This approach has the advantage that it's very DRY, but the disadvantages are similar to the container-based implementation above: there's no longer a ready-to-use implementation. There's also the additional disadvantage that of the three alternatives outlined here, the proxy-based implementation is the most difficult to understand.


Comments

So what is the advantage to having the manually-coded factory in this example at all? It violates the open/closed principle and provides a useless abstraction of the OrderShipper2 constructor. (Or did you just mention it because it is an example of what not to do?)
2012-03-16 00:35 UTC
Patrick, I just explained what the advantages (and disadvantages) are...
2012-03-16 04:47 UTC
Dmitriy #
Nice article, thanks.
But I have a question: what if the object created by the factory implements IDisposable? Where should we call Dispose()?

Sorry for my English...
2012-03-16 05:39 UTC
Good article, I hadn't considered the idea that a composition root can effectively span across factories. I guess it makes sense since most (all?) DI containers have a module system for binding, which is effectively a means of separating out the composition root.
2012-03-16 09:20 UTC
Dmitriy #
Sorry for the off-topic comment. I have a small bug report:
Often, when I open this blog from the Google search results page, the JavaScript function "highlightWord" hangs my Firefox. Probably too big a loop over the DOM.
2012-03-16 10:30 UTC
About disposal, I can only think of two options - both of which are leaky abstractions: Either let the returned interface also implement IDisposable, or add a Release method to the Abstract Factory.

You can read more about this in chapter 6 in my book.
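For illustration, the second option might widen the Abstract Factory along these lines (a sketch of the idea, not the interface used elsewhere in this post):

public interface IOrderShipperFactory
{
    IOrderShipper Create();

    void Release(IOrderShipper orderShipper);
}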

Regarding the bug report: thanks - I've noticed it too, but didn't know what to do about it...
2012-03-16 12:52 UTC
Thomas #
Mark

You have cleared up my doubts with this sentence: "But wait! Isn't this an application of the Service Locator anti-pattern? Not if this class is part of the Composition Root." I just wasn't comfortable about whether it was well done or not.

I use also Dynamic Proxy factory because it's a great feature.

Thomas
2012-03-16 14:22 UTC
Dmitriy #
I have read your book. :)
I thought about this "life time" problem. And I've got an idea.

What if we let the concrete factory implement the IDisposable interface?
In the factory.Create method we push every resolved service into a HashSet.
In the factory.Dispose method we call the container.Release method for each object in the HashSet.
Then we register our factory with a short lifetime, like Transient.
So the container will release the services created by the factory as soon as possible.
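Sketched out, it might look something like this (the class name is made up, and thread safety is ignored):

public class DisposableOrderShipperFactory :
    IOrderShipperFactory, IDisposable
{
    private readonly IWindsorContainer container;
    private readonly HashSet<IOrderShipper> resolved =
        new HashSet<IOrderShipper>();

    public DisposableOrderShipperFactory(
        IWindsorContainer container)
    {
        if (container == null)
            throw new ArgumentNullException("container");

        this.container = container;
    }

    public IOrderShipper Create()
    {
        // Track every resolved service...
        var shipper =
            this.container.Resolve<IOrderShipper>();
        this.resolved.Add(shipper);
        return shipper;
    }

    public void Dispose()
    {
        // ...and release them all when the factory itself is released
        foreach (var shipper in this.resolved)
            this.container.Release(shipper);
        this.resolved.Clear();
    }
}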

What do you think about it?

About bug in javascript ...
Right now the DOM visitor method "highlightWord" is called for each word. That is very slow. And empty words passed into the visitor cause the creation of many empty SPAN elements, which is even slower.
I allowed myself to rewrite your functions... Just replace it with the following code.

var googleSearchHighlight = function () {
    if (!document.createElement) return;
    ref = document.referrer;
    if (ref.indexOf('?') == -1 || ref.indexOf('/') != -1) {
        if (document.location.href.indexOf('PermaLink') != -1) {
            if (ref.indexOf('SearchView.aspx') == -1) return;
        }
        else {
            //Added by Scott Hanselman
            ref = document.location.href;
            if (ref.indexOf('?') == -1) return;
        }
    }

    //get all words
    var allWords = [];
    qs = ref.substr(ref.indexOf('?') + 1);
    qsa = qs.split('&');
    for (i = 0; i < qsa.length; i++) {
        qsip = qsa[i].split('=');
        if (qsip.length == 1) continue;
        if (qsip[0] == 'q' || qsip[0] == 'p') { // q= for Google, p= for Yahoo
            words = decodeURIComponent(qsip[1].replace(/\+/g, ' ')).split(/\s+/);
            for (w = 0; w < words.length; w++) {
                var word = words[w];
                if (word.length)
                    allWords.push(word);
            }
        }
    }

    //pass words into DOM visitor
    if (allWords.length)
        highlightWord(document.getElementsByTagName("body")[0], allWords);
}

var highlightWord = function (node, allWords) {
    // Iterate into this nodes childNodes
    if (node.hasChildNodes) {
        var hi_cn;
        for (hi_cn = 0; hi_cn < node.childNodes.length; hi_cn++) {
            highlightWord(node.childNodes[hi_cn], allWords);
        }
    }

    // And do this node itself
    if (node.nodeType == 3) { // text node
        //do words iteration
        for (var w = 0; w < allWords.length; w++) {
            var word = allWords[w];
            if (!word.length)
                continue;
            tempNodeVal = node.nodeValue.toLowerCase();
            tempWordVal = word.toLowerCase();
            if (tempNodeVal.indexOf(tempWordVal) != -1) {
                pn = node.parentNode;
                if (pn && pn.className != "searchword") {
                    // word has not already been highlighted!
                    nv = node.nodeValue;
                    ni = tempNodeVal.indexOf(tempWordVal);
                    // Create a load of replacement nodes
                    before = document.createTextNode(nv.substr(0, ni));
                    docWordVal = nv.substr(ni, word.length);
                    after = document.createTextNode(nv.substr(ni + word.length));
                    hiwordtext = document.createTextNode(docWordVal);
                    hiword = document.createElement("span");
                    hiword.className = "searchword";
                    hiword.appendChild(hiwordtext);
                    pn.insertBefore(before, node);
                    pn.insertBefore(hiword, node);
                    pn.insertBefore(after, node);
                    pn.removeChild(node);
                }
            }
        }
    }
}
2012-03-16 16:26 UTC
Dmitriy #
Oops... formatting missed.

I upload .js file here http://rghost.ru/37057487
2012-03-16 17:08 UTC
Steve #
Does the manually coded factory really violate the OCP?

Granted, if the constructor to OrderShipper2 changes, you MUST modify the abstract factory. However, isn't modifying the constructor of OrderShipper2 itself a violation of the OCP? If you are adding new dependencies, you are probably making a significant change.

At that point, you would just create a new implementation of IOrderShipper.

2012-03-16 19:03 UTC
Good point, Steve. You're right... I hadn't thought it all the way through :)
2012-03-18 14:03 UTC
Dmitriy, if we let only the concrete factory implement IDisposable, then when should it be disposed? How can we deterministically signal that we are 'done' with it?

(Thank you very much for your assistance with the javascript - I didn't even know which particular script was causing all that trouble. Unfortunately, the script itself is compiled into the dasBlog engine which hosts this blog, so I can't change it. However, I think I managed to switch it off... Yet another reason to find myself a new blog platform...)
2012-03-18 18:25 UTC
Dmitriy #
I think we should have some sort of Scope to handle the end of the lifetime. In a request-based application it can be the request. If we need a smaller lifetime, we can write our own Scope, based on some application-specific events.

Btw, another thing I think about is the abstract factory's responsibility. If the factory consumer can create a service with the factory.Create method, maybe we should let it destroy the object with a factory.Destroy method too? A piece of the lifetime management moves from one place to another. The factory consumer takes responsibility for the lifetime (defines a new scope). For example, the ControllerFactory in ASP.NET MVC has a Release method for this.
In other words, why not add a Release method to the abstract factory?
2012-03-18 21:54 UTC
Implicit scopes like an HTTP request can work pretty well when we can hook into the lifetime cycle of the scope itself. This is possible with HTTP requests, which is why many DI Containers can implicitly clean up when the HTTP request has been handled.

However, what if we have a desktop application or a long-running batch job? How do we define an implicit scope then? One option might be to employ a timeout, but I'll have to think more about this... Not a bad idea, though :)
2012-03-19 10:42 UTC
In multithreaded applications we can use a Thread as the smallest scope.
But for a single long-lived thread we have to invent something.

It seems that there is no silver bullet in the management of lifetime. At least not without using the Resolve-Release pattern in several places instead of one...

Can you write another book on this subject? :)
2012-03-19 12:02 UTC
Dmitriy, I still need to think more about this, but AFAICT in order to use a scope for decommissioning, there must be some hook that we can attach to in order to clean up when the scope ends. I'm not aware of such a hook on a Thread, but I must admit that I rarely do low-level work with Threads.

A Task<T>, on the other hand, provides us with the ability to attach a continuation, so that seems to me to be an appropriate candidate...
2012-03-24 08:46 UTC
Hennadii Omelchenko #
Mark, how does the Container-based Factory fit with the Register-Resolve-Release pattern? For instance, if we have ten different factories, then we end up with at least ten 'Resolve' calls, and you state in your book that there must be only one, in the Composition Root.
2012-12-23 14:31 UTC
You could create each factory instance using the "new" keyword and register the instances with the container.

Or you could register the container with itself, enabling it to auto-wire itself.
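For illustration, those two workarounds might look something like this with Castle Windsor (a sketch only):

// Option 1: new up the factory and register the instance
container.Register(Component
    .For<IOrderShipperFactory>()
    .Instance(new ContainerFactory(container)));

// Option 2: register the container with itself,
// so that it can auto-wire ContainerFactory
container.Register(Component
    .For<IWindsorContainer>()
    .Instance(container));
container.Register(Component
    .For<IOrderShipperFactory>()
    .ImplementedBy<ContainerFactory>());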

None of these options are particularly nice, which is why I instead tend to prefer one of the other options described above.
2012-12-23 15:02 UTC
Julien Vulliet #
Interesting article. One thing I'm wondering (apart from many other questions which would likely require a post or two on their own as an answer): isn't using the dynamic proxy solution ultimately a hidden dependency on your container? You don't have an explicit reference, but if you swap containers you might have an issue with this being implemented differently (one container might prefer a Func, another one interfaces...), so in the end you're not able to swap containers that easily?
2014-02-28 22:58 UTC

Julien, thank you for your question. Using a Dynamic Proxy is an implementation detail. The consumer (in this case LazyOrderShipper2) depends only on IOrderShipperFactory.

Does using a Dynamic Proxy make it harder to swap containers? Yes, it does, because I'm only aware of one .NET DI Container that has this capability (although it's been a couple of years since I last surveyed the .NET DI Container landscape). Therefore, if you start out with Castle Windsor, and then later on decide to exchange it for another DI Container, you would have to supply a manually coded implementation of IOrderShipperFactory. You could choose to implement either a Manually Coded Factory, or a Container-based Factory, as described in this article. It's rather trivial to do, so I would hardly call it a blocking issue.

The more you rely on specific features of a particular DI Container, the more work you'll have to perform to migrate. This is why I always recommend that you design your classes following normal, good design practices first, without any regard to how you'd use them with any particular DI Container. Only when you've designed your API should you figure out how to compose it with a DI Container (if at all).

2014-03-01 14:47 UTC
jam40jeff #
Isn't the first option (manually coded factory carrying instance dependencies as factory dependencies) the only one which still allows for complete container verification? If that is the case, wouldn't that alone trump any other reasons not to use this option?
2014-12-31 20:57 UTC

Personally, I don't find container verification particularly valuable, but then again, I mostly use Pure DI anyway.

2014-12-31 21:33 UTC

Is Layering Worth the Mapping?

Thursday, 09 February 2012 22:55:44 UTC

For years, layered application architecture has been a de-facto standard for loosely coupled architectures, but the question is: does the layering really provide benefit?

In theory, layering is a way to decouple concerns so that UI concerns or data access technologies don't pollute the domain model. However, this style of architecture seems to come at a pretty steep price: there's a lot of mapping going on between the layers. Is it really worth the price, or is it OK to define data structures that cut across the various layers?

The short answer is that if you cut across the layers, it's no longer a layered application. However, the price of layering may still be too high.

In this post I'll examine the problem and the proposed solution and demonstrate why none of them are particularly good. In the end, I'll provide a pointer going forward.

Proper Layering #

To understand the problem with layering, I'll describe a fairly simple example. Assume that you are building a music service and you've been asked to render a top 10 list for the day. It would have to look something like this in a web page:

[Image: Top 10 Tracks list]

As part of rendering the list, you must color the Movement values accordingly using CSS.

A properly layered architecture would look something like this:

Each layer defines some services and some data-carrying classes (Entities, if you want to stick with the Domain-Driven Design terminology). The Track class is defined by the Domain layer, while the TopTrackViewModel class is defined in the User Interface layer, and so on. If you are wondering about why Track is used to communicate both up and down, this is because the Domain layer should be the most decoupled layer, so the other layers exist to serve it. In fact, this is just a vertical representation of the Ports and Adapters architecture, with the Domain Model sitting in the center.

This architecture is very decoupled, but comes at the cost of much mapping and seemingly redundant repetition. To demonstrate why that is, I'm going to show you some of the code. This is an ASP.NET MVC application, so the Controller is an obvious place to start:

public ViewResult Index()
{
    var date = DateTime.Now.Date;
    var topTracks = this.trackService.GetTopFor(date);
    return this.View(topTracks);
}

This doesn't look so bad. It asks an ITrackService for the top tracks for the day and returns a list of TopTrackViewModel instances. This is the implementation of the track service:

public IEnumerable<TopTrackViewModel> GetTopFor(
    DateTime date)
{
    var today = DateTime.Now.Date;
    var yesterDay = today - TimeSpan.FromDays(1);
 
    var todaysTracks = 
        this.repository.GetTopFor(today).ToList();
    var yesterdaysTracks = 
        this.repository.GetTopFor(yesterDay).ToList();
 
    var length = todaysTracks.Count;
    var positions = Enumerable.Range(1, length);
 
    return from tt in todaysTracks.Zip(
                positions, (t, p) =>
                    new { Position = p, Track = t })
            let yp = (from yt in yesterdaysTracks.Zip(
                            positions, (t, p) => 
                                new
                                {
                                    Position = p,
                                    Track = t
                                })
                        where yt.Track.Id == tt.Track.Id
                        select yt.Position)
                        .DefaultIfEmpty(-1)
                        .Single()
            let cssClass = GetCssClass(tt.Position, yp)
            select new TopTrackViewModel
            {
                Position = tt.Position,
                Name = tt.Track.Name,
                Artist = tt.Track.Artist,
                CssClass = cssClass
            };
}
 
private static string GetCssClass(
    int todaysPosition, int yesterdaysPosition)
{
    if (yesterdaysPosition < 0)
        return "new";
    if (todaysPosition < yesterdaysPosition)
        return "up";
    if (todaysPosition == yesterdaysPosition)
        return "same";
    return "down";
}

While that looks fairly complex, there's really not a lot of mapping going on. Most of the work is spent getting the top 10 tracks for today and yesterday. For each position on today's top 10, the query finds the position of the same track on yesterday's top 10 and creates a TopTrackViewModel instance accordingly.

Here's the only mapping code involved:

select new TopTrackViewModel
{
    Position = tt.Position,
    Name = tt.Track.Name,
    Artist = tt.Track.Artist,
    CssClass = cssClass
};

This maps from a Track (a Domain class) to a TopTrackViewModel (a UI class).

This is the relevant implementation of the repository:

public IEnumerable<Track> GetTopFor(DateTime date)
{
    var dbTracks = this.GetTopTracks(date);
    foreach (var dbTrack in dbTracks)
    {
        yield return new Track(
            dbTrack.Id,
            dbTrack.Name,
            dbTrack.Artist);
    }
}

You may be wondering about the translation from DbTrack to Track. In this case you can assume that the DbTrack class is a class representation of a database table, modeled along the lines of your favorite ORM. The Track class, on the other hand, is a proper object-oriented class which protects its invariants:

public class Track
{
    private readonly int id;
    private string name;
    private string artist;
 
    public Track(int id, string name, string artist)
    {
        if (name == null)
            throw new ArgumentNullException("name");
        if (artist == null)
            throw new ArgumentNullException("artist");
 
        this.id = id;
        this.name = name;
        this.artist = artist;
    }
 
    public int Id
    {
        get { return this.id; }
    }
 
    public string Name
    {
        get { return this.name; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("value");
 
            this.name = value;
        }
    }
 
    public string Artist
    {
        get { return this.artist; }
        set
        {
            if (value == null)
                throw new ArgumentNullException("value");
 
            this.artist = value;
        }
    }
}

No ORM I've encountered so far has been able to properly address such invariants - particularly the non-default constructor seems to be a showstopper. This is the reason a separate DbTrack class is required, even for ORMs with so-called POCO support.
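The DbTrack class itself isn't shown here; for the purpose of this example, picture it as a typical ORM-friendly class with a default constructor and writable properties:

public class DbTrack
{
    public int Id { get; set; }

    public string Name { get; set; }

    public string Artist { get; set; }
}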

In summary, that's a lot of mapping. What would be involved if a new field is required in the top 10 table? Imagine that you are being asked to provide the release label as an extra column.

  1. A Label column must be added to the database schema and the DbTrack class.
  2. A Label property must be added to the Track class.
  3. The mapping from DbTrack to Track must be updated.
  4. A Label property must be added to the TopTrackViewModel class.
  5. The mapping from Track to TopTrackViewModel must be updated.
  6. The UI must be updated.

That's a lot of work in order to add a single data element, and this is even a read-only scenario! Is it really worth it?

Cross-Cutting Entities #

Is strict separation between layers really so important? What would happen if Entities were allowed to travel across all layers? Would that really be so bad?

Such an architecture is often drawn like this:

[Image: Cross-Cutting Entities diagram]

Now, a single Track class is allowed to travel from layer to layer in order to avoid mapping. The controller code hasn't really changed, although the model returned to the View is no longer a sequence of TopTrackViewModel, but simply a sequence of Track instances:

public ViewResult Index()
{
    var date = DateTime.Now.Date;
    var topTracks = this.trackService.GetTopFor(date);
    return this.View(topTracks);
}

The GetTopFor method also looks familiar:

public IEnumerable<Track> GetTopFor(DateTime date)
{
    var today = DateTime.Now.Date;
    var yesterDay = today - TimeSpan.FromDays(1);
 
    var todaysTracks = 
        this.repository.GetTopFor(today).ToList();
    var yesterdaysTracks = 
        this.repository.GetTopFor(yesterDay).ToList();
 
    var length = todaysTracks.Count;
    var positions = Enumerable.Range(1, length);
 
    return from tt in todaysTracks.Zip(
                positions, (t, p) => 
                    new { Position = p, Track = t })
            let yp = (from yt in yesterdaysTracks.Zip(
                            positions, (t, p) =>
                                new
                                {
                                    Position = p,
                                    Track = t
                                })
                        where yt.Track.Id == tt.Track.Id
                        select yt.Position)
                        .DefaultIfEmpty(-1)
                        .Single()
            let cssClass = GetCssClass(tt.Position, yp)
            select Enrich(
                tt.Track, tt.Position, cssClass);
}
 
private static string GetCssClass(
    int todaysPosition, int yesterdaysPosition)
{
    if (yesterdaysPosition < 0)
        return "new";
    if (todaysPosition < yesterdaysPosition)
        return "up";
    if (todaysPosition == yesterdaysPosition)
        return "same";
    return "down";
}
 
private static Track Enrich(
    Track track, int position, string cssClass)
{
    track.Position = position;
    track.CssClass = cssClass;
    return track;
}

Whether or not much has been gained is likely to be a subjective assessment. While mapping is no longer taking place, it's still necessary to assign a CSS Class and Position to the track before handing it off to the View. This is the responsibility of the new Enrich method:

private static Track Enrich(
    Track track, int position, string cssClass)
{
    track.Position = position;
    track.CssClass = cssClass;
    return track;
}

If not much is gained at the UI layer, perhaps the data access layer has become simpler? This is, indeed, the case:

public IEnumerable<Track> GetTopFor(DateTime date)
{
    return this.GetTopTracks(date);
}

If, hypothetically, you were asked to add a label to the top 10 table it would be much simpler:

  1. A Label column must be added to the database schema and the Track class.
  2. The UI must be updated.

This looks good. Are there any disadvantages? Yes, certainly. Consider the Track class:

public class Track
{
    public int Id { get; set; }
 
    public string Name { get; set; }
 
    public string Artist { get; set; }
 
    public int Position { get; set; }
 
    public string CssClass { get; set; }
}

It also looks simpler than before, but this is actually not particularly beneficial, as it doesn't protect its invariants. In order to play nice with the ORM of your choice, it must have a default constructor. It also has automatic properties. However, most insidiously, it also somehow gained the Position and CssClass properties.

What does the Position property imply outside of the context of a top 10 list? A position in relation to what?

Even worse, why do we have a property called CssClass? CSS is a very web-specific technology so why is this property available to the Data Access and Domain layers? How does this fit if you are ever asked to build a native desktop app based on the Domain Model? Or a REST API? Or a batch job?

When Entities are allowed to travel along layers, the layers basically collapse. UI concerns and data access concerns will inevitably be mixed up. You may think you have layers, but you don't.

Is that such a bad thing, though?

Perhaps not, but I think it's worth pointing out:

The choice is whether or not you want to build a layered application. If you want layering, the separation must be strict. If it isn't, it's not a layered application.

There may be great benefits to be gained from allowing Entities to travel from database to user interface. Much mapping cost goes away, but you must realize that then you're no longer building a layered application - now you're building a web application (or whichever other type of app you're initially building).

Further Thoughts #

It's a common question: how hard is it to add a new field to the user interface?

The underlying assumption is that the field must somehow originate from a corresponding database column. If this is the case, mapping seems to be in the way.

However, if this is the major concern about the application you're currently building, it's a strong indication that you are building a CRUD application. If that's the case, you probably don't need a Domain Model at all. Go ahead and let your ORM POCO classes travel up and down the stack, but don't bother creating layers: you'll be building a monolithic application no matter how hard you try not to.

In the end it looks as though none of the options outlined in this article are particularly good. Strict layering leads to too much mapping, and no mapping leads to monolithic applications. Personally, I've certainly written quite a lot of strictly layered applications, and while the separation of concerns was good, I was never happy with the mapping overhead.

At the heart of the problem is the CRUDy nature of many applications. In such applications, complex object graphs are traveling up and down the stack. The trick is to avoid complex object graphs.

Move less data around and things are likely to become simpler. This is one of the many interesting promises of CQRS, and particularly its asymmetric approach to data.
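To make that promise slightly more concrete, a CQRS-style read side for the top 10 list might bypass the Domain Model entirely and return a flat view model straight from a dedicated query. The following types are purely illustrative and don't appear in the sample application:

public class TopTrackReadModel
{
    public int Position { get; set; }

    public string Name { get; set; }

    public string Artist { get; set; }

    public string CssClass { get; set; }
}

public interface ITopTracksQuery
{
    IEnumerable<TopTrackReadModel> GetTopFor(DateTime date);
}

The write side can keep the invariant-protecting Track class, while the read side serves a projection already shaped like the screen that consumes it.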

To be honest, I wish I had fully realized this when I started writing my book, but when I finally realized what I'd implicitly felt for long, it was too late to change direction. Rest assured that nothing in the book is fundamentally flawed. The patterns etc. still hold, but the examples could have been cleaner if the sample applications had taken a CQRS-like direction instead of strict layering.


Comments

I have come to the same conclusions, and in my recent application I had the following setup:

- Domain layer with rich objects and protected variations
- Commands and Command handlers (write only) acting on the domain entities
- Mapping domain layer using NHibernate (I thought at first to start by creating a different data model because of the constraints of the ORM on the domain, but I managed to get rid of most of them - like having a default but private constructor, and mapping private fields instead of public properties...)

- Read model which consists of view models.
- Read model mapping layer which uses the ORM to map view models to the database directly (thus I got rid of the mapping between view models and the domain - but introduced a mapping between view models and the database).
- Query objects to query the read model.

At the end of the application I came to almost the same conclusion of not being happy with the mappings.

- I have to map domain model to database.
- I have to map between view models (read model) and database.
- I have to map between view models and commands.
- I have to map between commands and domain model.
- Having many fine-grained commands is good in terms of defining strict behaviors and a ubiquitous language, but at the cost of maintainability: now when I introduce a new field I most probably have to modify a database field, the domain model, a view model, two data mappings for the domain model and view model, one or more commands and command handlers, and the UI!

So taking the practice of CQRS for me enriched the domain a little, simplified the operations and separated it from querying, but complicated the implementation and maintenance of the application.

So I am still searching for a better approach that gains the rich domain model behavior while simplifying maintenance and eliminating mappings.
2012-02-10 02:16 UTC
bitbonk #
Good article, thank you for that.

I too have felt the pain of a lot of useless mapping many times. But even for CRUD applications where the business rules do not change much, I still almost always take the pain and do the mapping because I have a deep-seated fear of monolithic apps and code rot. If I give my entities lots of different concerns (UI, persistence, domain logic etc.), isn't this kind of code smell the culture medium for more code smell and code rot? When evolving my application and I am faced with a quirk in my code, I often only have two choices: add a new quirk or remove the original quirk. It kind of seems that in the long run a rotten monolithic app that is hard to maintain and extend becomes almost inevitable if I give up SoC the way you described.

Did I understand you correctly that you'd say it's OK to soften SoC a bit and let the entity travel across all layers for *some* apps? If yes, what kind of apps? Do you share my fear of inevitable code rot? How do you deal with this?
2012-02-10 09:11 UTC
Jonty #
NHibernate allows protected constructors and setters, is that good enough? RavenDB allows private constructor and setters.
2012-02-10 13:30 UTC
Harry McIntyre #
I think you might be interested to hear about the stack I use.

I use either convention-based NH mappings, which make me almost forget it's there, or MemoryImage to reconstruct my object model (the boilerplate is reduced by wrapping my objects with proxies that intercept method calls and record them as a JSON Event Source). In both cases I can use a helper library which wraps my objects with proxies that help protect invariants.

My web UI is built using a framework along the lines of the Naked Objects pattern*, which can be customised using ViewModels and custom views, and which enforces good REST design. This framework turns objects into pages and methods into ajax forms.

I built all this stuff after getting frustrated with the problems you mention above, inspired in part by all the time I spent at Uni trying to make games (in which an in memory domain model is rendered to the screen by a disconnected rendering engine, and input collected via a separate mechanism and applied to that model).

The long and the short of it is that I can add a field to the screen in seconds, and add a new feature in minutes. There are some trade-offs, but it's great to feel like I am just writing an object model and leaving the rest to my frameworks.

*not Naked Objects MVC, a DIY library that's not ready for the public yet.

http://en.wikipedia.org/wiki/Naked_objects
https://github.com/mcintyre321/NhCodeFirst my mappings lib (but Fluent NH is probably fine)
http://martinfowler.com/bliki/MemoryImage.html
https://github.com/mcintyre321/Harden my invariant helper library
2012-02-10 16:44 UTC
bitbonk, that wasn't what I said at all. On the contrary, the point of the post is that as soon as you soften separation of concerns just a bit, it becomes impossible to maintain that separation. It's an all-or-nothing proposition. Either you maintain strict separation, or you'd be building a monolithic application whether you know it or not.

The only thing I'm saying is that it's better to build a monolithic application when you know and understand that this is what you are doing, than it is to be building a monolithic application while under the illusion that you are building a decoupled application.
2012-02-10 16:46 UTC
Jonty: I don't know... How does protected or private constructors or properties help protect invariants?
2012-02-10 16:48 UTC
nelson #
I understand where you are coming from; but good architecture has always been at odds with RAD. You do make some good points, but as you certainly know, discipline in software design is very important. Many applications we build as "one-offs" are in production for years - and taking the RAD approach almost always results in software that everyone despises (especially the programmer left maintaining it). Proof: COBOL, Office VBA, Ruby on Rails. How many applications were built off of these platforms because it was easier and faster to get started, only to have the resulting codebase become something that doesn't work for anyone? We use C# and good design principles specifically to avoid this scenario. If I were to be in a position where I truly (absolutely truly) knew I wanted a CRUD app, I'd throw it together in Rails - then scrap that project once it grows too complex in favor of CQRS/C#/ASP.net MVC.

If there's a point in there, it's that static languages (without designers) generally don't lend well to RAD, and if RAD is what you need, you have many popular options that aren't C#.

Also, I have two problems with some of the sample code you wrote. Specifically, the Enrich method. First, a method signature such as:

T Method(T obj, ...);

Implies to consumers that the method is pure. Because this isn't the case, consumers will incorrectly use this method without the intention of modifying the original object. Second, if it is true that your ViewModels are POCOs consisting of your mapped properties AND a list of computed properties, perhaps you should instead use inheritance. This implementation is certainly SOLID:

(sorry for the formatting)

class Track
{
    /* Mapped Properties */

    public Track()
    {
    }

    protected Track(Track rhs)
    {
        Prop1 = rhs.Prop1;
        Prop2 = rhs.Prop2;
        Prop3 = rhs.Prop3;
    }
}

class TrackWebViewModel : Track
{
    public int Position { get; private set; }

    public string CssClass { get; private set; }

    public TrackWebViewModel(Track track, int position, string cssClass)
        : base(track)
    {
        Position = position;
        CssClass = cssClass;
    }
}

let cssClass = GetCssClass(tt.Position, yp) select Enrich(tt.Track, tt.Position, cssClass)

Simply becomes

select new TrackWebViewModel(tt.Track, tt.Position, cssClass)
2012-02-10 18:40 UTC
nelson, I never claimed the code in this post was supposed to represent good code :) Agreed that the Enrich method violates CQS.

While your suggested, inheritance-based solution seems cleaner, please note that it also introduces a mapping (in the copy constructor). Once you start doing this, there's almost always a composition-based solution which is at least as good as the inheritance-based solution.
2012-02-10 18:54 UTC
nelson #
Oh, I certainly agree that inheritance should be avoided whenever possible. It's one of the most over (and improperly) used constructs in any OO language. I was simply writing an implementation that would work with your existing code with as few changes as possible :) perhaps it was the maintainer in me talking, not the designer :p

As far as the copy-ctor, yes, that is a mapping, but when you add a field, you just have to modify a file you're already modifying. This is a bit nicer than having to update mapping code halfway across your application. But you're right, composing a Track into a specific view model would be a nicer approach with fewer modifications required to add more view models, and it would avoid additional mapping.

Though, I still am a bit concerned about the architecture as a whole. I understand in this post you were thinking out loud about a possible way to reduce complexity in certain types of applications, yet I'm wondering if you had considered using a platform designed for applications like this instead of using C#. Rails is quite handy when you need CRUD.
2012-02-10 19:22 UTC
Well, I was mainly attempting to address the perception that a lot of people have that decoupling creates a lot of mapping and therefore isn't worth the effort (their sentiment, not mine). Unfortunately, I have too little experience with technology stacks outside of .NET to be able to say something authoritative about them, but that particular positioning of Rails matches what I've previously heard.
2012-02-10 19:30 UTC
Hi Mark,

thanks for the blog posts and DI book, which I'm currently reading over the weekend. I guess the scenario you're describing here is more or less commonplace, and as the team I'm working on are constantly bringing this subject up on the whiteboard, I thought I'd share a few thoughts on the mapping part. Letting the Track entity you're mentioning cross all layers is probably not a good idea.

I agree that one should program to interfaces. Thus, any entity (be it an entity created via LINQ to SQL, EF, any other ORM/DAL or business object) should be modeled and exposed via an interface (or hierarchy thereof) from repositories or the like. With that in mind, here's what I think of mapping between entities. Popular ORM/DALs provide support for partial entities, external mapping files, POCOs, model-first etc.

Take this simple example:

// simplified pattern (deliberately not guarding args, and more)
interface IPageEntity { string Name { get; } }
class PageEntity : IPageEntity {}
class Page : IPage
{
    private IPageEntity entity;
    private IStrategy strategy;

    public Page(IPageEntity entity, IStrategy strategy)
    {
        this.entity = entity;
        this.strategy = strategy;
    }

    public string Name
    {
        get { return this.entity.Name; }
        set { this.strategy.CopyOnWrite(ref this.entity).Name = value; }
    }
}

One of the benefits of using an ORM/DAL is that queries are strongly typed and results are projected to strongly typed entities (concrete or anonymous). They are prime candidates for embedding inside business objects (BOs) (as DTOs), as long as they implement the required interface to be injected into the constructor. This is much more flexible (and clean) than taking a series of arguments, each corresponding to a property on the original entity. Instead of having the BO keep private members mapped to the constructor args, and subsequently mapping public properties and methods to said members, you simply map to the entity injected into the BO.

The benefit is much greater performance (less copying of args), a consistent design pattern and a layer of indirection in your BO that allows you to capture any potential changes to your entity (such as using the copy-on-write pattern that creates an exact copy of the entity, releasing it from a potential shared cache you've created by decorating the repository, from which the entity originally was retrieved).

Again, in my example one of the goals was to make use of shared (cached) entities for high performance, while making sure that there are no side effects from changing a business object (seen from the client code's perspective).

Code following the above pattern is just as testable as the code in your example (one could argue it's even easier to write tests, and maintenance is vastly reduced). Whenever I see BO constructors taking huge amounts of args I cry blood.

In my experience, this allows a strictly layered application to thrive without the mapping overhead.

Your thoughts?
2012-02-11 13:37 UTC
Hi,
What do you propose in cases where you have a client-server application with request/response over WCF and the domain model living on the server? Is there a better way than, for example, mapping from NH entities to DataContracts and from DataContracts to ViewModels?

Thanks for the post!
2012-02-13 05:46 UTC
I'm not certain the sample code you provided matches the first diagram... the diagram indicates that the Domain Layer only uses Track objects between the other layers. However you mention:

"... asks an ITrackService for the top tracks for the day and returns a list of TopTrackViewModel instances..."

The diagram really makes it look like the TrackService returns Track objects to the Controller and the Controller maps them into TopTrackViewModel objects. If that is the case... how would you communicate the Position and CssClass properties that the Domain Layer has computed between the two layers?

Also, I noticed that ITrackService.GetTopFor(DateTime date) never uses the input parameter... should it really be more like ITrackService.GetTopForToday() instead?

2012-02-13 19:10 UTC
Hi Mark,

Awesome blog! Is there an email address I can contact you in private?
2012-02-18 14:29 UTC
Moss #
What about using your domain objects in the data access layer because you're not using an ORM and doing things manually, so that at least one level of mapping can be done away with? Or is that something that would prompt the question, why would you do a silly thing like that (not using an ORM)?
2012-02-18 17:03 UTC
Anders, I've been down the road of modeling Entities or Value Objects as interfaces, and it rarely leads anywhere productive. The problem is that you end up with a lot of interfaces that mainly consist of properties (instead of methods). One day I'll attempt to write a long and formal exposition on why this is problematic, but for now I'll just hint that it violates the Law of Demeter and leads to the kind of interface coupling Jim Cooper describes in the quote in my previous blog post. Basically the problem is that such interfaces tend to be Header Interfaces.
2012-02-20 00:23 UTC
Daniel, it's really hard to answer that question without knowing why that web service exists in the first place. If the only reason it exists is to facilitate communication between client and server, you may not need a translation layer. That's basically how WCF RIA services work. However, that tightly couples the client to the service, and you can't deploy or version each independently of the other.

If the service layer exists in order to decouple the service from the client, you must translate from the internal model to a service API. However, if this is the case, keep in mind that at the service boundary, you are no longer dealing with objects at all, so translation or mapping is conceptually very important. If you have .NET on both sides it's easy to lose sight of this, but imagine that either service or client is implemented in COBOL and you may begin to understand the importance of translation.

So as always, it's a question of evaluating the advantages and disadvantages of each alternative in the concrete situation. There's no one correct answer.
2012-02-20 00:37 UTC
Dave, well, yes, you are correct that the service in the code is actually part of the UI API. An error on my part, but I hope I still got the point of the blog post across. It doesn't change the conclusion in any way.
2012-02-20 00:40 UTC
Ilias, you can write me at mark at seemann dot (my initials).
2012-02-20 00:42 UTC
Moss, that's one of the reasons I find NoSQL (particularly Event Sourcing) very attractive. There need be no mapping at the data access level - only serialization.

Personally, I'm beginning to consider ORMs the right solution to the wrong problem. A relational database is rarely the correct data store for a LoB application or your typical web site.
2012-02-20 00:46 UTC
Hi, Mark;

Pardon me if I've gotten this wrong, because I'm approaching your post as an actionscript developer who is entirely self-taught. I don't know C#, but I like to read posts on good design from whatever language.

How I might resolve this problem is to create a TrackDisplayModel Class that wraps the Track Class and provides the "enriched" methods of the position and cssClass. In AS, we have something called a Proxy Class that allows us to "repeat" methods on an inner object without writing a lot of code, so you could pass through the properties of the Track by making TrackDisplayModel extend Proxy. By doing this, I'd give up code completion and type safety, so I might not find that long term small cost worth the occasional larger cost of adding a field in more conventional ways.

Does C# have a Class that is like Proxy that could be used in this way?
2012-02-24 15:23 UTC
Excellent post. I really enjoyed reading it and it brings up a good point. Development is a world of trade-offs. Could this issue be helped with document databases such as RavenDB? You could conceivably create a JSON string that could be passed through all layers since the DB understands it intrinsically. Not sure of mapping implications but maybe some relief could be had for some applications. Unfortunately, many applications just aren't the right kinds for document databases as the technology still needs to mature some. Might not work for straight CRUD apps but I don't want to get into a document DB/RDBMS debate. Just a thought.
2012-03-02 15:22 UTC
Amy, while I think you're describing a dynamic approach with that Proxy class, it sounds actually rather close to normal inheritance to me. Basically, in C#, when you inherit from another class, all the base class' members are carried forward into the derived class. From C# 4 and onward, we also have dynamic typing, which would enable us to forego compilation and type safety for dynamic dispatching.

No matter how you'd achieve such things, I'm not too happy about doing this, as most of the variability axes we get from a multi-paradigmatic language such as C# are additive - it means that while it's fairly easy to add new capabilities to objects, it's actually rather hard to remove capabilities.

When we map from e.g. a domain model to a UI model, we should consider that we must be able to do both: add as well as subtract behavior. There may be members in our domain model we don't want the UI model to use.

Also, when the application enables the user to write as well as read data, we must be able to map both ways, so this is another reason why we must be able to add as well as remove members when traversing layers.

I'm not saying that a dynamic approach isn't viable - in fact, I find it attractive in various scenarios. I'm just saying that simply exposing a lower-level API to upper layers by adding more members to it seems to me to go against the spirit of the Single Responsibility Principle. It would just be tight coupling on the semantic level.
2012-03-12 20:41 UTC
Justin, if you pass JSON or any other type of untyped data up and down the layers, it would certainly remove the class coupling. The question is whether or not it would remove coupling in general. If you take a look at the Jim Cooper quote here, he's talking about semantic coupling, and you'd still have that because each layer would be coupled to a specific structure of the data that travels up and down.

You might also want to refer to my answer to Amy immediately above. The data structures the UI requires may be entirely different from the Domain Model, which again could be different from the data access model. The UI will often be an aggregate of data from many different data sources.

In fact, I'd consider it a code smell if you could get by with the same data structure in all layers. If one can do that, one is most likely building a CRUD application, in which case a monolithic RAD application might actually be a better architectural choice.
2012-03-12 20:48 UTC
Matt #
Mark,

You've mentioned in your blogs (here and elsewhere) and on Stack Overflow similar sentiments to your last comment:

"I'd consider it a code smell if you could get by with the same data structure in all layers. If one can do that, one is most likely building a CRUD application, in which case a monolithic RAD application might actually be a better architectural choice."

Most line of business (LoB) apps consist of CRUD operations. As such, when does an LoB app cross the line from being strictly "CRUDy" and jump into the realm where layering (or using CQRS) is preferable? Could you expand on your definition of building a CRUD application?
2012-04-16 16:31 UTC
Matt, I don't think there's any hard rule for that. Consider all the unknowns that are before you when you embark on a new project: this is one of those unknowns. In part, I base such decisions on gut feeling - you may call it experience...

What I think would be the correct thing, on the other hand, would be to make a conscious decision as a team on which way to go. If the team agrees that the project is most likely a CRUD application, then it can save itself the trouble of having all those layers. However, it's very important to stress to non-technical team members and stakeholders (project managers etc.) that there's a trade-off here: we've decided to go with RAD development. This will enable us to go fast as long as our assumption about a minimum of business logic holds true. On the other hand, if that assumption fails, we'll be going very slowly. Put it in writing and make sure those people understand this.

You can also put it another way: creating a correctly layered application is slow, but relatively safe. Assuming that you know what you're doing with layering (lots of people don't), developing a layered application removes much of that risk. Such an architecture is going to be able to handle a lot of the changes you'll throw at it, but it'll be slow going.

HTH
2012-04-16 19:02 UTC
My 2 euro cents: First of all, Track is a DTO, as it has no behaviour. If all the domain objects are like this, then you're pretty much building an interface for the db, and in this case everything is quite clear (CRUD).

If you're referring to the case where there is actually a domain to model, then CQRS (the concept) is easily the answer. However, even in such an app you'd have some parts which are CRUD. I simply like to use the appropriate tool for the job. For complex Aggregate Roots, Event Sourcing simplifies storage a lot. For 'dumb' objects, straight CRUD stuff. Of course, everything is done via a repository.

Now, for a complex domain I'd also use a message-driven architecture, so I'd have Domain events with handlers which will generate any read model required. I won't really have a mapping problem, as event sourcing will solve it for complex objects, while for simple objects at most I'll automap or use the domain entities directly.
2012-10-28 09:17 UTC
Hi Mark,

Nice blog!

All the mapping stems from querying/reading the domain model. I have come to the conclusion that this is what leads to all kinds of problems such as:

- lazy loading
- mapping
- fetching strategies

The domain model has state but should *do* stuff. As you stated, the domain needs highly cohesive aggregates to keep the object graph as short as possible. I do not use an ORM (I do my own mapping) so I do not have ORM-specific classes.

So I am 100% with you on CQRS. There are different levels to doing CQRS and I think many folks shy away from it because they think there is only the all-or-nothing asynchronous messaging way to get to CQRS. Yet there are various approaches depending on the need, e.g.:

All within the same transaction scope with no messaging (100% consistent):
- store read data in same table (so different columns) <-- probably the most typical option in this scenario
- store read data in separate table in the same database
- store read data in another table in another database

Using eventual consistency via messaging / service bus (such as Shuttle - http://shuttle.codeplex.com/ | MassTransit | NServiceBus):

- store read data in same table (so different columns)
- store read data in separate table in the same database
- store read data in another table in another database <-- probably the most typical option in this scenario

Using C# I would return the data from my query layer as a DataRow or DataTable and in the case of something more complex a DTO.

Regards,
Eben
2013-01-18 08:38 UTC

Loose Coupling and the Big Picture

Thursday, 02 February 2012 20:37:40 UTC

A common criticism of loosely coupled code is that it's harder to understand. How do you see the big picture of an application when loose coupling is everywhere? When the entire code base has been programmed against interfaces instead of concrete classes, how do we understand how the objects are wired and how they interact?

In this post, I'll provide answers on various levels, from high-level architecture over object-oriented principles to more nitty-gritty code. Before I do that, however, I'd like to pose a set of questions you should always be prepared to answer.

Mu #

My first reaction to that sort of question is: you say loosely coupled code is harder to understand. Harder than what?

If we are talking about a non-trivial application, odds are that it's going to take some time to understand the code base - whether or not it's loosely coupled. Agreed: understanding a loosely coupled code base takes some work, but so does understanding a tightly coupled code base. The question is whether it's harder to understand a loosely coupled code base?

Imagine that I'm having a discussion about this subject with Mary Rowan from my book.

Mary: “Loosely coupled code is harder to understand.”

Me: “Why do you think that is?”

Mary: “It's very hard to navigate the code base because I always end up at an interface.”

Me: “Why is that a problem?”

Mary: “Because I don't know what the interface does.”

At this point I'm very tempted to answer Mu. An interface doesn't do anything - that's the whole point of it. According to the Liskov Substitution Principle (LSP), a consumer shouldn't have to care about what happens on the other side of the interface.

However, developers used to tightly coupled code aren't used to thinking about services in this way. They are used to navigating the code base from consumer to service to understand how the two of them interact, and I will gladly admit this: in principle, that's impossible to do in a loosely coupled code base. I'll return to this subject in a little while, but first I want to discuss some strategies for understanding a loosely coupled code base.

Architecture and Documentation #

Yes: documentation. Don't dismiss it. While I agree with Uncle Bob and like-minded spirits that the code is the documentation, a two-page document that outlines the Big Picture might save you from many hours of code archeology.

The typical agile mindset is to minimize documentation because it tends to lose touch with the code base, but even so, it should be possible to maintain a two-page high-level document so that it stays up to date. Consider the alternative: if you have so much architectural churn that even a two-page overview regularly falls behind, then you probably have a bigger problem than understanding your loosely coupled code base.

Maintaining such a document isn't at odds with the agile spirit. You'll find the same advice in Lean Architecture (p. 127). Don't underestimate the value of such a document.

See the Forest Instead of the Trees #

Understanding a loosely coupled code base typically requires a shift of mindset.

Recall my discussion with Mary Rowan. The criticism of loose coupling is that it's difficult to understand which collaborators are being invoked. A developer like Mary Rowan is used to learning a code base by understanding all its myriad concrete details. In effect, while there may be ‘classes' around, there are no abstractions in place. In order to understand the behavior of a user interface element, it's necessary to also understand what's happening in the database - and everything in between.

A loosely coupled code base shouldn't be like that.

The entire purpose of loose coupling is that we should be able to reason about a part (a ‘unit', if you will) without understanding the whole.

In a tightly coupled code base, it's often impossible to see the forest for the trees. Although we developers are good at relating to details, a tightly coupled code base requires us to be able to contain the entire code base in our heads in order to understand it. As the size of the code base grows, this becomes increasingly difficult.

In a loosely coupled code base, on the other hand, it should be possible to understand smaller parts in isolation. However, I purposely wrote “should be”, because that's not always the case. Often, a so-called “loosely coupled” code base violates basic principles of object-oriented design.

RAP #

The criticism that it's hard to see “what's on the other side of an interface” is, in my opinion, central. It betrays a mindset which is still tightly coupled.

In many code bases there's often a single implementation of a given interface, so developers can be forgiven if they think about an interface as only a piece of friction that prevents them from reaching the concrete class on the other side. However, if that's the case with most of the interfaces in a code base, it signifies a violation of the Reused Abstractions Principle (RAP) more than it signifies loose coupling.

Jim Cooper, a reader of my book, put it quite eloquently on the book's forum:

“So many people think that using an interface magically decouples their code. It doesn't. It only changes the nature of the coupling. If you don't believe that, try changing a method signature in an interface - none of the code containing method references or any of the implementing classes will compile. I call that coupled”

Refactoring tools aside, I completely agree with this statement. The RAP is a test we can use to verify whether or not an interface is truly reusable - what better test is there than to actually reuse your interfaces?
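To make the idea concrete, here's a minimal sketch - the names are hypothetical, not taken from any particular code base - of what a genuinely reused abstraction looks like: one interface, several independent production implementations.

public interface IMessageSender
{
    void Send(string message);
}

public class SmtpMessageSender : IMessageSender
{
    public void Send(string message)
    {
        // Send the message as an email.
    }
}

public class MessageQueueSender : IMessageSender
{
    public void Send(string message)
    {
        // Put the message on a durable queue instead.
    }
}

A Decorator that adds, say, retry logic around any IMessageSender would be yet another reuse of the same abstraction.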

The corollary of this discussion is that if a code base is massively violating the RAP then it's going to be hard to understand. It has all the disadvantages of loose coupling with few of the benefits. If that's the case, you would gain more benefit from making it either more tightly coupled or truly loosely coupled.

What does “truly loosely coupled” mean?

LSP #

According to the LSP a consumer must not concern itself with “what's on the other side of the interface”. It should be possible to replace any implementation with any other implementation of the same interface without changing the correctness of the program.

This is why I previously said that in a truly loosely coupled code base, it isn't ‘hard' to understand “what's on the other side of the interface” - it's impossible. At design-time, there's nothing ‘behind' the interface. The interface is what you are programming against. It's all there is.

Mary has been listening to all of this, and now she protests:

Mary: “At run-time, there's going to be a concrete class behind the interface.”

Me (being annoyingly pedantic): “Not quite. There's going to be an instance of a concrete class which the consumer invokes via the interface it implements.”

Mary: “Yes, but I still need to know which concrete class is going to be invoked.”

Me: “Why?”

Mary: “Because otherwise I don't know what's going to happen when I invoke the method.”

This type of answer often betrays a much more fundamental problem in a code base.

CQS #

Now we are getting into the nitty-gritty details of class design. What would you expect that the following method does?

public List<Order> GetOpenOrders(Customer customer)

The method name indicates that it gets open orders, and the signature seems to back it up. A single database query might be involved, since this looks a lot like a read-operation. A quick glance at the implementation seems to verify that first impression:

public List<Order> GetOpenOrders(Customer customer)
{
    var orders = GetOrders(customer);
    return (from o in orders
            where o.Status == OrderStatus.Open
            select o).ToList();
}

Is it safe to assume that this is a side-effect-free method call? As it turns out, this is far from the case in this particular code base:

private List<Order> GetOrders(Customer customer)
{
    var gw = new CustomerGateway(this.ConnectionString);
    var orders = gw.GetOrders(customer);
    AuditOrders(orders);
    FixCustomer(gw, orders, customer);
    return orders;
}

The devil is in the details. What does AuditOrders do? And what does FixCustomer do? One method at a time:

private void AuditOrders(List<Order> orders)
{
    var user = Thread.CurrentPrincipal.Identity.ToString();
    var gw = new OrderGateway(this.ConnectionString);
    foreach (var o in orders)
    {
        var clone = o.Clone();
        var ar = new AuditRecord
        {
            Time = DateTime.Now,
            User = user
        };
        clone.AuditTrail.Add(ar);
        gw.Update(clone);
 
        // We don't want the consumer to see the audit trail.
        o.AuditTrail.Clear();
    }
}

OK, it turns out that this method actually makes a copy of each and every order and updates that copy, writing it back to the database in order to leave behind an audit trail. It also mutates each order before returning to the caller. Not only does this method result in an unexpected N+1 problem, it also mutates its input, and perhaps even more surprising, it's leaving the system in a state where the in-memory object is different from the database. This could lead to some pretty interesting bugs.

Then what happens in the FixCustomer method? This:

// Fixes the customer status field if there were orders
// added directly through the database.
private static void FixCustomer(CustomerGateway gw,
    List<Order> orders, Customer customer)
{
    var total = orders.Sum(o => o.Total);
    if (customer.Status != CustomerStatus.Preferred
        && total > PreferredThreshold)
    {
        customer.Status = CustomerStatus.Preferred;
        gw.Update(customer);
    }
}

Another potential database write operation, as it turns out - complete with an apology. Now that we've learned all about the details of the code, even the GetOpenOrders method is beginning to look suspect. The GetOrders method returns all orders, with the side effect that all orders are audited as having been read by the user, but the GetOpenOrders method filters the output. In the end, it turns out that we can't even trust the audit trail.

While I must apologize for this contrived example of a Transaction Script, it's clear that when code looks like that, it's no wonder that developers think it's necessary to contain the entire code base in their heads. When this is the case, interfaces are only in the way.

However, this is not the fault of loose coupling, but rather a failure to adhere to the very fundamental principle of Command-Query Separation (CQS). You should be able to tell from the method signature alone whether invoking the method will or will not have side-effects. This is one of the key messages from Clean Code: the method name and signature are an abstraction. You should be able to reason about the behavior of the method from its declaration. You shouldn't have to read the code to get an idea about what it does.

Abstractions always hide details. Method declarations do too. The point is that you should be able to read just the method declaration in order to gain a basic understanding of what's going on. You can always return to the method's code later in order to understand detail, but reading the method declaration alone should provide the Big Picture.

Strictly adhering to CQS goes a long way in enabling you to understand a loosely coupled code base. If you can reason about methods at a high level, you don't need to see “the other side of the interface” in order to understand the Big Picture.
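As a sketch of what the example above might look like with CQS applied - the method names below are hypothetical, not from the original code base - the query becomes side-effect-free, while the writes become explicit, void-returning Commands that a caller must invoke deliberately:

public List<Order> GetOpenOrders(Customer customer)
{
    // Query: returns data and has no observable side-effects.
    var gw = new CustomerGateway(this.ConnectionString);
    return (from o in gw.GetOrders(customer)
            where o.Status == OrderStatus.Open
            select o).ToList();
}

public void AuditOrderReads(IEnumerable<Order> orders)
{
    // Command: writes audit records; returns nothing.
}

public void PromoteCustomerIfEligible(Customer customer,
    IEnumerable<Order> orders)
{
    // Command: may update the customer in the database; returns nothing.
}

Whether this particular split is the right design for that domain is a separate question; the point is only that a reader can now tell the reads from the writes by looking at the signatures.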

Stack Traces #

Still, even in a loosely coupled code base with great test coverage, integration issues arise. While each class works fine in isolation, when you integrate them, sometimes the interactions between them cause errors. This is often because of incorrect assumptions about the collaborators, which often indicates that the LSP was somehow violated.

To understand why such errors occur, we need to understand which concrete classes are interacting. How do we do that in a loosely coupled code base?

That's actually easy: look at the stack trace from your error report. If your error report doesn't include a stack trace, make sure that it does in the future.

The stack trace is one of the most important troubleshooting tools in a loosely coupled code base, because it's going to tell you exactly which classes were interacting when an exception was thrown.

Furthermore, if the code base also adheres to the Single Responsibility Principle and the ideals from Clean Code, each method should be very small (under 15 lines of code). If that's the case, you can often understand the exact nature of the error from the stack trace and the error message alone. It shouldn't even be necessary to attach a debugger to understand the bug, but in a pinch, you can still do that.

Tooling #

Returning to the original question, I often hear people advocating tools such as IDE add-ins which support navigation across interfaces. Such tools might provide a menu option which enables you to “go to implementation”. At this point it should be clear that such a tool is mainly going to be helpful in code bases that violate the RAP.

(I'm not naming any particular tools here because I'm not interested in turning this post into a discussion about the merits of various specific tools.)

Conclusion #

It's the responsibility of the loosely coupled code base to make sure that it's easy to understand the Big Picture and that it's easy to work with. In the end, that responsibility falls on the developers who write the code - not the developer who's trying to understand it.


Comments

Thomas #
Mark, another great blog post. What do you think about introducing interfaces to enable unit testing? If we take into consideration only the production code, we could have a 1:1 mapping between interfaces and implementations and thus violate the RAP. But from the point of view of unit testing it's very handy. Do you think that unit testing justifies the RAP violation?

Thomas
2012-02-02 22:56 UTC
Thomas #
We could also ask ourselves "Why do we use abstraction in our code?" Abstraction is a means to isolate implementations like method bodies, so when we're implementing a MethodA which is dependent on some interface, from the MethodA perspective, we don't care about the implementation of that interface. In practice, this works pretty well, and abstractions bring welcome flexibility that makes our code more refactorable, which leads to a more maintainable program. The key is that if you need to know what implementation resides behind an abstraction, you are in trouble: not only might there be multiple different implementations, but some of these implementations might also be created or refactored in the future.
2012-02-02 23:08 UTC
If you introduce interfaces to enable testability then you are not breaking the RAP IMHO. You can't really say that you have a 1:1 mapping as both the production code and your unit tests are consumers of your API.

You can argue that the most important consumer is the production code and I definitely agree with you! But let's not underestimate the role and the importance of the automated test as this is the only consumer that should drive the design of your API.
2012-02-03 16:30 UTC
Thomas, Javi

I'm a big proponent of the concept that, with TDD, unit tests are actually the first consumer of your code, and the final application only a close second. As such, it may seem compelling to state that you're never breaking the RAP if you do TDD. However, as a knee-jerk reaction, I just don't buy that argument, which made me think about why that is the case...

I haven't thought this completely through yet, but I think many (not all) unit tests pose a special case. A very specialized Test Double isn't really a piece of reuse as much as it's a simulation of the production environment. Add into this mix any dynamic mock library, and you have a tool where you can simulate anything.

However, simulation isn't the same as reuse.

I think a better test would be if you can write a robust and maintainable Fake. If you can do that, the abstraction is most likely reusable. If not, it probably isn't.
2012-02-03 19:08 UTC
Thomas #
You're very demanding, Mark ;) Putting aside TDD and unit testing: how would you achieve loose coupling between assemblies without an interface? For example, in an MVC application, I prefer to be dependent on ISmthRepository in my controller rather than to reference the implementation of ISmthRepository, which could be SqlSmthRepository. Even if I know that there will be only one implementation?

Thanks,

Thomas
2012-02-05 09:33 UTC
I'm not saying that you can't program to interfaces, but I'm saying that if you can't reuse those interfaces, it's time to take a good hard look at them. So if you know there's only ever going to be one implementation of ISmthRepository, what does that tell you about that interface?

In any case, please refer back to the original definition of the RAP. It doesn't say that you aren't allowed to program against 1:1 interfaces - it just states that as long as it remains a 1:1 interface, you haven't proved that it's a generalization. Until that happens, it should be treated as suspect.

However, the RAP states that we should discover generalizations after the fact, which implies that we'd always have to go through a stage where we have 1:1 interfaces. As part of the Refactoring step of Red/Green/Refactor, it should be our responsibility to merge interfaces, just as it's our responsibility to remove code duplication.
2012-02-05 10:36 UTC
Justin #
I would like some clarification as some of these principles seem contradictory in nature. Allow me to explain:
1. RAP says that using an interface that has only one implementation is unnecessary.
2. Dependency inversion principle states that both client and service should depend on an abstraction.
3. Tight coupling is discouraged and often defined as depending on a class (i.e. "newing" up a class for use).

So in order to depend on an abstraction (I understand that "abstraction" does not necessarily mean interface all of the time), you need to create an interface and program to it. If you create an interface for this purpose but it only has one implementation, then it is suspect under the RAP. I understand also that the RAP refers to pointless interfaces such as "IPerson" that has a Person class as an implementation or "IProduct" that has one Product implementation.

But how about a controller that needs a business service or a business service that needs a data service? I find it easier to build a business service in isolation from a data service or controller so I create an interface for the data services I need but don't create implementations. Then I just mock the interfaces to make sure that my business service works through unit testing. Then I move on to the next layer, making sure that the services then implement the interface defined by the business layer. Thoughts? Opinions?
2012-02-05 19:11 UTC
Justin

Remember that with TDD we should move through the three phases of Red, Green and Refactor. During the Refactoring phase we typically look towards eliminating duplication. We are (or at least should be) used to doing this for code duplication. The only thing the RAP states is that we should extend that effort towards our interface designs.

Please also refer to the other comment on this page.

HTH
2012-02-05 19:38 UTC
First of all, great post!

I think one thing to consider in the whole loose coupling debate is granularity.
Not too many people talk about granularity, and without that aspect, I think it is impossible to really have enough context to say whether something is too tightly or loosely coupled. I wrote about the idea here: http://simpleprogrammer.com/2010/11/09/back-to-basics-cohesion-and-coupling-part-2/

Essentially what I am saying is that some things should be coupled. We don't want to create unneeded abstractions, because they introduce complexity. The example I use is Enterprise FizzBuzz. At the same time, we should strive to build the seams at the points of change, which in a well-designed system should align with responsibility.

This is definitely a great topic though. Could talk about it all day.
2012-03-02 14:44 UTC

SOLID is Append-only

Tuesday, 03 January 2012 14:43:47 UTC

SOLID is a set of principles that, if applied consistently, has some surprising effects on code. In a previous post I provided a sketch of what it means to meticulously apply the Single Responsibility Principle. In this article I will describe what happens when you follow the Open/Closed Principle (OCP) to its logical conclusion.

In case a refresher is required, the OCP states that a class should be open for extension, but closed for modification. It seems to me that people often forget the second part. What does it mean?

It means that once implemented, you shouldn't touch that piece of code ever again (unless you need to correct a bug).

Then how can new functionality be added to a code base? This is still possible through either inheritance or polymorphic recomposition. Since the L in SOLID signifies the Liskov Substitution Principle, SOLID code tends to be based on loosely coupled code composed into an application through copious use of interfaces - basically, Strategies injected into other Strategies and so on (also due to the Dependency Inversion Principle). In order to add functionality, you can create new implementations of these interfaces and redefine the application's Composition Root. Perhaps you'd be wrapping existing functionality in a Decorator or adding it to a Composite.
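As a minimal sketch of that mechanism - the interface and class names here are hypothetical - new behavior arrives as a new class that wraps an existing implementation, so no existing class is edited:

public interface IOrderProcessor
{
    void Process(Order order);
}

public class AuditingOrderProcessor : IOrderProcessor
{
    private readonly IOrderProcessor inner;

    public AuditingOrderProcessor(IOrderProcessor inner)
    {
        if (inner == null)
            throw new ArgumentNullException("inner");

        this.inner = inner;
    }

    public void Process(Order order)
    {
        // Record that the order was processed, then delegate
        // to the wrapped implementation.
        this.inner.Process(order);
    }
}

The Composition Root then swaps new DefaultOrderProcessor() for new AuditingOrderProcessor(new DefaultOrderProcessor()) - an addition and a recomposition, but no modification of existing classes.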

Once in a while, you'll stop using an old implementation of an interface. Should you then delete this implementation? What would be the point? At a certain point in time, this implementation was valuable. Maybe it will become valuable again. Leaving it as a potential building block seems a better choice.

Thus, if we think about working with code as a CRUD endeavor, SOLID code can be Created and Read, but never Updated or Deleted. In other words, true SOLID code is append-only code.

Example: Changing AutoFixture's Number Generation Algorithm #

In early 2011 an issue was reported for AutoFixture: Anonymous numbers were created in monotonically increasing sequences, but with separate sequences for each number type:

integers: 1, 2, 3, 4, 5, …

decimals: 1.0, 2.0, 3.0, 4.0, 5.0, …

and so on. However, the person reporting the issue thought it made more sense if all numbers shared a single sequence. After thinking about it a little while, I agreed.

Because the AutoFixture code base is fairly SOLID we decided to leave the old implementations in place and implement the new behavior in new classes.

The old behavior was composed from a set of ISpecimenBuilders. As an example, integers were generated by this class:

public class Int32SequenceGenerator : ISpecimenBuilder
{
    private int i;
 
    public int CreateAnonymous()
    {
        return Interlocked.Increment(ref this.i);
    }
 
    public object Create(object request,
        ISpecimenContext context)
    {
        if (request != typeof(int))
        {
            return new NoSpecimen(request);
        }
 
        return this.CreateAnonymous();
    }
}

Similar implementations generated decimals, floats, doubles, etc. Instead of modifying any of these classes, we left them in the code base and created a new ISpecimenBuilder that generates all numbers from a single sequence:

public class NumericSequenceGenerator : ISpecimenBuilder
{
    private int value;
 
    public object Create(object request,
        ISpecimenContext context)
    {
        var type = request as Type;
        if (type == null)
            return new NoSpecimen(request);
 
        return this.CreateNumericSpecimen(type);
    }
 
    private object CreateNumericSpecimen(Type request)
    {
        var typeCode = Type.GetTypeCode(request);
 
        switch (typeCode)
        {
            case TypeCode.Byte:
                return (byte)this.GetNextNumber();
            case TypeCode.Decimal:
                return (decimal)this.GetNextNumber();
            case TypeCode.Double:
                return (double)this.GetNextNumber();
            case TypeCode.Int16:
                return (short)this.GetNextNumber();
            case TypeCode.Int32:
                return this.GetNextNumber();
            case TypeCode.Int64:
                return (long)this.GetNextNumber();
            case TypeCode.SByte:
                return (sbyte)this.GetNextNumber();
            case TypeCode.Single:
                return (float)this.GetNextNumber();
            case TypeCode.UInt16:
                return (ushort)this.GetNextNumber();
            case TypeCode.UInt32:
                return (uint)this.GetNextNumber();
            case TypeCode.UInt64:
                return (ulong)this.GetNextNumber();
            default:
                return new NoSpecimen(request);
        }
    }
 
    private int GetNextNumber()
    {
        return Interlocked.Increment(ref this.value);
    }
}

Adding a new class in itself has no effect, so in order to recompose the default behavior of AutoFixture, we changed a class called DefaultPrimitiveBuilders by removing the old ISpecimenBuilders like Int32SequenceGenerator and instead adding NumericSequenceGenerator:

yield return new StringGenerator(() => 
    Guid.NewGuid());
yield return new ConstrainedStringGenerator();
yield return new StringSeedRelay();
yield return new NumericSequenceGenerator();
yield return new CharSequenceGenerator();
yield return new RangedNumberGenerator();
// even more builders...

NumericSequenceGenerator is the fourth class being yielded here. Before we added NumericSequenceGenerator, this class instead yielded Int32SequenceGenerator and similar classes. These were removed.

The DefaultPrimitiveBuilders class is part of AutoFixture's default Facade and is the closest we get to a Composition Root for the library. Recomposing this Facade enabled us to change the behavior of AutoFixture without modifying (other) existing classes.

As Enrico (who implemented this change) points out, the beauty is that the previous behavior is still in the box, and all it takes is a single method call to bring it back:

var fixture = new Fixture().Customize(
    new NumericSequencePerTypeCustomization());

The only class we had to modify was the DefaultPrimitiveBuilders, which is where the object graph is composed. In applications this corresponds to the Composition Root, so even in the face of SOLID code, you still need to modify the Composition Root in order to recompose the application. However, use of a good DI Container and a strong set of conventions can do much to minimize the required editing of such a class.

SOLID versus Refactoring #

SOLID is a goal I strive towards in the way I write code and design APIs, but I don't think I've ever written a significant code base which is perfectly SOLID. While I consider AutoFixture a ‘fairly' SOLID code base, it's not perfect, and I'm currently performing some design work in order to change some abstractions for version 3.0. This will require changing some of the existing types, thereby violating the OCP.

It's worth noting that as long as you can stick with the OCP you can avoid introducing breaking changes. A breaking change is also an OCP violation, so adhering to the OCP is more than just an academic exercise - particularly if you write reusable libraries.

Still, while none of my code is perfect and I occasionally have to refactor, I don't refactor much. By definition, refactoring means violating the OCP, and while I have nothing against refactoring code when it's required, I much prefer putting myself in a situation where it's rarely necessary in the first place.

I've often been derided for my lack of use of Resharper. When replying that I have little use for Resharper because I write SOLID code and thus don't do much refactoring, I've been ridiculed for being totally clueless. People don't realize the intimate relationship between SOLID and refactoring. I hope this post has helped highlight that connection.


Comments

Jon Wingfield #
I've found that it's very difficult to accomplish OCP in practice, mostly because of evolutionary design. An example is having the foresight to know when the cohesion/coupling barrier has been crossed. Also, when one gains greater insight into a domain, refactoring is necessary (even crucial). but this violates OCP as well. I find that my designs are pretty crappy up front but evolve as patterns emerge. I prefer this to up-front engineering (except in some cases), because it yields a code base that is cohesive when appropriate, but also decoupled when appropriate.
2012-01-03 16:09 UTC
Agreed, OCP is hard.

However, adhering to OCP doesn't indicate that you have to do BDUF. What it means is that once you get a 'rush of insight' (as Domain-Driven Design puts it) you don't modify existing classes. Instead, you introduce new classes to follow the new model.

This may seem wasteful, but due to the very fine-grained nature of SOLID code, it means that those classes that follow the old model (that you've just realized can be improved) are basically 'wrong' because they model the domain in the 'wrong' way. Re-implementing that part of the application's behavior while leaving the old code in place is typically more efficient because it's only going to be a very small part of the code base (again due to the granularity) and because you can do it in micro-iterations since you're not changing anything. Thus, dangerous 'big-bang' refactorings are avoided.

In any case, I never said that SOLID is easy. What I'm saying is that SOLID code has certain characteristics, and if a code base doesn't exhibit those characteristics, it's not (perfectly) SOLID. In reality, I expect that there are very few code bases that can live up to what is essentially an ideal more than a practically attainable goal.
2012-01-03 16:42 UTC
Jon Wingfield #
Not trying to spam your blog, but maybe OCP is better viewed as a means for enhancement, whereas refactoring is intended to improve the existing design. Sometimes when adding functionality, we realize that the existing design doesn't accommodate change very well, and thus we refactor (hopefully separately from implementing new functionality). I'm still a little uneasy about applying OCP when fixing bugs and especially design issues (which you already alluded to).
2012-01-03 17:33 UTC
Mark,

Irrespective of my association with ReSharper, I think that refactoring (be it with or without tools) and SOLID design are not mutually exclusive. You are basing your argument mostly on the premise that we get things right the first time, and that is not always the case. Test Driven Development, for instance, is in itself about evolving design.

As for ReSharper not being needed (or replace ReSharper with any other enhancing tool), I find it kind of amusing because it seems that there is some imaginary line that developers draw whereby what's included in Visual Studio is sufficient when it comes to refactoring. Everything else is superfluous. That is, of course, until the next version of Visual Studio includes it. And that's if we think about these types of tools as refactoring only, which is so far from the truth.

Btw, a switch statement violates OCP, and yes, it doesn't change until it does change. I'd add that normally when I violate OCP I try and make sure the tests are in place to let me know if something breaks.
2012-01-03 17:55 UTC
Jon, that's well put.

When it comes to fixing bugs, the OCP specifically states that it's OK to modify existing code, so you shouldn't be uneasy about that.
2012-01-03 19:32 UTC
Hadi, I'd never claim that it's possible to get things right on the first attempt. Looking at AutoFixture (again), I had to do a lot of refactoring and redesign to arrive at version 2.0, which is 'fairly' SOLID. Still, I have more changes in store for version 3.0, which indicates to me that it's still not SOLID - although it's vastly better than before.

Still, instead of refactoring, sometimes it makes more sense to leave the old stuff to atrophy and build the new, better API up around it. That's basically the Strangler pattern applied at the code level.

That said, there are some of the refactorings that ReSharper has that I'd love to have in my IDE. It's just that I think that already VS is too slow and heavy on my machine - even without ReSharper...
2012-01-03 19:42 UTC
You could build a whole set of tools around append-only programming; version control in particular would be a piece of cake. An editor that let you use a class or function as a starting point and then save an edited version as a new class or function (without affecting the existing version) would be quite helpful.

I tried an extremely SOLID design for a prototype recently, with very few stable dependencies and leaning on the container for almost everything. I didn't quite adhere to OCP as you've described it here, but in retrospect it would have almost eliminated regressions. There was a lot of pushback to the design as soon as we got another developer on (just before we threw away the prototype), though.

The usual complaints about being unable to see the big picture (due to container registrations being made mostly linearly) came through, and my choice to compose functions rather than objects certainly didn't help as it resulted in some quite foreign-looking code. I think tooling could have helped, but we've decided to stick to KISS and stabilise more dependencies for the upcoming releases.

On the subject of tooling, I think something that's missing from DI tooling is a graphical designer containing a tree view of what your container will resolve at runtime, with markers for missing dependencies and such. A "DIML" file, perhaps, that generates a .diml.cs or .diml.vb when saved. Then you could have a find-and-replace-style feature for replacing dependencies, respecting OCP.
2012-01-03 21:53 UTC
Andreas Triesch #
This might seem a bit like a newbie question (well, with regards to SOLID I am one) - does that mean one should mostly use 'protected virtual' methods as opposed to 'private', in case I need something just a little different? Or does the notion I might need to do so in a particular case rather imply my class might be too large (violating SRP) and I should try to achieve flexibility by breaking it up and composing my logic via DI instead of using inheritance?
2012-01-03 22:11 UTC
Let me support you, Mark, in not having much use for ReSharper.

Even though I'm using it, I'm not using it much for refactoring. Out of all the refactorings I use maybe just 2-3 (rename, extract method, move class to separate file).

My main use for ReSharper is as a test runner.

So I agree: ReSharper is a tool for developers wrestling with tons of legacy code they need to refactor. But if your code base is clean... then the need for larger rearrangements is rare. "Refactoring to deeper insight" sometimes requires such rearrangements. But this too need not be that hard, if the functional units are fine grained.
2012-01-04 10:15 UTC
Jeff, I think that the use of a DI Container and application of the OCP are perpendicular concerns.

Andreas, the 'old' definition of OCP (Meyer's) is to use inheritance to extend an existing class. However, when we favor composition over inheritance, the default way to extend a class is to apply a Decorator to it.

Ralf, I think you've nailed it. The reason why I've never felt much need for ReSharper is because I avoid legacy code like the plague. In fact, I've turned down job offers (that paid better than anything I've ever received) because they involved dealing with too much legacy code. If I were ever to deal substantially with legacy code, I might very well install ReSharper.
2012-01-04 18:30 UTC
Mark, you're right. I went off on a bit of a tangent!

I guess the only relation that I was riffing off is that a great tool for writing and composing SOLID code would help with both.
2012-01-04 19:32 UTC
You said that you don't refactor often. Does that mean that you don't practice TDD? As I understand it, refactoring is an essential step in each TDD cycle.
Refactoring aside, it seems to me that TDD practice makes you violate OCP since you start with the simplest implementation and keep improving it (hence changing existing code) to make new tests pass.
2012-03-17 08:51 UTC
Payman, I do practice TDD, and I do refactor regularly as part of the Red/Green/Refactor cycle.

What I meant (but perhaps did not explicitly state) was that once a piece of code is released to production, it changes status. That kind of code I don't often refactor.
2012-03-18 14:02 UTC

Testing Container Configurations

Wednesday, 21 December 2011 13:25:32 UTC

Here's a question I often get:

“Should I test my DI Container configuration?”

The motivation for asking mostly seems to be that people want to know whether or not their applications are correctly wired. That makes sense.

A related question I also often get is whether or not a particular container has a self-test feature? In this post I'll attempt to answer both questions.

Container Self-testing #

Some DI Containers have a method you can invoke to make it perform a consistency check on itself. As an example, StructureMap has the AssertConfigurationIsValid method that, according to the documentation, does “a full environment test of the configuration of [the] container.” It will “try to create every configured instance [...]”

Calling the method is really easy:

container.AssertConfigurationIsValid();

Such a self-test can often be an expensive operation (not only for StructureMap, but in general) because it's basically attempting to create an instance of each and every Service registered in the container. If the configuration is large, it's going to take some time, but it's still going to be faster than performing a manual test of the application.

Two questions remain: Is it worth invoking this method? Why don't all containers have such a method?

The quick answer is that such a method is close to worthless, which also explains why many containers don't include one.

To understand the answer, consider the set of all components contained in the container in this figure:

The container contains the set of components IFoo, IBar, IBaz, Foo, Bar, Baz, and Qux so a self-test will try to create a single instance of each of these seven types. If all seven instances can be created, the test succeeds.

All this accomplishes is to verify that the configuration is internally consistent. Even so, an application could require instances of the ICorge, Corge or Grault types which are completely external to the configuration, in which case resolution would fail.

Even more subtly, resolution would also fail whenever the container is queried for an instance of IQux, since this interface isn't part of the configuration, even though it's related to the concrete Qux type which is registered in the container. A self-test only verifies that the concrete Qux class can be resolved, but it never attempts to create an instance of the IQux interface.

In short, the fact that a container's configuration is internally consistent doesn't guarantee that all services required by an application can be served.
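To illustrate - here with Castle Windsor, although the container choice is incidental - a configuration can be internally consistent and still unable to serve the application. The registrations below only use the types from the figure:

var container = new WindsorContainer();
container.Register(
    Component.For<IFoo>().ImplementedBy<Foo>(),
    Component.For<IBar>().ImplementedBy<Bar>(),
    Component.For<IBaz>().ImplementedBy<Baz>(),
    Component.For<Qux>());

// A self-test resolves IFoo, IBar, IBaz and Qux, and succeeds.
// The application, however, may ask for IQux - and that throws,
// because IQux was never registered:
var qux = container.Resolve<IQux>();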

Still, you may think that at least a self-test can constitute an early warning system: if the self-test fails, surely it must mean that the configuration is invalid? Unfortunately, that's not true either.

If a container is being configured using Auto-registration/Convention over Configuration to scan one or more assemblies and register certain types contained within, chances are that ‘too many' types will end up being registered - particularly if one or more of these assemblies are reusable libraries (as opposed to application-specific assemblies). Often, the number of redundant types added is negligible, but they may make the configuration internally inconsistent. However, if the inconsistency only affects the redundant types, it doesn't matter. The container will still be able to resolve everything the current application requires.

Thus, a container self-test method is worthless.

Then how can the container configuration be tested?

Explicit Testing of Container Configuration #

Since a container self-test doesn't achieve the desired goal, how can we ensure that an application can be composed correctly?

One option is to write an automated integration test (not a unit test) for each service that the application requires. Still, if done manually, you run the risk of forgetting to write a test for a specific service. A better option is to come up with a convention so that you can identify all the services required by a specific application, and then write a convention-based test to verify that the container can resolve them all.
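As a rough sketch of such a convention-based test - here assuming an ASP.NET MVC application where ‘every non-abstract IController in the web assembly' is the convention that identifies the required services, xUnit.net as the test framework, and Castle Windsor composed by the application's own installers; HomeController is simply a stand-in for any type from that assembly:

[Fact]
public void ContainerCanResolveAllControllers()
{
    var container = new WindsorContainer();
    container.Install(FromAssembly.Containing<HomeController>());

    var controllerTypes = typeof(HomeController).Assembly
        .GetExportedTypes()
        .Where(t => typeof(IController).IsAssignableFrom(t)
            && !t.IsAbstract);

    foreach (var t in controllerTypes)
        Assert.NotNull(container.Resolve(t));
}

Because the root types are discovered by reflection, newly added Controllers are picked up automatically, so the test can't forget them.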

Will this guarantee that the application can be correctly composed?

No, it only guarantees that it can be composed - not that this composition is correct.

Even when a composed instance can be created for each and every service, many things may still be wrong:

  • Composition is just plain wrong:
    • Decorators may be missing
    • Decorators may have been added in the wrong order
    • The wrong services are injected into consumers (this is more likely to happen when you follow the Reused Abstractions Principle, since there will be multiple concrete implementations of each interface)
  • Configuration values like connection strings and such are incorrect - e.g. while a connection string is supplied to a constructor, it may not contain the correct values.
  • Even if everything is correctly composed, the run-time environment may prevent the application from working. As an example, even if an injected connection string is correct, there may not be any connection to the database due to network or security misconfiguration.

In short, a Subcutaneous Test or full System Test is the only way to verify that everything is correctly wired. However, if you have good test coverage at the unit test level, a series of Smoke Tests is all that you need at the System Test level because in general you have good reason to believe that all behavior is correct. The remaining question is whether all this correct behavior can be correctly connected, and that tends to be an all-or-nothing proposition.

Conclusion #

While it would be possible to write a set of convention-based integration tests to verify the configuration of a DI Container, the return on investment is too low, since it doesn't remove the need for a set of Smoke Tests at the System Test level.


Comments

Graham #
While you're correct that integration tests are not required, they can still provide value if they pinpoint the problem more quickly.

A failing smoke test won't always tell you exactly what went wrong (while that's a failing of the test, it's not always that easy to fix). Rather than investigating, I'd prefer to have a failing integration test that means the smoke test won't even be run.

I find the most value comes from tests that try and resolve the root of the object graph from the bootstrapped container, i.e. anything I (or a framework) explicitly try and resolve at runtime.

Their being green doesn't necessarily mean the application is fine, but their being red means it's definitely broken. It is a duplication of testing, but so are the smoke tests and some of the unit tests (hopefully!).
2011-12-22 11:18 UTC
Jan #
I am currently not quite sure whether an automated configuration test is really required or not.

The correct wiring is already tested by the DI Container itself. An error-prone configuration will be obvious in the first developer test, or at the latest in the first user test.
So, is this kind of test overhead, or is it useful or even necessary?
I probably wouldn't do these kinds of tests.
2011-12-22 20:36 UTC
Mike Bridge #
In my application, I'd like to assert that my composition root will correctly instantiate a decorator in one situation and not in another. I managed to miswire this on my first attempt and have been trying to figure out how to write a simple test to assert that I'd specified the dependencies correctly.

Would it be a worthwhile strategy to unit test my container's configuration by mocking the container's resolver? I'd like to be able to run my registration on a container, then assert that the mocked resolver received the correct "Resolve" messages in the correct order. Currently I'm validating this with an integration test, but I was thinking that this would be much simpler - if my container supports it.
2011-12-28 19:23 UTC
That sounds like a brittle test, because instead of testing the 'what' you'd be testing the 'how'.
2011-12-29 07:50 UTC
Mike Bridge #
Thanks, that's probably a correct assessment. I was intending it to be an interaction test which asserts "given a certain input to my DI container initializer, my decorator should be created when the container instantiates my graph".

I'll go back and think a bit more about how I can test the resulting behaviour instead of the implementation.
2011-12-29 17:12 UTC
Hi
Nice post! We have a set of automated developer acceptance tests which start the system in a machine.specifications test, execute business commands, shut down the system, and do business asserts on the external subsystems, which are stubbed away depending on the feature under test. With that we have an implicit container test: the system boots and behaves as expected. If someone adds new components for that feature which are not properly registered, the tests will fail.

Daniel
2012-01-14 07:58 UTC
Thomas #
Hi Mark,

While I agree on the principle, one question came to my mind related to the first part. I don't understand why resolving IQux is an issue, because even at runtime it's not required and not registered?

Thanks

Thomas
2012-02-03 18:16 UTC
Thomas, the point of IQux is that, in the example, the container doesn't know about the IQux interface. A container self-test is only going to walk through all components to see if they can be resolved. However, such a test will never attempt to resolve IQux, since the container doesn't know about it.

So, if an application code base requires IQux, the application is going to fail even when the container self-test succeeds.
2012-02-03 19:30 UTC
Thomas #
Mark, I understand that. However, even in the production scenario, when a code base requires IQux at some point in time (for example, Baz requires IQux in its constructor), it should be registered in the container; otherwise it won't work. I think I must be missing something.
2012-02-05 11:10 UTC
Yes - that's why a container self-test is worthless. The container doesn't know about the requirements of you application. All it can do is to test whether or not it's internally consistent. It doesn't tell you anything about its ability to resolve the object graphs your application is going to need.
2012-02-05 12:02 UTC
Rajkumar Srinivasan #
Mark, great post. In order to avoid violating the Reused Abstractions Principle, I infer from some of your other posts that we need to provide a Null Object implementation for all interfaces or abstractions. I am just trying to confirm if my inference is correct.
2012-06-05 09:10 UTC
I'm not sure I can agree with that - that sounds a bit like Cargo Culting to me... The point of the RAP is that it tells us something about the degree of abstraction of our interfaces. If the sole purpose of adding a Null Object implementation is to adhere to the RAP, it may address the mechanics of the RAP, but not the spirit.
2012-06-05 09:18 UTC

You have made a very basic and primitive argument to sell a complex but feasible process as pointless:

such a method is close to worthless

and without a working example of a better way you explain why you are right. I am an avid reader of your posts and often reference them but IMO this particular argument is not well reasoned.

Your opening argument explains why you may have an issue when using the Service Locator anti-pattern:

an application could require instances of the ICorge, Corge or Grault types which are completely external to the configuration, in which case resolution would fail.

Assertions such as the following would ideally be specified in a set of automated tests regardless of the method of composition

Decorators may be missing

&

Decorators may have been added in the wrong order

And I fail to see how Pure DI would make the following into lesser issues

Configuration values like connection strings and such are incorrect - e.g. while a connection string is supplied to a constructor, it may not contain the correct values

&

the run-time environment may prevent the application from working


My response was prompted by a statement made by you on stackoverflow. You are most highly regarded when it comes to .NET and DI and I feel statements such as "Some people hope that you can ask a DI Container to self-diagnose, but you can't." are very dangerous when they are out-of-date.

Simple Injector will diagnose a number of issues beyond "do I have a registration for X". I'm not claiming that these tests alone are fool-proof validation of your configuration, but they are a set of built-in tests for issues that can occur in any configuration (including a Pure DI configuration), and these combined tests are far from worthless ...

  • LifeStyle Mismatches: The component depends on a service with a lifestyle that is shorter than that of the component
  • Short Circuited Dependencies: The component depends on an unregistered concrete type and this concrete type has a lifestyle that is different than the lifestyle of an explicitly registered type that uses this concrete type as its implementation
  • Potential Single Responsibility Violations: The component depends on too many services
  • Container-registered Type: A concrete type that was not registered explicitly and was not resolved using unregistered type resolution, but was created by the container using the default lifestyle
  • Torn Lifestyle: Multiple registrations with the same lifestyle map to the same component
  • Ambiguous Lifestyles: Multiple registrations with the different lifestyles map to the same component
  • Disposable Transient Components: A registration has been made with the Transient lifestyle for a component that implements IDisposable

Your claim that "such a method is close to worthless" may be true for the majority of the available .NET DI Containers but it does not take Simple Injector's diagnostic services into consideration.

2015-12-11 09:15 UTC

Factory Overload

Monday, 19 December 2011 13:04:55 UTC

Recently I received a question from Kelly Sommers about good ways to refactor away from Factory Overload. Basically, she's working in a code base where there's an explosion of Abstract Factories which seems to be counter-productive. In this post I'll take a look at the example problem and propose a set of alternatives.

An Abstract Factory (and its close relative Product Trader) can serve as a solution to various challenges that come up when writing loosely coupled code (chapter 6 of my book describes the most common scenarios). However, introducing an Abstract Factory may be a leaky abstraction, so don't do it blindly. For example, an Abstract Factory is rarely the best approach to address lifetime management concerns. In other words, the Abstract Factory has to make sense as a pure model element.

That's not the case in the following example.

Problem Statement #

The question centers around a code base that integrates with a database closely guarded by DBA police. Apparently, every single database access must happen through a set of very fine-grained stored procedures.

For example, to update the first name of a user, a set of stored procedures exist to support this scenario, depending on the context of the current application user:

User type     Stored procedure               Parameter name
Admin         update_admin_firstname         adminFirstName
Guest         update_guest_firstname         guestFirstName
Regular       update_regular_firstname       regularFirstName
Restricted    update_restricted_firstname    restrictedFirstName

As this table demonstrates, not only is there a stored procedure for each user context, but the parameter name differs as well. However, in this particular case it seems as though there's a pattern to the names.

If this pattern is consistent, I think the easiest way to address these variations would be to algorithmically build the strings from a couple of templates.
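For example (the helper below is hypothetical - it simply encodes the naming pattern from the table above):

public static class FirstNameSproc
{
    public static string GetName(string userType)
    {
        // E.g. "Admin" becomes "update_admin_firstname".
        return string.Format(
            "update_{0}_firstname",
            userType.ToLowerInvariant());
    }

    public static string GetParameterName(string userType)
    {
        // E.g. "Admin" becomes "adminFirstName".
        return userType.ToLowerInvariant() + "FirstName";
    }
}

From those two strings, a single shared implementation could build the appropriate SqlCommand for any user type, and the class explosion would disappear.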

However, this is not the route taken by Kelly's team, so I assume that things are more complicated than that; apparently, a templated approach is not viable, so for the rest of this article I'm going to assume that it's necessary to write at least some code to address each case individually.

The current solution that Kelly's team has implemented is to use an Abstract Factory (Product Trader) to translate the user type into an appropriate IUserFirstNameModifier instance. From the consuming code, it looks like this:

var modifier = factory.Create(UserTypes.Admin);
modifier.Commit("first");

where the factory variable is an instance of the IUserFirstNameModifierFactory interface. This is certainly loosely coupled, but looks like a leaky abstraction. Why is a factory needed? It seems that its single responsibility is to translate a UserTypes instance (an enum) into an IUserFirstNameModifier. There's a code smell buried here - try to spot it before you read on :)

Proposed Solution #

Kelly herself suggests an alternative involving a concrete Builder which can create instances of a single concrete UserFirstNameModifier with or without an implicit conversion:

// Implicit conversion.
UserFirstNameModifier modifier1 = 
    builder.WithUserType(UserTypes.Guest);
 
// Without implicit conversion.
var modifier2 = builder
    .WithUserType(UserTypes.Restricted)
    .Create();

While this may seem to reduce the number of classes involved, it has several drawbacks:

  • First of all, the Fluent Builder pattern implies that you can forgo invoking any of the WithXyz methods (WithUserType) and just accept all the default values encapsulated in the builder. This again implies that there's a default user type, which may or may not make sense in that particular domain. Looking at Kelly's code, UserTypes is an enum (and thus has a default value), so if WithUserType isn't invoked, the Create method defaults to UserTypes.Admin. That's a bit too implicit for my taste.
  • Since all involved classes are now concrete, the proposed solution isn't extensible (and by corollary hard to unit test).
  • The builder is essentially a big switch statement.

Both the current implementation and the proposed solution involve passing an enum as a method parameter to a different class. If you've read and memorized Refactoring, you should by now have recognized both a code smell and the remedy.

Alternative 1a: Make UserType Polymorphic #

The code smell is Feature Envy and a possible refactoring is to replace the enum with a Strategy. In order to do that, an IUserType interface is introduced:

public interface IUserType
{
    IUserFirstNameModifier CreateUserFirstNameModifier();
}

Usage becomes as simple as this:

var modifier = userType.CreateUserFirstNameModifier();

Obviously, more methods can be added to IUserType to support other update operations, but care should be taken to avoid creating a Header Interface.

While this solution is much more object-oriented, I'm still not quite happy with it, because apparently, the context is a CQRS style architecture. Since an update operation is essentially a Command, why model the implementation along the lines of a Query? Both the Abstract Factory and Factory Method patterns represent Queries, so it seems redundant in this case. It should be possible to apply the Hollywood Principle here.

Alternative 1b: Tell, Don't Ask #

Why have the user type return a modifier? Why can't it perform the update itself? The IUserType interface should be changed to something like this:

public interface IUserType
{
    void CommitUserFirstName(string firstName);
}

This makes it easier for the consumer to commit the user's first name because it can be done directly on the IUserType instance instead of first creating the modifier.

It also makes it much easier to unit test the consumer because there's no longer a mix of Command and Queries within the same method. From Growing Object-Oriented Software we know that Queries should be modeled with Stubs and Commands with Mocks, and if you've ever tried mixing the two you know that it's a sort of interaction that should be minimized.
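As an illustration only - the class below is hypothetical and assumes plain ADO.NET plus the stored procedure and parameter names from the table above - an Admin implementation of this interface could look something like this:

public class AdminUserType : IUserType
{
    private readonly string connectionString;

    public AdminUserType(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void CommitUserFirstName(string firstName)
    {
        using (var conn = new SqlConnection(this.connectionString))
        using (var cmd = new SqlCommand("update_admin_firstname", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // Parameter name taken from the table above.
            cmd.Parameters.AddWithValue("@adminFirstName", firstName);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

The Guest, Regular and Restricted variations would be similar classes, each encapsulating its own stored procedure and parameter name.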

Alternative 2a: Distinguish by Type #

While I personally like alternative 1b best, it may not be practical in all situations, so it's always valuable to examine other alternatives.

The root cause of the problem is that there's a lot of stored procedures. I want to reiterate that I still think that the absolutely easiest solution would be to generate a SqlCommand from a string template, but given that this article assumes that this isn't possible or desirable, it follows that code must be written for each stored procedure.

Why not simply define an interface for each one? As an example, to update the user's first name in the context of being an ‘Admin' user, this Role Interface can be used:

public interface IUserFirstNameAdminModifier
{
    void Commit(string firstName);
}

Similar interfaces can be defined for the other user types, such as IUserFirstNameRestrictedModifier, IUserFirstNameGuestModifier and so on.

This is a very simple solution; it's easy to implement, but risks violating the Reused Abstractions Principle (RAP).

Alternative 2b: Distinguish by Generic Type #

The problem with introducing interfaces like IUserFirstNameAdminModifier, IUserFirstNameRestrictedModifier, IUserFirstNameGuestModifier etc. is that they differ only by name. The Commit method is the same for all these interfaces, so this seems to violate the RAP. It'd be better to merge all these interfaces into a single interface, which is what Kelly's team is currently doing. However, the problem with this is that the type carries no information about the role that the modifier is playing.

Another alternative is to turn the modifier interface into a generic interface like this:

public interface IUserFirstNameModifier<T> 
    where T : IUserType
{
    void Commit(string firstName);
}

The IUserType is a Marker Interface, so .NET purists are not going to like this solution, since the .NET Type Design Guidelines recommend against using Marker Interfaces. However, it's impossible to constrain a generic type argument against an attribute, so the party line solution is out of the question.

This solution ensures that consumers can now have dependencies on IUserFirstNameModifier<AdminUserType>, IUserFirstNameModifier<RestrictedUserType>, etc.
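A consumer's constructor signature then documents the role it needs. A small sketch, with AdminWorkflow being a hypothetical consumer and AdminUserType the marker class:

public class AdminWorkflow
{
    private readonly IUserFirstNameModifier<AdminUserType> modifier;

    public AdminWorkflow(
        IUserFirstNameModifier<AdminUserType> modifier)
    {
        this.modifier = modifier;
    }

    public void ChangeFirstName(string firstName)
    {
        // The generic argument communicates the role; the call is the same.
        this.modifier.Commit(firstName);
    }
}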

However, the need for a marker interface gives off another odeur.

Alternative 3: Distinguish by Role #

The problem at the heart of alternative 2 is that it attempts to use the type of the interfaces as an indicator of the roles that Services play. It seems that making the type distinct works against the RAP, but when the RAP is applied, the type becomes ambiguous.

However, as Ted Neward points out in his excellent series on Multiparadigmatic .NET, the type is only one axis of variability among many. Perhaps, in this case, it may be much easier to use the name of the dependency to communicate its role instead of the type.

Given a single, ambiguous IUserFirstNameModifier interface (just as in the original problem statement), a consumer can distinguish between the various roles of modifiers by their names:

public partial class SomeConsumer
{
    private readonly IUserFirstNameModifier adminModifier;
    private readonly IUserFirstNameModifier guestModifier;
 
    public SomeConsumer(
        IUserFirstNameModifier adminModifier,
        IUserFirstNameModifier guestModifier)
    {
        this.adminModifier = adminModifier;
        this.guestModifier = guestModifier;
    }
 
    public void DoSomething()
    {
        if (this.UseAdmin)
            this.adminModifier.Commit("first");
        else
            this.guestModifier.Commit("first");
    }
}

Now it's entirely up to the Composition Root to compose SomeConsumer with the correct modifiers, and while this can be done manually, it's an excellent case for a DI Container and a bit of Convention over Configuration.
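
As a minimal sketch, manual composition in the Composition Root might look like this, where AdminModifier and GuestModifier are hypothetical implementations of the single IUserFirstNameModifier interface:

var consumer = new SomeConsumer(
    new AdminModifier(),
    new GuestModifier());

A container could achieve the same result by convention, for example by matching constructor parameter names to registered components, so that adding support for a new user type wouldn't require touching the composition code by hand.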

Conclusion #

I'm sure that if I'd spent more time analyzing the problem I could have come up with more alternatives, but this post is becoming long enough already.

Of the alternatives I've suggested here, I prefer 1b or 3, depending on the exact requirements.


Comments

Nice work, Mark.

Personally I prefer 1b over 3. The if-statement in option 3 looks a bit suspicious to me as one day it might give rise to some maintenance if more user types have to be supported by the SomeConsumer type, so I am afraid it may violate the open/closed principle. Option 1b looks more straight forward to me.
2011-12-21 12:48 UTC
Travis #
Wow, those DBA-enforced sprocs and security measures are friction no developer should have to face.

Sorry, way OT.
2011-12-21 17:52 UTC
Emanuel Pasat #
I have a similar situation and it might be a good occasion to clarify it.
I'm trying to implement a visitor pattern over a list of visitee objects, constructed with a factory.

A visitee requires some external services, so it might be reasonable to have them injected already by the IoC container in the _factory (VisiteeFactory). isLast is an example of other contextual information, outside of the dto, but required to create other visitee types.
Given that this is a bounded context, how can I improve this design?

Pseudocode follows:

IVisiteeFactory
{
    VisiteeAdapterBase Create(Dto dto, bool isLast);
}

VisiteeFactory : IVisiteeFactory
{
    ctor VisiteeFactory(IExternalService service)

    public VisiteeAdapterBase Create(Dto dto, bool isLast)
    {
        // lot of switches and ifs to determine the type of VisiteeAdapterBase
        if (isLast && dto.Type == "1") {
            return new Type1VisiteeAdapter(... dto props...);
        }
    }
}

ConsumerCtor(VisiteeFactory factory, List<Dto> dtoList)
{
    // guards
    _factory = factory;
    _dtoList = dtoList;
}

ConsumerDoStuff
{
    foreach (var dto in _dtoList) {
        // isLast is additional logic, outside of dto
        var visiteeAdapter = _factory.Create(dto, isLast);
        visiteeAdapter.Accept(visitor);
    }
}

2011-12-22 10:06 UTC
Too many classes/interfaces, why not use a simple lookup table?
2012-01-09 06:48 UTC
Eber, let me quote myself from this particular post: "I think the easiest way to address these variations would be to algorithmically build the strings from a couple of templates". A lookup table falls into that category, so I agree that such a thing would be easier if at all possible.

The whole premise of the rest of the post is that for some reason, it's more complicated than that...
2012-01-09 19:58 UTC

Polymorphic Consistency

Wednesday, 07 December 2011 08:40:21 UTC

Asynchronous message passing combined with eventual consistency makes it possible to build very scalable systems. However, sometimes eventual consistency isn't appropriate in parts of the system, while it's acceptable in other parts. How can a consistent architecture be defined to fit both ACID and eventual consistency? This article provides an answer.

The case of an online game #

Last week I visited Pixel Pandemic, a company that produces browser-based MMORPGs. Since each game world has lots of players who can all potentially interact with each other, scalability is very important.

In traditional line of business applications, eventual consistency is often an excellent fit because the application is a projection of the real world. My favorite example is an inventory system: it models what's going on in one or more physical warehouses, but the real world is the ultimate source of truth. A warehouse worker might accidentally drop and damage some of the goods, in which case the application must adjust after the fact.

In other words, the information contained within line of business applications tends to lag behind the real world. It's impossible to guarantee that the application is always consistent with the real world, so eventual consistency is a natural fit.

That's not the case with an online game world. The game world itself is the source of truth, and it must be internally consistent at all times. As an example, in Zombie Pandemic, players fight against zombies and may take damage along the way. Players can heal themselves, but they would prefer (I gather) that the healing action takes place immediately, and not some time in the future where the character might be dead. Similarly, when a player hits a zombie, they'd prefer to apply the damage immediately. (However, I think that even here, eventual consistency might provide some interesting game mechanics, but that's another discussion.)

While discussing these matters with the nice people in Pixel Pandemic, it turned out that while some parts of the game world have to be internally consistent, it's perfectly acceptable to use eventual consistency in other cases. One example is the game's high score table. While a single player should have a consistent view of his or her own score, it's acceptable if the high score table lags a bit.

At this point it seemed clear that this particular online game could use an appropriate combination of ACID and eventual consistency, and I think this conclusion can be generalized. The question now becomes: how can a consistent architecture encompass both types of consistency?

Problem statement #

With the above example scenario in mind the problem statement can be generalized:

Given that an application should apply a mix of ACID and eventual consistency, how can a consistent architecture be defined?

Keep in mind that ACID consistency implies that all writes to a transactional resource must take place as a blocking method call. This seems to be at odds with the concept of asynchronous message passing that works so well with eventual consistency.

However, an application architecture where blocking ACID calls are fundamentally different from asynchronous message passing isn't really an architecture at all. Developers will have to decide up-front whether or not a given operation is synchronous, so the 'architecture' offers little implementation guidance. The end result is likely to be a heterogeneous mix of Services, Repositories, Units of Work, Message Channels, etc. A uniform principle will be difficult to distill, and the whole thing threatens to devolve into Spaghetti Code.

The solution turns out to be not at all difficult, but it requires that we invert our thinking a bit. Most of us tend to think about synchronous code first. When we think about code performing synchronous work it seems difficult (perhaps even impossible) to retrofit asynchrony to that model. On the other hand, the converse isn't true.

Given an asynchronous API, it's trivial to provide a synchronous, blocking implementation.

Adopting an architecture based on asynchronous message passing (the Pipes and Filters architecture) enables both scenarios. Eventual consistency can be achieved by passing messages around on persistent queues, while ACID consistency can be achieved by handling a message in a blocking call that enlists a (potentially distributed) transaction.
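
The examples below rely on an IChannel<T> abstraction. Its exact definition isn't shown in this article, but a minimal version can be assumed to look like this:

public interface IChannel<T>
{
    // A void Send method: the caller can't tell whether the message
    // is handled synchronously or asynchronously.
    void Send(T message);
}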

An example seems to be in order here.

Example: keeping score #

In the online game world, each player accumulates a score based on his or her actions. From the perspective of the player, the score should always be consistent. When you defeat the zombie boss, you want to see the result in your score right away. That sounds an awful lot like the Player is an Aggregate Root and the score is part of that Entity. ACID consistency is warranted whenever the Player is updated.

On the other hand, each time a score changes it may influence the high score table, but this doesn't need to be ACID consistent; eventual consistency is fine in this case.

Once again, polymorphism comes to the rescue.

Imagine that the application has a GameEngine class that handles updates in the game. Using an injected IChannel<ScoreChangedEvent> it can update the score for a player as simply as this:

/* Lots of other interesting things happen
    * here, like calculating the new score... */
 
var cmd =
    new ScoreChangedEvent(this.playerId, score);
this.pointsChannel.Send(cmd);

The Send method returns void, so it's a good example of a naturally asynchronous API. However, the implementation must do two things:

  • Update the Player Aggregate Root in a transaction
  • Update the high score table (eventually)

That's two different types of consistency within the same method call.

The first step to enable this is to employ the trusty old Composite design pattern:

public class CompositeChannel<T> : IChannel<T>
{
    private readonly IEnumerable<IChannel<T>> channels;
 
    public CompositeChannel(params IChannel<T>[] channels)
    {
        this.channels = channels;
    }
 
    public void Send(T message)
    {
        foreach (var c in this.channels)
        {
            c.Send(message);
        }
    }
}

With a Composite channel it's possible to compose a polymorphic mix of IChannel<T> implementations, some blocking and some asynchronous.

ACID write #

To update the Player Aggregate Root a simple Adapter writes the event to a persistent data store. This could be a relational database, a document database, a REST resource or something else - it doesn't really matter exactly which technology is used.

public class PlayerStoreChannel : 
    IChannel<ScoreChangedEvent>
{
    private readonly IPlayerStore store;
 
    public PlayerStoreChannel(IPlayerStore store)
    {
        this.store = store;
    }
 
    public void Send(ScoreChangedEvent message)
    {
        this.store.Save(message.PlayerId, message);
    }
}

The important thing to realize is that the IPlayerStore.Save method will be a blocking method call - perhaps wrapped in a distributed transaction. This ensures that updates to the Player Aggregate Root always leave the data store in a consistent state. Either the operation succeeds or it fails during the method call itself.
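
As a rough sketch of such a blocking implementation, the DbPlayerStore used later in this article might look something like the following. The player ID type, the Score property on the event, the table schema and the use of TransactionScope are all assumptions on my part:

public class DbPlayerStore : IPlayerStore
{
    private readonly string connectionString;

    public DbPlayerStore(string connectionString)
    {
        if (connectionString == null)
            throw new ArgumentNullException("connectionString");

        this.connectionString = connectionString;
    }

    public void Save(int playerId, ScoreChangedEvent message)
    {
        // Blocking, transactional write: either it succeeds here,
        // or it throws before the method returns.
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(this.connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Players SET Score = @score WHERE PlayerId = @id",
            conn))
        {
            cmd.Parameters.AddWithValue("@score", message.Score);
            cmd.Parameters.AddWithValue("@id", playerId);
            conn.Open();
            cmd.ExecuteNonQuery();
            scope.Complete();
        }
    }
}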

This takes care of the ACID consistent write, but the application must also update the high score table.

Asynchronous write #

Since eventual consistency is acceptable for the high score table, the message can be transmitted over a persistent queue to be picked up by a background process.

A generic class can serve as an Adapter over an IQueue abstraction:

public class QueueChannel<T> : IChannel<T>
{
    private readonly IQueue queue;
    private readonly IMessageSerializer serializer;
 
    public QueueChannel(IQueue queue,
        IMessageSerializer serializer)
    {
        this.queue = queue;
        this.serializer = serializer;
    }
 
    public void Send(T message)
    {
        this.queue.Enqueue(
            this.serializer.Serialize(message));
    }
}

Obviously, the Enqueue method is another void method. In the case of a persistent queue, it'll block while the message is being written to the queue, but that will tend to be a fast operation.
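
On the receiving end, a background process can dequeue the messages and update the high score table at its own pace. The following is only a sketch: the IHighScoreTable abstraction, the Dequeue method and the generic Deserialize method are assumptions of mine, since only the Enqueue and Serialize members appear above:

public class HighScoreUpdater
{
    private readonly IQueue queue;
    private readonly IMessageSerializer serializer;
    private readonly IHighScoreTable table;

    public HighScoreUpdater(IQueue queue,
        IMessageSerializer serializer,
        IHighScoreTable table)
    {
        this.queue = queue;
        this.serializer = serializer;
        this.table = table;
    }

    public void HandleNext()
    {
        // The high score table is updated eventually, lagging behind
        // the ACID-consistent Player data.
        var message = this.serializer
            .Deserialize<ScoreChangedEvent>(this.queue.Dequeue());
        this.table.UpdateScore(message.PlayerId, message.Score);
    }
}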

Composing polymorphic consistency #

Now all the building blocks are available to compose both channel implementations into the GameEngine via the CompositeChannel. That might look like this:

var playerConnString = ConfigurationManager
    .ConnectionStrings["player"].ConnectionString;
 
var gameEngine = new GameEngine(
    new CompositeChannel<ScoreChangedEvent>(
        new PlayerStoreChannel(
            new DbPlayerStore(playerConnString)),
        new QueueChannel<ScoreChangedEvent>(
            new PersistentQueue("messageQueue"),                        
            new JsonSerializer())));

When the Send method is invoked on the channel, it'll first invoke a blocking call that ensures ACID consistency for the Player, followed by asynchronous message passing for eventual consistency in other parts of the application.

Conclusion #

Even when parts of an application must be implemented in a synchronous fashion to ensure ACID consistency, an architecture based on asynchronous message passing provides a flexible foundation that enables you to polymorphically mix both kinds of consistency in a single method call. From the perspective of the application layer, this provides a consistent and uniform architecture because all mutating actions are modeled as commands and events encapsulated in messages.


Comments

Thanks a lot for mentioning us here at Pixel Pandemic and for your insights Mark! I very much agree with your conclusion and at this point we're discussing an architectural switch to what you're outlining here (a form of polymorphic consistency) and an event sourcing model for our persistent storage needs.

We're working on ways to make as many aspects of our games as possible fit with an eventual consistency model, e.g. simply by changing the way we communicate information about the virtual game world state to players (to put them in a frame of mind in which eventual consistency fits naturally with their perception of the game state).

Looking very much forward to meeting with you again soon and discussing more details!
2011-12-07 09:46 UTC
Jake #
Could you also use the async CTP stuff to do it all in a single command, so that you are not blocking while waiting for the persistent store to do I/O, and then when it calls back, push the message onto the message queue and return? If you were using something like async controllers in MVC 4, it would mean you could do something like registering a user, which saves them to the database, then pass this event information onto the persistent queue so a backend could pick it up and send emails, and do other tasks that take longer to execute.

await this.store.Save(message.PlayerId, message);
this.queue.Enqueue(this.serializer.Serialize(message));

Keen to hear your thoughts
Jake
2011-12-08 13:38 UTC
Why not? You can combine the async functionality with the approach described above. It could make the application more efficient, since it would free up a thread while the first transaction is being completed.

However, while the async CTP makes it easier to write asynchronous code, it doesn't help with blocking calls. It may be more efficient, but not necessarily faster. You can't know whether or not a transaction has committed until it actually happens, so you still need to wait for that before you decide whether or not to proceed.

BTW, F# has had async support since its inception, so it's interesting to look towards what people are doing with that. Agents (the F# word for Actors) seem to fit into that model pretty well, and as far as I can tell, an Agent is simply an in-memory asynchronous worker process.
2011-12-08 14:07 UTC
Hi Mark, firstly: great post.

I do have a question, though. I can see how this works for all future commands, because they will all need to load the aggregate root to work on it, and that will be ACID at all times. What I'm not sure about is how that translates to the query side of the coin - where the query is *not* the high-score table, but must be immediately available on-screen.

Even if hypothetical, imagine the screen has a typical Heads-Up-Display of relevant information - stuff like 'ammo', 'health' and 'current score'. These are view concerns and will go down the query arm of the CQRS implementation. For example, the view-model backing this HUD could be stored in document storage for the player. This is, then, subject to eventual consistency and is not ACID, right?

I'm clearly not 'getting' this bit of the puzzle at the moment, hopefully you can enlighten me.
2011-12-14 21:23 UTC
A HUD is exactly the sort of scenario that must be implemented by a synchronous write. If you want to be sure that the persisted data is ACID consistent, it must be written as a synchronous, blocking operation. This means that once the query side comes along, it can simply read from the same persistent store because it's always up to date. That sort of persisted data isn't eventually consistent - it's atomically consistent.
2011-12-21 08:08 UTC

TDD improves reusability

Thursday, 10 November 2011 16:55:10 UTC

There's this meme going around that software reuse is a fallacy. Bollocks! The 'reuse is a fallacy' meme is a fallacy :) To be fair, I'm not claiming that everything can and should be reused, but my claim is that all code produced by Test-Driven Development (TDD) is being reused. Here's why:

When tests are written first, they act as a kind of REPL. Tests tease out the API of the SUT, as well as its behavior. At this point in the development process, the tests serve as a feedback mechanism. Only later, when the tests and the SUT stabilize, will the tests be repurposed (dare I say 'reused'?) as regression tests. In other words: over time, tests written during TDD have more than one role:

  1. Feedback mechanism
  2. Regression test

Each test plays one of these roles at a time, but not both.

While the purpose of TDD is to evolve the SUT, the process produces two types of artifacts:

  1. Tests
  2. Production code

Notice how the tests appear before the production code; in a sense, the production code is an artifact of the test code.

The unit tests are the first client of the production API.

When the production code is composed into an application, that application becomes the second client, so it reuses the API. This is a very beneficial effect of TDD, and probably one of the main reasons why TDD, if done correctly, produces code of high quality.
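
As a trivial illustration (the Calculator class and both of its clients below are made up for this example), the same Add method ends up with at least two distinct clients:

// Hypothetical SUT evolved through TDD.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

// First client: the unit test that teased out the API (NUnit syntax).
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void AddReturnsCorrectSum()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Second client: the production application reusing the same API.
public class ShoppingBasket
{
    public int CalculateTotal(int itemPrice, int shipping)
    {
        return new Calculator().Add(itemPrice, shipping);
    }
}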

A colleague once told me (when referring to scale-out) that the hardest step is to go from one server to two servers, and I've later found that principle to apply much more broadly. Generalizing from a single case to two distinct cases is often the hardest step, and it becomes much easier to generalize further from two to an arbitrary number of cases.

This explains why TDD is such an efficient process. Apart from the beneficial side effect of producing a regression test suite, it also ensures that at the time the API goes into production, it's already being shared (or reused) between at least two distinct clients. If, at a later time, it becomes necessary to add a third client, the hard part is already done.

TDD produces reusable code because the production application reuses the API which was realized by the tests.


Comments

Martin #
I am a junior developer and I am doing TDD for a small project right now and I can only agree. My code looks much better because I really use it instead of making assumptions about how it should be used (ADD - Assumption Driven Development)
2011-11-10 18:59 UTC
Hi all TDD fans.
If you are using NUnit for TDD you may find the NUnit.Snippets NuGet package useful - "almost every assert is only few keystrokes away" TM ;)
http://writesoft.wordpress.com/2011/08/14/nunit-snippets/
http://nuget.org/List/Packages/NUnit.Snippets
2011-11-11 12:03 UTC
I think that you're equating *using* a class with reusing a class - the two aren't the same.
2011-11-16 23:04 UTC
Simple #
Hello Mark!

Do you use any test coverage software?

Are there any free test coverage tools that are worth using? )
2012-05-11 07:11 UTC
Simple #
Or if you don't know of any free tools - maybe some commercial tools - but not very expensive ))

I have found, for example, "dotCover" from JetBrains - 140, it's ok for the company )
2012-05-11 09:13 UTC
I rarely use code coverage tools. Since I develop code almost exclusively with TDD, I know my coverage is good.

I still occasionally use code coverage tools when I work in a team environment, so I have nothing against them. When I do, I just use the coverage tool which is built into Visual Studio. When used with the TestDriven.NET Visual Studio add-in it's quite friction-less.
2012-05-11 18:56 UTC

Independency

Tuesday, 08 November 2011 15:29:05 UTC

Now that my book about Dependency Injection is out, it's only fitting that I also invert my own dependencies by striking out as an independent consultant/advisor. In the future I'm hoping to combine my writing and speaking efforts, as well as my open source interests, with helping out clients write better code.

If you'd like to get help with Dependency Injection or Test-Driven Development, SOLID, API design, application architecture or one of the other topics I regularly cover here on my blog, I'm available as a consultant worldwide.

When it comes to Windows Azure, I'll be renewing my alliance with my penultimate employer Commentor, so you can also hire me as part of a larger deal with Commentor.

In case you are wondering what happened to my employment with AppHarbor, I resigned from my position there because I couldn't make it work with all the other things I also would like to do. I still think AppHarbor is a very interesting project, and I wish my former employers the best of luck with their endeavor.

This has been a message from the blog's sponsor (myself). Soon, regular content will resume.


Comments

Well shux, I was waiting on pins and needles for some magic unicorn stuff from ya! I hear ya though, gotta have that liberty. :) I'm often in the same situation.

Best of luck to you, I'll be reading the blog as always.

BTW - Got the physical book finally, even though I'm no newb of IoC and such, I'd have loved a solid read when I was learning about the options back in the day. ;)

Cheers!
2011-11-08 16:11 UTC
Best of luck.

As with all other endeavours you set your mind to, you will for sure also excel as a free agent.
2011-11-08 16:58 UTC
Flemming Laugesen #
Congratulations my friend - looking forward to taking advantage of your expertise :)

Cheers,
Flemming
2011-11-08 19:45 UTC
Congratulations on your decision, and the very best of luck, I'm sure you'll have heaps of success.
2011-11-08 21:52 UTC
I wish you the best with your new adventure. I cannot thank you enough for all I learned from your book on Dependency Injection.
Regards,
One of your Fans in USA,
2011-11-09 03:08 UTC
Congrats! Best of luck.
2011-11-09 09:00 UTC

SOLID concrete

Tuesday, 25 October 2011 15:01:15 UTC

Greg Young gave a talk at GOTO Aarhus 2011 titled Developers have a mental disorder, which was (semi-)humorously meant, but still addressed some very real concerns about the cognitive biases of software developers as a group. While I have no intention to provide a complete resume of the talk, Greg said one thing that made me think a bit (more) about SOLID code. To paraphrase, it went something like this:

Developers have a tendency to attempt to solve specific problems with general solutions. This leads to coupling and complexity. Instead of being general, code should be specific.

This sounds correct at first glance, but once again I think that SOLID code offers a solution. Due to the Single Responsibility Principle each SOLID concrete (pardon the pun) class will tend to very specifically address a very narrow problem.

Such a class may implement one (or more) general-purpose interface(s), but the concrete type is specific.

The difference between the generality of an interface and the specificity of a concrete type becomes more and more apparent the better a code base applies the Reused Abstractions Principle. This is best done by defining an API in terms of Role Interfaces, which makes it possible to define a few core abstractions that apply very broadly, while implementations are very specific.

As an example, consider AutoFixture's ISpecimenBuilder interface. This is a very central interface in AutoFixture (in fact, I don't even know just how many implementations it has, and I'm currently too lazy to count them). As an API, it has proven to be very generally useful, but each concrete implementation is still very specific, like the CurrentDateTimeGenerator shown here:

public class CurrentDateTimeGenerator : ISpecimenBuilder
{
    public object Create(object request, 
        ISpecimenContext context)
    {
        if (request != typeof(DateTime))
        {
            return new NoSpecimen(request);
        }
 
        return DateTime.Now;
    }
}

This is, literally, the entire implementation of the class. I hope we can agree that it's very specific.

In my opinion, SOLID is a set of principles that can help us keep an API general while each implementation is very specific.

In SOLID code all concrete types are specific.


Comments

nelson #
I don't agree with the "Reused Abstractions Principle" article at all. Programming to interfaces provides many benefits even in cases where "one interface, multiple implementations" doesn't apply.

For one, ctor injection for dependencies makes them explicit and increases readability of a particular class (you should be able to get a general idea of what a class does by looking at what dependencies it has in its ctor). However, if you're taking in more than a handful of dependencies, that is an indication that your class needs refactoring. Yes, you could accept dependencies in the form of concrete classes, but in such cases you are voiding the other benefits of using interfaces.

As far as API writing goes, using interfaces with implementations that are internal is a way to guide a person through what is important in your API and what isn't. Offering up a bunch of instantiatable classes in an API adds to the mental overhead of learning your code - whereas only marking the "entry point" classes as public will guide people to what is important.

Further, as far as API writing goes, there are many instances where Class A may have a concrete dependency on Class B, but you wish to hide the methods that Class A uses to talk to Class B. In this case, you may create an interface (Interface B) with all of the public methods that you wish to expose on Class B and have Class B implement them, then add your "private" methods as simple, non-virtual methods on Class B itself. Class A will have a property of type Interface B, which simply returns a private field of type Class B. Class A can now invoke specific methods on Class B that aren't accessible through the public API using its private reference to the concrete Class B.

Finally, there are many instances where you want to expose only parts of a class to another class. Let's say that you have an event publisher. You would normally only want to expose the methods that have to do with publishing to other code, yet that same class may include facilities that allow you to register handler objects with it. Using interfaces, you can limit what other classes can and can't do when they accept your objects as dependencies.

These are instances of what things you can do with interfaces that make them a useful construct on their own - but in addition to all of that, you get the ability to swap out implementations without changing code. I know that often times implementations are never swapped out in production (rather, the concrete classes themselves are changed), which is why I mention this last, but in the rare cases where it has to be done, interfaces make this scenario possible.

My ultimate point is that interfaces don't always equal generality or abstraction. They are simply tools that we can use to make code explicit and readable, and allow us to have greater control over method/property accessibility.
2011-10-25 18:15 UTC
The RAP fills the same type of role as unit testing/TDD: theoretically, it's possible to write testable code without writing a single unit test against it, but how do you know?

It's the same thing with the RAP: how can you know that it's possible to exchange one implementation of an interface with another if you've never tried it? Keep in mind that Test Doubles don't count because you can always create a bogus implementation of any interface. You could have an interface that's one big leaky abstraction: even though it's an interface, you can never meaningfully replace that single implementation with any other meaningful production implementation.

Also: using an interface alone doesn't guarantee the Liskov Substitution Principle. However, by following the RAP, you get a strong indication that you can, indeed, replace one implementation with another without changing the correctness of the code.
2011-10-25 18:56 UTC
nelson #
That was my point, though. You can use interfaces as a tool to solve problems that have nothing directly to do with substituting implementations. I think people see this as the only use case for the language construct, which is sad. These people then turn around and claim that you shouldn't be using interfaces at all, except for cases in which substituting implementations is the goal. I think this attitude disregards many other proper uses for the construct; the most important, I think, is being able to hide implementation details in the public API.

If an interface does not satisfy RAP, it does not absolutely mean that interface is invalid. Take the Customer and CustomerImpl types specified in the linked article. Perhaps the Customer interface simply exposes a public, readonly, interface for querying information about a customer. The CustomerImpl class, instantiated and acted upon behind the scenes in the domain services, may specify specific details such as OR/mapping or other behavior that isn't intended to be accessible to client code. Although a slightly contrived example (I would prefer the query model to be event sourced, not mapped to a domain model mapped to an RDBMS), I think this use is valid and should not immediately be thrown out because it does not satisfy RAP.
2011-10-25 20:15 UTC
On his bio it says that Greg Young writes for Experts Exchange. Maybe he's the one with the mental disorder :P
2011-10-26 04:16 UTC
Nelson, I think we agree :) To me, the RAP is not an absolute rule, but just another guideline/metric. If none of my interfaces have multiple implementations, I start to worry about the quality of my abstractions, but I don't find it problematic if some of my interfaces have only a single implementation.

Your discussion about interfaces fit well with the Interface Segregation Principle and the concept of Role Interfaces, and I've also previously described how interfaces act as access modifiers.
2011-10-26 08:37 UTC
