Bank OCR kata in F#: user story 1

Tuesday, 29 May 2012 06:26:34 UTC

In case my previous post left readers in doubt about whether or not I like Functional Programming (FP), this post should make it clear that I (also) love FP.

A couple of years ago I had the pleasure of performing a technical review of Real World Functional Programming, which caused me to significantly shift my C# coding style towards a more functional style.

Now I want to take the next step, so I've started doing katas in F#. In a series of blog posts, I'll share my experiences with learning F# as I go along. I'm sure that the F# gods will wince at my code, but that's OK - if any of my readers find this educational, my goal will be met.

In this post I'll start out with the first use case of the Bank OCR kata.

The core unit test #

The first thing I wanted to do was write a couple of tests to demonstrate that I could parse the input. This Parameterized Test uses xUnit.net data theories and FsUnit for the assertion:

[<Theory>]
[<InlineData("
    _  _     _  _  _  _  _ 
  | _| _||_||_ |_   ||_||_|
  ||_  _|  | _||_|  ||_| _|", 123456789)>]
[<InlineData("
    _  _     _  _  _  _  _ 
  | _| _||_||_ |_   ||_||_|
  ||_  _|  | _||_|  ||_||_|", 123456788)>]
[<InlineData("
    _  _     _  _  _  _  _ 
  | _| _||_||_ |_   ||_||_|
  | _| _|  | _||_|  ||_||_|", 133456788)>]
[<InlineData("
    _  _     _  _  _  _  _ 
  | _| _||_||_ |_   ||_|| |
  | _| _|  | _||_|  ||_||_|", 133456780)>]
let ParseToNumberReturnsCorrectResult entry expected =
    entry
    |> ParseToNumber
    |> should equal expected

One of the many nice features of F# is that it's possible to break string literals over multiple lines, so instead of one wide, unreadable string with "\r\n" escape sequences in place of line breaks, I could write the test cases directly in the test source code and keep them legible.

In the test function body itself, I pipe the entry function argument into the ParseToNumber function using the pipeline operator |>, so

entry
|> ParseToNumber

is equivalent to writing

ParseToNumber entry

However, by piping the input into the SUT and the result further on to the assertion, I achieve a very dense test method where it's fairly clear that it follows the Arrange Act Assert (AAA) pattern.

The assertion is expressed using FsUnit's F# adapter over xUnit.net. It simply states that the result (piped from the ParseToNumber function) should be equal to the expected function argument.

Implementation #

Here's the body of the ParseToNumber function:

let ParseToNumber entry =
    entry
    |> ParseToDigits
    |> Seq.map Option.get
    |> Seq.reduce (fun x y -> x * 10 + y)

If you're used to reading and writing C#, you may be wondering about the compactness of it all. Where are all the type declarations?

F# is a strongly typed language, but it has very sophisticated type inference capabilities, so the compiler is able to infer that the signature is string -> int, which (in this case) is read as a function which takes a string as input and returns an integer. The equivalent C# notation would be public int ParseToNumber(string entry).

How does the F# compiler know this?

The entry parameter is easy to infer because the function body invokes the ParseToDigits function, which takes a string as input. Therefore, entry must be a string as well.

The output type is a little harder to understand, but basically it boils down to something like this: the ParseToDigits function returns a sequence (basically, an IEnumerable<T>) of integer options. The Seq.map function converts this to a sequence of integers, and the Seq.reduce function aggregates that sequence into a single integer.

Don't worry if you didn't understand all that yet - I'll walk you through the body of the function now.

The entry argument is piped into the ParseToDigits function, which returns a sequence of integer options. An option is a type that either has, or doesn't have, a value, so the next step is to unwrap all the values. This is done by piping the option sequence into the Seq.map function, which is equivalent to the LINQ Select method. The Option.get method simply unwraps a single option. It will throw an exception if the value is None, but in this implementation, this is the desired behavior.

The last thing to do is to aggregate the sequence of integers into a single number. The Seq.reduce method aggregates the sequence according to a given accumulator function.

In F#, an anonymous function is written with the fun keyword, but it's entirely equivalent to a C# lambda expression, which would look like this:

(x, y) => x * 10 + y

This function basically interprets the sequence of integers as a stream of digits, so in order to arrive at the correct decimal representation of the number, each accumulated result is multiplied by ten before the next digit is added.
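For C# readers, here's a rough LINQ equivalent of the whole pipeline (my sketch, not part of the kata): nullable integers stand in for options, Select plays the role of Seq.map, and Aggregate plays the role of Seq.reduce.

var digits = new int?[] { 1, 2, 3 };
var number = digits
    .Select(d => d.Value)             // unwraps each value, like Option.get
    .Aggregate((x, y) => x * 10 + y); // 123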

Obviously, all the real parsing takes place in the ParseToDigits function:

let ParseToDigits (entry : string) =
    let toIntOption ocrDigit =
        match ocrDigit |> Seq.toArray with
        | [| (' ', '|', '|'); ('_', ' ', '_'); (' ', '|', '|') |] -> Some 0
        | [| (' ', ' ', ' '); (' ', ' ', ' '); (' ', '|', '|') |] -> Some 1
        | [| (' ', ' ', '|'); ('_', '_', '_'); (' ', '|', ' ') |] -> Some 2
        | [| (' ', ' ', ' '); ('_', '_', '_'); (' ', '|', '|') |] -> Some 3
        | [| (' ', '|', ' '); (' ', '_', ' '); (' ', '|', '|') |] -> Some 4
        | [| (' ', '|', ' '); ('_', '_', '_'); (' ', ' ', '|') |] -> Some 5
        | [| (' ', '|', '|'); ('_', '_', '_'); (' ', ' ', '|') |] -> Some 6
        | [| (' ', ' ', ' '); ('_', ' ', ' '); (' ', '|', '|') |] -> Some 7
        | [| (' ', '|', '|'); ('_', '_', '_'); (' ', '|', '|') |] -> Some 8
        | [| (' ', '|', ' '); ('_', '_', '_'); (' ', '|', '|') |] -> Some 9
        | _ -> None
 
    let lines = entry.Split([| "\r\n" |], StringSplitOptions.RemoveEmptyEntries)
 
    Seq.zip3 lines.[0] lines.[1] lines.[2]
    |> chunk 3
    |> Seq.map toIntOption

That may look scary, but it's actually not that bad.

The function contains the nested function toIntOption which is local to the scope of ParseToDigits. One of the things I don't like that much about F# is that you have to declare and implement the details before the general flow, so you almost have to read the code backwards if you want the overview before diving into all the details.

The first thing the ParseToDigits function does is to split the input into three lines according to the specification of the kata. It uses the standard .NET String.Split method to do that.

Next, it zips each of these three lines with each other. When you zip something, you take the first element of each sequence and create a tuple out of those three elements; then the second element of each sequence, and so forth. The result is a sequence of tuples, where each tuple represents a vertical slice down a digit.

As an example, let's look at the digit 0:

 _ 
| |
|_|

The first vertical slice contains the characters ' ', '|' and '|' so that results in the tuple (' ', '|', '|'). The next vertical slice corresponds to the tuple ('_', ' ', '_') and the third (' ', '|', '|').
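To see the zipping in isolation, here's a small C# sketch of the same operation. It assumes the three-sequence Zip overload, which only arrived in .NET 6; at the time of writing you'd have to hand-roll it:

var top    = " _ ";
var middle = "| |";
var bottom = "|_|";
foreach (var slice in top.Zip(middle, bottom))
    Console.WriteLine(slice); // ( , |, |) then (_,  , _) then ( , |, |)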

With me so far? OK, good.

The Seq.zip3 function produces a sequence of such tuples, but since the kata states that each digit is exactly three characters wide, it's necessary to divide that stream of tuples into chunks of three. In order to do that, I pipe the zipped sequence to the chunk function.

Seq.zip3 lines.[0] lines.[1] lines.[2]
|> chunk 3

I'll post the chunk function further down, but the result is a sequence of sequences of character tuples, which can then be mapped with the toIntOption function.

Now that I've explained how those tuples look, the toIntOption shouldn't be too hard to understand. Each ocrDigit argument is expected to be a sequence of three tuples, so after creating an array from the sequence, the function uses pattern matching to match each tuple array to an option value.

It simply matches an array of the tuples (' ', '|', '|'); ('_', ' ', '_'); (' ', '|', '|') to the number 0 (as explained above), and so on.

If there's no match, it returns the option value None, instead of Some.

Hang on, I'm almost done. The last part of the puzzle is the chunk function:

let rec chunk size sequence =
    let skipOrEmpty size s =
        if s |> Seq.truncate size |> Seq.length >= size
        then s |> Seq.skip size
        else Seq.empty
 
    seq {
        yield sequence |> Seq.truncate size
 
        let next = sequence |> skipOrEmpty size
        if not (Seq.isEmpty next) then yield! chunk size next
        }

Just like the ParseToDigits function, the chunk function uses a nested function to do some of its work.

First of all you may notice that the function declaration uses the rec keyword, which means that this is a recursive function. FP is all about recursion, so this is just something you'll have to get used to.

The main part of the function consists of a sequence expression, denoted by the seq keyword.

First, the function returns the first n elements of the sequence as the first chunk. Next, it skips that chunk and, if the rest of the sequence isn't empty, recursively calls itself with the remainder.

The skipOrEmpty function is just a little helper function to work around the behavior of the Seq.skip function, which throws an exception if you ask it to skip some elements of a sequence which doesn't contain that many elements.
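For comparison, here's how a chunking function might look in C# (my sketch, not a translation of the F# code). As a bonus, this variation iterates the source only once, which also addresses the concern about multiple iteration raised in the comments below:

static IEnumerable<IEnumerable<T>> Chunk<T>(IEnumerable<T> source, int size)
{
    var buffer = new List<T>(size);
    foreach (var item in source)
    {
        buffer.Add(item);
        if (buffer.Count == size)
        {
            // A full chunk is ready; yield it and start a new buffer.
            yield return buffer;
            buffer = new List<T>(size);
        }
    }
    if (buffer.Count > 0)
        yield return buffer; // the trailing, partial chunk
}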

That's it. The code is actually quite compact, but I just took a long time explaining it all in some detail.

In a future post I'll walk through the other use cases of the Bank OCR kata.


Comments

Simon Skov Boisen #
Great post! I just love how succinct F# code is. I'm a bit worried about the language though. I'm not sure how much Microsoft will support it in the future - the F# 3.0 features seem very awesome - but the language is still lacking a lot of functional stuff like functors and polymorphic variants.
2012-05-29 12:16 UTC
I'm not sure about your implementation of chunk. I think all helper functions that take IEnumerable (a.k.a. seq in F#) should iterate over the source at most once. That's because it will be more efficient and, more importantly, some sequences can be iterated only once.

Yeah, in a small example like this it doesn't really matter and readability is more important. But I think things like this should be mentioned, so that people are not surprised if they try to use the function in different contexts.
2012-05-29 12:42 UTC

Design patterns across paradigms

Friday, 25 May 2012 03:47:29 UTC

This blog post makes the banal observation that design patterns tend to be intimately coupled to the paradigm in which they belong.

It seems as though lately it has become hip to dismiss design patterns as a bit of a crutch to overcome limitations in Object-Oriented Design (OOD). One example originates from The Joy of Clojure:

"In this section, we'll attempt to dissuade you from viewing Clojure features as design patterns [...] and instead as an inherent nameless quality.

"[...]the patterns described [in Design Patterns] are aimed at patching deficiencies in popular object-oriented programming languages. This practical view of design patterns isn't directly relevant to Clojure, because in many ways the patterns are ever-present and are first-class citizens of the language itself." (page 303)

From what little I've understood about Clojure, that seems like a reasonable assertion.

Another example is a tweet by Neal Swinnerton:

"With functions that accept functions and/or return functions 400 of the 600 pages of design patterns can be burnt says @stilkov ‪#euroclojure"

and Stefan Tilkov elaborates:

"@ploeh @sw1nn @ptrelford many OO DPs only exist to replace missing FP features in the first place"

First of all: I completely agree. In fact, I find these statements rather uncontroversial. The only reason I'm writing this is because I see this sentiment popping up more and more (but perhaps I'm just suffering from the Baader-Meinhof phenomenon).

Patterns for OOD #

Many 'traditional' design patterns describe how to solve problems which are complex when working with object-oriented languages. A lot of those problems are inherent in the object-oriented paradigm.

Other paradigms may provide inherent solutions to those problems. As an example, Functional Programming (FP) includes the central concept of Function Composition. Functions can be composed according to signature. In OOD terminology you can say that the signature of a function defines its type.

Most of the classic design patterns are based upon the idea of programming to an interface instead of a concrete class. In OOD, it's necessary to point this out as a piece of explicit advice because the default in OOD is to program against a concrete class.

That's not the case in FP because functions can be composed as long as their signatures are compatible. Loose coupling is, so to speak, baked into the paradigm.
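As a trivial illustration (mine, not part of the original argument), two C# functions compose solely because their signatures line up; no interface declaration is required anywhere:

Func<int, int> addTax = subtotal => subtotal + subtotal / 5;
Func<int, int> addShipping = total => total + 50;
Func<int, int> quote = amount => addShipping(addTax(amount));
// quote(100) returns 170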

Thus, it's true that many of the OOD patterns are irrelevant in an FP context. That doesn't mean that FP doesn't need patterns.

Patterns for FP #

So if FP (or Clojure, for that matter) inherently address many of the shortcomings of OOD that give rise to patterns, does it mean that design patterns are redundant in FP?

Hardly. This is reminiscent of the situation a couple of years ago, when Ruby on Rails developers were pointing fingers at everyone else because they had superior productivity; now that they are being tasked with maintaining complex systems, they are learning the hard way that Active Record is an anti-pattern. Duh.

FP has shortcomings as well, and patterns will emerge to address them. While FP has been around for a long time, it hasn't been as heavily used (and thus subjected to analysis) as OOD, so the patterns may not have been formulated yet, but if FP gains traction (and I believe it will), patterns will emerge. However, they will be different patterns.

Once we have an extensive body of patterns for FP, we'll be able to state the equivalent of the introductory assertion:

Most of the established FP patterns address shortcomings of FP. Using the FloopyDoopy paradigm makes most of them redundant.

What would be shortcomings of FP? I don't know them all, but here's a couple of suggestions:

Mutability #

Apart from calculation-intensive software, most software is actually all about mutating state: Take an order. Save to the database. Write a file. Send an email. Render a view. Print a document. Write to the console. Send a message...

In FP they've come up with this clever concept of monads to 'work around' the problem of mutating state. Yes, monads are very clever, but if they feel foreign in OOD it's because they're not required. Mutation is an inherent part of the OOD paradigm, and it very intuitively maps to the 'real world', where mutation happens all the time. Monads are cool, but not particularly intuitive.

FP practitioners may not realize this, but a monad is a design pattern invented to address a shortcoming in 'pure' functional languages.

Discoverability #

As Phil Trelford kindly pointed out at GOTO Copenhagen 2012, OOD is often characterized by 'dot-driven development.' What does that mean?

It means that given a variable, we can often just enter ".", and our IDE is going to give us a list of methods we can call on the object:

[Image: IntelliSense]

Since behavior is contained by each type, we can use patterns such as Fluent Interface to make it easy to learn a new API. While we can laugh at the term 'dot-driven development', it's hard to deny that it makes an API very easy to learn.
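As a hypothetical sketch of the Fluent Interface style (my example, invented for illustration), each method returns the object itself, so the completion list guides the caller to the next step:

public class MailBuilder
{
    private string to;
    private string subject;
 
    public MailBuilder To(string address)
    {
        this.to = address;
        return this; // returning 'this' is what enables the dots
    }
 
    public MailBuilder WithSubject(string subject)
    {
        this.subject = subject;
        return this;
    }
 
    public string Build()
    {
        return "To: " + this.to + Environment.NewLine
            + "Subject: " + this.subject;
    }
}
 
// Usage: new MailBuilder().To("a@b.c").WithSubject("Hi").Build();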

The API itself carries the information about how to use it, and what is possible. That's very Clean Code. Out-of-band documentation isn't required.

I wouldn't even know how to address this shortcoming in FP, but I'm sure patterns will evolve.

Different paradigms require different patterns #

All of this boils down to a totally banal observation:

Most patterns address shortcomings specific to a paradigm

Saying that most of the OOD patterns are redundant in FP is like saying that you don't need oven mitts to pour a glass of water.

There might be a slight overlap, but I'd expect most patterns to be tightly coupled to the paradigm in which they were originally stated.

There might be a few patterns that are generally applicable.

The bottom line is that FP isn't inherently better just because many of the OOD design patterns are redundant. Neither is OOD inherently better. They are different. That's all.


Comments

Martin #
Good post, I entirely agree with the conclusion.

Regarding mutable state:

"Mutation is an inherent part of the OOD paradigm, and it very intuitively maps to the 'real world', where mutation happens all the time."

As Rich Hickey also told us at GOTO Copenhagen, the above statement may not be entirely true. Current state changes, but that does not mean that previous state ceases to exist. Also, as software devs we are not necessarily interested in representing the real world, but rather a view of the world that suits our business domain. So the question becomes: does the OOD notion of mutable state suit our business domain, or is the FP model, where everything is an immutable value in time, a better fit?
2012-05-25 12:42 UTC
Martin, I agree that often the FP view is superior because it contains more information. What I said was that mutation is often more intuitive. Consider a kettle of boiling water: over time, the water in the kettle mutates - it becomes steam.

While we are programmers, we are still human, and unless you get involved in pretty advanced theoretical physics, a practical and very precise world view is that objects change state over time.
2012-05-25 13:07 UTC
Nice post on a mostly neglected subject. I am really interested in understanding such patterns.

Also, I see dot-driven development as even more productive in functional programming, or at least in what I do as FP in C#. Function composition is nicely enhanced using static extensions over Func, Action and Task. I used this in one of my posts comparing the performance of object instantiation methods.
2012-06-03 20:43 UTC
anonymous coward #
This
http://c2.com/cgi/wiki?AreDesignPatternsMissingLanguageFeatures
discusses the necessity of patterns in FP.
The notion that patterns are not required in FP seems to have been around for some time;
Paul Graham said "Peter Norvig found that 16 of the 23 patterns in Design Patterns were 'invisible or simpler' in Lisp." http://www.norvig.com/design-patterns/
2012-06-13 06:53 UTC
Rant Antiblog #
Very naive article, unfortunately written with a lot of conviction.
- Trying to combine OOD and pure FP is pointless and misses the point. Less is more.
- Design patterns are analogous to cooking recipes.
- Monads are not a "design pattern invented to address a shortcoming in 'pure' functional languages".
2012-06-13 08:57 UTC

I came late to comment, but again I'm here due to an exchange of tweets with the author. I've pointed out an S.O. question titled SOLID for functional programming with a totally misleading answer (in my opinion, clearly so).

In that case the answer lacks technical correctness, but in this post Mark Seemann points out a concept that is difficult to contradict.

Patterns are abstractions that provide concrete solutions (pardon the pun) to identifiable problems that present themselves in different shapes.
The fact that when we talk of patterns we talk of them in relation to OO languages is just a contingency.

I have to completely agree with the author: functional languages need patterns too, and the more popular they become, the more functional patterns will arise.

2013-17-03 09:19 UTC

TDD test suites should run in 10 seconds or less

Thursday, 24 May 2012 15:20:48 UTC

Most guidance about Test-Driven Development (TDD) will tell you that unit tests should be fast. Examples of such guidance can be found in FIRST and xUnit Test Patterns. Rarely does it tell you how fast unit tests should be.

10 seconds for a unit test suite. Max.

Here's why.

When you follow the Red/Green/Refactor process, ideally you'd be running your unit test suite at least three times for each iteration:

  1. Red. Run the test suite.
  2. Green. Run the test suite.
  3. Refactor. Run the test suite.

Each time you run the unit test suite, you're essentially blocked. You have to wait until the test run has completed before you get the result.

During that wait time, it's important to keep focus. If the test run takes too long, your mind starts to wander, and you'll suffer from context switching when you have to resume work.

When does your mind start to wander? After about 10 seconds. That number just keeps popping up when the topic turns to focused attention. Obviously it's not a hard limit. Obviously there are individual and contextual variations. Still, it seems as though a 10 second short-term attention span is more or less hard-wired into the human brain.

Thus, a unit test suite used for TDD should run in less than 10 seconds. If it's slower, you'll be less productive because you'll constantly lose focus.

Implications #

The test suite you work with when you do TDD should execute in less than 10 seconds on your machine. If you have hundreds of tests, each test should be faster than 100 milliseconds. If you have thousands, each test should be faster than 10 milliseconds. You get the picture.

That test suite doesn't need to be the same as is running on your CI server. It could be a subset of tests. That's OK. The TDD suite may just be part of your Test Pyramid.

Selective test runs #

Many people work around the Slow Tests anti-pattern by only running one test at a time, or perhaps one test class. In my experience, this is not an optimal solution because it slows you down. Instead of just going

  1. Red
  2. Run
  3. Green
  4. Run
  5. Refactor
  6. Run

you'd need to go

  1. Red
  2. Specifically instruct your Test Runner to run only the test you just wrote
  3. Green
  4. Decide which subset of tests may have been affected by the new test. This obviously involves the new test, but may include more tests.
  5. Run the tests you just selected
  6. Refactor
  7. Decide which subset of tests to run now
  8. Run the tests you just selected

Obviously, that introduces friction into your process. Personally, I much prefer to have a fast test suite that I can run all the time at a key press.

Still, there are tools available that promise to do this analysis for you. One of them is Mighty Moose, with which I've had varying degrees of success. Another similar approach is NCrunch.

References? #

For a long time I've been wanting to write this article, but I've always felt held back by a lack of citations I could exhibit. While the Wikipedia link does provide a bit of information, it's not that convincing in itself (and it also cites the time span as 8 seconds).

However, that 10 second number just keeps popping up in all sorts of contexts, not all of them having anything to do with software development, so I decided to run with it. If some of my readers can provide me with better references, I'd be delighted.

Or perhaps I'm just suffering from confirmation bias...


Comments

What are your thoughts on continuous test runners like NCrunch? The time for test running is effectively reduced to zero, but you no longer get the red/green cycle. Is the cycle crucial, or is the notion of running tests often the important bit?
2012-05-24 17:52 UTC
In Agile we have this concept of rapid feedback. That a test runner runs in the background doesn't address the problem of Slow Tests. It's no good if you start writing more code and then five minutes later, your continuous test runner comes back and tells you that something you did five minutes ago caused the test suite to fail. That's not timely feedback.

Some of that can be alleviated with a DVCS, but rapid feedback just makes things a lot easier.
2012-05-24 18:39 UTC
Vasily Kirichenko #
>> Still, there are tools available that promises to do this analysis for you. One of them are
>> Mighty Moose, with which I've had varying degrees of success. Another similar approach is NCrunch.

I tried both tools in a real VS solution (containing about 180 projects, 20 of them test projects). Mighty Moose was continuously throwing exceptions and I didn't manage to get it to work at all. NCrunch could not compile some of the projects (it uses the Mono compiler) but worked with some success on the remaining ones. Yes, feedback is rather fast (2-5 secs instead of 10-20 using the ReSharper test runner), but it's unstable, and I've returned to ReSharper.
2012-05-25 05:27 UTC
As a practical matter, I've seen up to 45 seconds be _tolerable_, not ideal.
2012-06-04 13:39 UTC
I fully agree.

TDD cannot be used for large applications.
2012-06-22 06:43 UTC

I was doing some research myself. From what I understand, the Wikipedia article you link to talks mainly about attention span in the context of someone doing a certain task and being disturbed. However, while we wait for our unit tests to finish, we're not doing anything. This is more similar to waiting for a website to download and render.

The studies on "tolerable waiting time" are all over the map, but even old ones talk about 2 seconds. This paper mentions several studies, two of them from pre-internet days (scroll down to the graphics for a comparison table). This would mean that, ideally, we would need our tests to run in not 10 but 2 seconds! I say ideally, because this seems unrealistic to me at this moment (especially in projects where even compilation takes longer than 2 seconds). Maybe in the future, who knows.

2018-09-26 14:35 UTC

Peter, thank you for writing. I recall reading about the 10 second rule some months before I wrote the article, but when writing it, I had trouble finding public reference material. Thank you for making the effort of researching this and sharing your findings. I don't dispute what you wrote, but here are some further thoughts on the topic:

If the time limit really is two seconds, that's cause for some concern. I agree with you that it might be difficult to achieve that level of response time for a moderately-sized test suite. This does, however, heavily depend on various factors, not the least of which is the language and platform.

For instance, when working with a 'warm' C# code base, compilation time can be fast enough that you might actually be able to compile and run hundreds of tests within two seconds. On the other hand, just compiling a moderately-sized Haskell code base takes longer than two seconds on my machine (but then, once Haskell compiles, you don't need a lot of tests to verify that it works correctly).

When working in interpreted languages, like Smalltalk (where TDD was originally rediscovered), Ruby, or JavaScript, there's no compilation step, so tests start running immediately. Being interpreted, the test code may run slower than compiled code, but my limited experience with JavaScript is that it can still be fast enough.

2018-09-26 18:01 UTC

Vendor Media Types With the ASP.NET Web API

Tuesday, 24 April 2012 11:45:41 UTC

In RESTful services, media types (e.g. application/xml, application/json) are an important part of Content Negotiation (conneg in the jargon). This enables an API to provide multiple representations of the same resource.

Apart from the standard media types such as application/xml, application/json, etc. an API can (and often should, IMO) expose its resources using specialized media types. These often take the form of vendor-specific media types, such as application/vnd.247e.catalog+xml or application/vnd.247e.album+json.

In this article I'll present some initial findings I've made while investigating this in the ASP.NET Web API (beta).

For an introduction to conneg with the Web API, see Gunnar Peipman's ASP.NET blog.

The Problem #

In a particular RESTful API, I'd like to enable vendor-specific media types as well as the standard application/xml and application/json media types.

More specifically, I'd like to add these media types to the API:

  • application/vnd.247e.album+xml
  • application/vnd.247e.artist+xml
  • application/vnd.247e.catalog+xml
  • application/vnd.247e.search-result+xml
  • application/vnd.247e.track+xml
  • application/vnd.247e.album+json
  • application/vnd.247e.artist+json
  • application/vnd.247e.catalog+json
  • application/vnd.247e.search-result+json
  • application/vnd.247e.track+json

However, I can't just add all these media types to GlobalConfiguration.Configuration.Formatters.XmlFormatter.SupportedMediaTypes or GlobalConfiguration.Configuration.Formatters.JsonFormatter.SupportedMediaTypes. If I do that, each and every resource in the API would accept and (claim to) return all of those media types. That's not what I want. Rather, I want specific resources to accept and return specific media types.

For example, if a resource (Controller) returns an instance of the SearchResult (Model) class, it should only accept the media types application/vnd.247e.search-result+xml, application/vnd.247e.search-result+json (as well as the standard application/xml and application/json media types).

Likewise, a resource handling the Album (Model) class should accept and return application/vnd.247e.album+xml and application/vnd.247e.album+json, and so on.
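As an illustration, a request for an album resource might look like this (the route is hypothetical):

GET /albums/1234 HTTP/1.1
Accept: application/vnd.247e.album+json

and a successful response would announce the same media type:

HTTP/1.1 200 OK
Content-Type: application/vnd.247e.album+json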

Figuring out how to enable such behavior took me a bit of fiddling (yes, Fiddler was involved).

The Solution #

The Web API uses a polymorphic collection of MediaTypeFormatter classes. These classes can be extended to be more specifically targeted at a specific Model class.

For XML formatting, this can be done by deriving from the built-in XmlMediaTypeFormatter class:

public class TypedXmlMediaTypeFormatter : XmlMediaTypeFormatter
{
    private readonly Type resourceType;
 
    public TypedXmlMediaTypeFormatter(Type resourceType,
        MediaTypeHeaderValue mediaType)
    {
        this.resourceType = resourceType;
 
        this.SupportedMediaTypes.Clear();
        this.SupportedMediaTypes.Add(mediaType);
    }
 
    protected override bool CanReadType(Type type)
    {
        return this.resourceType == type;
    }
 
    protected override bool CanWriteType(Type type)
    {
        return this.resourceType == type;
    }
}

The implementation is quite simple. In the constructor, it makes sure to clear out any existing supported media types and to add only the media type passed in via the constructor.

The CanReadType and CanWriteType overrides only return true if the type parameter matches the type targeted by the particular TypedXmlMediaTypeFormatter instance. You could say that the TypedXmlMediaTypeFormatter provides a specific match between a media type and a resource Model class.

The JSON formatter is similar:

public class TypedJsonMediaTypeFormatter : JsonMediaTypeFormatter
{
    private readonly Type resourceType;
 
    public TypedJsonMediaTypeFormatter(Type resourceType,
        MediaTypeHeaderValue mediaType)
    {
        this.resourceType = resourceType;
 
        this.SupportedMediaTypes.Clear();
        this.SupportedMediaTypes.Add(mediaType);
    }
 
    protected override bool CanReadType(Type type)
    {
        return this.resourceType == type;
    }
 
    protected override bool CanWriteType(Type type)
    {
        return this.resourceType == type;
    }
}

The only difference from the TypedXmlMediaTypeFormatter class is that this one derives from JsonMediaTypeFormatter instead of XmlMediaTypeFormatter.

With these two classes available, I can now register all the custom media types in Global.asax like this:

GlobalConfiguration.Configuration.Formatters.Add(
    new TypedXmlMediaTypeFormatter(
        typeof(Album),
        new MediaTypeHeaderValue(
            "application/vnd.247e.album+xml")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedXmlMediaTypeFormatter(
        typeof(Artist),
        new MediaTypeHeaderValue(
            "application/vnd.247e.artist+xml")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedXmlMediaTypeFormatter(
        typeof(Catalog),
        new MediaTypeHeaderValue(
            "application/vnd.247e.catalog+xml")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedXmlMediaTypeFormatter(
        typeof(SearchResult),
        new MediaTypeHeaderValue(
            "application/vnd.247e.search-result+xml")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedXmlMediaTypeFormatter(
        typeof(Track),
        new MediaTypeHeaderValue(
            "application/vnd.247e.track+xml")));
 
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedJsonMediaTypeFormatter(
        typeof(Album),
        new MediaTypeHeaderValue(
            "application/vnd.247e.album+json")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedJsonMediaTypeFormatter(
        typeof(Artist),
        new MediaTypeHeaderValue(
            "application/vnd.247e.artist+json")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedJsonMediaTypeFormatter(
        typeof(Catalog),
        new MediaTypeHeaderValue(
            "application/vnd.247e.catalog+json")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedJsonMediaTypeFormatter(
        typeof(SearchResult),
        new MediaTypeHeaderValue(
            "application/vnd.247e.search-result+json")));
GlobalConfiguration.Configuration.Formatters.Add(
    new TypedJsonMediaTypeFormatter(
        typeof(Track),
        new MediaTypeHeaderValue(
            "application/vnd.247e.track+json")));

This is rather repetitive code, but I'll leave it as an exercise to the reader to write a set of conventions that appropriately register the correct media type for a Model class.
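Here's one way such a convention might look (my sketch only; the kebab-casing via Regex is a hypothetical helper and requires System.Text.RegularExpressions):

private static void AddVendorFormatters(params Type[] resourceTypes)
{
    foreach (var t in resourceTypes)
    {
        // E.g. "SearchResult" becomes "search-result"
        var name = Regex
            .Replace(t.Name, "(?<=.)([A-Z])", "-$1")
            .ToLowerInvariant();
        GlobalConfiguration.Configuration.Formatters.Add(
            new TypedXmlMediaTypeFormatter(t, new MediaTypeHeaderValue(
                "application/vnd.247e." + name + "+xml")));
        GlobalConfiguration.Configuration.Formatters.Add(
            new TypedJsonMediaTypeFormatter(t, new MediaTypeHeaderValue(
                "application/vnd.247e." + name + "+json")));
    }
}

A single call to AddVendorFormatters(typeof(Album), typeof(Artist), typeof(Catalog), typeof(SearchResult), typeof(Track)) would then replace the ten registrations above.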

Caveats #

Please be aware that I've only tested this with a read-only API. You may need to tweak this solution in order to also handle incoming data.

As far as I can tell from the Web API source repository, it seems as though there are some breaking changes in the pipeline in this area, so don't bet the farm on this particular solution.

Lastly, it seems as though this solution doesn't correctly respect opt-out quality parameters in incoming Accept headers. As an example, if I request a 'Catalog' resource, but supply the following Accept header, I'd expect the response to be 406 (Not Acceptable).

Accept: application/vnd.247e.search-result+xml; q=1, */*; q=0.0

However, the result is that the service falls back to its default representation, which is application/json. Whether this is a problem with my approach or a bug in the Web API, I haven't investigated.


Comments

For those who are interested in returning 406 depending on the client's Accept header, there is an article about it, and my attempt to improve the code in it:

http://pedroreys.com/2012/02/17/extending-asp-net-web-api-content-negotiation/
https://gist.github.com/2499672
2012-05-03 02:10 UTC
According to the HTTP spec: "Use of non-registered media types is discouraged"


You might find this useful.
2012-12-09 14:37 UTC

Wiring HttpControllerContext With Castle Windsor

Thursday, 19 April 2012 15:14:30 UTC

In a previous post I demonstrated how to wire up HttpControllerContext with Poor Man's DI. In this article I'll show how to wire up HttpControllerContext with Castle Windsor.

This turns out to be remarkably difficult, at least with the constraints I tend to set for myself:

  • Readers of this blog may have an inkling that I have an absolute abhorrence of static state, so anything relying on that is out of the question.
  • In addition to that, I also prefer to leverage the container as much as possible, so I'm not keen on duplicating a responsibility that really belongs to the container.
  • No injecting the container into itself. That's just unnatural and lewd (without being fun).
  • If possible, the solution should be thread-safe.
  • The overall solution should still adhere to the Register Resolve Release pattern, so registering a captured HttpControllerContext is a no-go. It's also unlikely to work, since you'd need to somehow scope each registered instance to its source request.
  • That also rules out nested containers.

OK, so given these constraints, how can an object graph like this one be created, only with Castle Windsor instead of Poor Man's DI?

return new CatalogController(
    new RouteLinker(
        baseUri,
        controllerContext));

Somehow, the HttpControllerContext instance must be captured and made available for further resolution of a CatalogController instance.

Capturing the HttpControllerContext #

The first step is to capture the HttpControllerContext instance, as early as possible. From my previous post it should be reasonably clear that the best place to capture this instance is from within IHttpControllerActivator.Create.

Here's the requirement: the HttpControllerContext instance must be captured and made available for further resolution of the object graph - preferably in a thread-safe manner. Ideally, it should be captured in a thread-safe object that can start out uninitialized, but then allow exactly one assignment of a value.

At the time I first struggled with this, I was just finishing The Joy of Clojure, so this sounded to me an awful lot like the description of a promise. Crowdsourcing on Twitter turned up that the .NET equivalent of a promise is TaskCompletionSource<T>.

Creating a custom IHttpControllerActivator with an injected TaskCompletionSource<HttpControllerContext> sounds like a good approach. If the custom IHttpControllerActivator can be scoped to a specific request, that would be the solution then and there.

However, as I've previously described, the current incarnation of the ASP.NET Web API has the unfortunate behavior that all framework Services (such as IHttpControllerActivator) are resolved once and cached forever (effectively turning them into having the Singleton lifestyle, despite what you may attempt to configure in your container).

With Dependency Injection, the common solution to bridge the gap between a long-lasting lifestyle and a shorter lifestyle is a factory.

Thus, instead of injecting TaskCompletionSource<HttpControllerContext> into a custom IHttpControllerActivator, a Func<TaskCompletionSource<HttpControllerContext>> can be injected to bridge the lifestyle gap.

One other thing: the custom IHttpControllerActivator is only required to capture the HttpControllerContext for further reference, so I don't want to reimplement all the functionality of DefaultHttpControllerActivator. This is the reason why the custom IHttpControllerActivator ends up being a Decorator:

public class ContextCapturingControllerActivator :
    IHttpControllerActivator
{
    private readonly IHttpControllerActivator activator;
    private readonly Func<TaskCompletionSource<HttpControllerContext>>
        promiseFactory;
 
    public ContextCapturingControllerActivator(
        Func<TaskCompletionSource<HttpControllerContext>> promiseFactory,
        IHttpControllerActivator activator)
    {
        this.activator = activator;
        this.promiseFactory = promiseFactory;
    }
 
    public IHttpController Create(
        HttpControllerContext controllerContext,
        Type controllerType)
    {
        this.promiseFactory().SetResult(controllerContext);
        return this.activator.Create(controllerContext, controllerType);
    }
}

The ContextCapturingControllerActivator class simply Decorates another IHttpControllerActivator and does one thing before delegating the call to the inner implementation: it uses the factory to create a new instance of TaskCompletionSource<HttpControllerContext> and assigns the HttpControllerContext instance to that promise.

Scoping #

Because the Web API is basically being an ass (I can write this here, because I'm gambling that the solitary reader making it this far is so desperate that he or she is not going to care about the swearing) by treating framework Services as Singletons, it doesn't matter how it's being registered:

container.Register(Component
    .For<IHttpControllerActivator>()
    .ImplementedBy<ContextCapturingControllerActivator>());
container.Register(Component
    .For<IHttpControllerActivator>()
    .ImplementedBy<DefaultHttpControllerActivator>());

Notice that because Castle Windsor is being such an awesome library that it implicitly understands the Decorator pattern, I can simply register both Decorator and Decoratee in an ordered sequence.

The factory itself must also be registered as a Singleton (the default in Castle Windsor):

container.Register(Component
    .For<Func<TaskCompletionSource<HttpControllerContext>>>()
    .AsFactory());

Here, I'm taking advantage of Castle Windsor's Typed Factory Facility, so I'm simply asking it to treat a Func<TaskCompletionSource<HttpControllerContext>> as an Abstract Factory. Doing that means that every time the delegate is being invoked, Castle Windsor will create an instance of TaskCompletionSource<HttpControllerContext> with the correct lifetime.

This provides the bridge from Singleton lifestyle to PerWebRequest:

container.Register(Component
    .For<TaskCompletionSource<HttpControllerContext>>()
    .LifestylePerWebRequest());

Notice that TaskCompletionSource<HttpControllerContext> is registered with a PerWebRequest lifestyle, which means that every time the above delegate is invoked, it's going to create an instance which is scoped to the current request. This is exactly the desired behavior.

Registering HttpControllerContext #

The only thing left is registering the HttpControllerContext class itself:

container.Register(Component
    .For<HttpControllerContext>()
    .UsingFactoryMethod(k => 
        k.Resolve<TaskCompletionSource<HttpControllerContext>>()
            .Task.Result)
    .LifestylePerWebRequest());

This defines that HttpControllerContext instances are going to be resolved the following way: each time an HttpControllerContext instance is requested, the container is going to look up a TaskCompletionSource<HttpControllerContext> and return the result from that task.

The TaskCompletionSource<HttpControllerContext> instance is scoped per web request and previously captured (as you may recall) by the ContextCapturingControllerActivator class.

That's all (sic!) there's to it :)


Comments

Jon #
This is awesome. Love the fact the solution contains so many different techniques to accomplish something that should be easy!
2012-04-19 22:34 UTC
Thomas #
Mark, will .LifestylePerWebRequest() work with the self-hosting feature of Web API? As far as I know, .LifestylePerWebRequest() relies on the underlying HttpContext, which is not available when self-hosting. I haven't taken enough time to investigate further, but if you have any hints I'll be glad to hear from you.
2012-04-24 14:24 UTC
Thomas, I haven't investigated that yet. Sorry.
2012-04-25 06:23 UTC
ChrisCa #
Hi
This is a great article and is exactly what I need as I'm trying to use your Hyprlinkr library. However, I've come up against this problem:
http://stackoverflow.com/questions/12977743/asp-web-api-ioc-resolve-httprequestmessage

any ideas?
2012-10-19 15:44 UTC
The current blog post describes how to use Castle Windsor with a preview of the Web API. Since there were breaking changes between the preview and the RTM version, the approach described here no longer works. Please refer to this blog post for a description of how to make DI work with Castle Windsor in Web API RTM.
2012-10-19 16:57 UTC

Injecting HttpControllerContext With the ASP.NET Web API

Tuesday, 17 April 2012 15:17:05 UTC

The ASP.NET Web API (beta) defines a class called HttpControllerContext. As the name implies, it provides a context for a Controller. This article describes how to inject an instance of this class into a Service.

The Problem #

A Service may need an instance of the HttpControllerContext class. For an example, see the RouteLinker class in my previous post. A Controller, on the other hand, may depend on such a Service:

public CatalogController(IResourceLinker resourceLinker)

How can a CatalogController instance be wired up with an instance of RouteLinker, which again requires an instance of HttpControllerContext? In contrast to the existing ASP.NET MVC API, there's no easy way to read the current context. There's no HttpControllerContext.Current property or any other easy way (that I have found) to refer to an HttpControllerContext as part of the Composition Root.

True: it's easily available as a property on a Controller, but at the time of composition, there's no Controller instance (yet). A Controller instance is exactly what the Composition Root is attempting to create. This sounds like a circular reference problem. Fortunately, it's not.

The Solution #

For Poor Man's DI, the solution is relatively simple. As I've previously described, by default the responsibility of creating Controller instances is handled by an instance of IHttpControllerActivator. This is, essentially, the Composition Root (at least for all Controllers).

The Create method of that interface takes exactly the HttpControllerContext required by RouteLinker - or, put differently: the framework will supply an instance every time it invokes the Create method. Thus, a custom IHttpControllerActivator solves the problem:

public class PoorMansCompositionRoot : IHttpControllerActivator
{
    public IHttpController Create(
        HttpControllerContext controllerContext,
        Type controllerType)
    {
        if (controllerType == typeof(CatalogController))
        {
            var url = HttpContext.Current.Request.Url;
            var baseUri =
                new UriBuilder(
                    url.Scheme,
                    url.Host,
                    url.Port).Uri;
 
            return new CatalogController(
                new RouteLinker(
                    baseUri,
                    controllerContext));
        }
 
        // Handle other types here... For the example to compile,
        // all code paths must return or throw:
        throw new ArgumentException(
            "Unexpected controller type.", "controllerType");
    }
}

The controllerContext parameter is simply passed on to the RouteLinker constructor.

The only thing left is to register the PoorMansCompositionRoot with the ASP.NET Web API. This can be done in Global.asax by using the GlobalConfiguration.Configuration.ServiceResolver.SetResolver method, as described in my previous post. Just resolve IHttpControllerActivator to an instance of PoorMansCompositionRoot.
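For completeness, that registration might look something like this (a sketch: I'm assuming the beta's SetResolver overload that takes two delegates, and this API changed in later releases):

var compositionRoot = new PoorMansCompositionRoot();
GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
    t => t == typeof(IHttpControllerActivator)
        ? compositionRoot
        : null,
    t => Enumerable.Empty<object>());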


Comments

mick delaney #
this is a nightmare really... has there been a discussion on the web api forums?
http://forums.asp.net/1246.aspx/1?Web+API
i think i might create one and reference this article directly if that's ok?

i upvoted your uservoice issue and there seems to be another one more generally on DI for asp.net http://aspnet.uservoice.com/forums/41199-general-asp-net/suggestions/487734-put-ioc-front-and-centre

doesn't seem like they listen to anyone though....


2012-04-20 09:35 UTC
Yes, please go ahead :)
2012-04-20 09:58 UTC
mick delaney #
i've started it off from here.

will add some more examples to the thread over the weekend

http://forums.asp.net/p/1795175/4943052.aspx/1?p=True&t=634705120844896186
2012-04-20 13:49 UTC
It is an interesting problem.

I would also like to look at it differently. ControllerContext is a volatile object with its lifetime spanning only a single method. It is true that we typically limit the lifetime of a controller to a single method but it can be different.

One major difference here is that ControllerContext is not a classic dependency, since it is added AFTER object creation by the framework and not provided by the DI framework. As such, I could resort to an Initialise method on my RouteLinker object, making it the responsibility of the controller to provide such volatile information.
2012-04-23 09:09 UTC
Ali, that's not an approach I like. First of all because an Initialize method would introduce a Temporal Coupling, but also because it would mean that one would potentially need to pass the HttpControllerContext through a lot of intermediary layers in order to invoke that Initialize method on a very low-level dependency. That's API pollution.
2012-04-23 09:24 UTC
Well, I know that you are an authority in the subject. So I probably need to go away, read your links and then have a think about it. Thanks anyway, and keep up the good work.
2012-04-23 16:32 UTC
I realize that this article is more general, but specifically for IResourceLinker, would it make more sense to just pass HttpControllerContext as a second parameter from controller action. What would be the downside of this approach?
2012-05-02 14:03 UTC
Dmitry, that would mean that you'd need to pass the parameter through each and every method through all intermediary layers (see also my answer above). This would seriously pollute the API.
2012-05-02 14:10 UTC

Hyperlinking With the ASP.NET Web API

Tuesday, 17 April 2012 14:46:42 UTC

When creating resources with the ASP.NET Web API (beta) it's important to be able to create correct hyperlinks (you know, if it doesn't have hyperlinks, it's not REST). These hyperlinks may link to other resources in the same API, so it's important to keep the links consistent. A client following such a link should hit the desired resource.

This post describes a refactoring-safe approach to creating hyperlinks using the Web API RouteCollection and Expressions.

The Problem #

Obviously hyperlinks can be hard-coded, but since incoming requests are matched based on the Web API's RouteCollection, there's a risk that hard-coded links become disconnected from the API's incoming routes. In other words, hard-coding links is probably not a good idea.

For reference, the default route in the Web API looks like this:

routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "{controller}/{id}",
    defaults: new
    {
        controller = "Catalog",
        id = RouteParameter.Optional
    }
);

A sample action fitting that route might look like this:

public Artist Get(string id)

where the Get method is defined by the ArtistController class.

Desired Outcome #

In order to provide a refactoring-safe way to create links to e.g. the artist resource, the strongly typed Resource Linker approach outlined by José F. Romaniello can be adopted. The IResourceLinker interface looks like this:

public interface IResourceLinker
{
    Uri GetUri<T>(Expression<Action<T>> method);
}

This makes it possible to create links like this:

var artist = new Artist
{
    Name = artistName,
    Links = new[]
    {
        new Link
        {
            Href = this.resourceLinker.GetUri<ArtistController>(r =>
                r.Get(artistsId)).ToString(),
            Rel = "self"
        }
    },
    // More crap goes here...
};

In this example, the resourceLinker field is an injected instance of IResourceLinker.

Since the input to the GetUri method is an Expression, it's being checked at compile time. It's refactoring-safe because a refactoring tool will be able to e.g. change the name of the method call in the Expression if the name of the method changes.

Example Implementation #

It's possible to implement IResourceLinker over a Web API RouteCollection. Here's an example implementation:

public class RouteLinker : IResourceLinker
{
    private Uri baseUri;
    private readonly HttpControllerContext ctx;
 
    public RouteLinker(Uri baseUri, HttpControllerContext ctx)
    {
        this.baseUri = baseUri;
        this.ctx = ctx;
    }
 
    public Uri GetUri<T>(Expression<Action<T>> method)
    {
        if (method == null)
            throw new ArgumentNullException("method");
 
        var methodCallExp = method.Body as MethodCallExpression;
        if (methodCallExp == null)
        {
            throw new ArgumentException("The expression's body must be a MethodCallExpression. The code block supplied should invoke a method.\nExample: x => x.Foo().", "method");
        }
 
        var routeValues = methodCallExp.Method.GetParameters()
            .ToDictionary(p => p.Name, p => GetValue(methodCallExp, p));
 
        var controllerName = methodCallExp.Method.ReflectedType.Name
            .ToLowerInvariant().Replace("controller", "");
        routeValues.Add("controller", controllerName);
 
        var relativeUri = this.ctx.Url.Route("DefaultApi", routeValues);
        return new Uri(this.baseUri, relativeUri);
    }
 
    private static object GetValue(MethodCallExpression methodCallExp,
        ParameterInfo p)
    {
        var arg = methodCallExp.Arguments[p.Position];
        var lambda = Expression.Lambda(arg);
        return lambda.Compile().DynamicInvoke().ToString();
    }
}

This isn't much different from José F. Romaniello's example, apart from the fact that it creates a dictionary of route values and then uses the UrlHelper.Route method to create a relative URI.
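Usage might then look like this (hypothetical values; controllerContext is assumed to be in scope):

var linker = new RouteLinker(
    new Uri("http://localhost/"), controllerContext);
var uri = linker.GetUri<ArtistController>(c => c.Get("1234"));
// With the default route above, this would yield
// something like http://localhost/artist/1234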

Please note that this is just an example implementation. For instance, the call to the Route method supplies the hard-coded string "DefaultApi" to indicate which route (from Global.asax) to use. I'll leave it as an exercise for the interested reader to provide a generalization of this implementation.


Comments

This is very helpful! I'm currently exploring a solution that injects links into representations via filter based on some sort of state machine for the resource, though I might be leading myself down a DSL path.
2012-04-17 23:07 UTC

IQueryable is Tight Coupling

Monday, 26 March 2012 13:53:31 UTC

From time to time I encounter people who attempt to express an API in terms of IQueryable<T>. That's almost always a bad idea. In this post, I'll explain why.

In short, the IQueryable<T> interface is one of the best examples of a Header Interface that .NET has to offer. It's almost impossible to fully implement it.

Please note that this post is about the problematic aspects of designing an API around the IQueryable<T> interface. It's not an attack on the interface itself, which has its place in the BCL. It's also not an attack on all the wonderful LINQ methods available on IEnumerable<T>.

You can say that IQueryable<T> is one big Liskov Substitution Principle (LSP) violation just waiting to happen. In the next two sections, I will apply Postel's law to explain why that is.

Consuming IQueryable<T> #

The first part of Postel's law applied to API design states that an API should be liberal in what it accepts. In other words, we are talking about input, so an API that consumes IQueryable<T> would take this generalized shape:

IFoo SomeMethod(IQueryable<Bar> q);

Is that a liberal requirement? It most certainly is not. Such an interface demands of any caller that they must be able to supply an implementation of IQueryable<Bar>. According to the LSP, we must be able to supply any implementation without changing the correctness of the program. That goes for the implementer of IQueryable<Bar> as well as the implementation of SomeMethod.

At this point it's important to keep in mind the purpose of IQueryable<T>: it's intended for implementation by query providers. In other words, this isn't just some sequence of Bar instances which can be filtered and projected; no, this is a query expression which is intended to be translated into a query somewhere else - most often some dialect of SQL.

That's quite a demand to put on the caller.

It's certainly a powerful interface (or so it would seem), but is it really necessary? Does SomeMethod really need to be able to perform arbitrarily complex queries against a data source?

In one recent discussion, it turns out that all the developer really wanted to do was to be able to select based on a handful of simple criteria. In another case, the developer only wanted to do simple paging.

Such requirements could be modeled much more simply without making huge demands on the caller. In both cases, we could provide specialized Query Objects instead, or, even simpler, just a set of specialized queries:

IFoo FindById(int fooId);
 
IFoo FindByCorrelationId(int correlationId);

Or, in the case of paging:

IEnumerable<IFoo> GetFoos(int page);

This is certainly much more liberal in that it requires the caller to supply only the required information in order to implement the methods. Designing APIs in terms of Role Interfaces instead of Header Interfaces makes the APIs much more flexible. This will enable you to respond to change.
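A Query Object for the filtering scenario could be as simple as this sketch (all names are mine, purely illustrative):

public class FooQuery
{
    public int? CorrelationId { get; set; }
    public int Page { get; set; }
    public int PageSize { get; set; }
}
 
public interface IFooReader
{
    IEnumerable<IFoo> Find(FooQuery query);
}

An implementation only has to translate those few properties to its data store, which is a promise any implementer can realistically keep.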

Exposing IQueryable<T> #

The other part of Postel's law states that an API should be conservative in what it sends. In other words, a method must guarantee that the data it returns conforms rigorously to the contract between caller and implementer.

A method returning IQueryable<T> would take this generalized shape:

IQueryable<Bar> GetBars();

When designing APIs, a huge part of the contract is defined by the interface (or base class). Thus, the return type of a method specifies a conservative guarantee about the returned data. In the case of returning IQueryable<Bar> the method thus guarantees that it will return a complete implementation of IQueryable<Bar>.

Is that conservative?

Once again invoking the LSP, a consumer must be able to do anything allowed by IQueryable<Bar> without changing the correctness of the program.

That's a big honking promise to make.

Who can keep that promise?

Current Implementations #

Implementing IQueryable<T> is a huge undertaking. If you don't believe me, just take a look at the official Building an IQueryable provider series of blog posts. Even so, the interface is so flexible and expressive that, with a single exception, it's always possible to write a query that a given provider can't translate.

Have you ever worked with LINQ to Entities or another ORM and received a NotSupportedException? Lots of people have. In fact, with a single exception, it's my firm belief that all existing implementations violate the LSP (in fact, I challenge my readers to refer me to a real, publicly available implementation of IQueryable<T> that can accept any expression I throw at it, and I'll ship a free copy of my book to the first reader to do so).

Furthermore, the subset of features that each implementation supports varies from query provider to query provider. An expression that can be translated by the Entity Framework may not work with Microsoft's OData query provider.
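As a hypothetical illustration (ChecksumIsValid is a made-up CLR method, not part of any real API), consider this query:

// Compiles against any IQueryable<Bar>, and LINQ to Objects will happily
// execute it, but a remote query provider has no way to translate a call
// to an arbitrary CLR method into SQL or OData, so it's likely to throw
// NotSupportedException at run time.
IQueryable<Bar> bars = GetBars();
var results = bars
    .Where(b => ChecksumIsValid(b))
    .ToList();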

The only implementation that fully implements IQueryable<T> is the in-memory implementation (and referring to this one does not earn you a free book). Ironically, this implementation can be considered a Null Object implementation and goes against the whole purpose of the IQueryable<T> interface exactly because it doesn't translate the expression to another language.

Why This Matters #

You may think this is all a theoretical exercise, but it actually does matter. When writing Clean Code, it's important to design an API in such a way that it's clear what it does.

An interface like this makes false guarantees:

public interface IRepository
{
    IQueryable<T> Query<T>();
}

According to the LSP and Postel's law, it would seem to guarantee that you can write any query expression (no matter how complex) against the returned instance, and it would always work.
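For instance, nothing in the type system stops a consumer from writing something like this against the returned instance (GetRepository, Order and its members are hypothetical, for illustration only):

// Whether this runs or throws NotSupportedException depends entirely on
// the hidden query provider - the interface makes no such distinction.
var repository = GetRepository();
var bigSpenders = repository.Query<Order>()
    .GroupBy(o => o.CustomerId)
    .Where(g => g.Sum(o => o.Total) > 10000m)
    .Select(g => g.Key)
    .ToList();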

In practice, this is never going to happen.

Programmers who define such interfaces invariably have a specific ORM in mind, and they implicitly tend to stay within the bounds they know are safe for that specific ORM. This is a leaky abstraction.

If you have a specific ORM in mind, then be explicit about it. Don't hide it behind an interface. It creates the illusion that you can replace one implementation with another. In practice, that's impossible. Imagine attempting to provide an implementation over an Event Store.

The cake is a lie.


Comments

The differences between the providers are even more subtle than NotSupportedExceptions.
If you have the following query, you get different results with the EF LINQ provider and the LINQ to Objects provider:

var r = from x in context.Xs select new { x.Y.Z }

depending on the type of join between the sets, or on whether Y is null.

This makes in-memory testing difficult.
2012-03-26 17:24 UTC
Sergio Romero #
I will look for an implementation of IQueryable<T> that does not violate the principle, but what if I already own your book? :)

P.S. Kudos by the way. I love every single piece of it.
2012-03-26 20:12 UTC
Vytautas #
While swapping implementations from a specific ORM to an event store is a major undertaking, it is not very difficult to swap one persistence mechanism to another when you query persistent view models (which are usually found in the same type of applications where you can find an event store). They usually have automatic properties and (next to) no relations so queries on them are relatively simple to not throw many NotSupportedExceptions.

Also, having a generic repository as an intermediate step between an ORM and a specific repository lets you unit test specific repositories and/or use them with multiple persistence mechanisms (which is sometimes desirable). Despite the leaks, this "pattern" does have some uses in my book when used VERY carefully (although it is misused in 99 cases out of 100 when you see it).
2012-03-26 20:43 UTC
With your suggested approach, don't you just end up with a large interface of different queries? If not, how do you decide to split them up? Wouldn't it be better to have a more general-purpose IQuery<T>.Execute()?
2012-03-26 21:17 UTC
I've created an IRepository interface which doesn't consume or expose any IQueryable object. But I've also created a separate library (abstraction) on top of this abstract IRepository and named it "EntityRepository"; it is abstract, implements IRepository, and is dedicated to EF. EntityRepository and its derived types may consume or expose IQueryable.
Everybody who wants to implement the repositories using EF would use EntityRepository, and everybody who wants to use other ORMs can create his own derived abstraction from IRepository.

But the consumer code in my library just uses the IRepository interface (e.g. a CRUD ViewModel base class).

Is it correct?
2012-03-27 19:42 UTC
Amir, that sounds correct. IQueryable is a useful tool, but it can be somewhat misleading as part of an API (interfaces or base classes). On concrete classes it's less misleading.
2012-03-28 05:03 UTC
Matt Warren #
You should take a look at some of Bart De Smet's stuff; he talks about IQueryable and its problems.

See http://bartdesmet.net/blogs/bart/archive/2008/08/15/the-most-funny-interface-of-the-year-iqueryable-lt-t-gt.aspx and also take a look at the section 9min 30 secs into this talk http://channel9.msdn.com/Events/PDC/PDC10/FT10
2012-03-28 10:47 UTC
Daniel Cazzulino #
-- IFoo SomeMethod(IQueryable<Bar> q);

Who on Earth defines an API that *takes* an IQueryable? Nobody I know, and I've never had the need to. So the first half of the post is kinda irrelevant ;)

-- Implementing IQueryable<T> is a huge undertaking.

That's why NOBODY has to. The ORM you use will (THEY will take on the huge undertaking, and they have - go look at EF and NHibernate and even RavenDB, which do provide such an API), as well as LINQ to Objects for your testing needs.

So, IRepository.Query<T> is doing *exactly* what its definition promises: "Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects." (http://martinfowler.com/eaaCatalog/repository.html)

In the ORM-implemented case, it goes against the data mapping layer; in the LINQ to Objects case, it goes directly over the collection of objects.

It's the PERFECT abstraction to make code that goes against a DB/repository unit-testable (not that it frees you from having full integration end-to-end tests that go against the *real* repository to make sure you're not using any unsupported constructs against the in-memory version).

IQueryable<T> got rid of an entire slew of useless interfaces to abstract the real repository. We should wholeheartedly embrace it and move forward, instead of longing for the days when we had to do all that manually ;)
2012-03-28 14:19 UTC
Daniel Cazzulino #
-- Imagine attempting to provide an implementation over an Event Store.

That's the thing: if this interface is used in the read side of a CQRS system, there's no event store there. Views need flexible queryability. No?
2012-03-28 14:20 UTC
I'm with kzu here -- I'm not really buying into your claim that IQueryable is a bad (leaky, etc) abstraction using the logic you provide. A method returning an IQueryable is *only* providing a guarantee that it describes a query over a set of objects. Nothing more. It gives no guarantees that the query can be executed, only what the query looks like.

Saying this:

public IQueryable<Customer> GetActiveCustomers()
{
    return CustomersDb.Where(x => x.IsActive).ToList().AsQueryable();
}

tells consumers that the return value of the method is a representation of how the results will be materialized when the query is executed. Saying this, OTOH:

public IEnumerable<Customer> GetActiveCustomers()
{
    return CustomersDb.Where(x => x.IsActive).ToList();
}

explicitly tells consumers that you are returning the *results* of a query, and it's none of the consuming code's business to know how it got there.

Materialization of those query results is what you seemed to be focused on, and that's not IQueryable's task. That's the task of the specific LINQ provider. IQueryable's job is only to *describe* queries, not actually execute them.
2012-03-28 15:21 UTC
Ethan J. Brown #
Completely agree with Mark here - IQueryable in, for instance, a repository interface may seem like a natural fit, but in fact it's a very very bad idea.

A repository interface should be explicit about the operations it provides over the set of data, and should not open the door to arbitrary querying that is not part of the application's overall design/architecture. YAGNI

I know that this may fly in the face of those who are used to arbitrary late-bound SQL style query flexibility and code-gen'd ORMs, but this type of setup is not one that's conducive to performance or scaling in the long term. ORMs may be good to get something up and running without a lot of effort (or thought into how the data is used), but they rapidly show their cracks / leaks over time. This is why you see sites like StackOverflow as they reach scale adopt MicroORMs that are closer to the metal like Dapper instead of using EF, L2S or NHibernate. (Other great Micro ORMs are Massive, OrmLite, and PetaPoco to name just a few.)


Is it work to be explicit in your contracts around data access? Absolutely... but this is a better long-term design decision when you factor in testability and performance goals. How do you test the perf characteristics of an API that doesn't have well-defined operations? Well... you can't really.

Also -- consider stores that are *not* SQL based as Mark alludes to. We're using Riak, where querying capabilities are limited in scope and there are performance trade-offs to certain types of operations on secondary indices. We accept this tradeoff b/c the operational characteristics of Riak are unparalleled and it has great latency characteristics when retrieving data. We need to be explicit about how we allow data to be retrieved in our data access layer because some things are a no-go in our underlying store (for instance, listing keys is extremely expensive at the moment in Riak).

Bind yourself to IQueryable and you've handcuffed yourself. IQueryable is simply not a fit in a lot of cases, and in others it's total overkill... data access is not one size fits all. Embrace the polyglot!
2012-03-29 12:09 UTC
Daniel #
Thanks for your blog post. One big problem with APIs that are based on IQueryable is testability, see this question for example: http://stackoverflow.com/questions/9921647/unit-testing-with-queries-defined-in-extension-methods.

However, I don't yet see how to design an easy to use and read API with query objects. Maybe you could provide a repository implementation that works without IQueryable but is still readable and easy to use?
2012-03-29 12:35 UTC
Daniel, according to your blog post http://blogs.clariusconsulting.net/kzu/how-to-design-a-unit-testable-domain-model-with-entity-framework-code-first/ you seem to do quite a lot of work to make the code testable, even re-implementing the Include method. Claiming that IQueryable is the "perfect" abstraction here doesn't convince me.
2012-03-29 12:39 UTC
Daniel #
What do you think about the API outlined in this post: http://stackoverflow.com/a/9943907/572644
2012-03-30 13:29 UTC
Looks good, Daniel. I'd love to hear your experience with this once you've tried it on for some time :)
2012-03-30 14:15 UTC
I agree that it can be annoying when you try to materialize an IQueryable, only to find that something in the query is not supported by the referenced QueryProvider.

However, exposing IQueryable in an API is so powerful that it's worth this minor annoyance.
2012-03-30 19:35 UTC
Steven Pena #
Excellent points - couldn't agree more. Even when a method of IQueryable has been implemented, you can't even guarantee the same behavior. For example - look at "Contains". The EF Linq provider uses %, which will match any case, while the LinqToObjects provider will match by case. I tend to use IQueryable in my DAL and surface up a more meaningful and controlled API through my Repositories. This prevents that leaky abstraction and allows me to control the behavior of that persistence layer (caching, eager loading of aggregates...)
2012-03-31 12:46 UTC
Nathan Brown #
What does this say for things like ASP.Net Web API? It is asking you to expose your data through an IQueryable. I tried to use the OData stuff that was using IQueryable and could never get it to fully work with NHibernate because of specific unsupported query structures.

I like the idea of the OData or Web API queries, but would rather they just expose the query parameters and let us build adapters from the query to the datastore as relevant.
2012-03-31 18:24 UTC
Completely agree with this. IQueryable/OData is an anti-pattern in the making - especially for external Web APIs, which is why it will never be an out-of-the-box option in ServiceStack: http://stackoverflow.com/questions/9577938/odata-with-servicestack

The whole idea of web service interfaces is to expose a technology-agnostic interoperable API to the outside world.

Exposing an IQueryable/OData endpoint effectively couples your services to using OData indefinitely, as you won't be able to feasibly determine what 'query space' existing clients are bound to, i.e. what existing queries/tables/views/properties you need to freeze/support indefinitely. This is an issue when exposing any implementation at the surface area of your API, as it limits your ability to add your own custom logic on it, e.g. Authorization, Caching, Monitoring, Rate-Limiting, etc. And because OData is really slow, you'll hit performance/scaling problems with it early. The lack of control over the endpoint means you're effectively heading for a rewrite: https://coldie.net/?tag=servicestack

Let's see how feasible it would be to move off an OData provider implementation by looking at an existing query from Netflix's OData API:

http://odata.netflix.com/Catalog/Titles?$filter=Type%20eq%20'Movie'%20and%20(Rating%20eq%20'G'%20or%20Rating%20eq%20'PG-13')

This service is effectively now coupled to a Table/View called 'Titles' with a column called 'Type'.

And how it would be naturally written if you weren't using OData:

http://api.netflix.com/movies?ratings=G,PG-13

Now, if for whatever reason you need to replace the implementation of the existing service (e.g. a better technology platform has emerged, or it runs too slow and you need to move this dataset over to a NoSQL/full-text-indexing-backed solution), how much effort would it take to replace the OData implementation (which is effectively bound to an RDBMS schema and the OData binary implementation) with the more intuitive, implementation-agnostic query shown above? It's not impossible, but as it's prohibitively difficult to implement an OData API for a new technology, rewriting and breaking existing clients would tend to be the preferred option.

Letting internal implementations dictate the external facing url structure is a sure way to break existing clients when things need to change. This is why you should expose your services behind Cool URIs, i.e. logical permanent urls (that are unimpeded by implementation) that do not change, as you generally don't want to limit the technology choices of your services.

It might be a good option if you want to deliver ad hoc 'exploratory services', but it's not something I'd ever want external clients bound to in a production system. And if I'm only going to limit its use to my local intranet, what advantages does it have over just giving out a read-only connection string, which would allow others to use their favourite SQL explorer, reporting tool, or any other tool that speaks SQL?
2012-07-31 05:42 UTC
@Nikos "re-implementing the Include method": you mean that one-liner??

The testable interface abstraction allows for one-liners in the ORM-bound implementation, as well as the testable fake. Doesn't get any simpler than that.

That hardly seems like complicated stuff to me.
2012-08-31 15:44 UTC
@Demis +1. Exposing OData is a different thing altogether. Not totally convinced with that either myself.
2012-08-31 15:46 UTC
Ron #
@Demis, couldn't you use a service pattern to address those issues?
2013-02-21 16:49 UTC
Although this post is "oldish", I think the question is still not answered. I was checking a quite nice proposal for the UoW and Repository patterns with an EF implementation, and got stuck again on the IQueryable question. Recently I've read a lot about this question and could not find anything that suits me perfectly. Has your opinion changed after more than two years? What about passing an Expression<Func<T, bool>> predicate to our Find method? Thanks
2014-08-06 17:46 UTC

Mario, thank you for writing. It occasionally happens that I change my mind, as I learn new skills and gain more experience, but in this case I haven't changed my mind. I indirectly use IQueryable<T> when I occasionally have to query a relational database with the Entity Framework or LINQ to SQL (which happens extremely rarely these days), but I never design my own interfaces around IQueryable<T>. My reason for introducing an interface is normally to reduce coupling, and introducing an interface that exposes or depends on IQueryable<T> doesn't reduce coupling.

In Agile Principles, Patterns, and Practices in C#, Robert C. Martin explains in chapter 11 that "clients [...] own the abstract interfaces" - in other words: the client, which invokes methods on the interface, should define the methods that the interface exposes, based on what it needs - not based on what an arbitrary implementation might be able to supply. This is the design philosophy I always follow. A client shouldn't need to be able to perform arbitrary queries against a data access layer. If it does, the abstraction is so leaky anyway that the interface isn't going to help; in such cases, just let the client talk directly to the ORM or the ADO.NET objects. That's much easier to understand.

The same argument also goes against designing interfaces around Expression<Func<T, bool>>. It may seem flexible, but the fact is that such an expression can be arbitrarily complex, so in practice, it's impossible to guarantee that you can translate every expression to all SQL dialects; and what about OData? Or queries against MongoDB?
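As a hypothetical illustration (Foo and all its members are made up), consider this predicate:

// Compiles fine as an expression tree, but translating the Regex call or
// the nested Any into every SQL dialect - not to mention OData or a
// MongoDB query - is another matter entirely.
Expression<Func<Foo, bool>> predicate =
    f => Regex.IsMatch(f.Name, "^ploeh") && f.Children.Any(c => c.IsValid());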

Still, I rarely use ORMs at all. Instead, I increasingly rely on simple abstractions like Drain when designing my systems.

2014-08-09 12:22 UTC

Robust DI With the ASP.NET Web API

Tuesday, 20 March 2012 16:02:51 UTC
Note 2014-04-03 19:46 UTC: This post describes how to address various Dependency Injection issues with a Community Technical Preview (CTP) of ASP.NET Web API 1. Unless you're still using that 2012 CTP, it's no longer relevant. Instead, refer to my article about Dependency Injection with the final version of Web API 1 (which is also relevant for Web API 2).

Like the WCF Web API, the new ASP.NET Web API supports Dependency Injection (DI), but the approach is different and the resulting code you'll have to write is likely to be more complex. This post describes how to enable robust DI with the new Web API. Since this is based on the beta release, I hope that it will become easier in the final release.

At first glance, enabling DI on an ASP.NET Web API looks seductively simple. As always, though, the devil is in the details. Nikos Baxevanis has already provided a more thorough description, but it's even more tricky than that.

Protocol #

To enable DI, all you have to do is call the SetResolver method, right? It even has an overload that enables you to supply two code blocks instead of implementing an interface (although you can certainly also implement IDependencyResolver). Could it be any easier than that?

Yes, it most certainly could.

Imagine that you'd like to hook up your DI Container of choice. As a first attempt, you try something like this:

GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
    t => this.container.Resolve(t),
    t => this.container.ResolveAll(t).Cast<object>());

This compiles. Does it work? Yes, but in a rather scary manner. Although it satisfies the interface, it doesn't satisfy the protocol ("an interface describes whether two components will fit together, while a protocol describes whether they will work together." (GOOS, p. 58)).

The protocol, in this case, is that if you (or rather the container) can't resolve the type, you should return null. What's even worse is that if your code throws an exception (any exception, apparently), DependencyResolver will suppress it. In case you didn't know, this is strongly frowned upon in the .NET Framework Design Guidelines.

Even so, the official introduction article chooses to play along with the protocol and explicitly handle any exceptions - something along the lines of this ugly code:

GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
    t =>
    {
        try
        {
            return this.container.Resolve(t);
        }
        catch (ComponentNotFoundException)
        {
            return null;
        }
    },
    t =>
    {
        try
        {
            return this.container.ResolveAll(t).Cast<object>();
        }
        catch (ComponentNotFoundException)
        {
            return new List<object>();
        }
    }
);

Notice how try/catch is used for control flow - another major no-no in the .NET Framework Design Guidelines.

At least with a good DI Container, we can do something like this instead:

GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
    t => this.container.Kernel.HasComponent(t) ?
        this.container.Resolve(t) :
        null,
    t => this.container.ResolveAll(t).Cast<object>());

Still, first impressions don't exactly inspire trust in the implementation...

API Design Issues #

Next, I would like to direct your attention to the DependencyResolver API. At its core, it looks like this:

public interface IDependencyResolver
{
    object GetService(Type serviceType);
    IEnumerable<object> GetServices(Type serviceType);
}

It can create objects, but what about decommissioning? What if, deep in a dependency graph, a Controller contains an IDisposable object? This is not a particularly exotic scenario - it might be an instance of an Entity Framework ObjectContext. While an ApiController itself implements IDisposable, it may not know that it contains an injected object graph with one or more IDisposable leaf nodes.

It's a fundamental rule of DI that you must Release what you Resolve. That's not possible with the DependencyResolver API. The result may be memory leaks.
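To illustrate the problem, here's a minimal sketch; OrdersController and IOrderRepository are made-up names, not framework types:

// Illustrative types only - not part of the Web API.
public class OrdersController : ApiController
{
    // The repository implementation might wrap an Entity Framework
    // ObjectContext, which is IDisposable.
    private readonly IOrderRepository repository;
 
    public OrdersController(IOrderRepository repository)
    {
        this.repository = repository;
    }
}

A DependencyResolver can build this graph via GetService, but since the interface defines no Release method, the container is never told when the request ends, so the underlying ObjectContext may never be disposed.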

Fortunately, it turns out that there's a fix for this (at least for Controllers). Unfortunately, this workaround leverages another design problem with DependencyResolver.

Mixed Responsibilities #

It turns out that when you wire a custom resolver up with the SetResolver method, the ASP.NET Web API will query your custom resolver (such as a DI Container) for not only your application classes, but also for its own infrastructure components. That surprised me a bit because of the mixed responsibility, but at least this is a useful hook.

One of the first types the framework will ask for is an instance of IHttpControllerFactory, which looks like this:

public interface IHttpControllerFactory
{
    IHttpController CreateController(HttpControllerContext controllerContext,
        string controllerName);
    void ReleaseController(IHttpController controller);
}

Fortunately, this interface has a Release hook, so at least it's possible to release Controller instances, which is most important because there will be a lot of them (one per HTTP request).

Discoverability Issues #

The IHttpControllerFactory looks a lot like the well-known ASP.NET MVC IControllerFactory interface, but there are subtle differences. In ASP.NET MVC, there's a DefaultControllerFactory with appropriate virtual methods one can override (it follows the Template Method pattern).

There's a similar DefaultHttpControllerFactory in the Web API, but unfortunately it offers no Template Methods to override. While I could write an algorithm that maps from the controllerName parameter to a type which can be passed to a DI Container, I'd prefer to reuse the implementation which the default factory already contains.

In ASP.NET MVC, this is possible by overriding the GetControllerInstance method, but it turns out that the Web API (beta) does this slightly differently. It favors composition over inheritance (which is actually a good thing, so kudos for that), so after mapping controllerName to a Type instance, it invokes an instance of the IHttpControllerActivator interface (did I hear anyone say "FactoryFactory?"). Very loosely coupled (good), but not very discoverable (not so good). It would have been more discoverable if DefaultControllerFactory had used Constructor Injection to get its dependency, rather than relying on the Service Locator which DependencyResolver really is.

However, this is only an issue if you need to hook into the Controller creation process, e.g. in order to capture the HttpControllerContext for further use. In normal scenarios, despite what Nikos Baxevanis describes in his blog post, you don't have to override or implement IHttpControllerFactory.CreateController. The DependencyResolver infrastructure will automatically invoke your GetService implementation (or the corresponding code block) whenever a Controller instance is required.

Releasing Controllers #

The easiest way to make sure that all Controller instances are properly released is to derive a class from DefaultHttpControllerFactory and override the ReleaseController method:

public class ReleasingControllerFactory : DefaultHttpControllerFactory
{
    private readonly Action<object> release;
 
    public ReleasingControllerFactory(Action<object> releaseCallback,
        HttpConfiguration configuration)
        : base(configuration)
    {
        this.release = releaseCallback;
    }
 
    public override void ReleaseController(IHttpController controller)
    {
        this.release(controller);
        base.ReleaseController(controller);
    }
}

Notice that it's not necessary to override the CreateController method, since the default implementation is good enough - it'll ask the DependencyResolver for an instance of IHttpControllerActivator, which will again ask the DependencyResolver for an instance of the Controller type, in the end invoking your custom GetService implementation.

To keep the above example generic, I just injected an Action<object> into ReleasingControllerFactory - I really don't wish to turn this into a discussion about the merits and demerits of various DI Containers. In any case, I'll leave it as an exercise to you to wire up your favorite DI Container so that the releaseCallback is actually a call to the container's Release method.
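As one example, with Castle Windsor (a container that does have a Release method), the wiring might look like this sketch; the container's component registrations are assumed, not shown:

// Sketch only: assumes a configured Castle Windsor container.
IWindsorContainer container = new WindsorContainer();
// (component registrations elided)
 
var controllerFactory = new ReleasingControllerFactory(
    instance => container.Release(instance),
    GlobalConfiguration.Configuration);
// The factory must then be served whenever the framework asks the
// DependencyResolver for IHttpControllerFactory.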

Lifetime Cycles of Infrastructure Components #

Before I conclude, I'd like to point out another POLA violation that hit me during my investigation.

The ASP.NET Web API utilizes DependencyResolver to resolve its own infrastructure types (such as IHttpControllerFactory, IHttpControllerActivator, etc.). Any custom DependencyResolver you supply will also be queried for these types. However:

When resolving infrastructure components, the Web API doesn't respect any custom lifecycle you may have defined for these components.

At a certain point while I investigated this, I wanted to configure a custom IHttpControllerActivator to have a Web Request Context (my book, section 8.3.4) - in other words, I wanted to create a new instance of IHttpControllerActivator for each incoming HTTP request.

This is not possible. The framework queries a custom DependencyResolver for an infrastructure type, but even when it receives an instance (i.e. not null), it doesn't trust the DependencyResolver to manage the lifetime of that instance. Instead, it caches the instance forever and never asks for it again. This is, in my opinion, a misplaced responsibility, and I hope it will be corrected in the final release.

Concluding Thoughts #

Wiring up the ASP.NET Web API with robust DI is possible, but much harder than it ought to be. Suggestions for improvements are:

  • A Release hook in DependencyResolver.
  • The framework itself should trust the DependencyResolver to efficiently manage the lifetime of all objects it creates.

As I've described, there are other places where minor adjustments would be helpful, but these two suggestions are the most important ones.

Update (2012.03.21): I've posted this feedback to the product group on uservoice and Connect - if you agree, please visit those sites and vote up the issues.


Comments

"A Release hook in DependencyResolver."
Won't happen. The release issue was highlighted during the RCs and Beta releases, and the feedback from Brad Wilson was that they were not going to add a release mechanism :(

The same applies to the *Activators that were added (IViewPageActivator, etc.); they really need a release hook too.
2012-03-21 14:29 UTC
Why not?
2012-03-21 14:46 UTC
Jaime Febres #
That's the kind of problem you run into when you use a framework that treats DI as a second-class citizen rather than something baked into the heart of it.
I know that for some people it's not an option whether or not to use what MVC produces; if you are not one of them, I suggest you give FubuMVC a try - an MVC framework built from the beginning with DI in mind.
It has its own shortcomings, but the advantages surpass the problems.
The issue you had about not releasing disposable objects is simply not an issue in FubuMVC.
Kind regards,
Jaime.
2012-03-21 15:34 UTC
Jaime, yes, I'm aware of FubuMVC, but have never tried it for a couple of reasons.

First of all, right now I need a framework for building REST services, and not a web framework. FubuMVC might have gained REST features (such as conneg) since last I looked, but the ASP.NET Web API does have quite an attractive feature set for building REST APIs.

Secondly, last time I discussed the issue of releasing components, Jeremy Miller was very much against it. StructureMap doesn't have a Release hook, so I wonder whether FubuMVC does...

Personally, I'm not against trying other technologies, and I've previously looked a bit at OpenRasta and Nancy. However, many of my clients prefer Microsoft technologies.

I must also admit to being somewhat taken aback by the direction Microsoft has taken here. The WCF Web API was really good, so I also felt it was my duty to provide feedback to Microsoft as well as the community about this.
2012-03-21 16:02 UTC
Jaime Febres #
Hi Mark,
Yes, SM does not have a built-in release hook, I agree on that, but FubuMVC uses something called a "NestedContainer".

This nested container shares the same settings as your top-level container (often StructureMap.ObjectFactory.Container), but with one added bonus: on disposal, it also disposes any disposable dependency it resolved during the request lifetime.

All of this is controlled by behaviors that wrap an action call at runtime (but are configured at start-up) - sorta a "Russian Doll Model", as the fubu folks like to call it.
So all these behaviors wrap each other, allowing you to effectively dispose resources.
To avoid giving wrong explanations, I better share a link which already does this in an effective way:
http://codebetter.com/jeremymiller/2011/01/09/fubumvcs-internal-runtime-the-russian-doll-model-and-how-it-compares-to-asp-net-mvc-and-openrasta/

And by no means take my words as a harsh opinion against MS MVC; it's just a matter of preference. If MS MVC or ASP.NET Web API solves people's needs out there, I won't try to convince them otherwise.

Kind regards,
Jaime.

2012-03-21 16:42 UTC
Jaime, it's good feedback - I'm receiving it in the most positive way :)
2012-03-21 16:45 UTC
Mark, do you know if the Unity.WebAPI NuGet package is releasing controllers in a proper manner?

I'm currently using it to register my repository, following this guidance: http://www.devtrends.co.uk/blog/introducing-the-unity.webapi-nuget-package

Thanks,
Vindberg.
2012-03-23 08:05 UTC
Anders, no I don't know, but I would be slightly surprised if it does. Unity (2.0) has several issues related to proper decommissioning. In section 14.2 in my book I describe those issues and how to overcome them.
2012-03-23 08:34 UTC
thomas #
I've also experienced these problems, which I described here: http://www.codedistillers.com/tja/2012/03/11/web-api-beta-and-castle-windsorwithout-idependencyresolver/. IDependencyResolver adds another layer of indirection, and the API is not self-describing about which parts you should implement if you want the "Release" method...
2012-03-26 10:18 UTC
Chelsea Taylor #
@Anders - We are using Unity.WebAPI and it is certainly decommissioning components correctly in our projects. Internally it uses a child container per HTTP request.
2012-03-26 15:38 UTC
Ethan J. Brown #
I will take the opportunity here to plug ServiceStack for REST based .NET services. Hands down my framework of choice (after having used WCF for years). (Nancy is good too for other reasons, but ServiceStack has a more directly service oriented approach.)

https://github.com/ServiceStack


The ServiceStack philosophy on IoC - https://github.com/ServiceStack/ServiceStack/wiki/The-IoC-container

I'm sure you'll take issue with something in the stack, but keep in mind Demis iterates very quickly and the stack is a very pragmatic / results driven one. Unlike Microsoft, he's very willing to accept patches, etc where things may fall short of user reqs, so if you have feedback, he's always listening.
2012-03-30 00:04 UTC
Cristiano #
Great post Mark, I'm 100% on your side.
It's just beyond my understanding how you can design support for an IoC container with no decommissioning.
Hopefully MS will fix this shortcoming before RTM.
2012-04-11 10:12 UTC
JD #
So I'm not really sure where this left off. I see it marked as "resolved" here: http://aspnetwebstack.codeplex.com/workitem/26

But I don't see an ASP.NET MVC release number assigned to it. There is no .Release() method in the ASP.NET MVC 4 release candidate...safe to say they aren't resolving this until the next release?
2012-06-05 18:20 UTC
Chris Paynter #
Hey Mark, I cannot find DefaultHttpControllerFactory, and ReSharper is unable to resolve it automatically. Googling for a namespace or information regarding its inclusion in the RC of MVC 4 is unfruitful. Has the name of the class been changed since you authored this post?

I've found this article very useful in understanding the use of Windsor with Web API, and I am now wondering whether anything has changed in the RC of MVC 4 that would make any part of this method redundant. It would be great to get an update from you regarding the current status of the framework with regard to implementing an IoC container with Web API.

Many Thanks

Chris
2012-08-07 10:02 UTC
The class may very well have changed since the beta.

In the future, I may add an update to this article, unless someone else beats me to it.
2012-08-13 12:20 UTC

Migrating from WCF Web API to ASP.NET Web API

Monday, 19 March 2012 22:24:47 UTC

Now that the WCF Web API has 'become' the ASP.NET Web API, I've had to migrate a semi-complex code base from the old to the new framework. These are my notes from that process.

Migrating Project References #

As far as I can tell, the ASP.NET Web API isn't just a port of the WCF Web API. At a cursory glance, it looks like a complete reimplementation. If it's a port of the old code, it's at least a rather radical one. The assemblies have completely different names, and so on.

Both old and new project, however, are based on NuGet packages, so it wasn't particularly hard to change.

To remove the old project references, I ran this NuGet command:

Uninstall-Package webapi.all -RemoveDependencies

followed by

Install-Package aspnetwebapi

to install the project references for the ASP.NET Web API.

Rename Resources to Controllers #

In the WCF Web API, there was no naming convention for the various resource classes. In the quickstarts, they were sometimes called Apis (like ContactsApi), and I called mine Resources (like CatalogResource). Whatever your naming convention was, the easiest thing is to find them all and rename them to end with Controller (e.g. CatalogController).

AFAICT you can change the naming convention, but I didn't care enough to do so.

Derive Controllers from ApiController #

Unless you care to manually implement IHttpController, each Controller should derive from ApiController:

public class CatalogController : ApiController

Remove Attributes #

The WCF Web API uses the [WebGet] and [WebInvoke] attributes. The ASP.NET Web API, on the other hand, uses routes, so I removed all the attributes, including their UriTemplates:

//[WebGet(UriTemplate = "/")]
public Catalog GetRoot()

Add Routes #

As a replacement for attributes and UriTemplates, I added HTTP routes:

routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "{controller}/{id}",
    defaults: new { controller = "Catalog", id = RouteParameter.Optional }
);

The MapHttpRoute method is an extension method defined in the System.Web.Http namespace, so I had to add a using directive for it.
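For context, here's a minimal sketch of where that registration might live, assuming a standard Global.asax code-behind (the route itself is the same as above):

// Sketch only: assumes a standard web-hosted project with a Global.asax.
using System.Web.Http;      // defines the MapHttpRoute extension method
using System.Web.Routing;   // defines RouteTable
 
public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        RouteTable.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "{controller}/{id}",
            defaults: new { controller = "Catalog", id = RouteParameter.Optional });
    }
}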

Composition #

Wiring up Controllers with Constructor Injection turned out to be rather painful. For a starting point, I used Nikos Baxevanis' guide, but it turns out there are further subtleties which should be addressed (more about this later, but to prevent a stream of comments about the DependencyResolver API: yes, I know about that, but it's inadequate for a number of reasons).

Media Types #

In the ASP.NET Web API, application/json is now the default media type if the client doesn't supply an Accept header. For the WCF Web API I had to resort to a hack to change the default, so this was a pleasant surprise.

It's still pretty easy to add more supported media types:

GlobalConfiguration.Configuration.Formatters.XmlFormatter
    .SupportedMediaTypes.Add(
        new MediaTypeHeaderValue("application/vnd.247e.artist+xml"));
GlobalConfiguration.Configuration.Formatters.JsonFormatter
    .SupportedMediaTypes.Add(
        new MediaTypeHeaderValue("application/vnd.247e.artist+json"));

(Talk about a Law of Demeter violation, BTW...)

However, due to an over-reliance on global state, it's not so easy to figure out how one would go about mapping certain media types to only a single Controller. This was much easier in the WCF Web API because it was possible to assign a separate configuration instance to each Controller/Api/Resource/Service/Whatever... This, I've still to figure out how to do...


Comments

Jonty #
Why is the DependencyResolver API inadequate?
2012-03-20 10:48 UTC
Jonty, here's why.
2012-03-20 16:07 UTC
Cristiano #
Hopefully they will fix the configuration issue too...
webapi prev6 seems much better than the current beta:
http://code.msdn.microsoft.com/Contact-Manager-Web-API-0e8e373d/view/Discussions
see Daniel's reply to my post
2012-04-11 10:50 UTC
